If you cannot find anything more, look for something else (Brigitte Fontaine)

Have a look at the BIFROST and MajIC projects for related content

http://tinyurl.com/sp-ac-ogst
Special issue: "Advances in signal processing and image analysis for physicochemical, analytical chemistry and chemical sensing"

OGST: Oil & Gas Science and Technology

Papers (titles and abstracts)

Editorial: Advances in Signal Processing and Image Analysis for Physico-Chemical, Analytical Chemistry and Chemical Sensing (Progrès en traitement des signaux et analyse des images pour les analyses physico-chimiques et la détection chimique) [pdf], Fabio Rocca and Laurent Duval
Abstract:
[...] Signal processing in analytical chemistry serves both qualitative analysis (detection: what compound is present?) and quantitative analysis (estimation: how much of it?), in order to study the physical and chemical properties of compounds and mixtures of natural or artificial materials. It relies on many physical and chemical interactions of atoms and molecules. Its specificity, with respect to routine chemical analysis, resides in the continuous improvement of analytical methods, experimental designs and chemometrics, i.e. "the art of extracting chemically relevant information from data produced in chemical experiments". The latter essentially borrows methods from multivariate analysis and statistics. A typical one-dimensional chemical signal (often called the spectrum) has, at each point, an amplitude related to the proportion of a certain component. The ordinal variable is not restricted to time or space: it represents a physical-chemical property that performs the separation between elementary components, e.g. boiling point (temperature), migration (molecular mass), sensitivity to electro-magnetic fields (mass-to-charge ratio), etc. The resulting chemical signal is thus, approximately, a linear combination of peaks from the different components, plus noise; hence, the simplest model is a linear mixture. Elementary spectra are typically non-negative and take into account the stoichiometric constants of balanced chemical equations (conservation of mass, charge or atoms). Recently, sparsity constraints on chemical species have come into play. The need for a separation based on two or more chemical properties (e.g. boiling point and electronic structure) has also emerged. This has given birth to hyphenated techniques, which combine techniques in pairs, triples, etc. For instance, two-dimensional or comprehensive chromatography generates a two-dimensional signal. Hyphenation may be extended to higher dimensions, providing an enhancement of resolution at the cost of more drastic data management problems [...]
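As a purely illustrative companion to the linear mixture model described in this editorial, the short Python sketch below builds an observed spectrum as a non-negative combination of elementary spectra plus noise. All peak positions, widths and proportions are invented for the example.

import numpy as np

# Minimal sketch of the linear mixture model: an observed spectrum is a
# non-negative combination of elementary spectra plus noise. Every numeric
# value below is hypothetical, not taken from any real compound.

def gaussian_peak(x, center, width):
    """Elementary spectral line, here with a Gaussian shape."""
    return np.exp(-0.5 * ((x - center) / width) ** 2)

x = np.linspace(0.0, 100.0, 1000)          # ordinal variable (e.g. temperature)

# Elementary spectra: each pure component contributes a few peaks.
component_a = gaussian_peak(x, 25.0, 2.0) + 0.5 * gaussian_peak(x, 60.0, 3.0)
component_b = gaussian_peak(x, 40.0, 1.5)

S = np.stack([component_a, component_b])   # (n_components, n_points), non-negative
c = np.array([0.7, 0.3])                   # non-negative mixing proportions

noise = 0.01 * np.random.default_rng(0).standard_normal(x.size)
observed = c @ S + noise                   # the linear mixture model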

NMR data analysis: A time-domain parametric approach using adaptive subband decomposition [pdf], E.-H. Djermoune, M. Tomczak and D. Brie
Abstract:
This paper presents a fast time-domain data analysis method for one- and two-dimensional Nuclear Magnetic Resonance (NMR) spectroscopy, assuming Lorentzian lineshapes, based on an adaptive spectral decomposition. The latter is achieved through successive filtering and decimation steps, ending up in a decomposition tree. At each node of the tree, the parameters of the corresponding subband signal are estimated using a high-resolution method. The resulting estimation error is then fed to a stopping criterion which decides whether the decimation should be continued. The method thus leads to an automated selection of the decimation level and, consequently, to a signal-adaptive decomposition. Moreover, it reduces the processing time and eases the choice of the usual free parameters, compared to the case where the whole signal is processed at once. The efficiency of the method is demonstrated using 1-D and 2-D 13C NMR signals.
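The tree mechanics described in this abstract can be sketched as follows. This is only a schematic illustration: a crude single-bin spectral fit stands in for the paper's high-resolution parametric estimator, and a fixed error tolerance stands in for its statistical stopping criterion.

import numpy as np
from scipy.signal import decimate

def estimate_subband(signal):
    """Toy stand-in for high-resolution parameter estimation in one subband:
    keep the dominant frequency bin and measure how much signal it explains."""
    spectrum = np.fft.rfft(signal)
    k = int(np.argmax(np.abs(spectrum)))
    model = np.fft.irfft(np.where(np.arange(spectrum.size) == k, spectrum, 0.0),
                         n=signal.size)
    error = np.linalg.norm(signal - model) / (np.linalg.norm(signal) + 1e-12)
    return k, error

def decompose(signal, depth=0, max_depth=4, tol=0.1):
    """Filter-and-decimate tree: recurse until the subband model fits well
    enough (stopping criterion) or the tree is deep enough."""
    params, err = estimate_subband(signal)
    if err < tol or depth == max_depth or signal.size < 64:
        return [(depth, params, err)]          # leaf node of the tree
    low = decimate(signal, 2)                  # lowpass + downsample by 2
    modulated = signal * (-1.0) ** np.arange(signal.size)
    high = decimate(modulated, 2)              # high band shifted to baseband
    return (decompose(low, depth + 1, max_depth, tol)
            + decompose(high, depth + 1, max_depth, tol))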

Inverse Problem Approach for Alignment of Electron Tomographic series [pdf], V.-D. Tran, M. Moreaud, É. Thiébaut, L. Denis and J.-M. Becker
Abstract:
In the refining industry, morphological measurements of particles have become an essential part of the characterization of catalyst supports. Through these parameters, one can infer the specific physicochemical properties of the studied materials. One of the main acquisition techniques is electron tomography (or nanotomography): 3D volumes are reconstructed from sets of projections acquired at different angles with a Transmission Electron Microscope (TEM). This technique provides true three-dimensional information at the nanometric scale. A major issue in this method is the misalignment of the projections that contribute to the reconstruction. Current alignment techniques usually employ fiducial markers, such as gold particles, to align the images correctly. When the use of markers is not possible, the correlation between adjacent projections is used to align them; however, this method sometimes fails. In this paper, we propose a new method based on an inverse problem approach, in which a certain criterion is minimized using a variant of the Nelder-Mead simplex algorithm. The proposed approach is composed of two steps. The first step is an initial alignment process, which relies on the minimization of a cost function based on robust statistics measuring the similarity of a projection to the previous projections in the series; it reduces the strong shifts between successive projections introduced during acquisition. In the second step, the pre-registered projections are used to initialize an iterative alignment-refinement process which alternates between (i) volume reconstructions and (ii) registrations of measured projections onto simulated projections computed from the volume reconstructed in (i). At the end of this process, we obtain a correct reconstruction of the volume, the projections being correctly aligned. Our method is tested on simulated data and shown to accurately estimate the translation, rotation and scale of arbitrary transforms. We have also successfully tested our method on real projections of different catalyst supports.
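A minimal sketch of the alternating alignment-refinement loop is given below. The reconstruct and project callables are hypothetical placeholders for the tomographic operators, and only translations are optimized here, whereas the paper also handles rotation and scale with robust statistics.

import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def robust_misfit(params, measured, simulated):
    """Dissimilarity between a shifted measured projection and its simulation.
    An L1 misfit is used as a simple robust choice."""
    dx, dy = params
    residual = shift(measured, (dy, dx)) - simulated
    return np.sum(np.abs(residual))

def align_series(projections, reconstruct, project, angles, n_iter=5):
    """Alternate (i) reconstruction and (ii) Nelder-Mead registration of each
    measured projection onto its simulated counterpart."""
    shifts = np.zeros((len(projections), 2))
    for _ in range(n_iter):
        volume = reconstruct(projections, shifts, angles)       # step (i)
        for k, measured in enumerate(projections):              # step (ii)
            simulated = project(volume, angles[k])
            result = minimize(robust_misfit, shifts[k],
                              args=(measured, simulated),
                              method='Nelder-Mead')
            shifts[k] = result.x
    return shifts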

Morphological Component Analysis for the Inpainting of Grazing Incidence X-Ray Diffraction Images Used for the Structural Characterization of Thin Films [pdf], George Tzagkarakis, E. Pavlopoulou, J. Fadili, G. Hadziioannou and J.-L. Starck
Abstract:
Grazing Incidence X-ray Diffraction (GIXD) is a widely used characterization technique, applied to investigate the structure of thin films. As far as organic films are concerned, the confinement of the film to the substrate results in anisotropic two-dimensional GIXD patterns, such as those observed for polythiophene-based films, which are used as active layers in photovoltaic applications. Potential malfunctions of the detectors utilized may degrade the quality of the acquired images, thus affecting the analysis process and the structural information derived. Motivated by the success of Morphological Component Analysis (MCA) in image processing, we tackle in this study the problem of recovering the information missing from GIXD images because of potential detector malfunction. First, we show that the geometrical structures present in GIXD images can be represented sparsely by means of a combination of over-complete transforms, namely the curvelet and the undecimated wavelet transforms, resulting in a simple and compact description of their inherent information content. Then, the missing information is recovered by applying MCA in an inpainting framework, exploiting the sparse representation of GIXD data in these two over-complete transform domains. The experimental evaluation shows that the proposed approach is highly efficient in recovering the missing information, in the form of either randomly burned pixels or whole burned rows, even on the order of 50% of the total number of pixels. Thus, our approach can be applied to heal any potential problems related to detector performance during acquisition, which is of high importance in synchrotron-based experiments, since the beamtime allocated to users is extremely limited and any technical malfunction could be detrimental to the course of the experimental project. Moreover, our results show that neither long acquisition times nor repeated measurements are necessary, which adds extra value to the proposed approach.
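The MCA inpainting loop can be sketched as below. Since curvelet and undecimated wavelet codes are not part of the standard Python stack, a 2-D DCT and an orthogonal wavelet serve here as stand-in dictionaries for the two morphologies; the threshold schedule and iteration count are illustrative, and dyadic image dimensions are assumed.

import numpy as np
import pywt
from scipy.fft import dctn, idctn

def soft(values, t):
    """Soft-thresholding, the sparsity-promoting step of MCA."""
    return np.sign(values) * np.maximum(np.abs(values) - t, 0.0)

def threshold_wavelet(image, t, wavelet="db4", level=3):
    """Threshold the detail coefficients of an orthogonal wavelet transform."""
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    coeffs = [coeffs[0]] + [tuple(soft(c, t) for c in cs) for cs in coeffs[1:]]
    return pywt.waverec2(coeffs, wavelet)

def mca_inpaint(image, mask, n_iter=100, t_max=1.0):
    """MCA-style inpainting: mask is 1 on valid pixels, 0 on burned ones."""
    smooth = np.zeros_like(image)      # component sparse in the DCT
    piecewise = np.zeros_like(image)   # component sparse in wavelets
    for i in range(n_iter):
        t = t_max * (1.0 - i / n_iter)                  # decreasing threshold
        residual = mask * (image - smooth - piecewise)
        smooth = idctn(soft(dctn(smooth + residual, norm="ortho"), t),
                       norm="ortho")
        residual = mask * (image - smooth - piecewise)
        piecewise = threshold_wavelet(piecewise + residual, t)
    return smooth + piecewise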

Multivariate Data Analysis for the Processing of Signals [pdf], J. Renwick Beattie
Abstract:
Real-world experiments are becoming increasingly complex, requiring techniques capable of tracking this complexity. Signal-based measurements are often used to capture this complexity, where a signal is a record of a sample's response to a parameter (e.g. time, displacement, voltage, wavelength) that is varied over a range of values. In signals, the responses at each value of the varied parameter are related to each other, depending on the composition or state of the sample being measured. Since signals contain multiple information points, they have rich information content but are generally complex to comprehend. Multivariate Analysis (MA) has profoundly transformed their analysis by allowing gross simplification of the tangled web of variation. In addition, MA has the advantage of being much more robust to the influence of noise than univariate methods of analysis. In recent years, there has been a growing awareness that the nature of multivariate methods allows their benefits to be exploited for purposes other than data analysis, such as the pre-processing of signals with the aim of eliminating irrelevant variations prior to analysis of the signal of interest. It has been shown that exploiting multivariate data reduction in an appropriate way can allow high-fidelity denoising (removal of irreproducible non-signals), consistent and reproducible noise-insensitive correction of baseline distortions (removal of reproducible non-signals), accurate elimination of interfering signals (removal of reproducible but unwanted signals), and the standardisation of signal amplitude fluctuations. At present, the field is relatively small, but the possibilities for much wider application are considerable. Where signal properties are suitable for MA (such as the signal being stationary along the x-axis), these signal-based corrections have the potential to be highly reproducible and highly adaptable, and are applicable in situations where the data are noisy or where the variations in the signals are complex. As science seeks to probe datasets in less and less tightly controlled situations, the ability to provide high-fidelity corrections in a very flexible manner is becoming more critical, and multivariate-based signal processing has the potential to provide many solutions.
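As a minimal illustration of the multivariate data-reduction idea (not of the specific methods reviewed in the paper), the sketch below denoises a set of related signals by truncating their principal-component expansion; the retained rank k is an arbitrary illustrative choice.

import numpy as np

def pca_denoise(signals, k=3):
    """signals: (n_samples, n_points) array of related measured signals.
    Reconstruction from the k dominant principal components discards the
    irreproducible (noise) part of the variation."""
    mean = signals.mean(axis=0)
    centered = signals - mean
    # SVD of the centered data matrix; rows of Vt are the principal "spectra".
    U, s, Vt = np.linalg.svd(centered, full_matrices=False)
    # Keep only the k dominant components, assumed to carry the real signal.
    return mean + (U[:, :k] * s[:k]) @ Vt[:k]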

Design of Smart Ion-selective Electrode Arrays based on Source Separation through Nonlinear Independent Component Analysis [pdf], Leonardo T. Duarte and Christian Jutten
Abstract:
The development of chemical sensor arrays based on Blind Source Separation (BSS) provides a promising solution to overcome the interference problem associated with Ion-Selective Electrodes (ISE). The main motivation behind this new approach is to ease the time-demanding calibration stage. While the first works on this problem only considered the case in which the ions under analysis have equal valences, the present work aims at developing a BSS technique that works when the ions have different charges. In this situation, the resulting mixing model belongs to a particular class of nonlinear systems that had never been studied in the BSS literature. In order to tackle this sort of mixing process, we adopt a recurrent network as the separating system. Moreover, concerning the BSS learning strategy, we develop a mutual information minimization approach based on the notion of the differential of the mutual information. The method requires batch operation and, thus, can be used to perform off-line analysis. The validity of our approach is supported by experiments in which the mixing model parameters were extracted from actual data.
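For orientation only: in the equal-valence case mentioned above, the mixing is essentially linear, and a standard ICA algorithm can already separate synthetic electrode responses, as in the sketch below. The paper's nonlinear recurrent separating network and mutual-information learning rule are not reproduced here, and the interference gains are invented.

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n = 2000
sources = np.column_stack([rng.uniform(0.1, 1.0, n),     # ion activities
                           rng.uniform(0.1, 1.0, n)])
A = np.array([[1.0, 0.4],                                 # hypothetical
              [0.3, 1.0]])                                # interference gains
electrodes = sources @ A.T                                # linear mixing

ica = FastICA(n_components=2, random_state=0)
estimated = ica.fit_transform(electrodes)   # sources recovered up to
                                            # scale and permutation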

Unsupervised segmentation of hyperspectral images with spatialized Gaussian mixture model and model selection [pdf], Serge Cohen and Erwan Le Pennec
Abstract:
In this article, we describe a novel unsupervised spectral image segmentation algorithm. This algorithm extends the classical Gaussian Mixture Model-based unsupervised classification technique by incorporating a spatial flavor into the model: the spectra are modeled by a mixture of K classes, each with a Gaussian distribution, whose mixing proportions depend on the position. Using a piecewise constant structure for those mixing proportions, we are able to construct a penalized maximum likelihood procedure that estimates the optimal partition as well as all the other parameters, including the number of classes. We provide a theoretical guarantee for this estimation, even when the generating model is not within the tested set, and describe an efficient implementation. Finally, we conduct numerical experiments on unsupervised segmentation using a real dataset.
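For contrast with the spatialized model, the classical (position-independent) GMM classification that this paper extends can be sketched in a few lines; the BIC-based selection of K below is a crude stand-in for the paper's penalized maximum likelihood criterion.

import numpy as np
from sklearn.mixture import GaussianMixture

def segment_hyperspectral(cube, k_max=8):
    """cube: (height, width, n_bands) hyperspectral image.
    Each pixel spectrum is assigned to one of K Gaussian classes."""
    h, w, b = cube.shape
    spectra = cube.reshape(-1, b)
    models = [GaussianMixture(n_components=k, covariance_type="full",
                              random_state=0).fit(spectra)
              for k in range(1, k_max + 1)]
    best = min(models, key=lambda m: m.bic(spectra))   # crude model selection
    return best.predict(spectra).reshape(h, w)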

Call for papers (reminders)

[pdf]

Motivations (special issue March/April 2014)

With the advent of more affordable, higher-resolution or innovative data acquisition techniques (for instance, hyphenated instrumentation such as two-dimensional chromatography), the need for advanced signal and image processing tools has grown in physico-chemical analysis, together with the quantity and complexity of acquired measurements.

Whether with one-dimensional (signals) or two-dimensional (from hyphenated techniques to standard images) data, processing generally aims at improving quality and at providing a more precise quantitative assessment of measurements of materials and products, to yield insight into or access to information, chemical properties, reactive dynamics or textural properties, to name a few. Although chemometrics embraces everything from experimental design to calibration, more interplay between physico-chemical analysis and generic signal and image processing is believed to strengthen the two disciplines. Indeed, although they strongly differ in background and vocabulary, both specialities share similar values of best practice in carrying out identifications and comprehensive characterizations, be they of samples or of numerical data.

The present call for papers aims at gathering contributions on recent progress and emerging trends pertaining to the improvement of physico-chemical analysis techniques, concerning (but not limited to) the following proposed domains:

Editors:

Many thanks to Thierry Gallouët for redistributing it, to Igor Carron for mentioning it at Nuit Blanche in Around the blogs in 80 hours, and to Harris Georgiou and the GdR ISIS. Short link: http://tinyurl.com/ogst-signal-chemical-analysis