
Anal. Chem. 2006, 78, 4137

Chemometrics

Barry Lavine*,† and Jerry Workman‡

Department of Chemistry, Oklahoma State University, Stillwater, Oklahoma 74078, and Thermo Electron Corporation, Madison, Wisconsin 53711
† Oklahoma State University. ‡ Thermo Electron Corp.

Review Contents
Library Searching
Data Preprocessing and Feature Selection
Calibration
Pattern Recognition
Image Analysis
Literature Cited

This review, the sixteenth of the series, and the fourteenth with the title of “Chemometrics”, covers the most significant developments in the field from January 2004 to December 2005. As in the previous review (A1), breakthroughs and advances in the field are highlighted, and trends within the field are evaluated. The current review is limited to fewer than 200 references, which continues to pose a challenge since the number of citations on chemometrics continues to show steady growth. Over 1270 citations, for example, appear when the terms “chemometric” or “chemometrics” are used as keywords in a Chemical Abstracts search from 2004 through 2005. This comes as no surprise since many areas of chemometrics have been assimilated into such disciplines as informatics, chemoinformatics, bioinformatics, and the like.

During this reporting period, a large number of review articles on chemometrics and applications of chemometrics to fields other than chemistry have been published. A few of the more interesting articles are summarized for the convenience of the readers. Chemometric analysis of the results from analytical laboratory methods provides new possibilities to support diagnostic decisions in laboratory medicine and clinical chemistry. This application area has been the subject of a recent review (A2). Utilization of multivariate statistical methods and optimal combination of the original measurement variables increases the diagnostic value of laboratory assays. Principal component analysis, when combined with other statistical techniques, allows the user to quantify variable interactions and identify hidden structures in complex data sets. Metabolomics, which is a combination of data-rich analytical chemical measurements and chemometrics, has been the subject of a review by Nicholson (A3). The types of analytical data used for sample classification, mainly from NMR and mass spectrometry, offer new and possibly exciting opportunities for the field of chemometrics. Geladi (A4) has reviewed the role of chemometrics in spectroscopy using principal component analysis, PLS, and multivariate image analysis to demonstrate the power of chemometric thinking. The application of image analysis and pattern recognition techniques to the extraction of information from two-dimensional chromatographic experiments has been reviewed by Reichenbach (A5).


Brereton (A6) has discussed the interplay between chemometrics and process analytical technology. The importance of advanced multivariate analysis techniques in data-driven applications such as lead development in drug discovery has been the subject of a recent review (A7). The huge volumes and complex dependencies of data produced by such large-scale experiments have led to a reassessment of the approaches used for data analysis in this field. Opportunities in systems biology for both chemists and chemometricians have also received attention in the analytical literature in the past two years (A8, A9). The role that chemometrics is playing or can play in the design and development of new sensor systems has been the subject of a recent review by Booksh (A10).

The multivariate extraction of information from chemical data drives research in many areas of science and engineering including but not limited to subtopics such as the following: in silico methods, biosensor data analysis, data preprocessing, genetic algorithms, hyperspectral imaging data analysis, image processing, advanced regression methods, calibration transfer between instrument systems and types, mapping approaches, signal processing, data compression and filtering, neural networks, proteomics, spectral searching and matching algorithms, and limit of detection testing. These will be important challenges for multivariate approaches as long as practitioners of chemometrics continue to solve problems that need to be solved as opposed to solving problems that can be solved simply because the software tools make them easy. If chemometrics is to be integrated in problem-solving and to reach its potential as a “data microscope”, probing the information content within complex multivariate data sets and relating it to “real” demonstrable phenomena, it must advance in terms of the capability to solve real-world application challenges. This review will focus on new algorithmic approaches, image analysis, and the new scientific paradigm involving data mining methods.

The “hot” topics for chemometrics include the data mining theme as well as the new paradigm theme for scientific discovery. The new paradigm theme is defined by a methodology where many experiments are performed under known experimental conditions followed by careful chemometric analysis. The analysis is performed using a rich toolbox of chemometric-like methods for analysis of the multidimensional inner relationships found between the variables within the measured or theoretical data. As we have noted in previous installments of this review, the classical scientific experiment has always involved an experimenter assuming that cause-and-effect was simply observable to the trained eye and was in most cases univariate (i.e., one dependent variable changing in proportion to the change in an independent variable). Unfortunately, where multivariate systems are involved, which are more common in reality, the univariate approach is limited to the powers of human observation. As formidable as these have been to experimentalists of the past, they are limited to a very few variables and then cease in power to interpret relationships between variables. Gleaning an understanding of the underlying inner relationships between variables associated with physical and biological phenomena dictates that multivariate phenomenology is closer to reality than univariate relationships for many, if not most, natural phenomena, and certainly for the more complex phenomena of current exploration and interest. Solving such problems involves information management: a glut of experimental (measured) data, some with information content, some with no relevant information, and powerful chemometric tools for extracting information related to cause-and-effect between variables. Thus, there is an acute need for the tools comprising a “data microscope” and information management system.

The work of this new paradigm could also be termed “cause-and-effect information discovery from data streams”. It all sounds elementary and intuitively attractive, but significant experimental and mathematical rigor is required for successful discovery work, since powerful data analysis tools combined with a glut of data produce the “ink blot” effect (the analogy of the ink blot effect is that when observing a random ink blot on paper, one person may see a man with a hat, while another sees a cowboy on a horse, etc.). In other words, if one is not careful, disciplined, and skilled in the art, one can find many relationships within data (defined by high correlation under special, but not general, conditions) that are sufficient to fit one’s own personal presuppositions or perspectives. With enough processing and a little “creative” interpretation, one can derive nearly any conclusion from large quantities of data; thus, one “sees” relationships in the data that do not generally exist. So the challenge is to find real cause-and-effect between variables and derive predictive models that hold under the harsh realities of prescribed and reproducible experimental conditions or for more general described situations. Closed-system modeling is acceptable when predicting within the carefully prespecified confines of the closed system; however, models developed in a closed system cannot be assumed to extrapolate to a universal or even broad regional case. Another problem is rationalizing (in a psychological sense) a series of inner relationships in data that do not exist. This is illustrated in some cases where modeling is ex post facto and where pseudorelationships are derived in the data that are not predictive for slight changes in experimental conditions, this problem being severe in such real-life examples as noninvasive glucose analysis using vibrational spectroscopy (where the net analyte signal is orders of magnitude below the noise level) or in deriving “predictions” of the past from ancient texts. (Note: The phrase “predictions of the past” sounds much like a Yogi Berra-ism.) However, many dollars and much time can be wasted chasing such pseudophenomena when the reasonable interpretation of experimental data dictates no discernible cause-and-effect between variables under a prescribed set of experimental conditions. In summary, caution is warranted and skill is required to derive cause-and-effect-based predictive models.
The new “cause-and-effect information discovery from data streams” paradigm is being used somewhat in areas of in silico computational methods, such as QSAR (quantitative structure-activity relationship) or SAR (structure-activity relationship) and more modern variants. In these methods, a series of known data relative to molecular structural descriptors is applied to efficacy, binding efficiencies, toxicity, fluorescence response, oncogenesis, teratogenesis, mutagenesis, and other molecular “performance” indicators (as derived through experimental evidence). Rather than synthesizing and testing literally millions of compounds, given the data from representative molecular systems, one can use in silico methods to virtually predict molecular structure-function properties and then synthesize the final or target candidate compounds for experimental verification and fine-tuning using in vitro or in vivo experiments. This is a form of multivariate experimentation using computers as data microscopes to study inner relationships in data for scientifically verifiable phenomena. However, the tools are still not optimized for data exploration and could use continuous improvement strategies.

LIBRARY SEARCHING
Library searching is an important tool in the identification of unknowns and the qualitative analysis of mixtures. It is also crucial in data-driven research and the new paradigm theme for scientific discovery. During this review period, many citations on applications of library searching were found. Many but not all of these applications involved proteomics or genomics. For the purpose of this review, the ensuing discussion will emphasize novel procedures or the enhancement of existing methods as well as unusual applications and new methodology.

The concept of similarity and its implementation in the context of libraries for chemoinformatics has been the subject of a recent review (B1). For effective searching, similarity and dissimilarity scores should be constructed using both local and global data constructs. The recognition of chemical structure information from spectral data continues to challenge chemometricians. An algorithm based on binary substructure descriptors has been developed by Varmuza (B2) for evaluating the structural similarity between an unknown and a hit list of compounds. This method has been shown to improve the performance of the search for IR and MS databases for both similarity and identity searches. Search filters utilizing molecular descriptors to characterize the druglike properties of commercially available chemical compounds were used to search a library of 2.7 million compounds for the purpose of identifying potential leads (B3). One filter was based on the modeling of aqueous solubility, and the other was based on the modeling of Caco-2 passive membrane permeability. The use of visual data mining tools to extract information from large chemical libraries has also been reported in the literature. Of particular interest is InfVis, which allows the user to represent multidimensional data sets using 3D glyph information visualization techniques (B4). Wold (B5) has shown that balanced sampling of data from large, complex, and unbalanced data repositories can ensure that representative information will be extracted by latent structure (PLS) modeling.

Library searching is an essential element of large-scale proteomics. Currently, four approaches are used to determine a match between a mass spectrum and a sequence. These approaches are summarized in reviews by Sadygov (B6) and Yoshino (B7). The success of any search is dependent on the quality of the spectra in the library, the infrastructure used to store the mass spectra, and the search results. To assess spectral quality, Yates (B8) has developed a filter that eliminates poor-quality spectra from the library, which significantly improves the throughput and robustness of any mass spectral search. To facilitate the manipulation of individual library entries, Yates (B9) has developed a series of unified text formats for storing spectral data and search results that are compact, easily parsed by machines, and readily accessible by data mining algorithms. Smith (B10) has focused on the liquid chromatographic component of LC/MS/MS, using peptide retention time prediction models to improve the quality of mass spectral matches. Kaliszan (B11) has used a neural network to investigate the data generated by the widely used protein identification program Sequest. An appropriately trained neural network has proved to be a high-throughput tool able to process large amounts of MS/MS data and to correctly classify peptide MS/MS spectra as good or bad, a task that is usually performed manually. Yates (B12) has taken a different approach to the problem of matching: he has applied hypothesis-driven models to database results to validate protein identifications.

DATA PREPROCESSING AND FEATURE SELECTION
The objective of data preprocessing and feature selection is the development or use of methods to enhance the data with respect to chemically or physically relevant information. In the course of the two years covered by this review, work in this area has mainly involved wavelets. Because of their localization properties, wavelets are preferred to the Fourier transform in applications involving data compression, resolution, and signal enhancement of instrumental data. Brown (C1) has written a review of wavelets from the perspective of data fusion. The advantages of using wavelets for modern multivariate calibration and transfer are demonstrated. A new introductory text on wavelet theory and practice has recently been published (C2). Karger (C3) has developed a new procedure for identifying peptides from tandem mass spectra that utilizes wavelets to denoise the spectra prior to fragment ion selection. After application of the new procedure, he reported an increase of 33% in the amount of information extracted from the LC/MS analysis. A wavelet-neural network signal processing method demonstrated ∼10-fold improvement over traditional signal processing methods for the detection limit of various N and P compounds from the output of a thermionic detector attached to a gas chromatograph (C4). A simple and fast method to measure the hydroxyl number of polyols was developed using near-infrared spectroscopy (C5). The method utilized the wavelet transform to denoise the spectra prior to the application of PLS to develop a suitable calibration curve. This was an especially challenging problem because the samples were measured with a narrow aperture, which caused attenuation of the NIR signal and increased the spectral noise in the collected spectra. The wavelet transform has also been shown to be effective for normalizing cDNA microarray data by removing the intensity-dependent bias across different slides within a group or between experimental groups (C6). Booksh (C7) has shown that wavelet transforms, when applied to hyperspectral image cubes, can dramatically reduce their storage space, facilitating the use of PCA and other data mining algorithms for extracting information from data image cubes.
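
The wavelet denoising step that recurs in several of the studies cited above (C3-C5) can be sketched in a few lines. The sketch below, which assumes the PyWavelets package and soft thresholding with the universal threshold, is an illustration only; the wavelet family, decomposition level, and threshold rule are arbitrary choices, not the settings used in any of the cited papers.

```python
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=4):
    """Denoise a 1D signal by soft-thresholding its wavelet detail coefficients."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Estimate the noise standard deviation from the finest-scale details (MAD estimate).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    # Universal (Donoho-Johnstone) threshold.
    thresh = sigma * np.sqrt(2.0 * np.log(signal.size))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: signal.size]

# Synthetic example: a Gaussian band buried in noise.
x = np.linspace(0.0, 1.0, 512)
clean = np.exp(-((x - 0.5) ** 2) / 0.002)
noisy = clean + 0.05 * np.random.default_rng(0).normal(size=x.size)
smoothed = wavelet_denoise(noisy)
```

In practice the denoised spectra would then be passed on to PLS or another calibration routine, as in (C5).
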
Galvao (C8) has developed a strategy for wavelet filter optimization that aims at directly minimizing the prediction error of a PLS model that uses wavelets as its predictor variables.

Feature selection has also been an active area of research in the past two years. For averaging techniques such as PLS and K-NN, feature selection is crucial since signal is averaged with noise over a large number of variables, with a loss of discernible signal amplitude when noisy features have not been removed from the data. Feature selection is a difficult problem because of collinearities among the measurement variables and the presence of data points with high leverage (C9, C10). Using a method called recursive feature elimination (C11), the performance of support vector machines and probabilistic neural networks can be enhanced. A new method for the selection of spectral variables using linear regression or neural networks and based on an external validation of the calibration model has been developed (C12). Recently, a genetic algorithm for feature selection and classification has been developed that selects features that optimize the separation of the classes in a plot of the two or three largest principal components of the data. Because the largest principal components capture the bulk of the variance in the data, the features chosen by the GA contain information primarily about differences between the classes in a data set. The principal component analysis routine embedded in the fitness function of the GA acts as an information filter, significantly reducing the size of the search space since it restricts the search to feature sets whose principal component plots show clustering on the basis of class. The pattern recognition GA integrates aspects of artificial intelligence and evolutionary computations to yield a smart one-pass procedure for feature selection, classification, and prediction. The advantages of this procedure have been demonstrated in two recently published studies (C13, C14).

It is inevitable that relationships exist among sets of conditions used to generate data and the patterns that result. The existence of these complicating relationships is an inherent part of fingerprint-type data. Problems can arise in the interpretation of PCA/PLS results when one fails to take into account these complicating factors (C15). This has been an especially challenging problem in fiber probe diffuse reflectance NIR spectroscopy when a small change in the position of the sample produces spectral artifacts that are as large as the physical or chemical effect that we seek to measure. The impact of these confounding relationships on qualitative and quantitative analysis using PCA and PLS has been recently evaluated through experimental and theoretical simulations (C16). To desensitize calibration models to baseline variations, wavelength shifts, or trace contaminants, a variation of the Lagrange multiplier equation for a regression subject to constraints has been formulated (C17). Distorted copies of the original spectra can also be prepared in a counterbalanced manner and added to the training set to ensure desensitization. The advantage of this approach is that accidental correlations between the added distortions and the y-block variables will not be introduced during the calibration.
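
As a concrete illustration of wrapper-style variable selection of the kind discussed earlier in this section (cf. the recursive feature elimination approach of C11), the sketch below ranks simulated spectral variables with a linear support vector classifier using scikit-learn. The data, the kernel, and the number of retained variables are placeholders chosen for illustration, not values taken from the cited work.

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Simulated two-class data set: 60 samples x 200 spectral variables,
# with class-correlated signal planted in channels 40-44.
X = rng.normal(size=(60, 200))
y = rng.integers(0, 2, size=60)
X[y == 1, 40:45] += 1.0

# Recursively discard the weakest variables (10 at a time) until 20 remain.
selector = RFE(SVC(kernel="linear"), n_features_to_select=20, step=10)
selector.fit(X, y)
print(np.flatnonzero(selector.support_))  # indices of the retained channels
```
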


CALIBRATION
Multivariate calibration refers to the process of relating, correlating, or modeling analyte concentration or the measured value of a physical or chemical property to a measured response, e.g., near-IR spectra of wheat or multicomponent mixtures. It is the fastest growing area of chemometrics, as evidenced by the large number of papers that have appeared in the literature in the past two years on PLS or PCR regression, which has become the de facto standard for multivariate calibration because of the quality of the calibration models produced and the ease of their implementation due to the availability of user-friendly software. The great diversity in the application of these calibration methods indicates that multivariate calibration has been adopted by research groups far removed from the field of chemometrics, a consequence of the interest that chemometrics is attracting from other fields such as genomics or proteomics. Applications of multivariate calibration have become commonplace, and only a few reviews on methods of multivariate calibration (D1) have been published in the literature during the past two years. PLS regression has become routine in pharmaceutical analysis, with conventional multivariate modeling having been successfully applied to the analysis of the active principles in tablets, syrups, and drops (D2, D3).

Improving the methodology used for multivariate calibration continues to be an active area of research in chemometrics. Marbach (D4) has developed a new method for multivariate calibration that combines the best features of K-matrix calibration and P-matrix calibration. The advantage offered by this method is that standards with laboratory reference values are not necessary to develop a calibration. Independent component analysis, a technique related to projection pursuit, can be used to obtain interpretable latent variables in multivariate regression. Two calibration algorithms utilizing independent component analysis have been developed (D5) and tested using simulated data. When compared with PCR or PLS, the advantages of the independent component analysis-based methods were latent variables that were chemically interpretable and predictions that were accurate. Detection of outliers is often crucial to ensure a successful calibration. Geladi (D6) has proposed a calibration method for PLS that utilizes replicates and duplicates for detection of outliers and has demonstrated the efficacy of this method by predicting the moisture content of biofuels using NIR data. Massart (D7) has developed a new method to update PLS calibration models based on the Delaunay triangulation method. The updating leads to the expansion of the original calibration set or to the creation of a new calibration model. Poppi (D8) has developed a new feature selection method that allows unimportant PLS regression coefficients to be identified and deleted from the model without loss of prediction capability. Indahl (D9) has implemented a modification of the PLS algorithm that generates calibration models with fewer and more interpretable components when good linear predictions can be made. Van Espen (D10) has developed a new robust PLS algorithm called robust M-regression that outperforms existing methods for robust PLS regression. Massart (D11) has combined PLS with boosting to minimize overfitting. Although the number of PLS components in the model does not have to be specified, the user must provide the shrinkage value and the number of iterations to be performed. Ergon (D12) has demonstrated that orthogonal signal correction can also be performed using an ordinary PLS model as the starting point and applying the appropriate similarity transformation.
This approach has several advantages, including the ability to tune the influence of orthogonal signal correction to the rotation of the eigenvectors used for the model.

The removal of undesirable background effects from spectral data can enhance the performance of PLS regression models. Feudale (D13) has discussed a novel signal preprocessing technique that combines the local and multiscale properties of the wavelet prism with the global filtering capability of orthogonal signal correction to enhance the signal-to-noise ratios of NIR spectra. Esteban-Diez (D14) has also investigated an implementation of the wavelet transform that allows for signal correction and data compression to be performed prior to PLS regression. Orthogonal signal correction can also be used to transfer PLS calibration models among NIR spectrometers. A comparative study (D15) of second-derivative, multiplicative scatter correction, finite impulse response filtering, slope and bias correction, model updating, and orthogonal signal correction was conducted to determine which method was best for model transfer. Orthogonal signal correction and model updating were determined to be the best. Both methods produced robust PLS models. Orthogonal signal correction has also been used as a preprocessing technique to remove information unrelated to the target variables using a constrained principal component analysis for improving the prediction accuracy of calibration models. This has been demonstrated for the determination of nitroaniline isomer mixtures by UV/visible absorption spectrometry (D16).

Model selection is an important issue when developing calibration models using latent variables. It is important to select the appropriate number of latent variables to build accurate and precise PLS or PCR models. Commonly used metrics to select the number of latent variables are based on the cross-validated predicted error sum of squares. A new approach has been developed by Thomas (D17) based on the use of nonparametric statistical methods to analyze the cross-validated prediction errors of individual observations across models incorporating varying numbers of latent variables in order to identify the model with the fewest latent variables that provides precision indistinguishable from more complex PLS or PCR models. A sample selection strategy based on the successive projection algorithm for selection of variables has been used to select a subset of samples that are representative of the data set and are minimally redundant (D18). The selection takes into account both the X and Y blocks, thereby tailoring the choice of samples according to the spectral profile of the chemical species involved in the calibration. Calibration models developed from these selected samples contained the same amount of information as PLS models built with the full calibration set.
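
A minimal sketch of how the number of PLS latent variables might be chosen from cross-validated prediction error, in the spirit of the model selection discussion above, is given below using scikit-learn's PLSRegression on simulated data; the component grid, the error metric, and the data are illustrative assumptions, not the specific procedure of (D17).

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
# Simulated calibration set: 80 "spectra" (300 channels) generated from one analyte.
y = rng.uniform(0.0, 1.0, size=80)
pure = np.exp(-((np.arange(300) - 150.0) ** 2) / 400.0)
X = np.outer(y, pure) + 0.01 * rng.normal(size=(80, 300))

# Cross-validated RMSE as a function of the number of latent variables.
rmse = []
for n_comp in range(1, 11):
    scores = cross_val_score(PLSRegression(n_components=n_comp), X, y, cv=5,
                             scoring="neg_root_mean_squared_error")
    rmse.append(-scores.mean())
best = int(np.argmin(rmse)) + 1  # simplest rule: take the minimum-RMSE model
print(best, rmse[best - 1])
```
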

Applications of PLS continue to abound in the literature, and use of PLS has become commonplace. Noteworthy or novel applications of PLS are summarized below. Vibrational spectroscopy has long been an area where PLS methods are embraced, and it is no surprise that many of the applications appeared in analyses using near-IR and mid-IR spectroscopy. Tran (D19) used spectroscopic measurements and PLS to develop a sensitive and accurate method to determine the composition of fullerenes in samples. Orthogonal signal correction and PLS have been used to calibrate data generated in open-path FT-IR experiments (D20). The antioxidant capacity of fruit extracts has been determined using FT-IR spectroscopy and PLS (D21), and a good calibration model for antioxidant capacity was obtained. Partial least-squares regression was used to develop a calibration model for deconvoluting the spectral overlap of five fluorophores (D22). In this study, the effects of single and combined types of noise were investigated. Inferential sensors have played an important role in control strategies to improve the quality of petrochemical products. Due to unforeseen technical problems, the data sets often contain missing data. Using PLS, it is possible to develop a good calibration of the data even if 20% of the data is missing (D23).

Analysis of glucose in various media has been the subject of attention from many research groups. The low levels of glucose found in physiological samples and the high background present a special challenge to multivariate calibration. Nevertheless, the ability to measure glucose noninvasively in human subjects remains an active area of research (D24-D30) because success in this area would revolutionize the treatment of diabetes. The challenges from the perspective of spectroscopy and chemometrics in tackling this problem have been the subject of a review by Arnold and Small (D31).

Nonlinear multivariate regression methods experienced substantial growth in the past two years. A simple and efficient method for modeling nonlinear spectral responses was developed using artificial neural networks (D32). In this model, the spectral data were subjected to the net analyte signal calculation, and the norm of the net analyte signal vectors was used as the input of a multilayer feed-forward neural network trained by back-propagation to process the data. The performance of the model was evaluated using simulated and real data. Booksh (D33) used locally weighted regression to calibrate micro hot plate conductometric sensors. The conductometric sensor arrays have a markedly nonlinear profile. Several nonlinear regression techniques including locally weighted regression, alternating conditional expectations, and projection pursuit were used to calibrate the responses to analyte concentration. Least-squares support vector machines, which are a relatively new multivariate calibration method, were compared to other approaches for removal of nonlinear spectral interferences from NIR data and were found to perform better (D34). A problem with support vector machines is defining their set of tuning parameters. Buydens (D35) has formulated an optimization scheme based on genetic algorithms and simplex optimization to determine the values for these parameters in an automated fashion. Weighted support vector machines, which involve modification of the risk function of the standard support vector machine, were able to model polymerization processes that are highly nonlinear and have a large number of input variables. Case studies for poly(vinyl butyrate) suggest that weighted support vector machines perform better than standard support vector machines (D36).
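
For nonlinear calibration problems such as those just described, kernel regression methods are one option. The sketch below fits an RBF-kernel support vector regression to a deliberately saturating sensor response using scikit-learn; all parameter values and the simulated data are placeholders rather than the settings used in (D33-D36).

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
# Simulated sensor responses that saturate at high analyte concentration.
conc = rng.uniform(0.0, 5.0, size=100)
response = 1.0 - np.exp(-conc) + 0.02 * rng.normal(size=100)

# Inverse calibration: predict concentration from the measured response.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(response.reshape(-1, 1), conc)
print(model.predict(np.array([[0.3], [0.6], [0.9]])))
```
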

PATTERN RECOGNITION
The overall goal of pattern recognition is classification. Developing a classifier from spectral or chromatographic data may be desirable for any number of reasons, including source identification, sample quality, and the detection of a specific analyte, to name a few. The classification step is often accomplished using one or several techniques that are now fairly well-established, including statistical discriminant analysis, principal component analysis, K-NN, SIMCA, regularized discriminant analysis, and hierarchical clustering. Classification of data is an important subject in chemometrics, as evidenced by the large number of citations appearing in the Chemical Abstracts database on pattern recognition. Most of the chemical pattern recognition literature in the past two years has focused on novel and not so novel applications. Nevertheless, several articles were published on new classification methods.

Brown (E1) has proposed decision pathway modeling, which decomposes the classification problem into simpler binary discrimination tasks that are assembled in a single hierarchical architecture for multigroup classification. To minimize error propagation through the hierarchical architecture, the classification of new samples was directed using dynamic pathway selection. Brereton (E2) has investigated the classification of pyrolysis gas chromatography-mass spectrometry data by support vector machines. By using the appropriate kernels, classifiers of diverse complexity can be developed, including those able to generate nonlinear decision boundaries. In the data sets investigated, support vector machines performed better than discriminant analysis. A new approach to chemometric modeling based on orthogonal projections to latent structures, which takes into account shifting peaks, was investigated. By using combined back-scaled loading plots and variable weights, Trygg (E3) was able to show that peak position variation can be successfully handled and provide information on physicochemical variations in metabonomic data sets that otherwise would go undetected.

A novel chemical taste sensor that mimics the behavior of the human gustatory system has been developed (E4). The taste sensor consists of an array of electrochemical sensors. The detected signals are introduced into a two-phase radial basis neural network. The first phase of the network quantifies the amount of taste-causing substances in food samples from the responses of the electrodes. These results are introduced into the second phase of the neural network, which correlates the amount of substances with the overall taste. The final output is scored on a scale of 1-5 for each of the five basic tastes sensed by the human gustatory system. The network scores were similar to human scores for 30 drink varieties. Furthermore, the network could successfully predict the interaction of different tastes.

Classification methods based on PLS have also been developed. Weighted penalized PLS (E5) builds a classifier by combining multiple data sets and then weighting the individual data sets depending on their relevance to the current study. By borrowing information from the other data sets, the performance of the algorithm is superior to that of a classifier built on only a single data set. PLS and ridge penalized logistic regression have been combined to develop a classification method for data sets with few observations and many predictor variables (E6).

One of the more interesting applications of pattern recognition methods reported during this recent review period is the detection of targeted analytes in the environment. Brown (E7, E8) has developed a novel implementation of PLS to automatically detect dimethyl methylphosphonate vapor from remotely sensed hyperspectral passive IR image data. Prior knowledge of the target signature is used to extract analyte information directly from the data. Various unknown and interfering signatures are implicitly modeled by the PLS algorithm, eliminating the step of performing a separate background subtraction.
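
A bare-bones illustration of the kernel-based classifiers discussed in this section (E2, E9): a support vector machine whose decision boundary complexity is controlled by the choice of kernel. The data set is a synthetic placeholder, and the settings are illustrative rather than those of the cited studies.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
# Two simulated classes of 40-variable "fingerprints" with a small mean shift.
X = np.vstack([rng.normal(0.0, 1.0, size=(50, 40)),
               rng.normal(0.6, 1.0, size=(50, 40))])
y = np.repeat([0, 1], 50)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# An RBF kernel permits curved decision boundaries; kernel="linear" would give a hyperplane.
clf = SVC(kernel="rbf", gamma="scale", C=1.0).fit(X_train, y_train)
print(clf.score(X_test, y_test))
```
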
Brereton (E9) has demonstrated that support vector machines can be used to detect
hydrocarbons in soils from mass spectral data. The advantage of using support vector machines to classify data is that only a small fraction of the training set samples need to be labeled, i.e., have a class tag.

Many of the reported applications of pattern recognition techniques used cluster analysis to organize the data. Visualization of the clusters is often crucial to understanding the structure of a data set. Kibbey (E10) describes a novel approach to visualize clustered SAR data using tree maps and heat maps, which provides simultaneous representation of cluster members along with their associated assay values. A two-step clustering method based on principal component analysis has been developed to visualize information contained in large historic data sets used in process control (E11). Process states are first classified into modes corresponding to quasi-steady states and transitions using a multivariate algorithm. Principal component analysis-based similarity measures are then used in the second phase to compare the different modes and different transitions and to cluster them. The effectiveness of the proposed method was demonstrated using simulated data.

Kohonen self-organizing maps have also been used to visualize multivariate chemical data in the past two years. The classification of photochemical reactions was investigated with molecular descriptors used to characterize both the reactants and the products (E12). Classifications were made based on differences between the descriptors of the products and reactants. Using an independent test set to validate training set results, the classification success rate for the reactions by type was 90%. Kohonen neural network maps were also used to differentiate bacterial communities by their carbon source utilization profile obtained from spectroscopic data (E13). The self-organizing maps were able to visualize a large volume of data and were easier to interpret than plots obtained by principal component analysis. Lavine (E14) has also shown that Kohonen self-organizing maps have advantages over principal component analysis: data preprocessing is usually minimal, and outliers are less of a problem since they only affect one map unit and its neighborhood. These advantages were demonstrated in two studies. In the first study, Raman spectroscopy and self-organizing maps were used to differentiate six common household plastics by type for recycling purposes. The second study involved the development of a potential method to differentiate acceptable lots from unacceptable lots of Avicel using diffuse reflectance near-infrared spectroscopy and self-organizing maps.

Another interesting application of pattern recognition methods reported in the literature is the sensor array, which allows for the identification, classification, and in some cases quantification of organic compounds or ions. Unlike traditional chemical sensing, an individual sensor is not highly selective toward the analyte of interest, but the pattern of the array’s response can be used to differentiate the targeted moiety. When coupled with pattern recognition techniques, arrays of broadly cross-reactive sensors can provide a man-made implementation of an olfactory or gustatory system. The applications of sensor arrays to vapor, solution, and food analyses have been the subject of several recent reviews (E15-E17).
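
The PCA-plus-clustering workflow that recurs throughout the sensor-array studies cited in this section can be sketched as follows. The response matrix is simulated, and the number of components, the linkage method, and the cluster count are arbitrary illustrative choices rather than details of any cited study.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.decomposition import PCA

rng = np.random.default_rng(4)
# Simulated responses of a 12-element sensor array to 3 analytes, 5 replicates each.
patterns = rng.normal(size=(3, 12))
responses = np.repeat(patterns, 5, axis=0) + 0.1 * rng.normal(size=(15, 12))

# Project the response patterns onto the first two principal components.
scores = PCA(n_components=2).fit_transform(responses)

# Ward hierarchical clustering of the score vectors into three groups.
labels = fcluster(linkage(scores, method="ward"), t=3, criterion="maxclust")
print(labels)
```
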
Because of the large number of studies on applications of pattern recognition methods to sensor array data, only the more interesting and novel applications are cited here. Perhaps, the most
interesting study reported in the literature in the past two years on sensor arrays involved a colorimetric array. When coupled with hierarchical clustering, the colorimetric sensor array for detection of organics in water, which consisted of hydrophobic dyes on a hydrophobic membrane, could readily distinguish subtle structural features, e.g., primary versus secondary versus cyclic amines, as well as differentiate Coca Cola from Pepsi and diet Coca Cola (E18). The feasibility of discriminating between the different rinses in a household washing machine was investigated using a voltammetric electronic tongue and pattern recognition techniques (E19). The rinses from 20 machine wash runs with four different prerequisites were investigated, with all of the rinses correctly classified when SIMCA pattern recognition was applied to the data. A quartz crystal microbalance sensor array was developed to evaluate volatile degradation compounds found in used engine oil (E20). The headspace of new and used petroleum products was sampled by the sensor array. Hierarchical clustering of the data revealed patterns that were characteristic of new and used oil. Furthermore, the new oils clustered into groups separated by mileage. A nanostructured sensor array, which consisted of thin-film assemblies of alkanethiolate-monolayer capped gold nanoparticles, was developed for detection of volatile organic compounds (E21). Each array element displayed linear responses to the vapor concentration. Using principal component analysis, a satisfactory classification of the data was achieved for a set of vapor responses. An electronic nose has been investigated as a potential on-line monitor for hemodialysis (E22). Blood samples were analyzed using an array of 14 conducting polymer sensors. Principal component analysis and hierarchical clustering were used to evaluate the data, and each demonstrated the ability to distinguish between pre- and postdialysis patients. An electronic tongue and pattern recognition techniques have been used to differentiate various brands of orange juice, milk, and tonic (E23, E24). Methodology has been developed by Zellers (E25, E26) for determining the limits of recognition for sensor arrays, which are defined as the minimum concentrations at which reliable individual vapor recognition can be achieved. Monte Carlo simulation techniques were used to formulate the necessary probabilistic statements.

The combination of sensor arrays and pattern recognition techniques is an active area of research. Many research groups have directed their attention toward the development of sensor arrays in which neural networks play a vital role. Schiffman (E27) has investigated the appropriateness of using the Levenberg-Marquardt neural network training algorithm to recognize odor patterns associated with an electronic nose. The odor recognition system used was composed of a Karhunen-Loeve-based preprocessing unit and a feed-forward neural network. The results of the experiments indicate that feed-forward neural networks provide high classification success rates. Zanchettin (E28) used a wavelet filter to preprocess odor signals and a multilayer perceptron algorithm for classification and odor recognition. Zuppa (E29) has developed a self-organizing map neural network methodology to improve the classification of odorants sensed by a multisensor system.
The self-organizing map was able to recognize the response pattern of specific odorants, which allowed it to adapt to changes in the input data due to drift effects by a repetitive self-training process. A novel two-stage data analysis procedure based on principal component analysis, PLS discriminant analysis, and artificial neural networks was used to classify juices using data from an ion-selective electronic tongue (E30). A key step in the study of any sensor array data set is the preprocessing of the data. The discrete wavelet transform was used to compact voltammograms from an electronic tongue before an artificial neural network was trained by a Bayesian regularization algorithm. The proposed preprocessing procedure was superior to more conventional treatments of downsampling the voltammogram or extracting features from the voltammogram using principal component analysis (E31).

IMAGE ANALYSIS
Chemical imaging is a combination of molecular spectroscopy and digital imaging. Data sets generated by chemical imaging are large and multivariate and require significant processing. Many applications of image analysis have focused on pharmaceutical analysis in the past two years. A review of the field with emphasis on the contributions made by chemometrics has recently been published by Tauler (F1). Reich (F2) has published a review on the contributions made by near-infrared spectroscopy and chemometrics to imaging for both qualitative and quantitative analyses. A direct comparison of univariate and multivariate methods for improving the quality of spectral images has been undertaken, and the results of the study demonstrate that multivariate analysis produces significantly better quality chemical images than univariate approaches (F3). Roggo (F4) demonstrated the potential of multispectral imaging NIR spectrometers and principal component analysis to detect impurities on the surface of tablets and ascertain the reason for the dissolution problem of intact tablets. Geladi (F5) has demonstrated the potential of NIR spectroscopy and hyperspectral imaging as a diagnostic tool to detect counterfeit drugs. The advantages of applying multivariate statistical methods such as principal component analysis to XPS spectral image data include chemical component determination and signal-to-noise enhancement (F6, F7). The potential of multispectral-based fluorescence imaging to detect on-line fecal contamination of cantaloupes was demonstrated with principal component analysis, which, unlike the simple band ratios previously used to develop a classifier from the image data, was not plagued by problems of false positives (F8). Results from principal component analysis of TOF-SIMS data comprising positive and negative ion spectral images on the same region of the sample were intuitive and fully described the sample (F9).
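
Most of the multivariate image analyses cited above begin by unfolding the image cube into a pixel-by-wavelength matrix before applying PCA or curve resolution. A minimal sketch of that unfold-decompose-refold step is shown below on a simulated cube; the cube dimensions, the embedded feature, and the number of components are placeholders.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(5)
# Simulated hyperspectral cube: 64 x 64 pixels, 120 wavelength channels.
rows, cols, channels = 64, 64, 120
cube = rng.normal(size=(rows, cols, channels))
cube[20:40, 20:40, 30:50] += 2.0  # a spatially localized spectral feature

# Unfold to (pixels x wavelengths), decompose, and refold the scores into score images.
unfolded = cube.reshape(rows * cols, channels)
pca = PCA(n_components=3)
score_images = pca.fit_transform(unfolded).reshape(rows, cols, 3)

# Each slice score_images[:, :, k] is a chemical-contrast map for one component.
print(score_images.shape, pca.explained_variance_ratio_)
```
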

Many of the citations in the past two years on applications of multivariate analysis to spectral imaging focused on the use of self-modeling curve resolution to convert spectral images into chemical images that show the spatial location of various chemical components. However, there are problems in processing IR spectral images due to large pixel-to-pixel baseline variations. Brown (F10) has developed a method to minimize baseline interference using fast Fourier filtering in both the spectral and spatial domains. This methodology has been successfully demonstrated on a cross-sectional sample of rabbit aorta containing plaque. Maeder (F11) has investigated the use of fixed-size window evolving factor analysis to assess the compositional complexity of spectral images using pharmaceutical products composed of emulsion-based formulations as his test system. Booksh (F12) has developed a novel approach for rapid multivariate curve resolution, which involves applying local models to limited data segments where the rank of each can be more readily determined. Gemperline (F13) has investigated the use of nonnegativity constraints in multivariate curve resolution to obtain more reliable estimates of pure spectra. Turner (F14) has developed a multivariate technique called spectral identity mapping that reduces the dependence of spectral image analysis on the use of a priori information about the system under investigation. The proposed method, which provides improved chemical image contrast, is closely related to spectral angle mapping and cosine correlation analysis.

Barry K. Lavine is an Associate Professor of Chemistry at Oklahoma State University in Stillwater, OK. He has published approximately 90 papers in chemometrics and is on the editorial board of several journals including the Journal of Chemometrics, Microchemical Journal, and Chemoinformatics. He is the Assistant Editor of Chemometrics for Analytical Letters. Lavine’s research interests encompass many aspects of the applications of computers in chemical analysis including pattern recognition, multivariate curve resolution, and multivariate calibration using genetic algorithms and other evolutionary techniques.

Jerome (Jerry) Workman, Jr. is Director of Research & Technology for Molecular Spectroscopy & Microanalysis at the Thermo Electron Corporation, Madison, WI. This chemometrics review article constitutes the third in this series he has coauthored. In his career, Workman has focused on molecular and electronic spectroscopy, process analysis, and chemometrics and has received many key awards for his work. Over the past twenty-five years, he has published widely, including numerous tutorials, scientific papers and book chapters, individual text volumes, software programs, and inventions.

LITERATURE CITED
(A1) Lavine, B. K.; Workman, J. Anal. Chem. 2004, 76 (12), 3365-3371.
(A2) Veuthey, J.-L.; Rudaz, S. Switz. Chim. 2005, 59 (6), 326-330.
(A3) Lindon, J. C.; Holmes, E.; Nicholson, J. K. Curr. Opin. Mol. Ther. 2004, 6 (3), 265-272.
(A4) Geladi, P.; Sethson, B.; Nystroem, J.; Lillhonga, T.; Lestander, T.; Burger, J. At. Spectrosc. 2004, 59B (9), 1347-1357.
(A5) Reichenbach, S. E.; Ni, M.; Kottapalli, V.; Visvanathan, A. Chemolab 2004, 71 (2), 107-120.
(A6) Brereton, R. G. J. P. A. T. 2005, 2 (3), 8-11.
(A7) Fischer, H. P.; Heyse, S. Curr. Opin. Drug Discovery Dev. 2005, 8 (3), 334-346.
(A8) Cottingham, K. Anal. Chem. 2005, 77 (9), 197A-200A.
(A9) Wentzell, P. D.; Karakach, T. Analyst 2005, 130 (10), 1331-1336.
(A10) Vogt, F.; Dabe, B.; Cramer, J.; Booksh, K. Analyst 2004, 129 (6), 492-502.
LIBRARY SEARCHING
(B1) Monev, V. Match 2004, 51, 7-38.
(B2) Varmuza, K. In Progress in Chemometrics Research; Pomerantsev, A. L., Ed.; Nova Science Publishers: Hauppauge, NY, 2005.
(B3) Baurin, N.; Baker, R.; Richardson, C.; Chen, I.; Foloppe, N.; Potter, A.; Jordan, A.; Roughley, S.; Parratt, M.; Greaney, P.; Morley, D.; Hubbard, R. E. J. Chem. Inf. Comput. Sci. 2004, 44 (2), 643-651.
(B4) Oellien, F.; Ihlenfeldt, W.-D.; Gasteiger, J. J. Chem. Inf. Model. 2005, 45 (5), 1456-1467.
(B5) Wold, S.; Josefson, M.; Gottfries, J.; Linusson, A. J. Chem. 2004, 18 (3-4), 156-165.
(B6) Sadygov, R. G.; Cociorva, D.; Yates, J. R. Nat. Methods 2004, 1 (3), 195-202.
(B7) Yoshino, K.; Oshiro, N.; Tokunaga, C.; Yonezawa, K. J. Mass Spectrom. Soc. Jpn. 2004, 52 (3), 106-129.
(B8) Bern, M.; Goldberg, D.; McDonald, W.; Yates, J. R. Bioinformatics 2004, 20 (1), i49-i54.
(B9) McDonald, W.; Tabb, D. L.; Sadygov, R. G.; MacCoss, M. J.; Venable, J.; Graumann, J.; Johnson, J.; Cociorva, D.; Yates, J. R. Rapid Commun. Mass Spectrom. 2004, 18 (18), 2162-2168.
(B10) Strittmatter, E. F.; Kangas, L. J.; Petritis, K.; Mottaz, H. M.; Anderson, G. A.; Shen, Y.; Jacobs, J. M.; Camp, D. G.; Smith, R. D. J. Proteome Res. 2004, 3 (4), 760-769.
(B11) Baczek, T.; Bucinski, A.; Ivanov, A. R.; Kaliszan, R. Anal. Chem. 2004, 76 (6), 1726-1732.


(B12) Sadygov, R.; Liu, H.; Yates, J. R., III. Anal. Chem. 2004, 76 (6), 1664-1671.
DATA PREPROCESSING AND FEATURE SELECTION
(C1) Liu, Y.; Brown, S. D. Anal. Bioanal. Chem. 2004, 380 (3), 445-452.
(C2) Chau, F.-T.; Liang, Y.-Z.; Gao, J.; Shao, X.-G. Chemometrics: From Basics to Wavelet Transform; John Wiley & Son: New York, 2004.
(C3) Rejtar, T.; Chen, H. S.; Andreev, V.; Moskovets, E.; Karger, B. L. Anal. Chem. 2004, 76 (20), 6017-6028.
(C4) Loren, I.; Daley, P. F.; Burnham, A. K. Anal. Chem. 2005, 77 (13), 4051-4057.
(C5) Cho, S.; Chung, H.; Lee, Y. Microchem. J. 2005, 80 (2), 189-193.
(C6) Wang, J.; Ma, J.; Li, M. D. Comb. Chem. High Throughput Screening 2004, 7 (8), 783-791.
(C7) Vogt, F. B.; Banerji, S.; Booksh, K. J. Chemom. 2004, 18 (7-8), 350-362.
(C8) Galvao, R. K. H.; Jose, G. E.; Dantas-Filho, H. A.; Araujo, M. C. U.; Cirino da Silva, E.; Paiva, H. M.; Saldanha, T. C. B.; Nunes de Souza, E. S. O. Chemolab 2004, 70 (1), 1-10.
(C9) Chong, II-G.; Jun, C. H. Chemolab 2005, 78 (1-2), 103-112.
(C10) Mager, P.; Sanchez, L. Curr. Comput. Aided Drug Des. 2005, 1 (2), 163-177.
(C11) Li, H.; Ung, C. Y.; Yapp, C. W.; Xue, Y.; Li, Z. R.; Cao, Z. W.; Chen, Y. Z. Chem. Res. Toxicol. 2005, 18 (6), 1071-1080.
(C12) Benoudjit, N.; Cools, E.; Meurens, M.; Verleysen, M. Chemolab 2004, 70 (1), 47-53.
(C13) Lavine, B. K.; Davidson, C. E.; Rayens, W. T. Comb. Chem. High Throughput Screening 2004, 7 (2), 115-131.
(C14) Karasinski, J.; Andreescu, S.; Sadik, O. A.; Lavine, B. K.; Vora, M. N. Anal. Chem. 2005, 77 (24), 7941-7949.
(C15) Holmes, D. S.; Mergen, A. E. AAPS 2005, 7 (1), E106-E117.
(C16) Andersson, M.; Svensson, O.; Folestad, S.; Josefson, M.; Wahlund, K.-G. Chemolab 2005, 75 (1), 1-11.
(C17) DiFoggio, R. J. Chem. 2005, 19 (4), 203-215.
CALIBRATION
(D1) Kalivas, J. Anal. Lett. 2005, 38 (14), 2259-2279.
(D2) Damiani, P. C.; Escandare, G. M.; Olivieri, A. C.; Goicoechea, H. C. Curr. Pharm. Anal. 2005, 1 (2), 145-154.
(D3) Codgill, R. P.; Anderson, C. A. J. N. I. R. Spec. 2005, 13 (3), 119-131.
(D4) Marbach, R. J. Near Infrared Spectrosc. 2005, 13 (5), 241-254.
(D5) Gustafsson, M. G. J. Chem. Inf. Model. 2005, 45 (5), 1244-1255.
(D6) Lillhonga, T.; Geladi, P. Anal. Chim. Acta 2005, 544 (1-2), 177-183.
(D7) Jin, L.; Xu, Q. S.; Smeyers-Verbeke, J.; Massart, D. L. Appl. Spectrosc. 2005, 59 (9), 1125-1135.
(D8) Lima, S.; Mello, C.; Poppi, R. J. Chemolab 2005, 76 (1), 73-78.
(D9) Indahl, U. J. Chem. 2005, 19 (1), 32-44.
(D10) Sernees, S.; Croux, C.; Filzmoser, P.; Van Espen, P. Chemolab 2005, 79 (1-2), 55-64.
(D11) Zhang, M. H.; Xu, Q. S.; Massart, D. L. Anal. Chem. 2005, 77 (5), 1423-1431.
(D12) Ergon, R. J. Chem. 2005, 19 (1), 1-4.
(D13) Feudale, R. N.; Liu, Y.; Woody, N. J. Chem. 2005, 19 (1), 55-63.
(D14) Esteban-Diez, I.; Gonzalez-Saiz, J. M.; Pizarro, C. Anal. Chim. Acta 2004, 515 (1), 31-41.
(D15) Woody, N.; Feudale, R. N.; Myles, A. J.; Brown, S. D. Anal. Chem. 2004, 76 (9), 2595-2600.
(D16) Ghasemi, J.; Niazi, A. Talanta 2005, 65 (5), 1168-1173.
(D17) Thomas, E. V. J. Chem. 2004, 17 (12), 653-659.
(D18) Heronides, A.; Galvao, R. K. H.; Araujo, M. C. U.; Cirino da Silva, E.; Saldanha, T. C. B.; Jose, G. E.; Pasquini, C.; Raimundo, I. M.; Rohwedder, J. J. R. Chemolab 2004, 72 (1), 83-91.
(D19) Tran, C. D.; Grishko, V. I. Spectrochim. Acta, Part A 2005, 62A (1-3), 38-41.
(D20) Zhang, L.; Zhang, L.; Yan, W. J. J. Environ. Sci. Health, Part A: Toxic/Hazard. Subst. Environ. Eng. 2005, 40 (5), 1069-1079.
(D21) Lam, H.; Proctor, A.; Howard, L.; Cho, M. J. J. Food Sci. 2005, 70 (9), C545-C549.
(D22) Pomerleau-Dalcourt, N.; Weersink, R.; Lilge, L. Appl. Spectrosc. 2005, 59 (11), 1406-1414.
(D23) Lopes, V.; Menezes, J. C. Chemolab 2005, 78 (1-2), 1-10.
(D24) Arnold, M. A.; Small, G. W.; Xiang, D.; Qui, J.; Murhammer, D. W. Anal. Chem. 2004, 76 (9), 2583-2590.
(D25) Chen, J.; Arnold, M. A.; Small, G. W. Anal. Chem. 2004, 76 (18), 5405-5413.
(D26) Amerov, A. K.; Chen, J.; Small, G. W. Anal. Chem. 2005, 77 (14), 4587-4594.


(D27) Amerov, A. K.; Chen, J.; Small, G. W.; Arnold, M. A. Proc. SPIE-Int. Soc. Opt. Eng. 2004, 5330, 101-111 (Complex Dynamics, Fluctuations, Chaos, and Fractals in Biomedical Photonics).
(D28) Amerov, A. K.; Small, G. W.; Arnold, M. A. Proc. SPIE-Int. Soc. Opt. Eng. 2004, 6007, 60070U/1-60070U/10 (Smart Medical and Biomedical Sensor Technology III).
(D29) Yonzon, C. R.; Haynes, C. L.; Zhang, X.; Walsh, J. T.; Van Duyne, R. P. Anal. Chem. 2004, 76 (1), 78-85.
(D30) Stuart, D. A.; Yonzon, C. R.; Zhang, X.; Lyandres, O.; Shah, N. C.; Glucksberg, M. R.; Walsh, J. T.; Van Duyne, R. P. Anal. Chem. 2005, 77 (13), 4013-4019.
(D31) Arnold, M. A.; Small, G. W. Anal. Chem. 2005, 77 (17), 5429-5439.
(D32) Hemmateenejad, B.; Safarpour, M. A.; Mohammad Mehranpour, A. Anal. Chim. Acta 2005, 535 (1-2), 275-285.
(D33) Dable, B. K.; Booksh, K. S.; Cavicchi, R.; Semancik, S. Sens. Actuators, B 2004, B101 (3), 284-294.
(D34) Thissen, U.; Uestuen, B.; Melssen, W. J.; Buydens, L. M. C. Anal. Chem. 2004, 76 (11), 3099-3105.
(D35) Uestuen, B.; Melssen, W. J.; Oudenhuijzen, M.; Buydens, L. Anal. Chim. Acta 2005, 544 (1-2), 292-305.
(D36) Lee, D. E.; Song, J.-H.; Song, S.-O.; Yoon, E. S. Ind. Eng. Chem. Res. 2005, 44 (7), 2101-2105.
PATTERN RECOGNITION
(E1) Myles, A. J.; Brown, S. D. J. Chem. 2004, 18 (6), 286-293.
(E2) Zomer, S.; Brereton, R.; Carter, J. F.; Eckers, C. Analyst 2004, 129 (2), 175-181.
(E3) Cloarec, O.; Dumas, M. E.; Trygg, J.; Craig, A.; Barton, R. H.; Lindon, J. C.; Nicholson, J. K.; Holmes, E. Anal. Chem. 2005, 77 (2), 517-526.
(E4) Ishihara, S.; Ikeda, A.; Citterio, D.; Maruyama, K.; Hagiwara, M.; Suzuki, K. Anal. Chem. 2005, 77 (24), 7908-7915.
(E5) Huang, X. P.; Han, X.; Chen, Y.; Miller, L. W.; Hall, J. Comput. Biol. Chem. 2005, 29 (3), 204-211.
(E6) Fort, G.; Lambert-Lacrois, S. Bioinformatics 2005, 21 (7), 1104-1111.
(E7) Feudale, R. N.; Brown, S. D. Proc. SPIE-Int. Soc. Opt. Eng. 2004, 5269, 243-251 (Chemical and Biological Point Sensors for Homeland Defense).
(E8) Feudale, R. N.; Brown, S. D. Chemolab 2005, 77 (1-2), 75-84.
(E9) Zomer, S.; Del Nogal Sanchez, M.; Brereton, R. G.; Perez Pavon, J. L. J. Chem. 2004, 18 (6), 294-305.
(E10) Kibbey, C.; Calvert, A. J. Chem. Inf. Mod. 2005, 45 (2), 523-532.
(E11) Srinivasan, R.; Wang, C.; Ho, W. K.; Lim, K. W. Ind. Eng. Chem. Res. 2004, 43 (9), 2123-2139.
(E12) Zhang, Q.-Y.; Aires-de-Sousa, J. J. Chem. Inf. Model. 2005, 45 (2), 1775-1783.
(E13) Leflaive, J.; Cereghino, R.; Danger, M.; Lacroix, G.; Ten-Hage, L. J. Microchem. Methods 2005, 62 (1), 89-102.
(E14) Lavine, B. K.; Davidson, C. E.; Westover, D. J. J. Chem. Inf. Comput. Sci. 2004, 44 (3), 1056-1064.
(E15) Lewis, N. Acc. Chem. Res. 2004, 37 (9), 663-672.
(E16) Vlasov, Y.; Legin, A.; Rudnitskaya, A.; Di Natale, C.; D’Amico, A. Pure Appl. Chem. 2005, 77 (11), 1965-1983.
(E17) Deisingh, A. K.; Stone, D. C.; Thompson, M. Int. J. Food Sci. Technol. 2004, 39 (6), 587-604.
(E18) Zheng, C.; Suslick, K. S. J. Am. Chem. Soc. 2005, 127 (10), 11548-11549.
(E19) Ivarsson, P.; Johansson, M.; Hoejer, N.-E.; Krantz-Ruelcker, C.; Winquist, F.; Lundstroem, I. Sens. Actuators, B 2005, B108 (1-2), 851-857.
(E20) Sepcic, K.; Josowicz, M.; Janata, J.; Selby, T. Analyst 2004, 129 (8), 1070-1075.
(E21) Han, L.; Shi, X.; Wu, W.; Kirk, F. L.; Luo, J.; Wang, L.; Mott, D.; Cousineau, Lim, S.; Lu, S.; Zhong, C.-J. Sens. Actuators, B 2005, B106 (1), 431-441.
(E22) Fend, R.; Bessant, C.; Williams, A. J.; Woodman, A. C. Biosens. Bioelectron. 2004, 19 (12), 1581-1590.
(E23) Ciosek, P.; Augustyniak, E.; Wroblewski, W. Analyst 2004, 129 (7), 639-644.
(E24) Ciosek, P.; Brzozka, Z.; Wroblewski, W. Sens. Actuators, B 2004, 103 (1-2), 76-83.
(E25) Hsieh, M.-D.; Zellers, E. T. Anal. Chem. 2004, 76 (7), 1885-1895.
(E26) Hsieh, M.-D.; Zellers, E. T. J. Occup. Environ. Hyg. 2004, 1 (3), 149-160.
(E27) Kermani, B. G.; Schiffman, S. S.; Nagle, H. T. Sens. Actuators, B 2005, B110 (1), 13-22.
(E28) Zanchettin, C.; Ludermir, T. B. Int. J. Neural Syst. 2005, 15 (1-2), 137-149.
(E29) Zuppa, M.; Distante, C.; Sicilano, P.; Persaud, K. C. Sens. Actuators, B 2004, B98 (2-3), 305-317.
(E30) Ciosek, P.; Brzozka, Z.; Wroblewski, W.; Martinelli, E.; Di Natale, C.; D’Amico, A. Talanta 2005, 67 (3), 590-596.

(E31) Moreno-Baron, L.; Cartas, R.; Merkoci, A.; Arben, A.; Alegret, S.; Guiterrez, J.; Leija, L.; Hernandez, P.; Munoz, R.; del Valle, M. Anal. Lett. 2005, 38 (13), 2189-2206.
IMAGE ANALYSIS
(F1) de Juan, A.; Tauler, R.; Dyson, R.; Marcolli, C.; Rault, M.; Maeder, M. TrAC, Trends Anal. Chem. 2004, 23 (1), 70-79.
(F2) Reich, G. Adv. Drug Delivery Rev. 2005, 57 (8), 1109-1143.
(F3) Sasic, S.; Clark, D. A.; Mitchell, J. C.; Snowden, M. J. Analyst 2004, 129 (11), 1001-1007.
(F4) Roggo, Y.; Edmond, A.; Chalus, P.; Ulmschneider, M. Anal. Chim. Acta 2005, 535 (1-2), 79-87.
(F5) Rodionova, O.; Houmoller, L. P.; Pomerantsev, A. L.; Geladi, P.; Burger, J.; Dorofeyev, V. L.; Arzamastsev, A. P. Anal. Chim. Acta 2005, 549 (1-2), 151-158.
(F6) Peebles, D. E.; Ohlhausen, J. A.; Kotula, P. G.; Hutton, S.; Blomfield, C. J. Vac. Sci. Technol. 2004, 22 (4), 1579-1586.
(F7) Artyushkova, K.; Fulghum, J. E. Surf. Interface Anal. 2004, 36 (9), 1304-1313.

(F8) Vargas, A. M.; Kim, M.; Tao, Y.; Lefcourt, A. M.; Chen, Y. R.; Luo, Y.; Song, Y.; Buchanan, R. J. Food Sci. 2005, 70 (8), E471-E476.
(F9) Smentkowski, V. S.; Keenan, M. R.; Ohlhausen, J. A.; Kotula, P. G. Anal. Chem. 2005, 77 (5), 1530-1536.
(F10) Bu, D.; Huffman, S. W.; Seelenbinder, J. A.; Brown, C. W. Appl. Spectrosc. 2005, 59 (5), 575-583.
(F11) de Juan, A.; Maeder, M.; Hancewicz, T.; Tauler, R. Chemolab 2005, 77 (1-2), 64-74.
(F12) Dable, B. K.; Marqurdt, B. J.; Booksh, K. S. Anal. Chim. Acta 2005, 544 (1-2), 71-81.
(F13) Jaumot, J.; Gemperline, P. J.; Stang, A. J. Chem. 2005, 19 (2), 97-106.
(F14) Turner, J. F.; Zhang, J.; O’Connor, A. Appl. Spectrosc. 2004, 58 (11), 1308-1317.
