
Statistical and Mathematical Methods in Analytical Chemistry

P. S. Shoenfeld and J. R. DeVoe, National Bureau of Standards, Washington, D.C. 20234

Significant changes have occurred since the last review on this topic (C1). This is probably because the computer can now be used conveniently in the analytical laboratory at reasonable cost, which has opened the possibility of using complex numerical data processing routines. However, it is apparent that even though many analytical chemists have access to such facilities, we are only at the threshold of realizing the importance of statistical and mathematical techniques that have been reasonably well understood in the field of applied mathematics for decades. This can be partially understood because many of the concepts that are important to our field require significant study before their importance can be fully realized. Moreover, with the myriad of statistical and mathematical techniques available to the analytical chemist, careful study is required to select the technique that will provide the most accurate result from the analytical data. This review covers the period October 1971 to January 1976 and is rather limited in its scope. The coverage is almost exclusively of work written in English. In view of the literature explosion, the use of computer searching was considered mandatory (see Table I for the list of key words). The efficiency of such a search is essentially unknown. Although the application of pattern recognition techniques (many aspects of which will be described in this review) to literature searching is not itself analytical chemistry, it proves to be a very important part of it, since duplication of research effort is always to be avoided. In view of the relatively all-inclusive and broad coverage given by the authors of the last review, this review will focus only on those subjects which have experienced a considerable increase in activity, in addition to one or two subjects which

the authors consider to be promising for the future. In contrast to the previous coverage, only those applications which have a direct and demonstrated application to analytical chemistry will be presented here. Table I also lists the most important journals that were scanned (a majority of papers came from Anal. Chem.).

Attempts to classify the applications of statistical and numerical methods to analytical chemistry resulted in much frustration, caused to a significant extent by the nonuniformity of nomenclature, but also by a lack of fundamental understanding (in our opinion) of the basic principles that underlie such apparently diverse applications as pattern recognition, correlation techniques, curve fitting, optimization procedures, etc. From one standpoint it might be considered that pattern recognition is the all-inclusive term for the understanding of all data produced in analytical chemistry, because those who use the quantitative results from a chemical analysis draw a conclusion from them based upon a "determined" (which may not be rigorously based) phenomenological effect correlating with the analytical data. As a hypothetical example, suppose the presence of an infrared absorption in C, H, N, O, S compounds at 623.01 cm-1 always identifies a cancer-producing material. Such a general term has recently been superseded by one called "chemometrics" (K1), which presumably will be the title of the next review in this journal.

Of course, the basic purpose of providing quantitative analytical data on the (for example) composition of a material is to relate some desired property of the material to its composition. One has variables of composition which interact in complex physicochemical ways in accordance with some specific functional mathematical form which is often unknown

ANALYTICAL CHEMISTRY, VOL. 48, NO. 5, APRIL 1976


Table I. List of Key Words Used in a Computer Search of the Tapes Produced by the Chemical Abstracts Service (a)

Mathematical analysis; Statistics; Pattern recognition; Data reduction; Experimental design; Curve fitting; Spectral resolution; Deconvolution; Factor analysis; Principal components; Feature selection; Fourier transforms; Information theory; Simplex; Signal processing; Peak fitting; Digital filtering; Least squares

List of Journals: Analytical Chemistry; Applied Spectroscopy; Journal of Chemical Education; Journal of Physical Chemistry; Nuclear Instruments and Methods

(a) The information retrieval service used was Lockheed Retrieval Services. In no case does the identification of trade names imply recommendation or endorsement by the National Bureau of Standards, nor does it imply that the equipment or service identified is necessarily the best available for the purpose.

The functional form is often unknown, or known only in part, with empirical components used to explain the desired property. Combine this uncertainty in the mathematical model of interaction with the uncertainty in the values of the variables, and one has a task that requires solution by a host of techniques described as multivariate analysis. It is a testimony to the practitioners in this field that they have accomplished so much with so little to work with. Of an identical nature is the evaluation of a chemical analysis system which produces the quantitative data on a given material. Variables which affect the result interact in an unknown mathematical functional form, and a variety of similar techniques are used to provide insight into the physics and chemistry of the measurement system that is used. Multivariate analysis and other statistical techniques are used to determine the importance (or effect) of certain variables on the end result by regression techniques, statistical experimental design, principal components, and factor analysis. Other approaches look at what might be considered a subset of variables in the measurement system: for example, the instrumental parameters, the sampling problem, or the interpretation of the system output signal with time (or other variables), e.g., the spectral data. These subsets are functionally interdependent, but much valuable information can be derived about the measurement system as a whole by studying the separate parts.
The following paragraphs describe in a brief and undoubtedly biased manner some of the most active applications.

SPECTRAL RESOLUTION

By a spectrum, we generally mean the output of a spectroscopic instrument, i.e., a function of one variable (wavelength, mass, etc.) which would ideally consist of a series of narrow peaks. This ideal function is always distorted by the measurement system. By spectral resolution we mean the removal of such distortion and the recovery of the ideal spectrum. An interesting brief summary of some of the techniques used in the resolution of spectral data is given by Horlick (H2). An effect which is frequently present is a smearing of the spectrum due to the convolution of the ideal spectrum with an instrument function. In such a situation we have

    F̄(x) = ∫ I(x − τ) F(τ) dτ

where I is the instrument function, F is the original spectrum, and F̄ is the smeared spectrum. If an estimate of I is known, several numerical methods are available for deconvolution (determining F given F̄ and I). There are always difficulties, due in large part to the finite record length. An excellent discussion of the general problem and some methods of solution is given by Wertheim (W1) and others (D1, P2). The Fourier transform (FT) of the convolution of two functions is the product of their individual Fourier transforms. This result is known as the convolution theorem, and


thereby reduces deconvolution to division of Fourier transforms. Actually, it is not in general possible to compute meaningfully the inverse FT of the quotient of two FTs. Thus some frequency band limitation must be introduced. The question of how best to do this is related to smoothing and to the power spectra of both the signal and noise. Wertheim discusses this in some detail. Some recent applications of this technique, combined with a description of the problems indicated above, are given for gas chromatography (K2), emission spectroscopy (H2), and polarography (H3). An example of an analogous procedure using Laplace rather than Fourier transforms is given in Ref H1. A good exposition of several uses of Fourier transforms in spectral data handling is given in Ref H6. Topics discussed include deconvolution, smoothing, and differentiation of spectra.

Van Cittert's (B1) iterative method is also frequently used for deconvolution. Successive approximations are obtained by starting with the observed spectrum, convolving the last approximation with the instrument function, taking the difference between that convolution and the observed spectrum, and using it as a correction which is added to the last approximation to obtain the new approximation. This is discussed by Wertheim (W1) and others (S1, E1, E2). Computational experiments on the effect of noise in this method are discussed in Ref 21.

Defining convolution for discrete data by replacing the integral with a sum leads to a system of linear equations which must be solved for deconvolution. This involves inversion of a very large matrix and requires exorbitant amounts of computer time. However, such methods were used in Refs P3 and K4.

The Kth moment of a function f(x) is defined by

    μK = ∫ x^K f(x) dx

and the moment generating function corresponding to f(x) is

    m(s) = ∫ e^(sx) f(x) dx.

The moments are related to the derivatives of m(s) by μK = m^(K)(0). The formal similarity of m(s) to a Laplace transform leads to a convolution theorem like that for Laplace and Fourier transforms. Consequently, the first N moments of F may be determined from a system of 2N linear equations using the moments of F̄ and I. This method was discussed and applied to problems involving the fitting of a multiexponential decay curve and its deconvolution from a lamp function in a series of papers by Isenberg, Dyson, Schuyler, Hanson, and Mullooly (S2, D2, I4, I5). Some recent improvements are given in Ref I4, together with an excellent exposition of the entire procedure. The effect of counting errors in this method is analyzed in Mullooly's appendix to Ref I4. A similar method is used for removing the effect of the lamp function, without attempting to estimate decay parameters, in Ref W4. A good discussion of deconvolution in fluorescence decay problems is given in Ref K6.

Hardy and Young (H5) derived a deconvolution method, similar to the method of moments, in which a point in the deconvolved spectrum is determined from the corresponding point in the observed spectrum by application of a simple polynomial function whose coefficients are related to the moments of the instrument function. This method appears attractive for real-time use and was evaluated with that end in mind in Ref D1.

Another approach to fitting multiexponential decay is given in Ref G1. A weighted least-squares fit is performed, making full use of any known constraints. The results are then evaluated by testing the residuals for randomness using the autocorrelation function. If the residuals fail to be random, the procedure is repeated with a new model, generally one with more terms.

Many papers were written on problems of curve fitting, peak location, and distinguishing between nearly coincident peaks.
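Returning to deconvolution, the Van Cittert iteration described above reduces to a few lines of code. The following is a minimal NumPy sketch on a noise-free synthetic doublet; the peak positions, kernel width, and iteration count are illustrative assumptions, not values taken from any of the cited papers.

```python
import numpy as np

def van_cittert(observed, instrument, n_iter=50):
    """Van Cittert iterative deconvolution (sketch).

    Start from the observed spectrum; at each step convolve the current
    estimate with the (normalized) instrument function and add the
    residual (observed - smeared) back as a correction.
    """
    estimate = observed.copy()
    for _ in range(n_iter):
        smeared = np.convolve(estimate, instrument, mode="same")
        estimate = estimate + (observed - smeared)
    return estimate

# Synthetic two-peak "ideal" spectrum, smeared by a Gaussian instrument function
x = np.linspace(0, 10, 200)
ideal = np.exp(-((x - 4.0) / 0.15) ** 2) + 0.6 * np.exp(-((x - 6.0) / 0.15) ** 2)
kernel = np.exp(-(np.linspace(-1, 1, 41) / 0.3) ** 2)
kernel /= kernel.sum()                      # unit-area instrument function
observed = np.convolve(ideal, kernel, mode="same")
recovered = van_cittert(observed, kernel, n_iter=200)
```

With noise present, the iteration amplifies high-frequency error and must be stopped early or combined with smoothing, which is precisely the difficulty discussed by Wertheim (W1).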
A novel approach was presented in Kelly and Horlick's excellent paper (K3), using Bayes' theorem to calculate directly the a posteriori probability that the height of the secondary peak in a doublet is greater than zero. Cross-correlation methods for locating and fitting peaks were considered by several authors. Such an approach was taken to the resolution of Gaussian peaks by Brouwer and Jansen (B3). They work with the derivatives of the observed spectrum and the assumed profile in order to reduce the effect of background. This paper also contains a useful review of other peak-finding methods. A good description of the use of cross correlation to detect specific spectral features is given in (H7). Many papers appeared on nonlinear least-squares fitting

James R. DeVoe is Chief of the Special Analytical Instrumentation Section, Analytical Chemistry Division of the National Bureau of Standards. He received his B.S. (1950) degree from the University of Illinois, his M.S. (1952) from the University of Minnesota, and his Ph.D. (1959) from the University of Michigan. His research interests include photoelectron, Mössbauer effect, emission and absorption spectroscopy, general spectroscopy as applied to analytical measurements, laser characteristics and their application as excitation sources, laboratory automation, and numerical methods in analytical chemistry.

Peter S. Shoenfeld is a mathematician with the Special Analytical Instrumentation Section, Analytical Chemistry Division of the National Bureau of Standards. He received the B.S. (1960) degree in engineering physics from Lehigh University, the M.S. (1968) in mathematics from Howard University, and the Ph.D. (1974) in mathematics from the University of Maryland. His research interests include laboratory automation and the numerical processing and reduction of analytical chemistry data.

procedures. Only a few will be mentioned. Wilson and Swartzendruber (W6) wrote a general package using the method of linearization, which they used to fit a series of Lorentzians to Mössbauer data. In this procedure one starts with an estimate of the undetermined parameters and iteratively makes successive estimates. Linear least squares and a linearized Taylor expansion of the fitting function about the last estimate are used to obtain the next estimate. An application of the same technique to cholesteryl nonanoate data is given in Ref W7. Roberts (R2) fits sums of Gaussians to chromatographic data by adjusting estimates until the residuals are within an acceptable bound, as a time-saving alternative to a full least-squares analysis. Other chromatographic applications of least-squares fitting are Refs M1, C4, and G2. The moments of chromatograms are frequently of direct physical interest. These may be calculated via numerical Fourier or Laplace transforms (Y1, G3). Moment analysis of nonlinear gas chromatography by computer simulation is discussed in Ref Y2. Problems of fitting combinations of Gaussians and Lorentzians to ir spectra are analyzed carefully in a recent paper by Vandeginste and De Galan (V1). Least-squares determination of absorption line frequencies is discussed in Ref 52. In a series of papers by Meites and others (C6, M11, M12), the relative success of a series of nonlinear least-squares fits is used in classifying chemical processes.

The techniques of multivariate analysis are finding increased application in analytical chemistry, both for spectral analysis and for pattern recognition and classification. Such techniques are concerned with analyzing an n × p data matrix, X. In spectroscopic applications, one might have n spectra (observations) each containing values for p channels (variables); X_ij would be the value for the jth channel in the ith spectrum, and each spectrum can be represented as a point in p-dimensional space.
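The linearization procedure described above can be sketched for a single Gaussian peak: linearize the model about the current parameter estimate and solve a linear least-squares problem for the correction. This is a generic Gauss–Newton illustration on synthetic, noiseless data; the model, starting values, and data are assumptions for illustration, not the W6 package.

```python
import numpy as np

def gauss_newton_fit(x, y, p0, n_iter=50):
    """Fit y ≈ A * exp(-((x - mu)/w)**2) by the linearization
    (Gauss-Newton) method for the parameters p = (A, mu, w)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        A, mu, w = p
        e = np.exp(-((x - mu) / w) ** 2)
        # Jacobian of the model with respect to (A, mu, w)
        J = np.column_stack([
            e,
            A * e * 2 * (x - mu) / w ** 2,
            A * e * 2 * (x - mu) ** 2 / w ** 3,
        ])
        r = y - A * e                      # current residuals
        dp, *_ = np.linalg.lstsq(J, r, rcond=None)
        p = p + dp                         # corrected estimate
    return p

x = np.linspace(0, 10, 101)
p_true = np.array([2.0, 5.0, 1.2])         # height, location, width
y = p_true[0] * np.exp(-((x - p_true[1]) / p_true[2]) ** 2)
fit = gauss_newton_fit(x, y, p0=[1.5, 4.5, 1.0])
```

As the review notes for the full packages, a reasonable starting estimate matters: far from the solution the linearization can diverge.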
The techniques of principal components and factor analysis are concerned with finding an m-dimensional subspace (m < p) in which the spectra can be approximately represented. The principal components technique yields that m-dimensional subspace which gives a least-squares best fit to the data when the data are projected onto it. The m normalized eigenvectors of the matrix XᵀX with the largest corresponding eigenvalues are taken as basis vectors and are called principal components. These will be orthogonal. The total squared error resulting from the representation will equal the sum of the remaining eigenvalues. If means are subtracted from the data


for each variable, XᵀX will be a scalar multiple of the sample covariance matrix. If the data are also normalized to have a common variance for each variable, XᵀX will be a scalar multiple of the sample correlation matrix. The adjustments made before computing XᵀX depend on the type of fit desired. Principal components may be viewed as a multivariate fitting procedure. Its use does not necessarily require statistical assumptions or lead to statistical conclusions. It is used in factor analysis and in the dimensional compression of multivariate data. Sometimes it is used merely to determine the effective dimension of a data set. In the literature of pattern recognition, the representation of data in terms of principal components is often referred to as the Karhunen-Loève transform.

Although factor analysis uses principal components as a tool, its aims are subtly different. Factor analysis attempts to express the observations as linear combinations of a new set of variables, called factors, smaller in number than the original set, in such a way that the residuals are uncorrelated. This leads to the conclusion that dependence on common factors "explains" the correlation between observed variables. Principal components, on the other hand, only asks that the sum of the squared residuals be small. However, principal components are often applied both for finding candidate factors and for determining the number of factors to be sought.

Lawton and Sylvestre's well-known paper on self-modeling curve resolution (L5) used principal components to attack the problem of determining the shapes of two overlapping functions from an observed set of additive mixtures. A unique solution obtains when the overlap is incomplete, i.e., each function is zero at some point where the other is nonzero. Applications arise in chromatography and spectrophotometry (O2) in analyzing multicomponent systems in situations where different samples have unknown but different relative concentrations.
In a related paper (L4) on self-modeling nonlinear regression, curves of the form y = θ0 + θ1 g[(x − θ2)/θ3] are treated. It is assumed that one has available a set of curves with a common "shape function", g, but differing parameters θi. A procedure is provided for estimating both the shape and the parameters. Typically, g might be a peak shape function, θ0 the baseline, and θ1, θ2, and θ3 the peak height, location, and full-width at half-maximum, respectively. Applications to spectrophotometric curves are given. In a third paper (S7), the two techniques are combined in chemical kinetic applications.

Principal components were applied to determining the presence of a second chromatographic peak component in GC-MS data in Ref O5. Other chromatographic applications are discussed in Ref M10. An application to determination of the number of absorbing components in ir spectra taken at different concentrations is given in Ref B5. An application to analytical chemistry method-comparison studies without referee methods is given in Ref C7.
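The principal-components machinery used in these applications can be sketched directly from its definition: eigenvectors of XᵀX ordered by eigenvalue, with the residual squared error of a rank-m representation equal to the sum of the remaining eigenvalues. The two-component synthetic spectra below are an assumed example, chosen so that the effective dimension of the data set is exactly two.

```python
import numpy as np

def principal_components(X, m):
    """Return the m principal components (eigenvectors of X^T X with the
    largest eigenvalues) and the total squared error of the rank-m
    representation (the sum of the remaining eigenvalues)."""
    evals, evecs = np.linalg.eigh(X.T @ X)     # ascending eigenvalues
    order = np.argsort(evals)[::-1]            # sort descending
    evals, evecs = evals[order], evecs[:, order]
    return evecs[:, :m], evals[m:].sum()

# n = 20 spectra of p = 50 channels, each a mixture of two peak profiles
rng = np.random.default_rng(0)
chan = np.arange(50)
s1 = np.exp(-((chan - 15) / 4.0) ** 2)
s2 = np.exp(-((chan - 35) / 4.0) ** 2)
conc = rng.uniform(0.2, 1.0, size=(20, 2))     # unknown relative concentrations
X = conc @ np.vstack([s1, s2])                 # each row is one observed spectrum
V, err = principal_components(X, m=2)          # err ~ 0: effective dimension is 2
```

Projecting the spectra onto V (the Karhunen-Loève representation) compresses each 50-channel spectrum to two coordinates with negligible squared error, which is how the number of independently varying components is determined in practice.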

CHARACTERIZATION AND EVALUATION OF THE MEASUREMENT PROCESS

This subject was quite adequately treated in the last review (C1), and there is little point in repeating it here. There have been a few approaches that began to be discussed in the chemical analysis literature some five years ago which need to be emphasized because of their apparent increase in interest. The multivariate technique called factor analysis (described in the section on spectral resolution above), which was used by Wernimont (W2) as a procedure to evaluate the performance of spectrophotometers, is being used for a variety of other studies, such as the selection of liquid phases in chromatography (W5). Chemical equilibria can be evaluated by assuming that a Beer's law relation holds between absorbance and concentration. From an array of spectral data at different concentrations one can estimate the number of independently varying components; their individual contributions to the total variance of the measurement; the molar absorptivities of the individual components; and the absorbance spectra of each individual component, while requiring only initial estimates of the equilibrium constants and the stoichiometry of the chemical constituents (K5, B2). There appears to have been great interest in evaluating the precision of spectrophotometers since the last review (I1, I2, L1, R1, I6). No new techniques have been described, but


the sources of imprecision seem to be isolated to variability in source intensity, wavelength adjustment, sample positioning, and detector response. Particularly gratifying is the interest of the clinical chemist in this subject (P5, W3).

Certain of the fundamental processes for the statistical treatment of data have been described in a particularly interesting and practical way. Currie warns that precision estimates must take into consideration all forms of variability, not just those due to counting statistics, in low-level photon or nuclear radiation detection (C2). Another important area that appears to be often overlooked is the confusing lack of standardization in nomenclature for detection limits (C3). This reference describes a fundamental approach to the determination of detection limits. It is receiving acceptance (F1) and needs reemphasizing. Possibly because such limits necessarily imply a binary decision (there is or is not a substance detected in the analyzed sample), it may be more important that we all agree on a term "level of quantitation", which might be defined as n times the standard deviation of the mean above zero net signal (C3). In addition, a very useful study of the best methods for reporting numerical data and experimental procedures has been done at NBS by Garvin (G5). The minicomputer in the laboratory is making it possible for many analysts to utilize classical statistical tests for the first time and to discover new methods that increase the power of their analyses. For example, interferences can be determined in ion-selective electrode studies (I3), and error propagation can occur if calibration curves are not properly constructed (L2). Finally, statistical sampling, which is one of the most important aspects of all chemical analysis, appears to be often overlooked, and the lack of references during this period reinforces that belief.
A current problem associated with environmental protection is the meaningful sampling of particulate matter and the appropriate correlation of trace analysis with particle size (H4).
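The detection-limit nomenclature discussed above (C3) can be illustrated with a small sketch for the simplest case of a net signal with known, Gaussian blank scatter. The numerical factors (1.645 for a 5% one-sided risk, twice that for the detection limit with equal false-positive and false-negative risks, and 10σ for the quantitation level) are the conventional choices and are assumptions of this illustration, not prescriptions from the review.

```python
def detection_limits(sigma_blank, k=1.645):
    """Decision, detection, and quantitation levels on the net-signal
    scale, given the standard deviation of the blank.  Assumes Gaussian
    blank noise with known sigma; k = 1.645 gives a 5% one-sided risk."""
    L_C = k * sigma_blank        # critical (decision) level: "detected?"
    L_D = 2 * k * sigma_blank    # detection limit (alpha = beta = 5%)
    L_Q = 10 * sigma_blank       # level of quantitation (n = 10 convention)
    return L_C, L_D, L_Q

L_C, L_D, L_Q = detection_limits(sigma_blank=3.0)
```

For counting measurements, where the blank variance must itself be estimated, the expressions acquire additional terms, which is exactly why a standardized nomenclature matters.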

OPTIMIZATION TECHNIQUES

It has only been in the last few years that optimization techniques have been applied to many problems of analytical chemistry. The techniques are often borrowed from operations research, which deals with the application of mathematics to a variety of managerial decision problems. The most well-known application is the use of experimental design (Latin squares, factorial design, etc.) coupled with analysis of variance. Very little activity has been detected in this most important field; in fact, Rubin et al. (R3) pointed out this lack and described the proper way to implement such an experimental design.

Although there are many optimization techniques that could be used, one approach being used in analytical chemistry is the simplex technique (S4). The technique was originally used to find iteratively the parameters of a nonlinear function yielding a minimum or maximum. A simplex is an n-dimensional polyhedron with n + 1 vertices. The technique operates by comparing the function values at the vertices of the simplex defined by the last n + 1 function evaluations in order to determine where next to evaluate the function. The procedure halts when a minimum (or maximum) is reached. The analytical chemist can make use of this technique without knowing the functional form of the interaction of the variables. For example, the effect of temperature on the response of a photon detector is functionally complex, but if a criterion for optimal performance of the detector can be determined, one can optimize the temperature to maximize the detector response by experimentally measuring response at varying temperatures. This can of course be extended to n variables. An interesting response criterion was used in S5, where "informing power" related to a specially designed resolution function for gas chromatography was used.

A recent improvement by Nelder and Mead (N1) in the convergence criteria of the simplex technique has caused considerable interest, and a number of works (M2, S5, K8, C5, D11, K10) have shown how it can be used in analytical chemistry. An introductory review of the simplex technique is given by Deming (D4). This improvement also enhanced the optimization of nonlinear mathematical functions (O1). There are a few limitations to the method. A major one is that no error estimates can be determined for the optimum values of the parameters. Convergence criteria can be affected by the random error in the experimentally measured response function, and it is possible for the simplex technique to converge to a false minimum when the response surface is "multi-dimpled" and/or has deep, narrow dimples. A uniplex technique has been suggested (K9) which is claimed to be less sensitive to these problems in the univariate case.

A major need in analytical chemistry is the effective, efficient separation and analysis of bioorganic compounds in complex mixtures. Most often some form of chromatography is used, but in order to minimize the number of separation steps for a complex mixture, thereby reducing separation time (and therefore cost) while increasing the accuracy of quantitative analysis, optimization techniques are beginning to be used. The analyst needs to identify a distinct and meaningful set of criteria that will produce an optimal set of separation procedures when a particular optimization technique is applied to the data. Information theory provides a means to identify the meaningful set of criteria. In its most elementary sense, one can use Shannon's basic definition,

    information = −Σ(k=1 to m) p_k ln p_k,

where p_k is the probability of result k occurring in any measurement with m possible results. One can apply this to separation by chromatography (M3, M4, E4, G4, O3, B4, M5). Once the appropriate criteria have been identified, there are a number of techniques for determining the optimal pathway for a separation. Massart and Kaufman discuss several ways to do this (M6). Graph theory can be used to optimize the pathway for chemical separations in column chromatography (M6). Other techniques such as integer programming, minimal spanning tree algorithms, heuristic and branch-and-bound methods, and cluster analysis can also be used. The latter can be considered a form of pattern recognition and is sometimes called numerical taxonomy (M8, M9).
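A minimal version of the simplex search discussed in this section can be written in a few dozen lines. The reflection, expansion, contraction, and shrink coefficients below are the conventional textbook values, and the quadratic test surface stands in for an experimentally measured response; this is an illustrative sketch, not the S4 or N1 procedure verbatim.

```python
import numpy as np

def nelder_mead(f, x0, step=0.5, max_iter=500, tol=1e-10):
    """Minimal Nelder-Mead simplex minimizer (sketch).

    Maintains n + 1 vertices, repeatedly replacing the worst vertex by
    reflection, expansion, or contraction through the centroid of the
    remaining vertices; shrinks the whole simplex if all else fails."""
    n = len(x0)
    simplex = [np.asarray(x0, float)]
    for i in range(n):                      # initial simplex: x0 + axis steps
        v = np.asarray(x0, float).copy()
        v[i] += step
        simplex.append(v)
    fvals = [f(v) for v in simplex]
    for _ in range(max_iter):
        order = np.argsort(fvals)           # sort vertices best -> worst
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:      # function values have collapsed
            break
        centroid = np.mean(simplex[:-1], axis=0)
        xr = centroid + (centroid - simplex[-1])            # reflection
        fr = f(xr)
        if fr < fvals[0]:
            xe = centroid + 2 * (centroid - simplex[-1])    # expansion
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        elif fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        else:
            xc = centroid + 0.5 * (simplex[-1] - centroid)  # contraction
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                           # shrink toward the best vertex
                simplex = [simplex[0]] + [simplex[0] + 0.5 * (v - simplex[0])
                                          for v in simplex[1:]]
                fvals = [fvals[0]] + [f(v) for v in simplex[1:]]
    i = int(np.argmin(fvals))
    return simplex[i], fvals[i]

# e.g., maximize a hypothetical two-parameter detector response by
# minimizing its negative; here a smooth surface with optimum at (3, -1)
best, fbest = nelder_mead(lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2,
                          x0=[0.0, 0.0])
```

Because each step needs only function values, not derivatives, f can be an actual measured response, which is what makes the method attractive for experimental optimization; the limitations noted above (no error estimates, false convergence on noisy or multi-dimpled surfaces) apply unchanged.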

PATTERN RECOGNITION
In the past four years a considerable increase in interest in the use of pattern recognition techniques has been observed. In fact, the most frequently appearing papers on data processing techniques have been in this area. Probably the simplest statement of the pattern recognition process is the transformation of patterns from measurement space into classification space (I7). The first work, done in 1964, on the use of pattern recognition in chemical analysis was the identification of organic substances from their mass spectral lines by Tal'roze (T1). Five years later, there was a significant increase in activity and interest which has been sustained by several workers: T. L. Isenhour, University of North Carolina; P. C. Jurs, Pennsylvania State University; B. R. Kowalski, University of Washington; C. F. Bender and S. L. Grotch, UCRL, Livermore; C. W. Wilkins, University of Nebraska; and others. An information retrieval search on these names can currently be expected to identify over half of the papers published in this field. The problem of feature selection (via key words and the like) determines how efficiently one can retrieve all of the journal articles of interest. Presumably, one is limited by the accuracy with which any given document was classified in the first place. This constitutes a situation similar to that experienced in chemical analysis, where one must recognize that the efficiency of any pattern recognition technique using quantitative analytical data is limited by the accuracy of the analytical measurement system. Alas, if one had better classifiers and feature-selection systems for information retrieval using pattern recognition, this review would be more efficient, which in turn might improve future information retrieval systems, and so on. The term "pattern recognition" refers to automatic procedures for classifying observed individuals into discrete groups on the basis of a multivariate data matrix.
In a typical chemical application, the individuals would be samples, and the data matrix would be their spectra, with the groups delineated by the chemical properties of interest. Generally, the relationship between a sample's true classification and its probable spectrum is too complex to be readily defined; consequently, procedures are needed to find empirical relationships. The terms "learning machine", "supervised learning", "training", and "artificial intelligence" are descriptive of such procedures. We let Y denote the vector of observations for a given sample and let C1, C2, . . . , CN denote the possible classifications. Generally Y contains too much information to be numerically tractable, and most of it is irrelevant to the classification. Thus by "preprocessing" or "feature selection" we reduce the dimension or otherwise compress the data to produce a new "feature vector", X, which is put into the decision and learning algorithms. The training and evaluation of a recognition algorithm require two sets of known samples, the "training set" and the "evaluation set". "Recognition" and "prediction" abilities are generally defined as the percentage of correct classifications achieved by the completed algorithm on these two sets, respectively. Several review articles have appeared (K1, K12, I7). Jurs and Isenhour have recently published a book (J4) on the subject which restates most of the more important results to date on pattern recognition in chemistry. This book should be the first and principal reference for those seeking an introduction to the field. While a great deal of effort has been expended on specialized sets of test data, several very important applications have shown the potential usefulness of the pattern recognition technique. For example, from the mass spectra of a series of hydrocarbons, Tunnicliff and Wadsworth were able to identify types of gasolines (T2). Kuo and Jurs (K13) describe a means to determine the amount of chlorine to be added to water for purification based upon the pattern of data produced from 17 different types of measurement of the pretreated properties of the water. The trace compositions of elements establish a pattern that is indicative of the origin of the material.
Recognition techniques have been shown to be highly useful in product identification (K14, D10, B12), oil spill identification (C10, D8), and tracing of the origin of archeological artifacts (K15). Pattern recognition has also been used to associate specific types of biological activity with a variety of measurements on chemical compounds. For example, specific features of mass spectra have been associated with pharmacological activity (K16), and molecular structural features have been identified with cancer-reducing activity (K17) and other types of biological activity, such as olfactory sensing (S3). Along with this work is the prediction of general chemical properties from the structure of the molecule by pattern recognition (A4). Conversely, a test of how effective these techniques are is to synthesize a mass spectrum from the structure of a given molecule (Z2); how successful this will be probably depends on how good our understanding is of the multiple interactions of the variables in mass spectrometry.

Discriminant functions are widely used in formalizing classification procedures. A function gi is associated with each possible classification Ci. The decision algorithm operates by selecting the classification CK such that gK(X) is maximal among the gi(X). Frequently one requires that the discriminant functions be linear; in that case gi is characterized by a "weight vector" Wi, and gi(X) = Wi·X. The "linear learning machine" operates by starting with an arbitrary set of weight vectors and successively attempting to classify members of the training set, adjusting the weight vectors each time a misclassification occurs. The adjustment operates by adding and subtracting positive multiples of the offending training vector to the two weight vectors which yielded discriminant values that were too low and too high, respectively.

Geometrically, this is analogous to seeking a set of hyperplanes, called linear decision surfaces, each of which separates a pair of classifications for the training set. The training set is said to be "linearly separable" if a complete set of separating hyperplanes exists. In this event, a set of weight vectors will evolve which yields 100% recognition, a result known as the perceptron convergence theorem. A good description of linear training algorithms is found in Ref I7 and J4. Some typical recent applications are given in Ref L7, W10, and W11. A convenient simplification is often used in chemical applications: the decision is made by one or more binary classifiers, each of which makes a binary decision by observing whether a single discriminant function yields a positive or negative result for a given pattern. Such binary classifiers are often called "threshold logic units". A technique in which a number of binary classifiers "vote" for multicategory classification is discussed in Ref B16. In Ref L3 and F3, arrays of binary classifiers, each corresponding to a bit position in a Hamming error-correcting code, are used for multicategory classification.
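The error-correction training of a single binary threshold logic unit can be sketched as follows; the patterns and labels are invented, linearly separable toy data, not taken from any of the cited works:

```python
# Sketch of a binary "threshold logic unit" trained by error correction:
# the weight vector is adjusted only when a training pattern is
# misclassified. Data and pass count are illustrative.

def train_tlu(patterns, labels, passes=100):
    # Augment each pattern with a constant 1 so the threshold is
    # learned as an extra weight component.
    aug = [p + [1.0] for p in patterns]
    w = [0.0] * len(aug[0])
    for _ in range(passes):
        errors = 0
        for x, y in zip(aug, labels):          # y is +1 or -1
            s = sum(wi * xi for wi, xi in zip(w, x))
            if s * y <= 0:                     # misclassified: correct w
                w = [wi + y * xi for wi, xi in zip(w, x)]
                errors += 1
        if errors == 0:                        # linearly separated: done
            break
    return w

def classify(w, p):
    s = sum(wi * xi for wi, xi in zip(w, p + [1.0]))
    return 1 if s > 0 else -1

# Linearly separable toy training set (two features per pattern).
train = [[2.0, 1.0], [1.5, 2.0], [-1.0, -0.5], [-2.0, -1.5]]
lab = [1, 1, -1, -1]
w = train_tlu(train, lab)
recognition = sum(classify(w, p) == y for p, y in zip(train, lab)) / len(lab)
```

Because the toy set is linearly separable, training terminates with 100% recognition, in line with the perceptron convergence theorem mentioned above; prediction ability would be measured the same way on a separate evaluation set.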

An interesting generalization of the multicategory linear learning machine is discussed in Ref W13. Here the weight vectors for each classification are trained as usual; however, classification is performed using the ranking of the N largest discriminants, rather than just the single largest. A priori probabilities are assumed and a Bayesian optimal decision is reached. It is possible to improve the reliability of a linear classifier by defining "dead zones" consisting of points for which no decision will be reached. Generally these zones are defined as fixed-width slabs on either side of the decision surface, which is equivalent to requiring the discriminant function inequalities to hold by a certain fixed amount. This is discussed in Ref W12 and L10. Dead zones are also useful in applying linear learning to linearly inseparable data. There is an obvious trade-off between dead zone size and reliability: a machine will make fewer mistakes if it makes fewer decisions in difficult cases. More research is needed in determining the dead zone necessary to achieve a specified reliability. Modifications to linear training algorithms exist which allow complete separation of linearly inseparable data. One way is to linearly train polynomial discriminant functions (A3, J4); the resulting weight vectors then define polynomial decision surfaces. Another is to use extra linear discriminant functions (J4, F2), which is equivalent to dissecting classification regions into convex subregions and is sometimes referred to as "piecewise-linear classification". Least-squares minimization can be used to obtain weight vectors and discriminant functions. Here we seek parameters so that the discriminants they define will give a response to the known training set which is as close as possible to a known desired response in terms of total squared error.
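The least-squares determination of a weight vector can be sketched as follows; the toy patterns and desired ±1 responses are invented, and NumPy's least-squares solver stands in for an explicit solution of the normal equations:

```python
# Least-squares determination of a weight vector: choose w so that the
# linear discriminant best matches the desired +1/-1 responses for the
# training set in the total-squared-error sense. Toy data throughout.
import numpy as np

X = np.array([[2.0, 1.0, 1.0],      # patterns augmented with a
              [1.5, 2.0, 1.0],      # constant 1 for the threshold term
              [-1.0, -0.5, 1.0],
              [-2.0, -1.5, 1.0]])
d = np.array([1.0, 1.0, -1.0, -1.0])    # desired responses

# w minimizing ||Xw - d||^2, i.e., the solution of (X^T X) w = X^T d.
w, *_ = np.linalg.lstsq(X, d, rcond=None)

predicted = np.sign(X @ w)
```

Unlike error-correction training, this gives a weight vector in one step, but the fit is to the desired responses rather than to a separation criterion, so perfect recognition is not guaranteed in general.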
In Ref P7, a weight vector W for a single binary classification is optimized to give the best fit to tanh (W^T X) = ±1, depending on the classification of X. It is also possible to use least squares to obtain optimal parameters defining a single function to be used for classification instead of multiple discriminants (J4). The range of this function is partitioned into intervals corresponding to the possible classifications, and an unknown is classified by observing into which of these intervals the value of the function falls. Simplex-search optimization, also discussed in the optimization section of this review, is used for training weight vectors in Ref R4. Here the search is directed by ranking vectors primarily by the number of misclassifications and secondarily by the sum of distances from the decision surface for the misclassified training set members. A probabilistic approach to discriminant function determination is taken in Ref L9. Infrared spectra are represented by a series of binary variables, each corresponding to the presence or absence of a peak in a particular channel. Conditional probabilities for these are calculated from the training set. Statistical independence of the channels is assumed, allowing calculation of the conditional probability of an entire spectrum given a particular classification, and hence a corresponding discriminant function. The KNN (K Nearest Neighbor) rule does not use weight vectors but instead classifies an unknown by majority vote of its K nearest neighbors in the training set. The notion of "nearest" is formalized by a metric, or distance function. Perhaps the most popular is the N-dimensional Euclidean metric, the exact analogue of ordinary two- or three-dimensional distance. Somewhat more rational is the Mahalanobis distance, which is normalized to eliminate covariance and equalize variance between dimensions (D9, K1).
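The KNN rule with the Euclidean metric can be sketched as follows; the training patterns, class labels, and choice of K are invented for illustration:

```python
# Sketch of the KNN rule: an unknown is assigned the majority class
# among its K nearest training patterns under the N-dimensional
# Euclidean metric. Data and K are illustrative.
import math
from collections import Counter

def knn_classify(train, labels, unknown, k=3):
    # Distance from the unknown to every training pattern, sorted.
    dists = sorted(
        (math.dist(p, unknown), c) for p, c in zip(train, labels))
    votes = Counter(c for _, c in dists[:k])
    return votes.most_common(1)[0][0]

train = [[0.0, 0.0], [0.5, 0.5], [0.0, 1.0],
         [5.0, 5.0], [5.5, 4.5], [6.0, 5.0]]
labels = ["A", "A", "A", "B", "B", "B"]
```

As the text notes, every classification requires distances to the whole training set, which is where the extra cost in computer time and storage arises relative to a stored weight vector.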
It can be shown that if all classifications are a priori equiprobable, and if the within-classification covariances are all the same, then a Bayesian minimum-error-rate discriminant function for each classification is given by minus the Mahalanobis distance to the classification mean (R9). The Hamming metric, which counts agreeing and disagreeing coordinates, can be used instead of the Euclidean metric for binary data (W14). The KNN technique is applied and discussed in Ref K17, K18, L8, J5, and P6 as well as in the general references. It has performed well in most applications and has a sounder theoretical basis than most recognition methods (C8); however, it is much more costly than the weight vector technique in terms of computer time and storage.

The application of preprocessing, feature selection, and feature extraction techniques is essential to pattern recognition applications. Jurs and Isenhour (J4) define preprocessing as the rescaling of variables, feature selection as picking the more relevant variables, and feature extraction as the generation of new variables from sets of old variables. We consider these techniques together, as they have a common goal of transforming observed patterns into data vectors with properties which make them desirable as input to recognition and training routines. Some of these properties are: (1) The data representation should be concise and of low dimension. (2) Classes should cluster well, i.e., patterns in the same class should be represented by vectors which are close together, and patterns from different classes by vectors which are far apart. (3) If a linear learning machine is to be used, the vectors should be linearly separable, or close to it. Ad hoc methods based on intelligent observations of the important properties of the patterns under study are frequently used. Logarithmic and other one-to-one functional transformations are common. Spectral data are frequently represented by a peak vs. no-peak binary code. Frequently, the variables are scaled so as to equalize variance or dynamic range; this is particularly important for multi-source data. A number of qualitative features were defined for stationary electrode polarograms in Ref S11. Intelligent qualitative choice of representation is of obvious importance in applications where chemical structure is the input (S10, Z2). There are several feature selection techniques associated with the training of binary linear classifiers. One is the weight-sign method (J4, S11), which eliminates those final weight vector components whose sign changes in response to a certain change in the initial weight vector in the training routine.
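Variance scaling of the kind mentioned above (often called autoscaling) can be sketched as follows, on invented two-variable data of very different dynamic range:

```python
# Variance scaling ("autoscaling"): each variable is shifted to zero
# mean and scaled to unit variance so that no single measurement
# dominates by dynamic range alone. Toy two-variable data.
import numpy as np

def autoscale(data):
    data = np.asarray(data, dtype=float)
    mean = data.mean(axis=0)
    std = data.std(axis=0, ddof=1)      # sample standard deviation
    return (data - mean) / std

# One variable in the hundreds, one near unity.
raw = [[100.0, 0.10], [200.0, 0.30], [300.0, 0.20]]
scaled = autoscale(raw)
```

After scaling, both columns contribute on an equal footing to any distance-based or linear classifier, which is the point of the transformation for multi-source data.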
A more sophisticated method, based on a similar principle, is presented in a recent article (Z3). This method utilizes the weight vector component response to a whole range of initial vector values; the authors feel that their method is superior to all other methods of feature selection for linearly separable data. Another method, used in early experiments on mass spectral data, eliminated those components whose total contribution to discriminant values was small. Yet another method (P1) measures the distance from the decision surface to the nearest training set vector and eliminates those features whose omission reduces this distance by only a small amount. The Karhunen-Loeve transform (see the section above on principal components) can be used to find eigenvector features accounting for a large proportion of the total variance (K19). The features thus defined can be shown to be optimal under certain circumstances (A3). Eigenvectors and other projection techniques can be used to obtain a variety of two- and three-dimensional data representations on an interactive graphics terminal, which help in isolating meaningful and important features (K17, K20, S12). ARTHUR (K20) is an interactive pattern recognition system, built around a computer and an interactive graphics terminal, which allows the user to apply most of the more popular recognition and feature extraction techniques. One newer method is SIMCA (statistical isolinear multicomponent analysis), which uses both principal components and least-squares techniques (K1). Fourier transform coding was applied to mass spectral data in Ref W15, and feature selection was applied to Fourier-transformed mass spectral data in Ref J6. Hadamard transforms were applied to mass spectral data in Ref K21 and to 13C NMR data in Ref B17 and B13. In all of these applications there seems to be little motivation for the use of the transform, and none of them has claimed dramatic success.
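The Karhunen-Loeve (eigenvector) feature extraction just mentioned can be sketched as follows; the synthetic correlated data and the choice of one retained feature are illustrative:

```python
# Karhunen-Loeve (principal-component) feature extraction: project the
# patterns onto the leading eigenvectors of the covariance matrix,
# keeping the directions of largest total variance. Synthetic data.
import numpy as np

def kl_features(data, n_features):
    data = np.asarray(data, dtype=float)
    centered = data - data.mean(axis=0)
    cov = np.cov(centered, rowvar=False)
    vals, vecs = np.linalg.eigh(cov)            # ascending eigenvalues
    top = vecs[:, np.argsort(vals)[::-1][:n_features]]
    return centered @ top                       # scores on leading eigenvectors

# Three strongly correlated variables; one eigenvector carries
# essentially all of the variance.
rng = np.random.default_rng(0)
t = rng.normal(size=50)
data = np.column_stack([t, 2 * t + 0.01 * rng.normal(size=50), -t])
scores = kl_features(data, 1)
```

Here a single extracted feature retains nearly all of the variance of the three original variables, which is the sense in which the eigenvector features are optimal for data compression.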
It has been pointed out (J4) that the Fourier transform tends to mitigate certain types of errors; a single-channel error will be gently smeared across the entire spectrum in the transform domain. A feature selection technique closely related to transform techniques is the use of moments as new features; this was done for mass spectral data in Ref B15, with good results. The use of cross-terms in mass spectral pattern recognition is investigated in Ref J7. Cross-terms are defined as normalized products of two channel intensities. The relatedness between two channels is computed as the ratio of the number of samples with nonzero intensities for both channels to the number with such intensities for one channel or the other but not both, and is used in selecting relevant cross-terms. A "complex valued nonlinear discriminant function" is applied to mass spectra and their Fourier transforms in Ref J8. This technique maps each normalized and quantized variable isometrically onto the unit circle in the complex plane and uses these complex numbers as input to the decision algorithm (again with no expressed justification). In Ref P6, a computerized trial-and-error procedure is used to find an optimal combination of a minimal number of features for use in the KNN recognition method. An interesting method of developing features for KNN recognition is discussed in Ref B14: a number of binary classification weight vectors are linearly trained, and the values of the corresponding discriminant functions are used as features. Factor analysis was applied to mass spectra in Ref J9. Principal components are applied to the correlation matrix, and the eigenvectors are then rotated so as to maximize the kurtosis of the distribution of coefficients within the eigenvectors. The significant masses in each factor are determined, thereby helping to evaluate the effects of functional groups on the mass spectrum; this is closely related to feature selection. Additional work on factor analysis of mass spectra was reported in Ref R5 and R6. Factor analysis was also applied to the study of fundamental properties of solutes and stationary phases affecting gas chromatographic retention times in Ref W16, W17, and W18, and to polarographic studies in Ref H10. It would seem important in this area to allow room for the common sense of the analyst to guide the selection of important features of the data; some combination of numerical analysis and intuition is emphasized by Kowalski et al.

Procedures for creating files of known spectra and searching them by computer for a match to an unknown spectrum are very similar to pattern recognition, albeit somewhat simpler. One still has the problems of efficient encoding (feature selection) and of defining a match (recognition).
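A file-search match of the sort just described can be sketched as follows; the library contents and the normalized dot-product match criterion are invented for illustration, not taken from the cited search systems:

```python
# Sketch of a spectral file search: known spectra are stored as
# intensity vectors and an unknown is matched by a normalized
# dot-product (cosine) similarity. Library contents are invented.
import math

def similarity(a, b):
    # Cosine of the angle between two intensity vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def best_match(library, unknown):
    return max(library, key=lambda name: similarity(library[name], unknown))

library = {
    "compound_1": [10.0, 0.0, 5.0, 1.0],
    "compound_2": [0.0, 8.0, 0.0, 6.0],
}
hit = best_match(library, [9.0, 0.5, 4.0, 1.0])
```

The encoding of the stored vectors (which channels, what resolution) is the feature-selection problem, and the choice of similarity measure and acceptance threshold is the recognition problem, just as in pattern recognition proper.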
Several papers have appeared on file-searching procedures for mass spectra (G6, G7, G8, G9, A5, C9, H11). Such techniques are also used in infrared spectroscopy and gas-liquid chromatography; additional references are S13 and O3. Grotch uses maximum likelihood in Ref G10 to weight mass spectral channels so as to maximize the separation between like and different spectra. In Ref G11, he shows how to estimate the expected number of mismatches, for a given library and coding scheme, from the statistical properties of the library.

Cluster analysis is an area of multivariate analysis closely related to pattern recognition. Clustering may be defined as the generation of classes, or clusters, without knowledge of prototype classification. The essential difference from pattern recognition is that the classes themselves are initially undefined; a clustering procedure must both define classes and classify observations into them. Leading techniques are presented in Ref A3. Chemical applications have been limited, but there are discussions of clustering in the review articles (K1, M6), and there have been several applications to chromatographic phase selection (M8, M9, E4).

Pattern recognition has not fared well in certain specific applications, such as identification of chemical compounds from the mass spectrum, when compared to straight library-search routines. However, the cost of processing is significantly less, and there appear to be instances where pattern recognition can provide a great deal of information. The future may see significantly more effort put into the reduction of empiricism in certain subsections of feature selection, which may tend to improve overall predictive ability. Some of the techniques currently in existence could find use in an interactive type of feedback and control of parameters associated with a chemical analysis system.
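A clustering procedure of the kind described above, which both defines classes and assigns observations to them without known prototypes, can be sketched with a plain k-means iteration; the data, the value of k, and the naive initialization are all invented for illustration:

```python
# Sketch of a clustering procedure: plain k-means, which alternates
# between assigning each point to its nearest cluster center and
# moving each center to the mean of its assigned points.
import math

def kmeans(points, k, iters=20):
    centers = [list(p) for p in points[:k]]     # naive initialization
    assign = [0] * len(points)
    for _ in range(iters):
        # Assign each point to its nearest center.
        assign = [min(range(k), key=lambda c: math.dist(p, centers[c]))
                  for p in points]
        # Move each center to the mean of its assigned points.
        for c in range(k):
            members = [p for p, a in zip(points, assign) if a == c]
            if members:
                centers[c] = [sum(col) / len(members)
                              for col in zip(*members)]
    return centers, assign

pts = [[0.0, 0.1], [0.2, 0.0], [0.1, 0.2],
       [5.0, 5.1], [5.2, 4.9], [4.9, 5.0]]
centers, assign = kmeans(pts, 2)
```

Note that, unlike the supervised methods above, nothing in the procedure names the resulting classes; interpreting the clusters chemically is left to the analyst.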
The pattern of chemical results may indicate a certain type of systematic error that can be corrected by on-line computer adjustment. In an analogous manner, pattern recognition techniques ought to be of use in evaluating the performance of all types of measurement systems. The pattern of concentrations of the elements for given sample types may allow prediction of just what changes in experimental parameters are required, or what types of samples need to be analyzed with what instrument settings. Specifically, one can show that empirical pattern recognition techniques have the ability to pick out systematic errors in data. Analysis of mistakes and blunders by pattern recognition routines has often shown, after tracing the origin of the input data, that a mistake was made in keying the data into the computer.

DIGITAL SIGNAL PROCESSING

Generally, digital signal processing deals with computations on digital data arising from stationary or near-stationary stochastic processes, i.e., time-varying processes which are statistical in nature but whose statistics remain constant or vary relatively slowly. The numbers generated by such processes are generally called "time series" in the statistical literature. There is a strong analogy between digital signal processing and that portion of the theory of passive electronic network design known as "linear systems theory", pioneered theoretically by Wiener (W9) in the 1940's. The theoretical differences arise mainly from the fact that the older theory deals with continuous-valued functions of a continuous, infinitely extending time variable, whereas signal processing deals with functions whose values are quantized and which are sampled at discrete intervals over a finite time span. Signal processing includes: (a) the actual calculation and interpretation of Fourier and other types of transforms; (b) problems of filtering, prediction, and smoothing with digital data; (c) signal-to-noise enhancement; (d) dealing with quantization effects; (e) power spectrum estimation. Signal processing generally involves extensive real-time calculation, i.e., the calculations must keep up with the process. The growing number of applications is due primarily to the development of new calculating hardware (dedicated computers, hard-wired correlators, etc.) and to mathematical advances reducing the amount of calculation needed, such as the fast Fourier transform (FFT). A good general reference on computation is Ref O3; a good reference on statistical interpretation is Ref J3. The necessity for Fourier transform calculation from digital data arises frequently and has been mentioned in previous portions of this review. Certain problems almost always arise.
The nonzero sampling interval limits the observable frequency range and causes high-frequency content to be "aliased", or folded in with the lower frequencies. The finite number of values results in decreased frequency resolution and ripple, or Gibbs phenomenon, in rapidly changing portions of the frequency spectrum. Techniques exist for ameliorating these effects (B11, O3). Some additional chemical applications are discussed in Ref D6 and N2. These problems must be faced in Fourier transform optical and NMR spectroscopy; such problems in Fourier transform infrared spectroscopy are discussed in Ref A1, B6, and P4. Smoothing and filtering are related by the Fourier transform and the convolution theorem: convolution with a smoothing function in the time domain corresponds to multiplication by a filter function in the frequency domain, and the two can often be used interchangeably. Digital filtering techniques are discussed in Ref O3. If the relative frequency characteristics of noise or other unwanted signal components are known, filtering or smoothing may be used for their removal. Such applications to signal-to-noise ratio enhancement in conventional spectrometry are discussed in Ref P4, to anodic stripping voltammetry in Ref S8, to electroanalytical data in Ref H3, and to circular dichroism spectrometry in Ref B7. The use of a Chebyshev polynomial smoothing technique for data with end-point discontinuities is discussed in Ref A7. The application of hardware cross-correlation to signal-to-noise improvement in flame spectrometry is discussed in Ref H9. Horlick's general discussion (H8) is useful. Quantization effects are discussed in a paper on digitizing analogue signals (K11).
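The equivalence of time-domain smoothing and frequency-domain filtering stated by the convolution theorem can be verified numerically; the signal, noise, and 5-point moving-average window below are invented for illustration:

```python
# Convolution theorem demonstration: circular convolution with a
# smoothing window in the time domain equals multiplying the Fourier
# transforms and inverting. Signal and window are illustrative.
import numpy as np

n = 64
rng = np.random.default_rng(1)
signal = np.sin(2 * np.pi * np.arange(n) / 16) + 0.2 * rng.normal(size=n)
window = np.zeros(n)
window[:5] = 1.0 / 5.0                  # 5-point moving-average smoother

# Time domain: direct circular convolution with the smoothing window.
direct = np.array([sum(signal[(i - j) % n] * window[j] for j in range(n))
                   for i in range(n)])

# Frequency domain: multiply the transforms and invert.
via_fft = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(window)))
```

The two results agree to machine precision; in practice the FFT route is preferred because it reduces the amount of calculation, which is one of the mathematical advances credited above for the growth of these applications.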

BOOKS

A few books are listed, only to provide an introduction to the techniques discussed and to give the background necessary to read the journal articles covered in this review. No attempt at broad coverage of books has been made.

LITERATURE CITED

(A1) R. J. Anderson and P. R. Griffiths, Anal. Chem., 47, 2339 (1975). (A2) F. S. Acton, "Numerical Methods That Work", Harper and Row, New York, N.Y., 1970. (A3) H. C. Andrews, "Introduction to Mathematical Techniques in Pattern Recognition", Wiley-Interscience, New York, N.Y., 1972.

Good basic statistics books are Ref S6 and B8. Information on errors specific to analytical chemistry is found in Ref E3. A statistics book more oriented toward data reduction and errors in the laboratory is Ref B9. A good introduction to mathematical statistics is Ref L6. Sampling is covered by Ref A6. A good elementary text on linear algebra and matrix theory, providing the background necessary to read the statistical literature, is Ref M13. Matrix computations (eigenvectors, matrix inversion, etc.) are covered in Ref S9. A good reference on numerical methods in general is Ref D7, which covers matrix computations, functional approximation, optimization, and Fourier methods, among other topics. A more elementary book on numerical methods is Ref A2. A statistical approach to nonlinear least-squares fitting is taken in Ref B10. Two good books on pattern recognition are Ref A3 and D9; a book on chemical applications is Ref J4. A very concise introduction to multivariate techniques (principal components, factor analysis, etc.) is presented in Ref M7. In the field of time series analysis and digital signal processing, Ref O3 is a good reference on digital filtering, power spectrum estimation, etc. A comprehensive statistical treatment of this area is presented in Ref J3. Reference B11 is a good book on the problems associated with finite Fourier transforms and on fast Fourier transform algorithms. A good book on the general theory of the Fourier transform and its experimental significance is Ref J1.

FINAL COMMENTS

What we have done in this brief review is attempt to reflect at least the most active interests in the use of numerical processing of data in analytical chemistry. It is not clear that this necessarily coincides with the activities that may provide the most benefit to the analyst. The reader interested in which areas of numerical application might be most useful is urged to study the sections on Characterization of the Measurement Process, Planning and Control of Experiments, and Statistical Techniques for Testing Basic Assumptions in the previous review (C1). These subjects can be better utilized, and since computer processing of data is now at our fingertips, it is important to invoke those techniques as well as the ones described in this review. The obvious impact will be a decided improvement in the accuracy of measurement. We feel it necessary to reiterate a warning discussed in the previous review regarding errors in function subroutines that exist in minicomputers and microprocessor systems: extreme caution must be taken to be assured that the function is accurate for the particular arguments used. Efforts are being made by a number of standards organizations to evaluate the accuracy of available computational subroutines. Most of the studies reported in this review are related to problems associated with reproducibility of measurement and the methodical correlation of results with phenomenological response that has real meaning in the world. Care must be taken to incorporate, along with these statistical processes, a concern for the correctness of the result. While it is possible to generate a self-consistent measurement-response system for a single organizational or operational structure, it often is impossible to provide meaningful interpretation of results (for the scientific community at large) without an ever-improving approximation to the absolute or correct result of chemical analysis.
These techniques for numerical and statistical analysis of the data should be supplemented with careful studies of systematic errors in analysis, and perhaps a subsequent review should address this important question.

(A4) H. Abe and P. C. Jurs, Anal. Chem., 47, 1829 (1975). (A5) F. P. Abramson, Anal. Chem., 47, 45 (1975). (A6) American Society for Testing and Materials, "Sampling, Standards and Homogeneity", Special Technical Publication 540, Philadelphia, Pa., 1973. (A7) D. E. Aspnes, Anal. Chem., 47, 1181 (1975). (B1) H. C. Burger and P. H. van Cittert, Z. Phys., 79, 722 (1932). (B2) J. T. Bulmer and H. F. Shurvell, J. Phys. Chem., 77, 256 (1973). (B3) G. Brouwer and J. A. J. Jansen, Anal. Chem., 45, 2239 (1973). (B4) J. H. W. Bruins Slot and A. Dijkstra, Chem. Weekbl., 68, 13 (1972). (B5) J. T. Bulmer and H. F. Shurvell, Can. J. Chem., 53, 1251 (1975). (B6) J. E. Bates, Science, 191, 31 (1976).


(B7) C. A. Bush, Anal. Chem., 46, 890 (1974). (B8) K. A. Brownlee, "Statistical Theory and Methodology in Science and Engineering", Wiley, New York, N.Y., 1960. (B9) P. R. Bevington, "Data Reduction and Error Analysis for the Physical Sciences", McGraw-Hill, New York, N.Y., 1969. (B10) Y. Bard, "Nonlinear Parameter Estimation", Academic Press, New York, N.Y., 1974. (B11) E. Oran Brigham, "The Fast Fourier Transform", Prentice-Hall, Englewood Cliffs, N.J., 1974. (B12) R. L. Brunelle and M. J. Pro, J. Assoc. Off. Anal. Chem., 55, 823 (1972). (B13) T. R. Brunner, C. L. Wilkins, R. C. Williams, and P. J. McCombie, Anal. Chem., 47, 662 (1975). (B14) C. F. Bender and B. R. Kowalski, Anal. Chem., 45, 590 (1973). (B15) C. F. Bender, H. D. Sheperd, and B. R. Kowalski, Anal. Chem., 45, 617 (1973). (B16) C. F. Bender and B. R. Kowalski, Anal. Chem., 46, 294 (1974). (B17) C. L. Wilkins and P. J. McCombie, Anal. Chem., 46, 1798 (1974). (C1) L. A. Currie, J. J. Filliben, and J. R. DeVoe, Anal. Chem., 44, 497R (1972). (C2) L. A. Currie, Nucl. Instrum. Methods, 100, 387 (1972). (C3) L. A. Currie, Anal. Chem., 40, 586 (1968). (C4) S. N. Chesler and S. P. Cram, Anal. Chem., 45, 1354 (1973). (C5) F. P. Czech, J. Assoc. Off. Anal. Chem., 56, 1489 (1973). (C6) B. H. Campbell and L. Meites, Talanta, 21, 393 (1974). (C7) R. N. Carey, S. Wold, and J. O. Westgard, Anal. Chem., 47, 1824 (1975). (C8) T. M. Cover and P. E. Hart, IEEE Trans. Inf. Theory, IT-13, 21 (1967). (C9) C. E. Costello, H. S. Hertz, T. Sakai, and K. Biemann, Clin. Chem. (Winston-Salem, N.C.), 20, 255 (1974). (C10) H. A. Clark and P. C. Jurs, Anal. Chem., 47, 374 (1975). (D1) A. Den Harder and L. DeGalen, Anal. Chem., 46, 1464 (1974). (D2) R. D. Dyson and I. Isenberg, Biochemistry, 10, 3233 (1971). (D3) P. F. Dupuis and A. Dijkstra, Anal. Chem., 47, 379 (1975). (D4) S. N. Deming and S. L. Morgan, Anal. Chem., 45, 278A (1973). (D5) J. E. Davis, A. Shepard, N. Stanford, and L. B. Rogers, Anal. Chem., 46, 821 (1974). (D6) R. W. Dwyer, Jr., Anal. Chem., 45, 1380 (1973). (D7) G. Dahlquist and A. Björck, "Numerical Methods", Prentice-Hall, Englewood Cliffs, N.J., 1974. (D8) D. L. Duewer, B. R. Kowalski, and T. F. Schatzki, Anal. Chem., 47, 1573 (1975). (D9) R. O. Duda and P. E. Hart, "Pattern Classification and Scene Analysis", Wiley, New York, N.Y., 1973. (D10) D. L. Duewer and B. R. Kowalski, Anal. Chem., 47, 526 (1975). (D11) W. K. Dean, K. J. Heald, and S. N. Deming, Science, 189, 805 (1975). (E1) H. Ebel and N. Gurker, J. Electron Spectrosc. Relat. Phenom., 5, 799 (1974). (E2) H. Ebel and N. Gurker, Phys. Lett. A, 50, 449 (1975). (E3) K. Eckschlager, "Errors, Measurement and Results in Chemical Analysis", Van Nostrand-Reinhold, London, 1961. (E4) A. Eskes, F. Dupuis, A. Dijkstra, H. D. DeClercq, and D. L. Massart, Anal. Chem., 47, 168 (1975). (F1) I. M. Fisenne, A. O'Toole, and R. Cutler, Radiochem. Radioanal. Lett., 16, 5 (1973). (F2) N. M. Frew, L. E. Wangen, and T. L. Isenhour, Pattern Recognition, 3, 281 (1971). (F3) W. L. Felty and P. C. Jurs, Anal. Chem., 45, 885 (1973). (G1) A. Grinvald and I. Z. Steinberg, Anal. Biochem., 59, 583 (1974). (G2) B. Goldberg, J. Chromatogr. Sci., 9, 289 (1971). (G3) E. Grushka, J. Phys. Chem., 76, 2586 (1972). (G4) B. Griepink and G. Dijkstra, Fresenius' Z. Anal. Chem., 257, 269 (1971).


(G5) D. Garvin, J. Res. Natl. Bur. Stand., Sect. A, 76, 67 (1972). (G6) S. L. Grotch, Anal. Chem., 43, 1362 (1971). (G7) S. L. Grotch, Anal. Chem., 45, 2 (1973). (G8) N. A. B. Gray and T. O. Gronneberg, Anal. Chem., 47, 419 (1975). (G9) T. O. Gronneberg, N. A. B. Gray, and G. Eglinton, Anal. Chem., 47, 415 (1975). (G10) S. L. Grotch, Anal. Chem., 47, 1285 (1975). (G11) S. L. Grotch, Anal. Chem., 46, 526 (1974).

(H1) W. P. Helman, Int. J. Radiat. Phys. Chem., 3, 283 (1971). (H2) G. Horlick, Appl. Spectrosc., 26, 395 (1972). (H3) J. W. Hayes, D. E. Glover, D. E. Smith, and M. W. Overton, Anal. Chem., 45, 277 (1973). (H4) W. E. Harris and B. Kratochvil, Anal. Chem., 46, 313 (1974). (H5) Hardy and Young, J. Opt. Soc. Am., 39, 265 (1949). (H6) G. Horlick, Anal. Chem., 44, 943 (1972). (H7) G. Horlick, Anal. Chem., 45, 319 (1973). (H8) D. G. Howery, Anal. Chem., 46, 829 (1974). (H9) G. M. Hieftje, R. I. Bystroff, and R. Lim, Anal. Chem., 45, 253 (1973). (H10) D. G. Howery, Bull. Chem. Soc. Jpn., 45, 2643 (1972). (H11) S. R. Heller, H. M. Fales, and G. W. A. Milne, Org. Mass Spectrom., 7, 107 (1973). (I1) J. D. Ingle, Jr., and S. R. Crouch, Anal. Chem., 44, 785 (1972). (I2) J. D. Ingle, Jr., Anal. Chem., 46, 2161 (1974). (I3) A. F. Isbell, Jr., R. L. Pecsok, R. H. Davies, and J. H. Purnell, Anal. Chem., 45, 2363 (1973). (I4) I. Isenberg, R. Dyson, and R. Hanson, Biophys. J., 13, 1090 (1973). (I5) I. Isenberg, J. Chem. Phys., 59, 5708 (1973). (I6) J. D. Ingle, Jr., Anal. Chem., 46, 661 (1973). (I7) T. L. Isenhour, B. R. Kowalski, and P. C. Jurs, Crit. Rev. Anal. Chem., 4, 1 (1974). (J1) R. C. Jennison, "Fourier Transforms and Convolutions for the Experimentalist", Pergamon Press, Oxford, 1961. (J2) L. J. Johnson and M. D. Harmony, Anal. Chem., 45, 1494 (1973). (J3) G. M. Jenkins and D. G. Watts, "Spectral Analysis and Its Applications", Holden-Day, San Francisco, Calif., 1968. (J4) P. Jurs and T. Isenhour, "Chemical Applications of Pattern Recognition", Wiley-Interscience, New York, N.Y., 1975. (J5) J. B. Justice and T. L. Isenhour, Anal. Chem., 46, 223 (1974). (J6) P. C. Jurs, Anal. Chem., 43, 1812 (1971). (J7) P. C. Jurs, Appl. Spectrosc., 25, 483 (1971). (J8) J. B. Justice, Jr., D. N. Anderson, T. L. Isenhour, and J. C. Marshall, Anal. Chem., 44, 2087 (1972). (J9) J. B. Justice, Jr., and T. L. Isenhour, Anal. Chem., 47, 2286 (1975). (K1) B. R. Kowalski, Anal. Chem., 47, 1152A (1975). (K17) B. R. Kowalski and C. F. Bender, J. Am. Chem. Soc., 96, 916 (1974). (K18) B. R. Kowalski and C. F. Bender, Anal. Chem., 44, 1405 (1972). (K19) B. R. Kowalski and C. F. Bender, J. Am. Chem. Soc., 95, 686 (1973). (K20) J. R. Koskinen and B. R. Kowalski, J. Chem. Inf. Comput. Sci., 15, 119 (1975). (K21) B. R. Kowalski and C. F. Bender, Anal. Chem., 45, 2234 (1973). (L1) I. L. Larsen, N. H. Hartmann, and J. J. Wagner, Anal. Chem., 45, 1511 (1973). (L2) I. L. Larsen and J. J. Wagner, J. Chem. Educ., 52, 215 (1975). (L3) F. E. Lytle, Anal. Chem., 44, 1867 (1972). (L4) W. H. Lawton, E. A. Sylvestre, and M. S. Maggio, Technometrics, 14, 513 (1972). (L5) W. H. Lawton and E. A. Sylvestre, Technometrics, 13, 617 (1971). (L6) H. J. Larson, "Introduction to the Theory of Statistics", Wiley, New York, N.Y., 1973. (L7) R. W. Liddell, III, and P. C. Jurs, Anal. Chem., 46, 2126 (1974). (L8) J. J. Leary, J. B. Justice, S. Tsuge, S. R. Lowry, and T. L. Isenhour, J. Chromatogr. Sci., 11, 201 (1973). (L9) S. R. Lowry, H. B. Woodruff, G. L. Ritter, and T. L. Isenhour, Anal. Chem., 47, 1126 (1975). (L10) R. W. Liddell, III, and P. C. Jurs, Appl. Spectrosc., 27, 371 (1973). (K2) D. W. Kiomse and A. W.
Westerberg, Anal. Chem., 43, 1035 (1971). (K3) P. C. Kelly and G. Horlick, Anal. Chem., 46, 2130 (1974). (K4) Kaminishi, Katsuji, Nawata. Shigenori, Jpn. J. Appl. Phys., 13, 1640 (1974). (K5) J. J. Kankore, Anal. Chem., 42, 1322 (1970). (K6) A. E. W. Knight and B. K. Selinger, Spectrochim. Acta, PartA., 27, 1223 (1971). (K7) J. H. Kindsvater, P. H. Weiner, and T. J. Klingen, Anal. Chem., 46, 982 (1974). (K8) D. I. Keefer, lnd. Eng. Chem., ProcessDes. Dev., 12, 92 (1973). (K9) P. G. King and S. N. Deming, Anal. Chem., 46, 1476 (1974). (K10) R . D. Krause and J. A. Lott, Clin. Chem. (Winston-Salem, N.C.), 20, 775 (1974). ( K l l ) P. C. Kelly and G. Horlick. Anal. Chem., 45, 518 (1973). (K12) B. R . Kowalski and C. F. Bender, J. Am. Chem. SOC.,94, 5632 (1972). (K13) D. A. Kuo and P. C. Jurs, J. Am. Water Works Assoc., 65, 623 (1973). (K14) F. K. Kawahara, J. F. Santner, and E. C. Julian, Anal. Chem., 46, 266 (1974). (K15) B. R . Kowalski, T. F. Schatzki, and F. H. Striss, Anal. Chem., 44, 2176 (1972). (K16) H. T. Kaili, R . C. T. Lee, G. W. A. Milne, M. Shapiro. and A. M. Guarino, Science, 180, 417 (1973).

ANALYTICAL CHEMISTRY, VOL. 48, NO. 5, APRIL 1976


Gas Chromatography

Stuart P. Cram*
Varian Instrument Division, Walnut Creek, Calif. 94598

Richard S. Juvet, Jr.
Department of Chemistry, Arizona State University, Tempe, Ariz. 85281

This review surveys developments in the field of gas chromatography since publication of the last review in this series (788) and covers the years 1974-75. Earlier articles of significance appearing in foreign journals and in the patent literature that were not available at the time of our last review are also included. Gas chromatography continues to be one of the most active research areas in analytical chemistry. In the 1975 Directory of Members of the American Chemical Society Division of Analytical Chemistry (648), those listing their research specialty as "gas chromatography" were second in number only to those listing the specialty "general analytical". The February 1974 issue of the Journal of Chromatographic Science gave a comprehensive listing of over 500 manufacturers of chromatographic instrumentation, accessories, supplies, and services (785). McNair and Chandler (1078) also published a comprehensive survey of GC instrument capabilities and accessories in this biennium. Owing to the vast literature in this field and the diversity of national and international journals, considerable selection was necessary in preparing this review. An extensive effort was made to bring the literature coverage up to date by eliminating the large backlog of papers not covered in the 1974 review. Thus, an extensive bibliography is included which represents most technique-centered publications through November 1975, particularly in the English-language primary literature.

BOOKS AND REVIEWS

Books on gas chromatography published since the preparation of the last review in this series include: "Separation Methods in Chemical Analysis" (1098), a text by Miller designed as an undergraduate chemistry textbook in separation and analysis; an English translation of Guiochon and Pommier's monograph "Gas Chromatography in Inorganics and Organometallics" (644), which covers the analysis of inorganic gases, metals and metal halides, hydrides, organometallic compounds, metal chelates, and isotopes; "Bonded Stationary Phases in Chromatography" (622), edited by Grushka, consisting of a collection of ten papers on the subject presented at a recent national meeting of the American Chemical Society; "Gas Chromatographic Detectors" (372), by David, covering in some depth both commercially available detectors and those which have not yet achieved commercial importance; "Gas Analysis Instrumentation" (1622), by Verdin, which discusses the principal methods used for gas analysis and gives details of various GC applications; "The Packed Column in Gas Chromatography" (1498), by Supina, a practical guide to the selection and preparation of columns; "The Practice of Gas Chromatography" (1340), by Rowland, giving a nontheoretical description of techniques and applications; "Advances in Gas Chromatography" (858), papers published by the Academy of Sciences, USSR, from a seminar on the theory and practice of GC; "Identification of Organic Compounds with the Aid of Gas Chromatography" (352), by Crippen, describing the use of GC together with other methods and techniques for qualitative analysis; "Spectroscopic Methods of Identification of Macroquantities of Organic Materials" (100), by Ayling; "New Developments in Gas Chromatography" (1288), by Purnell, which reviewed forensic applications; "Processes in Chromatographic Columns" (1380), which presented fundamental contributions and experiences in advanced concepts of GC; "Lipid Analysis" (308), by Christie, with 612 references on the separation, identification, and derivatization of lipids, giving considerable emphasis to GC and spectrophotometric methods; "Industrial Gas Chromatographic Trace Analysis" (651), by Hachenberg, covering the applications, potentialities, and difficulties of GC trace analysis in industrial samples; and "Chromatography of Antibiotics" (1657), by Wagman and Weinstein, which presents detailed data on paper chromatography, TLC, electrophoresis, countercurrent distribution, and GC for over 1200 antibiotics and their derivatives.
Other books of interest to gas chromatographers published in the last two years include Volume 10
