Computerized signal processing


by Gerald Dulaney
Analytical Chemistry, Vol. 47, No. 1, January 1975
https://pubs.acs.org/doi/pdfplus/10.1021/ac60351a002


The design of computer systems for processing experimental data from chemical experiments is an area of growing significance. The wide variety of instrumentation and computer equipment available can cause uncertainties in the choice of optimum equipment for laboratory automation. This paper defines a common framework for signal processing and considers the requirements of computer interfacing to and processing of data from chemical instrumentation. Strong emphasis is placed upon cost effectiveness.

The term signal processing is widely used to cover aspects of data handling, especially in connection with minicomputers. Here, signal processing shall be taken to be the reduction of measurable physical phenomena to a specific coordinate set. By extension it is assumed that the experimental data, whether analog or digital, produced by the experimental apparatus are capable of being accepted by the automation system under consideration and converted into a set of digital information by it.

Generally, any experiment can be considered as a superset of two-parameter measurements. Although a given experiment may embody multiple-parameter measurements, the resulting data can be converted to sets of x-y data, that is, two-parameter sets. These resulting x-y data sets can be presented and manipulated in a number of ways to obtain the desired multiparametric presentations and analyses. Therefore, one must consider the methodologies of converting the data output from conventional laboratory instrumentation into a format acceptable to minicomputers or microcomputers-the process of interfacing-and the subsequent processing of these data into specific coordinate sets.

For example, consider the possible automation of a mass spectrometer. Typical data would be ion-current intensity, indirect mass measurements (time), and the acceleration potential. This three-parameter coordinate set would ultimately be reduced to the two-parameter set of mass vs. intensity.

Other examples could be drawn from optical spectrophotometry, where the analog data (y-axis) may be transmittance, absorbance, or optical density, and the x-axis measurements might be time, wavelength, frequency, etc. As before, the final data set will usually be defined in terms of some intensity parameter plotted against some x-axis parameter.


Thus, signal processing applications will best be defined in terms of readily automated two-parameter measurements. This will be taken as a design criterion throughout.

Common Framework for Computerized Signal Processing

Usually, automation systems are thought of as being composed of a data acquisition portion and a data reduction portion. However, this is too limited a division, and the framework presented here will permit definition of five serial phases of operation, incorporating all common aspects from analog signal acquisition and conversion through final manipulation and reduction. The five aspects usually required for the widest variety of instrument automation are: data acquisition and analog-to-digital conversion; digital preprocessing; procedural reduction; manipulation and transformation; and postoperative amalgamation and interpretation.

Data Acquisition and Analog-to-Digital Conversion. Data acquisition is the extraction of information from measurable physical phenomena and the conversion of that information into computer-compatible data. This aspect is what is classically considered to be the experiment, with the addition of the conversion of the output data into a form acceptable to the computer. The data source (instrument) may present the data in analog form (such as voltage), although it is becoming more popular in modern laboratory equipment to output the data in digital form. This is one instance of the impact of computers on instrumentation.

Aspects of Data Acquisition. The rate at which data can be acquired is an important consideration. The acceptable data rate for an automated system depends both upon the type of instrumentation and upon the characteristics of the computer system to be used. As a first approximation, data rates may be divided into four categories:


Slow data rates-fewer than 100 samples/sec. Such rates are typical of a wide variety of laboratory instrumentation.

Medium data rates-from 100 to 1000 samples/sec. Such rates are associated with instrumentation such as that used for nuclear magnetic resonance (NMR), electron paramagnetic resonance (EPR), and the like.

High data rates-from 1000 to 50,000 samples/sec. Such data rates are encountered in mass spectrometry and Fourier transform NMR.

Very high data rates-beyond 50,000 samples/sec. These are not relevant to the discussion.

The number of instruments that a computer system can handle depends upon the sample rate requirements. In low and medium data rate situations, a computer can handle several instruments at the same time. However, for high data rate applications, a computer system is usually dedicated to the servicing of the instrument producing the high data rates during the period of its operation. That is, the entire operation of the computer is restricted solely to that one instrument producing the high data rate samples. Without quite sophisticated (and usually costly) hardware, it is awkward and often impossible to require a computer system to operate simultaneously with both low and high data rate instrumentation.

Data rates can be specified from the frequency content of the output signal of the instrumentation. In many cases, this can be inferred from the experiment or the phenomenon being measured. For example, if the output of the instrumentation takes the form of peaks, the sample rate can be estimated as a function of the peak width. In general, the sampling frequency should be taken to be at least twice the frequency defined by the inherent characteristics of the analog output signal to prevent loss of information.
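As a concrete illustration of these guidelines, the sketch below estimates a workable sample rate from the bandwidth of the analog signal (the twice-the-highest-frequency rule) and from the width of the narrowest peak of interest. The function names, the default of 20 samples per peak (a rule of thumb used later in the text), and the example numbers are illustrative assumptions rather than values from the article.

```python
def nyquist_rate(max_signal_freq_hz):
    """Minimum sample rate: at least twice the highest frequency in the analog signal."""
    return 2.0 * max_signal_freq_hz

def rate_from_peak_width(narrowest_peak_width_s, samples_per_peak=20):
    """Estimate the sample rate needed to place a chosen number of samples
    across the narrowest peak of interest (20 is a common rule of thumb
    for peak-area work by numerical integration)."""
    return samples_per_peak / narrowest_peak_width_s

print(nyquist_rate(25.0))            # 50 samples/sec for a 25-Hz-bandwidth signal (a slow rate)
print(rate_from_peak_width(0.05))    # 400 samples/sec for 50-ms peaks (a medium rate)
```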

Before the optimum sample rate can be adequately determined, it is desirable to understand exactly how the data being obtained will ultimately be processed. In the case of determining peak area or signal area, at least 20 samples over each significant peak or phenomenon would be required to make accurate calculations by means of numerical integration techniques.

Noise is another parameter that must be considered in the determination of the best data acquisition approach. There are a variety of noise types, several of which are always present. These may be categorized as follows:

Line frequency noise (or mains frequency noise)-this is primarily the fundamental and harmonic frequencies of the alternating current used to power the instrument. It is almost always present in any analog output.

Transmission noise (or pickup noise)-this is usually created by radio-frequency interference (RFI), electromagnetic interference (EMI), etc. When present, it is usually picked up in the cabling or circuitry between the output of the instrument and the computer or conversion device.

System noise-it is reasonable to assume that the instrument itself generates an output noise, as a function of the phenomenon being measured and/or as a function of the instrumentation electronics. An important subset of system noise is interfacing or conversion noise; this can arise from the interfacing and/or conversion techniques used. Equipment component manufacturers have tried to ensure that interface device quality will be sufficiently high to prevent the inclusion of noise from the interface. However, prospective users should always consider the possibility of this noise when planning an application setup.

There are several reasonable techniques for eliminating the effects of noise within a laboratory automation system. One technique that almost invariably produces improvements is the analog signal filter.


The choice and design of such filters are much discussed in the literature. Such a filter will pass only signals with a frequency less than a specified amount (the cutoff frequency); the sample rate is typically chosen to be 2-10 times this cutoff frequency. In practice, it is often convenient to have the actual filter accessible and modifiable by the user depending upon the application.

Additional advantage may be gained by using software filters-digital or numeric filtering techniques employed within the computer's program itself-in addition to analog filters. These software filters, which can assume many forms, have been discussed in the literature. Such filtering techniques will be considered in more detail subsequently.

Sometimes, a third technique can be employed to improve the signal-to-noise ratio (S/N). This technique can be used in situations where a series of equivalent (repeated) signals is obtained over a period of time. Here, the data set is acquired and stored temporarily. Another equivalent set is acquired, added to the first, and the sum stored. This add-and-store process is continued for some specified period of time. In the final set of data the random or noise element will have been lessened by the averaging of the ensemble of data sets. This technique is known as signal averaging or computed average transients (CAT) and has been widely employed in many applications. It can be performed by hard-wired nonprogrammable devices or can be implemented with computers.

Optimally, noise is minimized by a combination of the considerations discussed above. Choice of components with the least inherent noise is important. Good technique in analog signal handling and cabling is essential. Line frequency noise and RFI/EMI pickup can be decreased by the use of twisted-pair and shielded cable. This is further discussed in the section Data Transmission.
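As one small example of a software filter of the kind mentioned above (the article leaves specific forms to the literature), the sketch below applies a first-order recursive low-pass, often called exponential smoothing, to a digitized signal. The smoothing constant and the sample values are hypothetical.

```python
def software_lowpass(samples, alpha=0.1):
    """First-order digital low-pass filter: each output blends the new sample
    with the previous output, attenuating high-frequency noise components."""
    filtered = []
    y = samples[0]
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y
        filtered.append(y)
    return filtered

noisy_readings = [1.02, 0.95, 1.10, 0.88, 1.05, 0.97, 1.01]   # hypothetical digitized values
print(software_lowpass(noisy_readings))
```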


Figure 1. Ground loop phenomenon. Leads running from the output of the experimental instrumentation to the interfacing input of the computer can be subject to ground loops. Consider a shielded two-line cable: the actual signal lines have characteristic impedances R1 and R2, and the effective impedance of the shield is R3. The instrument and the computer system are grounded through G1 at the instrument and G2 at the computer input, and R3 is much, much lower than R1 or R2. If there is a potential difference between G1 and G2, current will flow through R3, which can cause a dc offset on the results input to the computer. It is also possible for the loop between G1 and G2 to pick up alternating-current signals, impressing noise onto the input to the computer.

Dynamic range may be defined as the ratio of the largest desired observable signal (in voltage units) to the minimum desired observable signal (in voltage units). The dynamic range is extremely important when choosing a computer system for laboratory automation. The relevant parameters include signal resolution, amplifier (or instrument) signal linearity, and signal-to-noise performance.

Consider a simple curve such as might be generated by a thermocouple. The highest temperature at which the thermocouple will be used will generate an output (say A mV); the lowest temperature will similarly result in a lower output (B mV). The ratio of these two readings, A/B, defines the dynamic range of the signal source. The accuracy-of-reading and the resolution may be computed from knowledge of the voltage range. For a desired accuracy in the readings, the resolution may be defined as the voltage range divided by the desired accuracy-of-reading. In the example above, suppose the voltage range (A - B, or effectively A since B is comparatively small) is 20 mV and the desired accuracy-of-reading is 1% (1 part in 100); then the resolution would be 20 mV/100, or 0.2 mV/unit.

Most chemical instrumentation today is capable of a dynamic range in excess of 1000. In certain cases, particularly chromatography, the dynamic range can exceed 10^6; but most applications do not require a dynamic range that would exceed several thousand. In these cases, all that may be required is amplification to increase the instrument's output signal to a level compatible with the computer system to be used. Moderately inexpensive amplifiers can be used for this purpose.

In situations where extremely large dynamic ranges are encountered, amplifiers with selectable gain are often used. The mode of gain selection can either be automatic in the amplifying device (autoranging amplifiers) or programmable (programmable gain-ranging devices). The choice between the two types is primarily determined by costs and by the programming (software) for the computer system. Programmable gain-ranging amplifiers are used if it is known that the system's software can predict the desired gain range necessary for the dynamically changing system. Autoranging amplifiers can determine, within their own hardware elements, the appropriate gain range for the voltage level of the input signal. Such autoranging devices typically tend to be fairly slow in operation and thus cannot be used with high data rate instrumentation.

Conversion of Analog Data to Digital Form. When automating a laboratory, it is always necessary to convert the analog output data from the instrument to a digital form compatible with the computer system. More precisely, if the data source produces analog information, this must be converted into parallel digital information. This is done by an analog-to-digital converter (ADC). These devices are commonly available from essentially all computer vendors as well as a large number of analog equipment manufacturers. There are many specifications for the ADC that must be considered, including the dynamic range of the converter, its linearity, and its inherent noise. In addition, the speed of conversion (conversion rate) must be compatible with the desired data sampling rate.

ADC's themselves are rarely capable of processing low-level signals (less than 1 V) over their entire range. Thus, amplifiers must be used in conjunction with the ADC. Since a laboratory usually has several analog signals to be converted, the ADC can be preceded in the circuit by a device which allows the selection of the specific analog signal which is to be converted at any particular point in the analysis. The selections done by such a device-an analog multiplexer-are usually under the control of the program running in the computer.
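A minimal sketch of program-controlled gain selection as just described: given the gain steps an amplifier offers and the ADC's full-scale input, the program picks the largest gain that keeps the expected signal on scale. The gain steps and voltage values are assumptions for illustration, not a specific product's characteristics.

```python
def choose_gain(expected_peak_volts, adc_full_scale_volts=10.0,
                available_gains=(1, 2, 5, 10, 20, 50, 100, 200, 500)):
    """Return the largest gain that keeps the amplified signal within the ADC range."""
    usable = [g for g in available_gains
              if expected_peak_volts * g <= adc_full_scale_volts]
    return max(usable) if usable else min(available_gains)

print(choose_gain(0.020))   # a 20 mV signal -> gain of 500 brings it to 10 V full scale
print(choose_gain(1.5))     # a 1.5 V signal -> gain of 5 brings it to 7.5 V
```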



The conversion rate is governed by the hardware rate (or throughput) of the ADC plus the multiplexer. There are two modes of operation of any computer-program-supported ADC. The simpler is program-controlled conversion: the computer program requests data from the ADC, manipulates them, stores them, etc. The other is the employment of hard-wired front ends and/or direct memory access/analog-to-digital converter (DMA/ADC) interfaces. Program-controlled conversion rates are typically 50 kHz or less, whereas hard-wired front ends can exceed 1-MHz acquisition rates. DMA/ADC's typically operate at rates below 200 kHz.

The entire interface between the instrumentation and the computer is considered in terms of analog filters, multiple-input channels (requiring a multiplexer), and the ADC. These must be specified with respect to accuracy, resolution, and signal-to-noise ratio. Consider the specific example where the analog data consist of a series of peaks of definable width, amplitude, area, etc. The requisite dynamic range of the front end is determined from the maximum and minimum peaks of interest as previously described. The required accuracy for the minimum peak will then be determined primarily by the signal-to-noise ratio and the front-end resolution. If the required accuracy for the smallest peaks were to be 1%, then the front-end resolution must be at least 100 ADC units, and the signal-to-noise ratio 50:1 or more. (Here, the minimum signal is divided by the peak-to-peak noise.) Thus, the dynamic range of the total front end for a specific resolution can be defined as the dynamic range of the input signals times the number of ADC units needed for the required accuracy of the smallest peak (Equation 1):

DR (front end) = DR (input) x ADC resolution units    (1)
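The sketch below applies Equation 1 with the figures used in the text (1 V and 1 mV peaks, 1% accuracy on the smallest peak) and converts the resulting front-end dynamic range into an ADC word length; the helper function is illustrative only.

```python
import math

def front_end_requirements(max_peak, min_peak, adc_resolution_units=100):
    """Equation 1: DR(front end) = DR(input) x ADC resolution units;
    the bit count is the smallest ADC word that spans that many units."""
    dr_input = max_peak / min_peak
    dr_front_end = dr_input * adc_resolution_units
    bits = math.ceil(math.log2(dr_front_end))
    return dr_front_end, bits

print(front_end_requirements(1.0, 0.001))   # (100000.0, 17): a 17-bit (or larger) converter
```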

If, in the above case of 1% resolution, the maximum and minimum peaks are 1 V and 1 mV, respectively, then the dynamic range of the front end is seen to be 100,000 ADC units. This corresponds to an ADC with a capability of a 17- or 18-bit result. Although such converters are available, this would be a very expensive item, which suggests that other approaches may be more cost-effective.

Autoranging techniques can be used more economically than the direct approach just discussed. A gain-ranging amplifier, either autoranging or under computer control, is used in front of the ADC. The dynamic range required of the ADC can be reduced since low-level signals can be amplified sufficiently to be acceptable to this ADC. The ADC can then be chosen to have the resolution and accuracy required. For example, a measurement accuracy of 0.1% (resolution of 1 part in 1000) in the ADC range can be achieved with an ADC of 10 bits.

Data Transmission. An important decision is whether it is more cost-effective to convert the analog data to digital form at the instrument site, sending this digital information to the computer, or to transmit the analog signal directly to the computer location for conversion there. Converting analog data at the instrument site is becoming more prevalent. In this case, the facilities for data transmission become the limiting factor, as will be seen below. Since a large number of older facilities are still extant, transmission of analog data directly to the computer is still the most widely used technique.

Such analog-transmission techniques are usually restricted to distances under 1/4 mile (400 m). Suitable transmission techniques involve the use of moderate-cost analog front ends incorporating preamplifiers, analog multiplexers, and the ADC. Analog transmission is usually more expensive than digital, owing to the use of expensive shielded cable for increased noise immunity. For distances up to 1/4 mile (400 m), it is possible to have the signal path be a direct connection from the instrumentation to the computer system's analog front end. For greater distances, a booster amplifier at the instrumentation site is strongly recommended to increase the signal strength sufficiently above the noise so that a good signal-to-noise ratio can be realized at the analog front end.

Digital data transmission techniques require conversion of the analog signals to a digital signal or signals at or near the instrumentation site. Often, this conversion is in the form of binary-coded decimal (BCD). Once in digital form, the information can be transmitted in either serial or parallel fashion.

Serial transmission offers the highest noise immunity of any technique but requires the use of data communications hardware at both the instrument (data source) and the computer site. Parallel transmission offers the highest speed in digital transmission, but many parallel cables may be required to satisfy the instrument resolution. The multiplicity of cables generally limits parallel transmission to distances under 100 ft (30 m), but this technique is very useful and inexpensive in smaller laboratory environments.

Ground loops should always be avoided. This phenomenon arises whenever the signal ground is different from the computer ground, as shown for a common instrumentation setup in Figure 1. Here, there are multiple pathways through which current owing to the signal can reach an earth ground that is different from signal ground. Avoidance of ground loops may mean modification of the signal source (the instrument) or slightly more sophisticated techniques of floating either the signal source or the signal sink (the computer).

Digital Preprocessing. Digital preprocessing is the initial, usually real-time, processing of data by the computer prior to the procedural processing routines. Such preprocessing often involves signal averaging, digital filtering, and real-time decision making.

Signal Averaging. Ensemble averaging is probably the oldest signal averaging technique used by minicomputer systems. Several sets of data (each called an array) from a given experiment are collected. Based upon the assumption that the arrays are repetitive, they are averaged together point by point. The amplitude and phase of the noise relative to the data in the arrays should be random, so that the noise should average to zero, allowing the extraction of the signal. The exact improvement made in the signal-to-noise ratio by this technique is given in Equation 2:

S/N(n) = √n x S/N(1)    (2)

where n is the number of arrays averaged, S/N(1) is the signal-to-noise ratio of one array, and S/N(n) is the signal-to-noise ratio of n averaged arrays.

Figure 2. Digital data before and after preprocessing by boxcar averaging. The first boxcar average replaces all the data in the first boxcar; the second boxcar average replaces all the data in the second boxcar.


Figure 3. Digital data before and after preprocessing by moving-window average. The first window average replaces the center datum of the first window, the second window average replaces the center datum of the second window, and so on; note the loss of the initial and final data points in the preprocessed array. (x-axis: time.)

A second form of averaging is known as boxcar averaging. The rationale for this technique is the assumption that the analog signal being sampled varies slowly with respect to the sampling rate and that an average of a small number of samples will be a better measure of the signal than a single sample, since the S/N will be improved as discussed above (Figure 2). In practice, between 2 and 50 or so samples may be averaged together to generate a final datum. This lowers the effective sampling rate but removes noise, especially the high-frequency components imposed upon the signal.

Boxcar averaging can be used in conjunction with ensemble averaging. Each array is boxcar averaged, and then these reduced arrays are ensemble averaged.
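A compact sketch of the two averaging schemes just described, using NumPy on a synthetic peak (the data, noise level, and window width are illustrative). Ensemble averaging of n repeated arrays improves S/N by roughly √n (Equation 2); boxcar averaging then trades effective sampling rate for further noise reduction.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 500)
true_signal = np.exp(-((t - 0.5) ** 2) / 0.002)          # a single synthetic peak

# Ensemble averaging: n repeated noisy arrays averaged point by point.
n = 16
arrays = [true_signal + rng.normal(0.0, 0.2, t.size) for _ in range(n)]
ensemble = np.mean(arrays, axis=0)                        # S/N improves by about sqrt(16) = 4

# Boxcar averaging: each group of w consecutive samples is replaced by its mean.
w = 5
boxcar = ensemble[: t.size - t.size % w].reshape(-1, w).mean(axis=1)

print(ensemble.shape, boxcar.shape)                       # (500,) (100,)
```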

This dual process reduces the number of arrays required to obtain the same improvement in signal-to-noise ratio as ensemble averaging alone.

The moving-window average is a dynamic extension of the boxcar averaging technique. As in the boxcar technique, a subset of the array is averaged to form a new datum. However, this new datum does not replace the entire subset but only the subset's central datum. Subsequent subsets are then formed by dropping the first datum of the previous subset and adding the sample datum following the last datum of the previous subset, i.e., the first datum which was not included in the previous subset (Figure 3). In this manner, the window is said to be moved up one datum point. The average of this window is used to form a new datum to replace the new central point. The process is repeated. The averages generated form a new array which is then processed. It should be noted that only the original data are used throughout in calculating these averages.


The new array is almost as large as the original, lacking only a number of data points equal to the width of the window; a half-window width at the start and at the end of the array is lost. This technique has the noise reduction advantages of boxcar averaging without the concomitant significant reduction in effective sampling rate (and, hence, resolution) of that technique.

Digital Filtering. Digital filtering is another technique of data preprocessing performed by the computer; it has been widely discussed in detail. Most of the commonly used forms of digital filtering may be expressed in the general form

ŷ(i) = [A(1)y(i-m) + A(2)y(i-m+1) + . . . + A(N)y(i+m)]/N + . . .

where the y's are the data samples; N, the number of data points (called the span of the filter, with N = 2m + 1); the ŷ's, the filtered or preprocessed points; and A, B, C, . . . are the filter coefficients, the values of which determine the operation of the filter.

The moving-window average may also be regarded as a linear digital filter (or a linear coefficient digital filter). The moving-window average filter is obtained when all the A coefficients are unity and the B, C, . . . coefficients are zero. The least-squares derivation for a polynomial digital filter will generate nonequal coefficients with values weighted toward the center point of the span over which the filter is applied. The more strongly the coefficients are weighted toward the center value, the less effect the terminal points will have. Filtering of high-frequency components with different cutoff frequencies can be obtained by varying the span. Other choices of coefficients can be used to create a high-pass digital filter to remove low-frequency information while retaining the high-frequency information.

Digital filtering is now widely used in most analytical instrumentation applications. In comparison to the digital techniques, analog filtering tends to be difficult, expensive, or both. By proper choice of coefficients and span, it is possible to create filter conditions which could not easily be obtained, if at all, in an analog manner. The ease with which computers allow the modification of filter parameters leads to great improvements in data enhancement techniques. Digital filtering is almost always required for low-level signal (less than 1 mV) applications, chromatography, and optical spectroscopy.
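The sketch below implements a span-N linear digital filter of the kind discussed above: with all coefficients equal it reduces to the moving-window average, while a center-weighted coefficient set gives least-squares (polynomial) smoothing. The 5-point quadratic coefficients shown are a standard published set used here for illustration; they are not taken from this article.

```python
import numpy as np

def linear_digital_filter(y, coefficients, norm=None):
    """Replace each datum by a weighted sum of the raw data across the span.
    A half-span of points is lost at each end, as with the moving-window average."""
    c = np.asarray(coefficients, dtype=float)
    norm = c.sum() if norm is None else float(norm)
    return np.convolve(y, c[::-1], mode="valid") / norm

y = np.array([0.0, 0.1, 0.0, 1.0, 3.0, 1.0, 0.0, 0.1, 0.0])

# Moving-window average: all coefficients unity over a span of N = 5.
print(linear_digital_filter(y, [1, 1, 1, 1, 1]))

# Center-weighted least-squares (quadratic, 5-point) smoothing coefficients.
print(linear_digital_filter(y, [-3, 12, 17, 12, -3], norm=35))
```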


Real-Time Decision Making. Real-time decision making involves a variety of tests upon the preprocessed data, with action to be taken in real time based upon the results of those tests. The exact form of the decision-making processes depends upon time constraints. The data must be accepted, a decision made, and control functions implemented within the time frame of the external device(s). Such decisions might involve enabling or disabling certain instrumentation, modification of measurement ranges, or specifically timing the interval between special events. The speed of the computer in performing real-time decision making gives the experimenter considerably increased control over the experimental setup.

A choice can be made among microcomputers (based upon microprocessors), minicomputers, and larger general-purpose computers based upon memory capacity and the relative importance of different functions. When instrumentation automation emphasizes decision making and control functions rather than computation, it may prove more cost-effective to use microcomputers. However, if computational algorithms and data storage require memory capacities in excess of 3K, then minicomputers or larger computers are needed.

Procedural Reduction. Procedural reduction is a step toward the final two-parameter data set prior to user interaction, manipulation, interpretation, and report generation. It is the use of a fixed algorithm to process the data obtained from the acquisition and digital preprocessing stages; classically, this would be referred to as data reduction. The algorithm is self-sustaining, self-correcting, and capable of handling the entire data input stream and formulating it for later manipulation and transformation. These algorithms or techniques are usually computational in orientation rather than having real-time constraints. In certain cases advantages may be gained by performing the procedural reduction in a time-dependent fashion, as when the data rate of one instrument is slow in comparison to others serviced by the computer or when several instruments with low data rates are used. A specific instance of this is the chromatograph, for which such a procedural reduction in the form of peak analysis is a reasonable approach. The nature of the reduction depends upon the data source, the final data required, and the form and format of the final data presentation. This stage includes much of the system's analytical software and utilizes many of the advantages of the minicomputer, although in simpler computational cases, microcomputers may be used to advantage.


Procedural reduction may involve retention of the entire data input set or processing of each datum on-line in real time. The former has the obvious advantage of retaining all information acquired during the experiment. The latter has the advantage of reducing the amount of storage needed if large amounts of data are involved and of being more economical in experiments where retention of each datum is unnecessary.

The retention of the entire data set is desirable in some instances, for example, for spectral or frequency analyses, interactive cathode ray tube (CRT) or visual display handling, information plotting, or comparative analyses among several data sets. In terms of computer programming, storage of the entire data set from an experiment is the simplest type of operation. As the data are preprocessed or received, they are buffered temporarily and then stored, either within the core memory of the computer or on a mass storage device. The storage format should be planned so that information is readily retrievable and accessible to all programs of the computer system. Often, data will be gathered in real time and then stored; data reduction may take place off-line later. In more powerful computer systems, foreground-background operation offers the advantage of more effective and multiple use of the laboratory computer. In the foreground, data are accepted and stored, whereas in the background the user manipulates and transforms data acquired earlier.

On-line processing of each datum as acquired is usually performed only for moderately slow data rate (less than 2 kHz) experiments. Examples of such processing are unit conversions and limit testing. Unit conversions usually involve a calibration table for the physical parameter, which is compared against the ADC output. In the case of peak analysis, peaks present might be extracted and then reduced to a table containing such information as peak amplitude and its time-dependent position. More complex forms of peak analysis might involve other factors: area, width, or centroid. If input rates are moderate (less than 500 Hz), such analysis can be performed in real time. However, very complex peak analysis, such as the deconvolution of fused peak envelopes, may require storage of the entire data set and off-line data reduction.
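A minimal sketch of the on-line peak reduction just described: a stream of x-y samples is reduced to a table of peak position, amplitude, and area (by trapezoidal numerical integration). The threshold logic and the data are hypothetical; a production algorithm would also handle baselines, shoulders, and noise.

```python
def peak_table(times, samples, threshold=0.1):
    """Reduce an x-y data set to (position, amplitude, area) entries,
    one for each contiguous region of samples above the threshold."""
    peaks, start = [], None
    for i, y in enumerate(samples):
        if y > threshold and start is None:
            start = i                                  # peak begins
        elif (y <= threshold or i == len(samples) - 1) and start is not None:
            apex = max(range(start, i), key=lambda j: samples[j])
            area = sum(0.5 * (samples[j] + samples[j + 1]) * (times[j + 1] - times[j])
                       for j in range(start, i - 1))   # trapezoidal area over the region
            peaks.append((times[apex], samples[apex], area))
            start = None
    return peaks

t = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
y = [0.0, 0.0, 0.3, 1.2, 0.4, 0.0, 0.3, 0.8, 0.0]
print(peak_table(t, y))   # two peaks: one near t = 1.5, a smaller one near t = 3.5
```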


One example of procedural reduction is deconvolution. Deconvolution extracts accurate information concerning all parameters for each of the component peaks in a fused peak envelope. The deconvolution algorithms depend greatly upon the exact peak form and structure. Most such techniques for use in real-time minicomputer environments make simplifying assumptions concerning the peak shapes and approach the solution by approximation. Nonlinear iterative least-squares techniques are a specific useful example of these.

Many experiments generate a two-parameter data set (x-y) where the raw x-parameter is not meaningful physically; it is necessary to transform these data into a new coordinate set. Frequency domain analysis is a large area of procedural reduction where this is the case. The y vs. time data are converted into an intensity vs. frequency set. Several newer forms of analytical instrumentation, such as Fourier transform NMR and Fourier transform infrared (IR), require the use of frequency domain analysis to interpret the experimental data. Although frequency domain analysis typically retains the entire data set for off-line reduction, it is performed as a real-time function for certain applications. In vibration testing it is important to know, in real time, the exact resonance frequencies as a function of some stimulus. By selecting limited bandwidths, real-time analysis of data from numerous (possibly a dozen or more) vibration sensors can be performed.

Input data can be sorted by histogram formation. In nuclear instrumentation pulse-height analysis (PHA), the input datum is a measure of energy. It is desired to know how many occurrences of that energy will appear in a given time; the data are represented as event-count vs. energy.

Correlation techniques are used when input analog data are extremely noisy and do not lend themselves to time-based signal averaging. The basic purpose of correlation is to determine an underlying repetitive signal, if any, in the analog data stream and to calculate its frequency(ies). In autocorrelation the data set (or array) is time-shifted by an amount τ, multiplied by its unshifted values, and a correlation coefficient calculated from the result. If this coefficient is plotted as a function of τ (an autocorrelogram), local maxima indicate the frequencies (1/τ) composing the signal. Knowing the frequencies, the components of the signal can be found. In cross correlation the data from one input channel are multiplied by the time-shifted data from another analog input channel, and correlation coefficients are calculated. As above, a correlogram can be constructed. If it is assumed that the same frequencies are present in both arrays, then the phase shift between the two channels as well as the frequencies can be determined.
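A sketch of autocorrelation as described above: the array is multiplied by time-shifted copies of itself, and the lag τ of the dominant maximum in the correlogram (taken past the first zero crossing, so that the trivially high correlation at very small shifts is ignored) gives the period, and hence the frequency 1/τ, of the underlying repetitive component. The synthetic 25 Hz signal, the sample rate, and the lag search range are assumptions.

```python
import numpy as np

fs = 1000.0                                       # assumed sample rate, Hz
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
x = np.sin(2 * np.pi * 25.0 * t) + rng.normal(0.0, 0.5, t.size)   # 25 Hz component plus noise
x -= x.mean()

lags = np.arange(1, 60)                           # assumed search range, in samples
correlogram = np.array([np.dot(x[:-k], x[k:]) / (x.size - k) for k in lags])

first_dip = int(np.argmax(correlogram < 0))       # first lag where the correlation goes negative
best = int(lags[first_dip + np.argmax(correlogram[first_dip:])])
print(best, fs / best)                            # a lag near 40 samples, i.e., about 25 Hz
```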


Manipulation and Transformation. This is the most general phase of laboratory system computing, and it requires the utmost flexibility to obtain the final presentation. In many cases, the manipulation depends primarily upon the x-axis data being either time based or in the frequency domain. Time-based analyses consisting of intensity transformations and time-base conversions comprise the largest number of applications. Intensity transformations include baseline corrections, peak intensity and area operations, deletion of unnecessary information, and comparative studies of several data sets. Time-base conversion is commonly done to such x-axis parameters as energy, wavelength, and frequency. Often, these conversions can be done directly, but they can also necessitate calibration tables and interpolation techniques. The use of an interactive computer system is extremely advantageous here since it allows quick, easy transformations and immediate presentation of the results.

Although frequency domain transformations were discussed under procedural reduction, this is a form of analysis amenable to flexible interactive manipulative techniques. The major frequency domain transformation is the Fourier transform. Commonly used fast Fourier transform (FFT) algorithms generate two arrays of data consisting of real (or absorption) mode and imaginary (or dispersion) mode coefficients. These coefficients are themselves often the desired end result of an experiment and its analysis. In addition, they are often manipulated further and transformed to obtain magnitude, power density, and correlation coefficients. From the Fourier coefficients it is possible to generate correlation coefficients equivalent to those described under procedural reduction. The use of the FFT thus allows derivation of information directly comparable to that obtained by real-time correlation.

While there have been a number of software (program) techniques for computing FFT data, hardware FFT computation techniques are now also available. Software techniques are governed by the transform program and the available computer; they require a smaller system expenditure than hardware techniques. They also are not as limited in the number of data points they can handle and so have better frequency resolution than the hardware FFT techniques. However, the hardware FFT techniques can typically process 1,024 data points in under 20 msec, whereas software transformations would average a second or more.
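A brief sketch of a software transform of the kind described, using NumPy's FFT: the complex coefficients supply the real (absorption-like) and imaginary (dispersion-like) arrays, from which magnitude and power-density arrays follow. The synthetic two-component signal and the sample rate are assumptions.

```python
import numpy as np

fs = 1024.0                                     # assumed sample rate, Hz
t = np.arange(0.0, 1.0, 1.0 / fs)
signal = np.cos(2 * np.pi * 60.0 * t) + 0.5 * np.cos(2 * np.pi * 150.0 * t)

coeffs = np.fft.rfft(signal)                    # complex Fourier coefficients up to fs/2
freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)

real_mode = coeffs.real                         # "absorption"-mode array
imag_mode = coeffs.imag                         # "dispersion"-mode array
magnitude = np.abs(coeffs)
power = magnitude ** 2 / signal.size            # power-density estimate

print(freqs[np.argsort(power)[-2:]])            # the two dominant components: 150 and 60 Hz
```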

In either technique it is important that the sampling frequency be at least twice the highest expected frequency in the analog signal to prevent the phenomenon of aliasing, the contribution to low-frequency components of the signal from frequencies higher than half the sampling frequency.

The final manipulative phase is the presentation of results, often involving coordinate axis calibration utilizing previously stored tables and interpolation techniques based upon finite difference theory. Information presentation is enhanced greatly by the use of interactive or visual computer techniques. The ability to display the information rapidly and modify it by use of a light pen or keyboard entry allows interactive procedures to be performed quickly and easily. The ability of the computer system to handle large amounts of data and to present them visually in a flexible manner aids considerably in the final interpretation of the data. Where printed or graphical output is required, the use of such output devices with general and flexible formats eliminates previously time-consuming manual writing, plotting, and drafting.

Postoperative Amalgamation and Interpretation. In this final phase of laboratory automation, the computer is no longer used primarily as an analytical tool but rather as a more general aid in the merging, sorting, storage, retrieval, and interpretation of data.

Data Merging. Data sets obtained from the previous stages of the experiment may be combined to form a superset of data. The elements of this superset need not necessarily be limited to those obtained from a particular instrument but can originate from a combination of data sets from a number of instruments serviced by the computer, from previous experiments performed with these instruments, or even from other experiments. In the last instance, these data would be entered into the computer memory by the experimenter directly rather than through instrumentation.

Library Techniques. The computer can be used to store, for various analytical and interpretational techniques, standards and calibrations in the form of tables. Such tables may include standard spectra for matching, identification, and quantification of results. Such spectra might be in the two-parameter form of intensity vs. energy, intensity vs. frequency, or intensity vs. time.
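A minimal sketch of library matching of the kind described: an unknown two-parameter spectrum is scored against stored standards with a normalized correlation, and the best match is reported. The tiny library and the spectra are hypothetical placeholders, assumed to be sampled on a common x-axis grid.

```python
import numpy as np

def match_score(unknown, standard):
    """Normalized dot product between two intensity arrays on a common x axis (1.0 = identical shape)."""
    u = unknown / np.linalg.norm(unknown)
    s = standard / np.linalg.norm(standard)
    return float(np.dot(u, s))

library = {                                      # hypothetical stored standard spectra
    "compound A": np.array([0.0, 1.0, 0.2, 0.0, 0.0]),
    "compound B": np.array([0.0, 0.0, 0.3, 1.0, 0.1]),
}
unknown = np.array([0.05, 0.90, 0.25, 0.05, 0.00])

best = max(library, key=lambda name: match_score(unknown, library[name]))
print(best, round(match_score(unknown, library[best]), 3))   # "compound A", score near 1
```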



File Storage/Retrieval. As a storage and retrieval device, the computer is unsurpassed. Data can be stored in a compact form and can be tagged in a number of ways so that only the desired data will be accessed and all the desired data will be accessed. It is therefore important that the proper format for the data and its labeling be chosen so that this primary function of a filing system can be obtained with a minimal amount of storage space and access time, to optimize the cost-effectiveness of this section of the automation system.

Conclusions

The medium- and large-scale general-purpose computer, the microcomputer, and more particularly the interactive minicomputer have brought about a new dimension in laboratory automation. By connecting the instrumentation to the computer through appropriate interfacing, analyses that frequently consumed hours to days of a researcher's time can now be completed in minutes. Other analyses, heretofore impractical, can be done routinely. However, to achieve these results, it is necessary to understand the processes involved in all areas of an automated laboratory system, from the instrumentation output to the finished report. In this discussion each major area-techniques, interfacing, and hardware and software considerations-has been covered. From this, it is hoped that a clearer perspective of the automated signal processing system for laboratories has been reached.

Gerald Dulaney is marketing supervisor for physical sciences within the Lab Data Products Group of Digital Equipment Corp. He earned a BA at Carthage College in chemistry and physics and undertook two years of graduate study at Purdue and three years at Virginia Polytechnic Institute and State University. His interests at that time were in rare-earth Mössbauer spectrometry. Mr. Dulaney joined Digital Equipment Corp. in 1969 as senior applications programmer and for the last three years has worked in marketing as a specialist in instrument automation and lab product development.