1. Introduction

Modern design trends use the power and precision of the digital world of components to process analog signals. However, the link between the digital/processing world and the analog/real world is based on analog-to-digital and digital-to-analog converter ICs, which generally are grouped as the data converters. Until about 1988, engineers had to stockpile their most innovative A/D converter (ADC) designs, because available manufacturing processes simply could not implement those designs on monolithic chips economically. Prior to 1988, apart from the introduction of successive approximation, integrating, and flash ADCs, the electronics industry saw no major changes in monolithic ADCs. Since then, manufacturing processes have caught up with the technology, and techniques such as subranging flash, self-calibration, delta-sigma, and many other special techniques have been implemented on monolithic chips.

High-speed ADCs are used in a wide variety of real-time digital signal processing (DSP) applications, replacing systems that used analog techniques alone. The major reasons for using DSP are that the cost of the processors has gone down, their speed and computational power have increased, and they are reprogrammable, allowing for system performance upgrades without hardware changes. DSP offers practical solutions that cannot be achieved easily in the analog domain; V.32 and V.34 modems are examples. This section provides an overview of design concepts and application guidelines for systems using modern analog/digital and digital/analog converters implemented on monolithic chips.

2 Sampled Data Systems

To specify the ADC portion of the system intelligently, one must first understand the fundamental concepts of sampling and quantization and their effects on the signal. Let us consider the traditional problem of sampling and quantizing a baseband signal whose bandwidth lies between DC and an upper frequency of interest, fa. This often is referred to as Nyquist or sub-Nyquist sampling. The topic of super-Nyquist sampling (sometimes called undersampling), where the signal of interest falls outside the Nyquist bandwidth (DC to fs/2), is treated later. FIG. 1 shows the key elements of a baseband sampled data system.

2.1 Discrete Time Sampling of Analog Signals

FIG. 2 shows the concept of discrete time and amplitude sampling of an analog signal. The continuous analog data must be sampled at discrete intervals, ts, which must be carefully chosen to ensure an accurate representation of the original analog signal. It's clear that the more samples taken (faster sampling rates), the more accurate the digital representation; if fewer samples are taken (slower sampling rates), a point is reached where critical information about the signal actually is lost. To discuss the problem of losing information in the sampling process, it's necessary to recall Shannon's information theorem and Nyquist's criteria.

Shannon's information theorem:
• An analog signal with a bandwidth of fa must be sampled at a rate of fs > 2fa to avoid loss of information.
• The signal bandwidth may extend from DC to fa (baseband sampling) or from f1 to f2, where fa = f2 − f1 (undersampling, or super-Nyquist sampling).

Nyquist criteria:
• If fs < 2fa, then a phenomenon called aliasing will occur.
• Aliasing is used to advantage in undersampling applications.
========
FIG. 1 Key elements of a baseband sampled data system: the signal (DC to fa) passes through amplification (gain, level shifting, conditioning, transmission) and a low-pass antialiasing filter (to remove unwanted signals and reduce noise) to an N-bit ADC (sampling and quantization), whose output goes to the signal processing (DSP) system.
========
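To make the aliasing behavior discussed next concrete, here is a minimal sketch (not from the original text; the function name and the example frequencies are assumed) that folds an input tone into the first Nyquist zone to show where it appears after sampling:

```python
# Minimal sketch: fold an input frequency into the first Nyquist zone
# (DC to fs/2) to see where it appears after sampling. The function name
# and example values are assumptions for illustration only.

def alias_frequency(f_in: float, fs: float) -> float:
    """Return the apparent (aliased) frequency of a tone f_in sampled at fs."""
    f = f_in % fs                  # images repeat around every multiple of fs
    return f if f <= fs / 2 else fs - f

if __name__ == "__main__":
    fs = 100e3                     # 100 kS/s sampling rate (example value)
    for f_in in (10e3, 45e3, 55e3, 95e3, 105e3):
        print(f"{f_in/1e3:6.1f} kHz tone appears at {alias_frequency(f_in, fs)/1e3:5.1f} kHz")
```

For instance, with fs = 100 kS/s a 55 kHz tone appears at 45 kHz and a 95 kHz tone appears at 5 kHz, which is exactly the behavior illustrated in FIG. 3 and FIG. 4.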
2.1.1 Implications of Aliasing

To understand the implications of aliasing in both the time and frequency domains, first consider the case of a time domain representation of a sampled sine wave signal, shown in FIG. 3. In FIG. 3(a) and 3(b), it's clear that an adequate number of samples have been taken to preserve the information about the sine wave. FIG. 3(c) represents the ambiguous limiting condition where fs = 2fa. If the relationship between the sampling points and the sine wave is such that the sine wave is being sampled at precisely the zero crossings (rather than at the peaks, as shown in the illustration), then all information regarding the sine wave would be lost. FIG. 3(d) represents the situation where fs < 2fa, and the information obtained from the samples indicates a sine wave having a frequency lower than fs/2. This is a case where the out-of-band signal is aliased into the Nyquist bandwidth between DC and fs/2. As the sampling rate is further decreased and the analog input frequency fa approaches the sampling frequency fs, the aliased signal approaches DC in the frequency spectrum.

Now consider the corresponding frequency domain representation of each case. From each case we make the important observation that, regardless of where the analog signal being sampled happens to lie in the frequency spectrum, the effects of sampling will cause either the actual signal or an aliased component to fall within the Nyquist bandwidth between DC and fs/2. Therefore, any signals that fall outside the bandwidth of interest, whether they be spurious tones or random noise, must be adequately filtered before sampling. If unfiltered, the sampling process will alias them back within the Nyquist bandwidth, where they will corrupt the wanted signals.

2.1.2 High-Speed Sampling

Now let us discuss the case of high-speed sampling, analyzing it in the frequency domain. First, consider a single-frequency sine wave of frequency fa sampled at a frequency fs by an ideal impulse sampler (see FIG. 4(a)). Also assume that fs > 2fa, as shown. The frequency domain output of the sampler shows aliases or images of the original signal around every multiple of fs; that is, at frequencies equal to |±Kfs ± fa|, K = 1, 2, 3, …. The Nyquist bandwidth, by definition, is the frequency spectrum from DC to fs/2. The frequency spectrum is divided into an infinite number of Nyquist zones, each having a width equal to 0.5fs, as shown.

Now consider a signal outside the first Nyquist zone, as shown in FIG. 4(b). Notice that, even though the signal is outside the first Nyquist zone, its image (or alias), fs − fa, falls inside. Returning to FIG. 4(a), it's clear that, if an unwanted signal appears at any of the image frequencies of fa, it also will occur at fa, thereby producing a spurious frequency component in the first Nyquist zone. This is similar to the analog mixing process and implies that some filtering ahead of the sampler (or ADC) is required to remove frequency components that are outside the Nyquist bandwidth but whose aliased components fall inside it. The filter performance will depend on how close the out-of-band signal is to fs/2 and the amount of attenuation required.

2.1.3 Antialiasing Filters

Baseband sampling implies that the signal to be sampled lies in the first Nyquist zone.
It's important to note that, with no input filtering at the input of the ideal sampler, any frequency component (either signal or noise) that falls outside the Nyquist bandwidth, in any Nyquist zone, will be aliased back into the first Nyquist zone. For this reason, an antialiasing filter is used in almost all sampling ADC applications to remove these unwanted signals.

Properly specifying the antialiasing filter is important. The first step is to know the characteristics of the signal being sampled. Assume that the highest frequency of interest is fa. The antialiasing filter passes signals from DC to fa while attenuating signals above fa. Assume that the corner frequency of the filter is chosen to be equal to fa. The effect of the finite transition from minimum to maximum attenuation on system dynamic range (DR) is illustrated in FIG. 5. Assume that the input signal has full-scale components well above the maximum frequency of interest, fa. The diagram shows how full-scale frequency components above fs − fa are aliased back into the bandwidth DC to fa. These aliased components are indistinguishable from actual signals and therefore limit the dynamic range to the value shown on the diagram as DR. The antialiasing filter transition band therefore is determined by the corner frequency fa, the stop-band frequency (fs − fa), and the stop-band attenuation DR. The required system dynamic range is chosen based on the requirement for signal fidelity.

Filters have to become more complex as the transition band becomes sharper, all other things being equal. For instance, a Butterworth filter gives 6 dB attenuation per octave for each filter pole. Achieving 60 dB attenuation in a transition region between 1 and 2 MHz (1 octave) requires a minimum of ten poles. This is not a trivial filter, and it is definitely a design challenge. Therefore, other filter types generally are better suited to high-speed applications where the requirement is for a sharp transition band and in-band flatness coupled with linear phase response. Elliptic filters meet these criteria and are a popular choice.
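As a quick sanity check on the pole-count arithmetic, here is a small sketch (illustrative only; the function name is an assumption) that applies the rule of thumb quoted above, roughly 6 dB of attenuation per octave per pole, to estimate the Butterworth order needed for a given transition band:

```python
import math

# Rough estimate using the rule of thumb quoted above: each Butterworth pole
# contributes about 6 dB of attenuation per octave beyond the corner frequency.
# The function name and example values are assumptions for illustration.

def butterworth_poles_needed(atten_db: float, f_corner: float, f_stop: float) -> int:
    octaves = math.log2(f_stop / f_corner)        # width of the transition band
    return math.ceil(atten_db / (6.0 * octaves))  # poles at ~6 dB/octave each

if __name__ == "__main__":
    # 60 dB of attenuation between 1 MHz and 2 MHz (one octave), as in the text
    print(butterworth_poles_needed(60, 1e6, 2e6))  # -> 10 poles
```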
From this discussion, we can see how the sharpness of the antialiasing transition band can be traded off against the ADC sampling frequency. Choosing a higher sampling rate (oversampling) reduces the requirement on transition band sharpness (hence, the filter complexity) at the expense of using a faster ADC and processing data at a faster rate. This is illustrated in FIG. 6, which shows the effects of increasing the sampling frequency while maintaining the same analog corner frequency, fa, and the same dynamic range, DR, requirement. Based on this discussion, one could start the design process by selecting a sampling rate of two to four times fa. Filter specifications could then be determined from the required dynamic range based on cost and performance. If such a filter is not realizable, a higher sampling rate with a faster ADC will be required.

The antialiasing filter requirements can be relaxed somewhat if it's certain that there never will be a full-scale signal at the stop-band frequency, fs − fa. In many applications, it's improbable that full-scale signals will occur at this frequency. If the maximum signal at the frequency fs − fa will never exceed X dB below full scale, the filter stop-band attenuation requirement is reduced by that amount. The new requirement for stop-band attenuation at fs − fa based on this knowledge of the signal now is only (DR − X) dB. When making this type of assumption, be careful to treat any noise signals that may occur above the maximum signal frequency, fa, as unwanted signals that also alias back into the signal bandwidth.

Properly specifying the antialiasing filter requires a knowledge of the signal's spectral characteristics as well as the system's dynamic range requirements. Consider the signal in FIG. 6(c), which has a maximum full-scale frequency content of fa = 35 kHz sampled at a rate of fs = 100 kS/s. Assume that the signal has the spectrum shown in FIG. 6(c) and is attenuated by 30 dB at 65 kHz (fs − fa). Observe that the system dynamic range is limited to 30 dB at 35 kHz because of the aliased components. If additional dynamic range is required, the antialiasing filter must provide more attenuation at 65 kHz. If a dynamic range of 74 dB (12 bits) at 35 kHz is desired, then the antialiasing filter attenuation must go from 0 dB at 35 kHz to 44 dB at 65 kHz. This is an attenuation of 44 dB in approximately one octave; therefore, a seven-pole filter is required. (Each filter pole provides approximately 6 dB attenuation per octave.) One must also consider that broadband noise may be present with the signal, which can alias into the bandwidth of interest. This is especially true with wideband opamps that provide low distortion levels.

2.2 ADC Resolution and Dynamic Range Requirements

Having discussed the sampling rate and filtering, we next discuss the effects of dividing the signal amplitude into a finite number of discrete quantization levels. Table 1 shows relative bit sizes for various resolution ADCs, for a full-scale input range chosen as approximately 2 V, which is popular for higher-speed ADCs. The bit size is determined by dividing the full-scale range (2.048 V) by 2^N. The selection process for determining the ADC resolution should begin by determining the ratio between the largest (full-scale) signal and the smallest signal you wish the ADC to detect. Convert this ratio to dB and divide by 6. This is your minimum ADC resolution requirement for DC signals.
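A minimal numeric sketch of this selection step follows (the function names and the 0.5 mV example signal level are assumptions, not from the text); it also shows the corresponding LSB size for the 2.048 V full-scale range used in Table 1:

```python
import math

# Sketch of the resolution selection procedure described above. Estimate the
# minimum DC resolution from the largest/smallest signal ratio (~6 dB per bit),
# then the corresponding LSB size for a 2.048 V full-scale range (Table 1).
# Function names and the example signal levels are assumptions.

def min_bits(full_scale_v: float, smallest_signal_v: float) -> int:
    ratio_db = 20 * math.log10(full_scale_v / smallest_signal_v)
    return math.ceil(ratio_db / 6)        # divide the dB ratio by 6, as in the text

def lsb_size(full_scale_v: float, n_bits: int) -> float:
    return full_scale_v / 2 ** n_bits     # bit size = full-scale range / 2^N

if __name__ == "__main__":
    n = min_bits(2.048, 0.5e-3)           # detect 0.5 mV signals on a 2.048 V range
    print(n, lsb_size(2.048, n))          # -> 13 bits, LSB = 0.25 mV
```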
You actually will need more resolution to account for extra signal headroom, since ADCs act as hard limiters at both ends of their range. Remember that this computation is for DC or low-frequency signals and that the ADC performance will degrade as the input signal slew rate increases. The final ADC resolution actually will be dictated by dynamic performance at high frequencies. This may lead to the selection of an ADC with more resolution at DC than is required.

Table 1 also indicates the theoretical rms quantization noise produced by a perfect N-bit ADC. In this calculation, the assumption is that the quantization error is uncorrelated with the ADC input. With this assumption, the quantization noise appears as random noise spread uniformly over the Nyquist bandwidth, DC to fs/2, and it has an rms value equal to q/√12. Other cases may be different, and some practical explanation is given in Analog Devices (1995).

2.3 Effective Number of Bits of a Digitizer

Table 1 shows the theoretical full-scale SNR calculated for the perfect N-bit ADC, based on the formula

SNR = 6.02N + 1.76 (dB)   (3.2)

Various error sources in the ADCs cause the measured SNR to be less than the theoretical value shown in equation (3.2). These errors are due to integral and differential nonlinearities, missing codes, and internal ADC noise sources (some of which are discussed later). In addition, the errors are a function of the input slew rate and therefore increase as the input frequency gets higher. In calculating the rms value of the noise, it's customary to include the harmonics of the fundamental signal. This sometimes is referred to as the signal-to-noise-and-distortion, S/(N + D) or SINAD, but usually simply as SNR.

TABLE 1 Bit Sizes, Quantization Noise, and Signal-to-Noise Ratio (SNR) for 2.048 V Full-Scale Converters

This leads to the definition of another important ADC dynamic specification, the effective number of bits (ENOB). The effective bits are calculated by first measuring the SNR of an ADC with a full-scale sine wave input signal. The measured SNR (SNRactual, or SINAD) is substituted into the equation for SNR, and the equation is solved for N as shown next:

ENOB = (SINAD − 1.76 dB) / 6.02   (3.3)

As a typical example, the performance of the AD676 from Analog Devices (a 16-bit ADC) is shown in FIG. 7. For this device, the SNR value of 88 dB corresponds to approximately 14.3 effective bits (for 0 dB input), while it drops to 6.4 ENOB at 1 MHz. The methods for calculating ENOB, SNR, and other parameters are described in Analog Devices (1992, Section 7) and Tektronix (1986).

In testing ADCs, the SNR usually is calculated using DSP techniques while applying a pure sine wave signal to the input of the ADC. A typical test system is shown in FIG. 8(a). The fast Fourier transform (FFT) processes a finite number of time samples and converts them into a frequency spectrum such as the one shown in FIG. 8(b) for an AD676-type 16-bit, 100 kSPS sampling ADC. The frequency spectrum then is used to calculate the SNR as well as the harmonics of the fundamental input signal. The rms value of the signal is computed first. Then the rms value of all other frequency components over the Nyquist bandwidth (which includes not only noise but also distortion products) is computed. The ratio of these two quantities, expressed in decibels, is the SNR. Various error sources in the ADC cause the measured SNR to be less than the theoretical value, 6.02N + 1.76 dB.
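The FFT-based measurement described above can be sketched in a few lines. The example below (all names and values are assumptions) applies the procedure to a simulated ideal 12-bit quantizer rather than real hardware, using coherent sampling (an integer, odd number of input cycles per record) so that no window function is needed; the result lands close to the theoretical 74 dB and 12 effective bits:

```python
import numpy as np

# Sketch of the FFT-based SNR/ENOB test applied to a simulated ideal 12-bit
# quantizer. Coherent sampling: 479 cycles in 4096 samples (no common factor),
# so the fundamental falls in a single FFT bin and no window is required.
# All names and values are assumptions for illustration.

n_bits, n_samples, cycles = 12, 4096, 479
q = 2.048 / 2 ** n_bits                        # LSB for a 2.048 V full-scale range
t = np.arange(n_samples)
x = 1.024 * np.sin(2 * np.pi * cycles * t / n_samples)   # full-scale sine wave
xq = np.round(x / q) * q                       # ideal quantization

spectrum = np.abs(np.fft.rfft(xq)) ** 2        # power spectrum, DC to fs/2
spectrum[0] = 0.0                              # ignore the DC bin
sig_bin = np.argmax(spectrum)                  # fundamental
signal_power = spectrum[sig_bin]
noise_power = spectrum.sum() - signal_power    # everything else: noise + distortion

sinad = 10 * np.log10(signal_power / noise_power)
enob = (sinad - 1.76) / 6.02                   # equation (3.3)
print(f"SINAD = {sinad:.1f} dB, ENOB = {enob:.2f} bits")   # ~74 dB, ~12 bits
```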
2.3.1 Spurious Components and Harmonics

The peak spurious or peak harmonic component is the largest spectral component excluding the input signal and DC. This value is expressed in decibels relative to the rms value of a full-scale input signal, as was shown in FIG. 8(a). The peak spurious specification also occasionally is referred to as the spurious free dynamic range (SFDR). SFDR usually is measured over a wide range of input frequencies and at various amplitudes.

It's important to note that the harmonic distortion or SFDR of an ADC is not limited by its theoretical SNR value. The SFDR of a 12-bit ADC may exceed 85 dB, while the theoretical SNR is only 74 dB. On the other hand, the SINAD of the ADC may be limited by poor harmonic distortion performance, since the harmonic components are included with the quantization noise when computing the rms noise level.

The SFDR of an ADC is defined as the ratio of the rms signal amplitude to the rms value of the peak spurious spectral content (measured over the entire first Nyquist zone, DC to fs/2). The SFDR generally is plotted as a function of signal amplitude and may be expressed relative to the signal amplitude (dBc) or the ADC full scale (dBFS). For a signal near full scale, the peak spectral spur generally is determined by one of the first few harmonics of the fundamental input signal. However, as the signal falls several decibels below full scale, other spurs generally occur that are not direct harmonics of the input signal, due to the differential nonlinearity of the ADC transfer function. Therefore, the SFDR considers all sources of distortion, regardless of their origin.

The total harmonic distortion (THD) is the ratio of the rms sum of the harmonic components to the rms value of the input signal, expressed as a percentage or in decibels. For input signals or harmonics above the Nyquist frequency, the aliased components are used in making the calculation. The THD usually is measured at several input signal frequencies and amplitudes.

FIG. 9 shows the SFDR performance of a 12-bit, 41 MSPS wideband ADC designed for communication applications (the AD9042 from Analog Devices). Note that a minimum of 80 dBc SFDR is obtained over the entire first Nyquist zone (DC to 20 MHz). The plot also shows SFDR expressed as dBFS. The SFDR generally is much greater than the ADC's theoretical N-bit SNR (6.02N + 1.76 dB). For example, the AD9042 is a 12-bit ADC with an SFDR of 80 dBc and a typical SNR of 65 dBc (the theoretical SNR is 74 dB). This is due to the fundamental distinction between noise and distortion measurements.
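From these definitions, THD and SFDR follow directly from the measured spectral components. A minimal sketch (function names and the example levels are assumptions) is shown below, with levels given in dBc, that is, decibels relative to the full-scale fundamental:

```python
import math

# Minimal sketch: compute THD and SFDR from measured spectral component levels,
# per the definitions above. Function names and example levels are assumptions.

def thd_db(harmonics_dbc):
    """rms sum of the harmonic components relative to the fundamental, in dB."""
    power = sum(10 ** (h / 10) for h in harmonics_dbc)
    return 10 * math.log10(power)

def sfdr_dbc(spurs_dbc):
    """Ratio of the fundamental to the largest spur (harmonic or not), in dBc."""
    return -max(spurs_dbc)

if __name__ == "__main__":
    harmonics = [-82.0, -88.0, -95.0]             # 2nd, 3rd, 4th harmonics (example)
    spurs = harmonics + [-86.0]                   # plus one non-harmonic spur
    print(f"THD  = {thd_db(harmonics):.1f} dBc")  # about -80.9 dBc
    print(f"SFDR = {sfdr_dbc(spurs):.1f} dBc")    # 82.0 dBc
```

Note that the SFDR is set by the single largest spur, while the THD sums only the harmonics; this is why an ADC's SFDR and THD can differ from its SNR, as the AD9042 example above illustrates.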
3 A/D Converter Errors

First, let us look at how bits are assigned to the corresponding analog values in a typical analog-to-digital converter. The method of assigning bits to the corresponding analog value of the sampled point often is referred to as quantization (see FIG. 10(a)). As the analog voltage increases, it crosses transitions, or "decision levels," which cause the ADC to change state. In an ideal ADC, the transitions are at half-unit levels, with Δ representing the distance between the decision levels. Δ often is referred to as the bit size or quantization size. The fact that Δ always has a finite size leads to uncertainty, since any analog value within a given quantization band is represented by the same code. This quantization uncertainty is expressed as plus or minus half the least significant bit (LSB), as shown in FIG. 10(b). As this plot shows, the output of an ADC may be thought of as the analog signal plus some quantizing noise. The more bits the ADC has, the less significant this noise becomes.

Certain parameters limit the rate at which an ADC can acquire a sample of the input waveform: the acquisition turn-on delay, acquisition time, sample or track time, and hold time. FIG. 10(c) shows a graphic representation of the acquisition cycle of a typical ADC. The turn-on time (the time the device takes to get ready to acquire a sample) is the first event. The acquisition time is next. This is the time the device takes to get to the point at which the output tracks the input sample, after the sample command or clock pulse. The aperture time delay is the time that elapses between the hold command and the point at which the sampling switch is completely open. The device then completes the hold cycle, and the next acquisition is taken. This process indicates that real-world acquisition is not an ideal process at all, and the value sampled and converted could have several sources of error. Most of these errors increase with the sampling rate.

The approximation or "rounding" effect in A/D converters is called quantization, and the difference between the original input and the digitized output, the quantization error, is denoted here by eq. For the characteristic of FIG. 10(a), eq varies as shown in FIG. 10(b), with the maximum occurring before each code transition. This error decreases as the resolution increases, and its effect can be viewed as additive noise (quantization noise) appearing at the output. Thus, even an "ideal" m-bit ADC introduces nonzero noise in the converted signal simply due to quantization.

We can formulate the impact of quantization noise on the performance as follows. For simplicity, consider a slightly different input/output characteristic, shown in FIG. 11(a), where code transitions occur at odd (rather than even) multiples of Δ/2. A time domain waveform therefore experiences both negative and positive quantization errors, as illustrated in FIG. 11(b). To calculate the power of the resulting noise, we assume that eq is (i) a random variable uniformly distributed between −Δ/2 and +Δ/2 and (ii) independent of the analog input. While these assumptions are not strictly valid in the general case, they usually provide a reasonable approximation for resolutions above four bits. Razavi (1995) provides more details and the derivations of equations (2) and (3).

Full specification of the performance of ADCs requires a large number of parameters, some of which are defined differently by different manufacturers.
Some important parameters frequently used in component data sheets and the like are described here. FIG. 12 can be used to illustrate parameters such as differential nonlinearity (DNL), integral nonlinearity (INL), and offset and gain errors, all static parameters of the ADC process.

3.1 Static Parameters

3.1.1 Differential Nonlinearity

Differential nonlinearity is the maximum deviation in the difference between two consecutive code transition points on the input axis from the ideal value of 1 LSB. The DNL is a measure of the deviation of code widths from the ideal value of 1 LSB.

3.1.2 Integral Nonlinearity
The INL is the maximum deviation of the input/output characteristic from a straight line passed through its end points (line AB in FIG. 12). The overall difference plot is called the INL profile. The INL is the deviation of code centers from the converter's ideal transfer curve. The line used as the reference may be drawn through the end points or may be a best-fit line calculated from the data.

The DNL and INL degrade as the input frequency approaches the Nyquist rate. The DNL shows up as an increase in quantization noise, which tends to elevate the converter's overall noise floor. The theoretical quantization noise for an ideal converter over the Nyquist bandwidth is

rms quantization noise = q/√12

where q is the weight of the LSB. At the same time, because the INL appears as a bend in the converter's transfer curve, it generates spurious frequencies (spurs) not present in the original signal information. The testing of ADC linearity parameters is discussed in Shill (1995).

3.1.3 Offset Error and Gain Error

The offset is the vertical intercept of the straight line through the end points. The gain error is the deviation of the slope of line AB from its ideal value (usually unity).

3.1.4 Testing of ADCs

A known periodic input is converted by an ADC under test at sampling times that are asynchronous relative to the input signal. The relative number of occurrences of the distinct digital output codes is termed the code density. For an ideal ADC, the code density is independent of the conversion rate and input frequency. These data are viewed in the form of a normalized histogram showing the frequency of occurrence of each code from zero to full scale. The code density data are used to compute all bit transition levels. Linearity, gain, and offset errors are readily calculated from a knowledge of the transition levels. This provides a complete characterization of the ADC in the amplitude domain. The effect of some of these static errors in the frequency domain for high-speed ADCs is discussed in Louzon (1995). Doernberg, Lee, and Hodges (1984) provide ADC characterization methods based on the code density test and spectral analysis using the FFT.

4 Effects of Sample and Hold Circuits

The sample and hold amplifier (SHA) is a critical part of many data acquisition systems. It captures an analog signal and holds it during some operation (most commonly during analog-to-digital conversion). The circuitry involved is demanding, and unexpected properties of commonplace components such as capacitors and printed circuit boards may degrade SHA performance. When a sample and hold amplifier is in the sample mode, the output follows the input with only a small voltage offset. In some SHAs, the output during the sample mode does not follow the input accurately, and the output is accurate only during the hold period.

Today, high-density IC processes allow the manufacture of ADCs containing an integral SHA. Wherever possible, ADCs with an integral SHA (often known as sampling ADCs) should be used in preference to separate ADCs and SHAs. The advantage of such a sampling ADC, apart from the obvious ones of smaller size, lower cost, and fewer external components, is that the overall performance is specified. The designer need not spend time ensuring that no specification, interface, or timing issues arise in combining a discrete ADC and a discrete SHA.

4.1 Basic SHA Operation

Regardless of the circuit details or type of SHA in question, all such devices have four major components.
The input amplifier, energy storage device (capacitor), output buffer, and switching circuits are common to all SHAs, as shown in the typical configuration of FIG. 13(a). The energy storage device, the heart of the SHA, almost always is a capacitor. The input amplifier buffers the input by presenting a high impedance to the signal source and providing current gain to charge the hold capacitor. In the track mode, the voltage on the hold capacitor follows (or tracks) the input signal (with some delay and bandwidth limiting). FIG. 10 depicts this process. In the hold mode, the switch is opened and the capacitor retains the voltage present before it was disconnected from the input buffer. The output buffer offers a high impedance to the hold capacitor to keep the held voltage from discharging prematurely. The switching circuit and its driver form the mechanism by which the SHA is alternately switched between track and hold.

Four groups of specifications describe basic SHA operation: track mode, track-to-hold transition, hold mode, and hold-to-track transition. These specifications are summarized in Table 2, and some of the SHA error sources are shown in FIG. 13(b). Because of both DC and AC performance implications for each of the four modes, properly specifying an SHA and understanding its operation in a system are complex matters.

===
TABLE 2 Sample and Hold Specifications

          Track Mode       Track-to-Hold Transition   Hold Mode               Hold-to-Sample Transition
Static    Offset           Pedestal                   Droop
          Gain error       Pedestal nonlinearity      Dielectric absorption
          Nonlinearity
Dynamic   Settling time    Aperture delay time        Feedthrough             Acquisition time
          Bandwidth        Aperture jitter            Distortion              Switching transient
          Slew rate        Switching transient        Noise
          Distortion       Settling time
          Noise
===

4.1.1 Track Mode Specifications
Since an SHA in the sample (or track) mode is simply an amplifier, both the static and dynamic specifications in this mode are similar to those of any amplifier. The principal track mode specifications are offset, gain, nonlinearity, bandwidth, slew rate, settling time, distortion, and noise; however, distortion and noise in the track mode often are of less interest than in the hold mode. Fundamental amplifier specifications are discussed in Section 2.

4.1.2 Track-to-Hold Mode Specifications

When the SHA switches from track to hold, generally a small amount of charge is dumped on the hold capacitor because of nonideal switches. This results in a hold-mode DC offset voltage called pedestal error. If the SHA is driving an ADC, the pedestal error appears as a DC offset voltage that may be removed by performing a system calibration. If the pedestal error is a function of the input signal level, the resulting nonlinearity contributes to hold mode distortion. Pedestal errors may be reduced by increasing the value of the hold capacitor, with a corresponding increase in acquisition time and a reduction in bandwidth and slew rate.

Switching from track to hold produces a transient, and the time required for the SHA output to settle to within a specified error band is called the hold mode settling time. Occasionally, the peak amplitude of the switching transient also is specified (see FIG. 14).

4.1.3 Aperture and Aperture Time

Perhaps the most misunderstood and misused SHA specifications are those that include the word aperture. The most essential dynamic property of an SHA is its ability to quickly disconnect the hold capacitor from the input buffer amplifier (see FIG. 13(a)). The short (but nonzero) interval required for this action is called the aperture time (ta). The actual value of the voltage held at the end of this interval is a function of both the input signal and the errors introduced by the switching operation itself. FIG. 15 shows what happens when the hold command is applied with an input signal of arbitrary slope (for clarity, the sample-to-hold pedestal and switching transients are ignored). The value finally held is a delayed version of the input signal, averaged over the aperture time of the switch, as shown in FIG. 15. The first-order model assumes that the final value of voltage on the hold capacitor is approximately equal to the average value of the signal applied to the switch over the interval during which the switch changes from a low to a high impedance (ta).

The model shows that the finite time required for the switch to open (ta) is equivalent to introducing a small delay in the sampling clock driving the SHA. This delay is constant and may be either positive or negative. Called the effective aperture delay time, or simply aperture delay (te), it is defined as the time difference between the analog propagation delay of the front-end buffer (tda) and the switch digital delay (tdd) plus half the aperture time (ta/2); that is, te = (tdd + ta/2) − tda. The effective aperture delay time usually is positive but may be negative if the sum of half the aperture time (ta/2) and the switch digital delay (tdd) is less than the propagation delay through the input buffer (tda). The aperture delay specification thus establishes when the input signal actually is sampled with respect to the sampling clock edge. The aperture delay time can be measured by applying a bipolar sine wave signal to the SHA and adjusting the synchronous sampling clock delay such that the output of the SHA is 0 during the hold time.
The relative delay between the input sampling clock edge and the actual zero crossing of the input sine wave is the aperture delay time (see FIG. 16). Aperture delay produces no errors but acts as a fixed delay in either the sampling clock input or the analog input (depending on its sign). If there is sample-to-sample variation in aperture delay (aperture jitter), then a corresponding voltage error is produced, as shown in FIG. 17. This sample-to-sample variation in the instant the switch opens, called aperture uncertainty or aperture jitter, usually is measured in rms picoseconds. The amplitude of the associated output error is related to the rate of change of the analog input. For any given value of aperture jitter, the aperture jitter error increases as the input dv/dt increases.

Measuring aperture jitter error in an SHA requires a jitter-free sampling clock and analog input signal source, because jitter (or phase noise) on either signal cannot be distinguished from the SHA aperture jitter itself; the effects are the same. In fact, the largest source of timing jitter errors in a system most often is external to the SHA (or the ADC if it's a sampling one), caused by noisy or unstable clocks, improper signal routing, and lack of attention to good grounding and decoupling techniques. SHA aperture jitter generally is less than 50 ps rms, and less than 5 ps rms in high-speed devices. FIG. 18 shows the effects of total sampling clock jitter on the signal-to-noise ratio of a sampled data system. The total rms jitter will be composed of a number of components, the actual SHA aperture jitter often being the least of them.

4.1.4 Hold Mode Droop

During the hold mode, there are errors due to imperfections in the hold capacitor, switch, and output amplifier. If a leakage current flows in or out of the hold capacitor, it will slowly charge or discharge and its voltage will change, an effect known as droop in the SHA output, expressed in V/µs. Droop can be caused by leakage across a dirty PCB if an external capacitor is used or by a leaky capacitor, but most commonly it is due to leakage current in semiconductor switches and the bias current of the output buffer amplifier. An acceptable value of droop is found when the output of an SHA does not change by more than 1/2 LSB during the conversion time of the ADC it is driving (see FIG. 19). Droop can be reduced by increasing the value of the hold capacitor, but this will increase acquisition time and reduce the bandwidth in the track mode. Even quite small leakage currents can cause troublesome droop when SHAs use small hold capacitors. Leakage currents in PCBs may be minimized by the intelligent use of guard rings. Details of planning a guard ring are discussed in Analog Devices (1995, Section 8).

4.1.5 Dielectric Absorption

Hold capacitors for SHAs must have low leakage, but another characteristic is equally important: low dielectric absorption. If a capacitor is charged, discharged, and then left on an open circuit, it will recover some of its charge. This phenomenon, known as dielectric absorption, can seriously degrade the performance of an SHA, since it causes the remains of a previous sample to contaminate a new one and may introduce random errors of tens or even hundreds of millivolts (see FIG. 20). After discharge, the CD and RS in the equivalent circuit of FIG. 20 account for this residual charge.
Different capacitor materials have differing amounts of dielectric absorption: electrolytic capacitors are dreadful (and their leakage is high) and some high-K ceramic types are bad, while mica, polystyrene, and polypropylene generally are good. Unfortunately, dielectric absorption varies from batch to batch, and even occasional batches of polystyrene and polypropylene capacitors may be affected. Measuring hold mode distortion is discussed in Analog Devices (1995, Section 8).

4.1.6 Hold-to-Track Transition Specification

When the SHA switches from hold to track, it must reacquire the input signal (which may have made a full-scale transition during the hold mode). Acquisition time is the interval of time required for the SHA to reacquire the signal to the desired accuracy when switching from hold to track. The interval starts at the 50% point of the sampling clock edge and ends when the SHA output voltage falls within the specified error band (usually 0.1% and 0.01% times are given). Some SHAs also specify acquisition time with respect to the voltage on the hold capacitor, neglecting the delay and settling time of the output buffer. The hold capacitor acquisition time specification is applicable in high-speed applications, where the maximum possible time must be allocated for the hold mode. The output buffer settling time, of course, must be significantly smaller than the hold time.

5 SHA Architectures

There are numerous SHA architectures; we will examine a few of the most popular ones here, and more detailed discussions can be found in the references.

5.1 Open-Loop Architecture

The simplest SHA architecture is shown in FIG. 21. The input signal is buffered by an amplifier and applied to the switch. The input buffer may be either open loop or closed loop and may or may not provide gain. The switch can be CMOS, FET, or bipolar (using diodes or transistors), controlled by the switch driver circuit. The signal on the hold capacitor is buffered by an output amplifier. This architecture sometimes is referred to as open loop because the switch is not inside a feedback loop. Note that the entire signal voltage is applied to the switch; therefore, it must have excellent common mode characteristics.
5.2 Open-Loop Diode Bridge SHA

Semiconductor diodes exhibit small on-resistance, large off-resistance, and high-speed switching, and thus have potential for the switching function in sampling circuits. A simplified diagram of a typical diode switch is shown in FIG. 22(a). Here, four diodes form a bridge that provides a low-impedance path from Vin to Vout when current sources I1 and I2 are on and (in the ideal case) isolates Vout from Vin when I1 and I2 are off. Nominally, I1 = I2 = I. An implementation is shown in FIG. 22(b).

5.3 Closed-Loop Architecture

The SHA circuit shown in FIG. 23 represents a classical closed-loop design and is used in many CMOS sampling ADCs. Since the switches always operate at virtual ground, there is no common mode signal across them. Switch S2 is required to maintain a constant input impedance and prevent the input signal from coupling to the output during the hold time. In the track mode, the transfer characteristic of the SHA is determined by the opamp, and the switches introduce no DC errors because they are within the feedback loop. The effects of charge injection can be minimized by using the differential switching techniques shown in FIG. 24.
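To put rough numbers on two of the SHA error terms discussed in Section 4, the sketch below (all values and function names are assumptions, not from the text) evaluates the jitter-limited SNR of a full-scale sine input using the standard relationship SNR = −20·log10(2π·f·tj) and checks a droop rate against the 1/2-LSB criterion of Section 4.1.4:

```python
import math

# Illustrative numbers only (all values assumed): two SHA error-budget checks.
# 1) Jitter-limited SNR for a full-scale sine input: SNR = -20*log10(2*pi*f*tj).
# 2) Hold-mode droop (dV = I/C * t) checked against the 1/2-LSB criterion.

def jitter_limited_snr_db(f_in_hz: float, jitter_s_rms: float) -> float:
    return -20 * math.log10(2 * math.pi * f_in_hz * jitter_s_rms)

def droop_ok(i_leak_a: float, c_hold_f: float, t_conv_s: float,
             full_scale_v: float, n_bits: int) -> bool:
    droop_v = (i_leak_a / c_hold_f) * t_conv_s        # voltage change during conversion
    return droop_v <= 0.5 * full_scale_v / 2 ** n_bits

if __name__ == "__main__":
    # 10 MHz input with 5 ps rms total jitter -> roughly a 70 dB SNR ceiling
    print(f"{jitter_limited_snr_db(10e6, 5e-12):.1f} dB")
    # 1 nA leakage, 100 pF hold capacitor, 1 us conversion, 12 bits, 2.048 V FS
    print(droop_ok(1e-9, 100e-12, 1e-6, 2.048, 12))   # -> True (10 uV << 1/2 LSB)
```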