Switching-Mode Power Supply (SMPS) -- Feedback Loop Analysis and Stability (part 3)


Mathematics in the Log Plane

As we proceed toward our ultimate objective of control loop analysis and compensation network design, we will be multiplying transfer functions of cascaded blocks to get the overall transfer function. That is because the output of one block forms the input for the next block, and so on.

It turns out that the mathematics of gain and phase is actually easier to perform in the log plane, rather than in a linear plane. Some simple rules that will help us later are as follows:

++ When we combine transfer functions, decibels add up. So for example, if we take the product of two transfer functions A and B (cascaded stages), we get C = AB. This follows from the property log(AB) = log(A) + log(B). In words, the gain of A in decibels plus the gain of B in decibels, gives us the gain of C in decibels.

++ The overall phase shift is also the sum of the phase shifts produced by each of the cascaded stages. So phase angles also add up.

++ From the upper half of Fig. 6, we see that if we know the crossover frequency (and the slope of the line), we can find the gain at any frequency.

++ Suppose we now shift the line vertically (keeping the slope constant), as shown in the lower half of Fig. 6. Then, by the equation provided therein, we can calculate by what amount the crossover frequency shifts in the process. (A short numerical sketch of both of these rules follows the figure caption below.)

Fig. 6: Math in the Log Plane
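
As a quick numerical sketch of these two rules (the stage gains, slope, and crossover frequency below are arbitrary illustrative values, not taken from Fig. 6):

```python
# Rule 1: for cascaded stages, decibels (and phase angles) simply add.
gain_A_dB, phase_A_deg = 30.0, -90.0     # hypothetical stage A
gain_B_dB, phase_B_deg = -12.0, -45.0    # hypothetical stage B
gain_C_dB = gain_A_dB + gain_B_dB        # overall gain of the cascade in dB
phase_C_deg = phase_A_deg + phase_B_deg  # overall phase shift of the cascade
print(gain_C_dB, phase_C_deg)            # 18.0 dB, -135.0 degrees

# Rule 2: for a straight line of constant slope (in dB/decade), shifting the
# line up by delta_dB moves its crossover (0 dB) frequency by 10**(-delta_dB/slope).
slope = -20.0        # a "-1" slope, i.e. -20 dB/decade
f_cross = 10e3       # hypothetical original crossover frequency, Hz
delta_dB = 6.0       # line shifted up by 6 dB
f_cross_new = f_cross * 10 ** (-delta_dB / slope)
print(round(f_cross_new))  # ~19953 Hz -- the crossover frequency roughly doubles
```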

Transfer Function of the LC Filter

In a buck, there is a post-LC filter present. Therefore this filter stage can easily be treated as a cascaded stage following the switch, and the overall transfer function is then very easy to compute as per the rules mentioned in the previous section. In the boost and buck-boost, however, we don't have a post-LC filter - there is a switch/diode connected between the two reactive components that alters the dynamics. Nevertheless, it can be shown that even the boost and buck-boost can be manipulated into a 'canonical model' in which an effective post-LC filter appears at the output - thus making them as easy to treat as a buck.

The only difference is that the original inductance L (of the boost and buck-boost) gets replaced by an equivalent (or effective) inductance equal to L/(1-D)². The "C" remains the same in the canonical model.
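
As a quick numerical sketch (with arbitrary, illustrative values), the location of the effective double pole in the canonical model can be computed like this:

```python
import math

# Hypothetical boost converter values, for illustration only
L = 100e-6   # actual inductance, 100 uH
C = 470e-6   # output capacitance, 470 uF
D = 0.5      # duty cycle

# In the canonical model, L is replaced by L/(1-D)^2, while C stays the same.
L_eff = L / (1 - D) ** 2

# Resonant (break) frequency of the effective post-LC filter, ignoring resistances
f0_actual_LC = 1 / (2 * math.pi * math.sqrt(L * C))      # what a buck with this L, C would give
f0_canonical = 1 / (2 * math.pi * math.sqrt(L_eff * C))  # boost/buck-boost canonical model

print(round(f0_actual_LC), round(f0_canonical))  # ~734 Hz vs ~367 Hz
```

Note that because the effective inductance depends on D, the double-pole location of a boost or buck-boost moves with the operating duty cycle.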

Since the LC filter thus becomes representative of the output section of any typical switching topology, we need to understand it better, as we now do using Fig. 7:

++ For most purposes, we can assume that the break frequency of the gain plot does not depend on the load or on the associated parasitic resistive elements of the components. So the resonant frequency of the filter-plus-load combination can be taken to be simply 1/(2π√(LC)), that is, no resistance term is included.

++ The LC filter gain decreases at the rate of "-2" at high frequencies. The phase also decreases, providing a total phase shift of 180°. So we say we have a "double-pole" at the break frequency 1/(2π√(LC)).

++ Q is the 'quality factor' (as defined in the figure). In effect, it quantifies the amount of "peaking" in the response at the break frequency. Very simply put, if for example Q = 20, then the output voltage at the resonant frequency is 20 times the input voltage. On a log scale, this is written as 20 × log Q, as shown in the figure. If Q is very high, the filter is considered "under-damped." If Q is very small, the filter is "over-damped." And if Q = 0.707, we have 'critical damping.' In critical damping, the gain at the resonant frequency is 3 dB below its dc value, that is, the output is 3 dB below the input (similar to an RC filter). Note that -3 dB is a factor of 1/√2 = 0.707 - that is, roughly 30% lower. Similarly, +3 dB is √2 = 1.414 (i.e. roughly 40% higher). (A short numerical sketch of this peaking behavior follows the note at the end of this section.)

Fig. 7: The LC Filter Analyzed in the Frequency Domain

++ The effect of resistance on the break frequency is usually minor, and therefore ignored. But the effect of resistance on the Q (i.e. the peaking) is significant (though eventually, that too is usually ignored). However, we should keep in mind that the higher the associated series parasitic resistances of L and C, the lower the Q. On the other hand, at lower output powers, the resistor across the C (i.e. the load resistor) is high, and this actually increases the Q. Remember that a high parallel resistance is in effect a small series resistance, and vice versa.

++ We can use the asymptotic approximation for the LC gain plot as we did for the RC filter. However, the problem with trying to do the same with the phase of the LC filter is that the error can be very large, more so if the Q (defined in the figure) becomes very large - in that case we get a very abrupt phase shift of 180° close to the resonant frequency. This sudden phase shift can in fact become a real problem in a power supply, since it can induce "conditional stability" (discussed later). Therefore, a certain amount of damping helps from the standpoint of phase and possible conditional stability.

++ Unlike an RC filter, the output voltage in this case can be greater than the input voltage (at around the break frequency). But for that to happen, Q must be greater than 1.

++ Instead of using Q, engineers often prefer to talk in terms of the damping factor, defined as ζ = 1/(2Q).

++ So a high Q corresponds to a low ζ.

From the equations for Q and resonant frequency, we can conclude that if L is increased, Q tends to decrease, and if C is increased, Q increases.

Note: One of the possible pitfalls of putting too much output capacitance in a power supply is that we may be creating significant peaking (high Q) in its output filter's response. And we know that when that happens, the phase shift is also more abrupt, and that can induce conditional stability. So generally, if we increase C but simultaneously increase L in the same proportion, we can keep the Q (and the peaking) unchanged.
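
A minimal numerical sketch of the points above (component values are arbitrary), evaluating the standard second-order low-pass response for a few values of Q:

```python
import cmath, math

# Illustrative LC filter values (not from any particular design)
L, C = 100e-6, 100e-6
w0 = 1 / math.sqrt(L * C)       # resonant frequency in rad/s
f0 = w0 / (2 * math.pi)         # resonant frequency in Hz

def gain_phase(f, Q):
    """Gain (dB) and phase (deg) of H(s) = 1/(1 + s/(Q*w0) + (s/w0)^2) at s = j*2*pi*f."""
    s = 1j * 2 * math.pi * f
    H = 1 / (1 + s / (Q * w0) + (s / w0) ** 2)
    return 20 * math.log10(abs(H)), math.degrees(cmath.phase(H))

for Q in (0.707, 2, 20):
    g0, _ = gain_phase(f0, Q)            # right at the resonant frequency
    g10, p10 = gain_phase(10 * f0, Q)    # one decade above it
    # Peaking at f0 is 20*log(Q): about -3 dB for Q = 0.707, +26 dB for Q = 20.
    # A decade above f0 the gain falls at the "-2" rate (about -40 dB/decade),
    # and the phase heads toward -180 degrees.
    print(f"Q = {Q}: {g0:+.1f} dB at f0, {g10:+.1f} dB at 10*f0, phase {p10:.0f} deg")
```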

Summary of Transfer Functions of Passive Filters

The first-order (RC) low-pass filter transfer function can be written in different ways as ...

... where f_LO = 1/(RC). Note that the "K" in the last equation is a constant multiplier often used by engineers who are more actively involved in the design of filters. In this case, K = f_LO.
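
For reference, the commonly seen equivalent forms are along these lines (a sketch in standard notation, writing ω₀ for the angular break frequency defined just above, i.e. ω₀ = f_LO = 1/RC):

```latex
% Equivalent ways of writing the first-order (RC) low-pass transfer function,
% with \omega_0 = 1/RC (so that K = \omega_0 in the last form):
G(s) = \frac{1}{1 + sRC}
     = \frac{1}{1 + s/\omega_0}
     = \frac{\omega_0}{s + \omega_0}
     = \frac{K}{s + K}
```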

For the second-order filter, various equivalent forms seen in literature are ...

... where f_LO = 1/√(LC).

Note that here, K = f_LO². Also, Q is the quality factor, and ζ is the damping factor defined earlier.
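
Again for reference, the commonly seen equivalent second-order forms look like this (a sketch in standard notation, with ω₀ = f_LO = 1/√(LC)):

```latex
% Equivalent ways of writing the second-order (LC) low-pass transfer function,
% with \omega_0 = 1/\sqrt{LC}, quality factor Q, and damping factor \zeta = 1/(2Q):
G(s) = \frac{1}{1 + \frac{s}{Q\omega_0} + \left(\frac{s}{\omega_0}\right)^2}
     = \frac{\omega_0^2}{s^2 + \frac{\omega_0}{Q}\,s + \omega_0^2}
     = \frac{\omega_0^2}{s^2 + 2\zeta\omega_0 s + \omega_0^2}
```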

Finally, note also that the following relations are very useful when trying to manipulate the transfer function of the LC filter into different forms:
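
The relations in question are, in standard form (a sketch assuming a load resistance R connected across the output capacitor, with parasitic series resistances neglected):

```latex
% Handy identities for manipulating the LC (plus load R) transfer function:
\omega_0 = \frac{1}{\sqrt{LC}}, \qquad
Q = R\sqrt{\frac{C}{L}} = \frac{R}{\omega_0 L} = \omega_0 R C, \qquad
\zeta = \frac{1}{2Q}
```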

Poles and Zeros

Let us try to "connect the dots" now. Both the first- and second-order filters we have discussed gave us poles. That is because they both had 's' in the denominators of their transfer functions - if s takes on specific values, it can force the denominator to become zero, and the transfer function then becomes infinite, and we get a pole by definition. The values of s at which the denominator becomes zero are the resonant (or break) frequencies, that is, the locations of the poles. For example, a hypothetical transfer function "1/s" will give us a pole at zero frequency (the "pole-at-zero" we talked about earlier).

Note that the gain, which is the magnitude of the transfer function (calculated by putting s = jω), won't necessarily be infinite at the pole location. For example, in the case of the RC filter, we know that the gain is in fact always less than or equal to unity, despite a pole being present at the break frequency.
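
For instance, evaluating the magnitude of the first-order low-pass transfer function right at its pole location gives:

```latex
\left| H(j\omega_0) \right| = \left| \frac{1}{1 + j\omega_0/\omega_0} \right|
= \frac{1}{\sqrt{2}} \approx 0.707 \quad \text{(i.e. $-3$ dB, not infinity)}
```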

Note that if we interchange the positions of the two primary components of each of the passive low-pass filters we discussed earlier, we will get the corresponding 'high-pass' RC and LC filters respectively. If we calculate their transfer functions in the usual manner, we will see that besides giving us poles, we also now get single- and double-zeros respectively (both at zero frequency) as indicated in Fig. 8. So, zeros occur whenever (and wherever) the numerator of the transfer function becomes zero.

Zeros are "anti-poles" in many senses. For one, their presence is indicated by both the gain and the phase increasing with frequency - opposite to a pole. Further, zeros also "cancel" poles if they happen to fall at the same frequency location.

Fig. 8: High-pass RC and LC (first-order and second-order) filters.

We had mentioned that gain-phase plots are called Bode plots. In the case of Fig. 8, we have drawn these on the same graph, just for convenience. Here the solid line is the gain, and to read its value, we need to look at the y-axis on the left side of the graph. Similarly, the dashed line is the phase, and for it, we need to look at the y-axis on the right side. Note that just for practice, we have once again reverted to plotting the gain (expressed as a simple ratio) on a log scale. The reader should hopefully by now have learnt to correlate the major grid divisions of this type of plot with the corresponding dB. So a 10-fold increase is equivalent to +20 dB, a 100-fold increase is +40 dB, and so on.

Now we can generalize our approach. A network transfer function can be described as a ratio of two polynomials:
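
Written out in factored form, the standard expression is along these lines:

```latex
G(s) = K\,\frac{(s - z_1)(s - z_2)\cdots(s - z_m)}{(s - p_1)(s - p_2)\cdots(s - p_n)}
```

where the roots of the numerator, z_1 ... z_m, are the zeros, and the roots of the denominator, p_1 ... p_n, are the poles.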

Interaction of Poles and Zeros

We can break up this analysis in two parts:

1. For poles and zeros lying along the same gain plot (i.e. belonging to the same stage) - the effect is cumulative in going from left to right. So suppose we are starting from zero frequency and move right, toward a higher frequency, and we first encounter a double pole. We know that the gain will start falling with a slope of -2 beyond the resonant frequency point. But as we go further to the right, suppose we now encounter a single-zero. This will impart a change in slope of +1. So the net slope of the gain plot will become -2 + 1 = -1 (after the zero location). Note that despite a zero being present, the gain is still falling, though at a lesser rate. In effect, the single-zero canceled half the double pole, so we are left with the response of a single pole (to the right of the zero).

The phase angle also cumulates in a similar manner, except that in practice a phase angle plot is harder to analyze. That is because the phase shift can take place slowly, over two decades centered on the resonant frequency. We also know that for a double pole (or double-zero) with high Q, the change in phase may in fact be very abrupt at the resonant frequency. However, eventually, the net effect is still predictable. So for example, a double pole followed by a single-zero will start with a phase angle of 0° (at dc) and then tend toward -180°. But about a decade below the location of the single-zero, the phase angle will gradually start increasing (though still remaining negative). It will eventually settle down to -180° + 90° = -90° at high frequencies.

2. For poles and zeros lying along different gain plots (all coming from cascaded stages) - we know that the overall gain in decibels is the sum of the gain of each (also in decibels). The effect of this math on the pole-zero interactions is therefore simple to describe. If for example, at a specific frequency, we have a double pole in one plot and a single-zero on the other plot, then the overall gain plot will have a single pole at this break frequency. So we see that poles and zeros tend to "destroy" each other, as we would expect since zeros are "anti-poles" as mentioned previously.

But poles and zeros also add up with their own type. For example, if we have a double pole on one plot, and a single-pole on the other plot (at the same frequency), the net gain will fall with a slope of -3 after the break frequency. Phase angles also add up similarly.
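
A small numerical sketch of this bookkeeping (the break frequencies are purely illustrative), with slopes expressed in units of "20 dB/decade":

```python
# Each pole contributes -1 to the asymptotic slope (a double pole -2),
# and each zero contributes +1, once the frequency is above its break point.
# The example below is a double pole at 1 kHz followed by a single zero at 10 kHz.
features = [(1e3, -2), (10e3, +1)]   # (break frequency in Hz, slope contribution)

def net_slope(f, features):
    """Net asymptotic slope (in units of 20 dB/decade) of the gain plot at frequency f."""
    return sum(slope for f_break, slope in features if f >= f_break)

for f in (100, 3e3, 30e3):
    print(f"{f:8.0f} Hz : slope = {net_slope(f, features):+d}")

# Prints:
#      100 Hz : slope = +0   (flat, below the double pole)
#     3000 Hz : slope = -2   (falling at -40 dB/decade)
#    30000 Hz : slope = -1   (the zero cancels half the double pole: -2 + 1 = -1)
```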

These rules will become even clearer, a little later, when we actually try to work out the open-loop gain of a converter.

===

Fig. 9: General Feedback Loop Analysis

G = control to output transfer function (plant)
H = feedback transfer function
T = GH = open-loop gain
OUT/IN = G/(1 + GH) = closed-loop gain

===

Closed and Open Loop Gain

Fig. 9 represents a general feedback-controlled system. The 'plant' (also sometimes called the "modulator") has a 'forward transfer function' G(s). A part of the output gets fed back through the feedback block to the control input, so as to produce regulation at the output. Along the way, the feedback signal is compared with a reference level, which sets the desired level the output is to be regulated to.

H(s) is the "feedback transfer function," and we can see this goes to a summing block (or node) - represented by the circle with an enclosed summation sign.

Note: The summing block is sometimes shown in literature as just a simple circle (nothing enclosed), but sometimes rather confusingly as a circle with a multiplication sign (or x) inside it. Nevertheless, it still is a summation block.

One of the inputs to this summation block is the reference level (the 'input' from the viewpoint of the control system), and the other is the output of the feedback block (i.e. the part of the output being fed back). The output of the summation node is therefore the 'error' signal.

Comparing Fig. 9 with Fig. 10, we see that in a power supply, the plant itself can be split into several cascaded blocks. These blocks are - the pulse width modulator (not to be confused with the term 'modulator' used in general control loop theory for the entire plant itself), the power stage consisting of the driver-plus-switch, and the LC filter.

The feedback block, on the other hand, consists of the voltage divider (if present) and the compensated error amplifier. Note that we may prefer to visualize the error amplifier block as two cascaded stages - one that just computes the error (summation node), and another that accounts for the gain (and its associated compensation network). Note that the basic principle behind the pulse width modulator stage (which determines the shape of the pulses driving the switch), is explained in the next section, and in Fig. 11.

In general, the plant can receive various 'disturbances' that can affect its output. In a power supply these are essentially the line and load variations. The basic purpose of feedback is to reduce the effect of these disturbances on the output voltage.

In Fig. 9 we have derived the open-loop gain, which is simply the magnitude of the product of the forward and feedback transfer functions - that is, obtained by going around the loop. On the other hand, the magnitude of the reference-to-output transfer function is called the closed-loop gain. Note that the word "closed" has really nothing to do with the feedback loop being literally "open" or "closed." Further, "GH" is called the 'open-loop transfer function' - again, irrespective of whether the loop is literally "open," say for the purpose of measurement, or "closed" as in normal operation. In fact, in a typical power supply, we can't simply break the feedback path: because the gain is typically so high, even a minute change in the feedback voltage would cause the output to swing wildly. So in fact, we always need to "close" the loop (and thereby dc-bias the converter into regulation) before we can measure the so-called "open-loop" gain.

As a further proof of this, note that in Fig. 9, if we go around the cascaded stage consisting of G and H, and calculate the ratio of the signal emerging from the cascade to the signal entering it, we get ...

Therefore, the ratio of the output to the input, that is, the transfer function of the cascaded G and H blocks, is equal to GH - which is simply the open-loop gain. Therefore, even with the loop "closed," as we go around, we are always going to get the open-loop gain GH. Note that the phrase "closed-loop gain" actually refers to the change in the output, if we change the reference voltage slightly.
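
As a short worked version of the expressions quoted in Fig. 9 (standard negative-feedback algebra, with "ref" the reference input and "out" the output):

```latex
% Error signal at the summing node, then amplified by the plant G:
\text{error} = \text{ref} - H\cdot\text{out}, \qquad \text{out} = G\cdot\text{error}
% Substituting and solving for out/ref gives the closed-loop gain:
\text{out} = G\,(\text{ref} - H\cdot\text{out})
\;\;\Rightarrow\;\;
\frac{\text{out}}{\text{ref}} = \frac{G}{1 + GH}
% whereas traversing the cascade of G and H once around the loop gives the
% open-loop gain T = GH.
```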

Fig. 10: A Power Converter and Its Plant and Feedback Blocks

Fig. 11: Combining Blocks and Thus Showing That "Open-loop" Gain Is Actually in a Closed Loop
