Questions/Answers about Switching Power-Supply Topology (part 2)

Question---27: What is the basic design rule for calculating inductance for all the topologies?

Answer---To reduce stresses at various points inside a power supply, and also to generally reduce the overall size of its components, a 'current ripple ratio' ('r') of about 0.4 is considered to be a good compromise for any topology, at any switching frequency.

"r" is the ratio ?I/IL, where ?I is the swing in the current, and IL is the average inductor current (center of the swing ?I). An r of 0.4 is the same as r = 40%, or r =±20%. This means that the peak inductor current is 20% above its average value (its trough being 20% below).

To determine the corresponding inductance, we use the definition r = ΔI/IL along with the inductor equation V = L × ΔI/Δt. During the on-time, Δt = t_ON = D/f and ΔI = r × IL, which gives

L = (V_ON × D)/(r × IL × f)

This gives us the inductance in henries, when f is in Hz. Note that V_ON is the voltage across the inductor when the switch is ON. It is therefore equal to V_in - V_o for a buck, and V_in for a boost and a buck-boost. Also, IL is the average inductor current, equal to IO for a buck, and IO/(1 - D) for a boost and a buck-boost.
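As a quick sanity check of this formula, here is a minimal Python sketch (the example values are our own, not from any particular design):

    # Inductance needed for a target current ripple ratio r (a sketch; values illustrative).
    def inductance(v_on, i_l, d, f, r=0.4):
        # L = V_ON * D / (r * IL * f); with f in Hz, L comes out in henries
        return v_on * d / (r * i_l * f)

    # Example: buck with V_in = 12 V, V_o = 5 V, IO = 2 A, f = 500 kHz
    v_in, v_o, i_o, f = 12.0, 5.0, 2.0, 500e3
    d = v_o / v_in               # ideal buck duty cycle
    v_on = v_in - v_o            # voltage across the inductor during the on-time
    i_l = i_o                    # buck: average inductor current = load current
    print(inductance(v_on, i_l, d, f))   # -> about 7.3e-06, i.e. roughly 7.3 uH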

Question---28: What is a 'forward converter'?

Answer---Just as the isolated flyback is a derivative of the buck-boost topology, the forward converter is the isolated version (or derivative) of the buck topology. It too uses a transformer (and opto-coupler) for providing the required isolation in high-voltage applications. Whereas the flyback is typically suited for output powers of about 75 W or less, the forward converter can go much higher.

The simplest version of the forward converter uses only one transistor (switch), and is thus often called "single-ended." But there are variants of the single-ended forward converter with either two or four switches. So whereas the simple forward converter is suited only up to about 300 W of power, we can use the 'double-switch forward' to get up to about 500 W.

Thereafter, the half-bridge, push-pull, and full-bridge topologies can be exploited for even higher powers (see Ill. 2). But note that all of the above topologies are essentially 'buck-derived' topologies.

Question---29: How can we tell whether a given topology is "buck-derived" or not?

Answer---The simplest way to do that is to remember that only the buck has a true LC filter at its output.

Question---30: Which end of a given input voltage range V_inMIN to V_inMAX should we pick for starting a design of a buck, a boost, or a buck-boost converter?

Answer---Since the average inductor current for both the boost and the buck-boost increases as D increases (IL = IO/(1 - D)), the design of boost and buck-boost inductors must be validated at the lower end of the given input range, that is, at V_inMIN - since that is where we get the highest (average and peak) inductor current. We always need to ensure that an inductor can handle the maximum peak current of the application without saturating. For a buck, the average inductor current is independent of the input or output voltage. However, observing that its peak current increases at higher input voltages, it's preferable to design or select a buck inductor at the upper end of the given input range, that is, at V_inMAX.
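To illustrate, here is a small Python sketch (with made-up example numbers) that evaluates the peak inductor current of a buck-boost at both ends of an input range, using the relationships above:

    # Peak inductor current of a buck-boost vs. input voltage (a sketch; values illustrative).
    def buck_boost_peak(v_in, v_o, i_o, l, f):
        d = v_o / (v_in + v_o)            # ideal buck-boost duty cycle
        i_l = i_o / (1 - d)               # average inductor current
        delta_i = v_in * d / (l * f)      # current swing; V_ON = V_in during the on-time
        return i_l + delta_i / 2          # peak = average + half the swing

    # Example: V_o = 12 V, IO = 1 A, L = 47 uH, f = 300 kHz, input range 9 V to 18 V
    for v_in in (9.0, 18.0):
        print(v_in, buck_boost_peak(v_in, 12.0, 1.0, 47e-6, 300e3))
    # V_inMIN (9 V) gives about 2.5 A peak vs. about 1.9 A at 18 V - so we design at V_inMIN.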

Ill. 2: Various Buck-derived Topologies

Question---31: Why are the equations for the average inductor current of a boost and a buck-boost exactly the same, and why is that equation so different from that of a buck?

Answer---In a buck, energy continues to flow into the load (via the inductor) during the entire switching cycle (during both the switch on-time and the off-time). Therefore, the average inductor current must be equal to the load current, that is, IL = IO.

Note that capacitors contribute nothing to the average current flow because, in steady state, just as the volt-seconds across an inductor average out to zero at the end of each cycle, the charge in a capacitor does likewise (charge is the integral of current over time, and has the units of Ampere-seconds). If that did not happen, the capacitor would keep charging up (or discharging) on average, until it reached a steady state.

However, in a boost or buck-boost, energy flows into the output only during the off-time - and it can only come via the diode. So the average diode current must be equal to the load current. By simple arithmetic, the average diode current calculated over the full cycle is equal to IL × (1 - D); equating this to the load current IO gives us IL = IO/(1 - D) for both the boost and the buck-boost.

Question---32: What is the average output current (i.e. the load current) equal to for the three topologies?

Answer---This is simply the converse of the previous question. For the buck, the average output current equals the average inductor current. For the boost and buck-boost, it's equal to the average diode current.

Question---33: What is the average input current equal to for the three topologies?

Answer---In a buck, the input current flows only through the switch. It stops when the switch turns OFF. Therefore, the average input current must be equal to the average switch current.

To calculate the average of the switch current, we note that the switch is ON for a fraction D (the duty cycle) of the switching cycle, during which time it carries an average value (center of ramp) equal to the average inductor current - which in turn is equal to the load current for a buck.

Therefore the arithmetic average of the switch current must be D × IO, and this must be equal to the input current IIN. We can also do a check in terms of the input and output power:

P_IN = V_in × I_IN = V_in × (D × IO) = V_in × (V_o/V_in) × IO = V_o × IO = P_O

We therefore get input power equal to output power - as expected, since the simple duty cycle equation used above ignores the switch and diode drops, and thus implicitly assumes no wastage of energy, that is, an efficiency of 100%.

Similarly, the input current of a boost converter flows through the inductor at all times. So the average input current is equal to the average inductor current - which we know is IO/(1 - D) for the boost. Let us again do a check in terms of power:

P_IN = V_in × I_IN = V_in × IO/(1 - D) = V_in × IO/[1 - (V_o - V_in)/V_o] = V_o × IO = P_O

Coming to the buck-boost, the situation isn't so clear at first sight. The input current flows into the inductor when the switch is ON; but when the switch turns OFF, though the inductor current continues to flow, its path does not include the input. So the only conclusion we can draw is that the average input current is equal to the average switch current. Since the center of the switch current ramp is IO/(1 - D), its arithmetic average is D × IO/(1 - D) - and this is the average input current. Let us check this out:

P_IN = V_in × I_IN = V_in × D × IO/(1 - D) = V_in × [V_o/(V_in + V_o)] × IO/[1 - V_o/(V_in + V_o)] = V_o × IO = P_O

We get P_IN = P_O, as expected.

Question---34: How is the average inductor current related to the input and/or output currents for the three topologies?

Answer---For the buck, we know that the average inductor current is equal to the output current, that is, IL = IO. For the boost, we know it's equal to the input current, that is, IL = IIN. But for the buck-boost, it's equal to the sum of the (average) input current and the output current.

Let us check this assertion out:

IIN + IO = D × IO/(1 - D) + IO = IO × [D/(1 - D) + 1] = IO/(1 - D) = IL

It is thus proved. See Tbl 1 for a summary of similar relationships.

Tbl 1: Summary of relationships of currents for the three topologies
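The same relationships can be restated numerically, as in this short Python sketch (a restatement of the equations above, with arbitrary example values):

    # Average currents for the three topologies, in terms of IO and D (a sketch).
    def average_currents(topology, i_o, d):
        if topology == "buck":
            return i_o, d * i_o                  # IL = IO; IIN = average switch current
        i_l = i_o / (1 - d)                      # boost and buck-boost inductor current
        i_in = i_l if topology == "boost" else d * i_l
        return i_l, i_in

    # Power-balance check (ideal duty cycle equations, 100% efficiency): P_IN = P_O
    v_o, i_o = 12.0, 2.0
    for topo, v_in in (("buck", 24.0), ("boost", 5.0), ("buck-boost", 15.0)):
        d = {"buck": v_o / v_in,
             "boost": (v_o - v_in) / v_o,
             "buck-boost": v_o / (v_in + v_o)}[topo]
        i_l, i_in = average_currents(topo, i_o, d)
        print(topo, i_l, i_in, v_in * i_in, v_o * i_o)   # last two columns always match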

Question---35: Why are most buck ICs not designed to have a duty cycle of 100%?

Answer---One of the reasons for limiting DMAX to less than 100% is specific to synchronous buck regulators (Ill. 3) that use a technique called 'low-side current sensing.' In low-side current sensing, to save the expense of a separate low-resistance sense resistor, the RDS of the "low-side mosfet" (the one across the "optional" diode in Ill. 3) is often used for sensing the current. The voltage drop across this mosfet is measured, and since we know its RDS, the current through it is also known, by Ohm's law. Clearly, for any low-side current sense technique to work, we need to turn the high-side mosfet OFF, thereby forcing the inductor current into the freewheeling path, so that we can measure the current there. That means we need to set the maximum duty cycle to less than 100%.
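As a sketch of the idea in Python (the RDS and measured drop below are illustrative assumptions; in a real design, RDS also varies considerably with temperature):

    # Low-side current sensing using the mosfet's on-resistance (a sketch; values illustrative).
    r_ds = 0.008                    # assumed low-side RDS of 8 milliohms
    v_drop = 0.024                  # drop measured across the low-side mosfet while it is ON
    i_inductor = v_drop / r_ds      # Ohm's law: about 3 A of freewheeling current
    print(i_inductor)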

Ill. 3: Synchronous Buck Regulator with Bootstrap Circuit

Another reason for choosing DMAX < 100% comes from the use of n-channel mosfets in (positive-to-positive) buck regulators. Unlike an npn transistor, an n-channel mosfet's gate terminal has to be taken several volts above its source terminal to turn it fully ON. So to keep the switch ON while the mosfet conducts, we need to drive its gate a few volts higher than the input rail. But no such rail is available! The only way out is to create one - by means of a circuit that can pump the input rail higher as required. This circuit is called the 'bootstrap circuit,' as shown in Ill. 3.

But to work, the bootstrap circuit demands that we turn the switch OFF momentarily, because that is when the switching node goes low and the 'bootstrap capacitor' gets charged up to V_in. Later, when the switch turns ON, the switching node (the lower terminal of the bootstrap capacitor) rises up to V_in, and in the process literally "drags" the upper terminal of the bootstrap capacitor to a voltage higher than V_in (by an amount equal to V_in!) - that happens because no capacitor loses its charge spontaneously. So the reason for setting the maximum duty cycle to less than 100% is simply to allow a bootstrap circuit (if present) to work! We will find that a bootstrap circuit is almost always present if an n-channel mosfet switch is used in a positive-to-positive (or just "positive") buck converter, in a positive-to-negative buck-boost, or in a negative-to-negative (or just "negative") boost. Further, by circuit symmetry, we can show that it will also be required (though this time to create a drive rail below ground) when using a p-channel mosfet in a negative buck, in a negative-to-positive buck-boost, or in a positive boost.

Here, we should also keep in mind that the n-channel mosfet is probably the most popular choice of switch, since it's more cost-effective than a p-channel mosfet of comparable drain-to-source on-resistance 'RDS.' That is because n-channel devices require smaller die sizes (and packages). Since we also know that the ubiquitous positive buck topology requires a bootstrap circuit when using an n-channel mosfet switch, it becomes apparent why a good majority of buck ICs out there have maximum duty cycles of less than 100%.

Question---36: Why are boost and buck-boost ICs almost invariably designed not to have 100% duty cycle?

Answer---We should first be clear that the boost and buck-boost topologies are so similar in nature that any IC meant for a boost topology can also be used for a buck-boost application, and vice versa. Therefore, such control ICs are generally marketed as being suitable for both boost and buck-boost applications.

One aspect common to these two topologies is that in both, energy is built up in the inductor during the switch on-time, during which none passes to the output. It is delivered to the load only when the switch turns OFF. In other words, we have to turn the switch OFF to get any energy at all delivered to the output. Contrast this with a buck, in which the inductor, being in series with the load, delivers energy to the load even as energy is being built up in the inductor itself (during the switch on-time). So in a buck, even with a 100% duty cycle (i.e. the switch ON continuously), the output voltage will rise (smoothly). Subsequently, the feedback loop will command the duty cycle to decrease once the required output voltage is reached.

However, in the boost and buck-boost topologies, if we keep the switch ON permanently, we can never get the output to rise, because in these topologies energy is delivered to the output only when the switch turns OFF. We can thus easily get into a "Catch-22" situation, where the controller "thinks" it isn't doing enough to get the output to rise - and therefore continues to command maximum duty cycle. But with a maximum duty cycle of 100%, that means zero off-time - so how can the output ever rise?! We can stay trapped in this illogical mode for a long time, and the switch can be destroyed. Of course, we hope that the current limit circuit is designed well enough to eventually intervene and turn the switch OFF before the switch destructs! But generally, it's considered inadvisable to run these two topologies at 100% duty cycle. The only known D = 100% buck-boost IC is the LM3478 from National.

Question---37: What are the 'primary' and 'secondary' sides of an off-line power supply?

Answer---Usually, the control IC drives the switch directly. Therefore the IC must be located on the input side of the isolation transformer - this is called the 'primary side.' The transformer windings that go to the output are therefore all said to lie on the 'secondary side.' Between these primary and secondary sides lies "no-man's land" - the 'isolation boundary.' Safety norms regulate how strong or effective this boundary must be.

Question---38: In many off-line power supplies, we can see not one, but two optocouplers, usually sitting next to each other. Why?

Answer---The first optocoupler transmits error information from the output (secondary side) to the control IC (primary side). This closes the feedback loop and tells the IC how much correction is required to regulate the output. This optocoupler is therefore often nicknamed the "regulation opto" or the "error opto." However, safety regulations for off-line power supplies also demand that no 'single-point failure' anywhere in the power supply produce a hazardous voltage on the output terminals. So if, for example, a critical component (or even a solder connection) within the normal feedback path fails, there would be no control left over the output, which could then rise to dangerous levels. To prevent this from happening, an independent 'overvoltage protection' (OVP) circuit is almost invariably required. This is usually tied to the output rail, in parallel with the components of the regulation circuitry. This fault detector circuit also needs to send its sensed 'fault signal' to the IC through an altogether separate path, so that its functioning isn't compromised in the event of a failure of the feedback loop. So, logically, we require an independent optocoupler - the "fault opto." Note that, by the same logic, this optocoupler must eventually connect to the IC (and cause it to shut down) using a pin other than the one being used for feedback.

The reason why the two optocouplers are "sitting next to each other" is usually only for convenience in the PCB layout - because the isolation boundary needs to pass through these devices, and also through the transformer (see Ill. 1 in Section 1).

Question---39: To get safety approvals in multi-output off-line converters, do we need separate current limiting on each output?

Answer---Safety agencies not only regulate the voltage at the user-accessible outputs, but also the maximum energy that can be drawn from them under a fault condition. Primary-side current sensing can certainly limit the total energy delivered by the supply, but it cannot limit the energy (or power) from each output individually. So, for example, a 300 W converter (with appropriate primary-side current limiting) may have been originally designed for 5 V at 36 A and 12 V at 10 A. But what prevents us from trying to draw 25 A from the 12 V output alone (and none from the 5 V)? To avoid running into problems like this during approvals, it's wise to design separate secondary-side current limiting circuits for each output. We are allowed to make an exception if we are using an integrated post-regulator (like the 7805) on a given output, because such regulators have built-in current limiting. Note that any overcurrent fault signal can be "OR-ed" with the OVP signal and communicated to the IC via the fault optocoupler.

Question---40: How do safety agencies typically test for single-point failures in off-line power supplies?

Answer---Any component can be shorted or opened by the safety agency during their testing. Even the possibility of a solder connection coming undone anywhere, or of a bad 'via' between layers of a PCB, is taken into account. Any such single-point failure is usually expected to cause the power supply either to simply shut down gracefully, or even to fail catastrophically. That is fine - but in the process, no hazardous voltage is permitted to appear on the outputs, even for a moment.

Question---41: What is a synchronous buck topology?

Answer---In synchronous topologies, the freewheeling diode of the conventional buck topology is either replaced, or supplemented (in parallel), with an additional mosfet switch (see Ill. 3). This new mosfet is called the "low-side mosfet" or the "synchronous mosfet," and the upper mosfet is now identified as the "high-side mosfet" or the "control mosfet." In steady state, the low-side mosfet is driven so as to be "inverted" or "complementary" with respect to the high-side mosfet. This means that whenever one of these switches is ON, the other is OFF, and vice versa - that's why this is called "synchronous," as opposed to "synchronized," which would imply both are running in phase (clearly unacceptable, because that would constitute a dead short across the input). However, through all of this, the effective switch of the switching topology still remains the high-side mosfet. It is the one that effectively "leads" - dictating when to build up energy in the inductor, and when to force the inductor current to start freewheeling. The low-side mosfet basically just follows suit.

The essential difference from a conventional buck regulator is that the low-side mosfet in a synchronous regulator is designed to present a typical forward drop of only around 0.1 V or less to the freewheeling current, as compared to a Schottky catch diode which has a typical drop of around 0.5 V. This therefore reduces the conduction loss (in the freewheeling path) and enhances efficiency.
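As a rough numerical illustration of that saving, here is a Python sketch (all numbers are illustrative assumptions, not from any datasheet):

    # Conduction loss in the freewheeling path: synchronous mosfet vs. Schottky (a sketch).
    i_o, d = 10.0, 0.3                    # assumed 10 A load, 30% duty; freewheeling 70% of cycle
    r_ds, v_f = 0.005, 0.5                # assumed 5 mOhm low-side RDS, 0.5 V Schottky drop
    p_mosfet = i_o**2 * r_ds * (1 - d)    # ~0.35 W; effective drop is i_o * r_ds = 0.05 V
    p_schottky = v_f * i_o * (1 - d)      # ~3.5 W
    print(p_mosfet, p_schottky)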

In principle, the low-side mosfet does not have any significant crossover loss, because there is virtually no overlap between its V and I waveforms - it switches (changes state) only when the voltage across it is almost zero. Therefore, typically, the high-side mosfet is selected primarily on the basis of its high switching speed (low crossover loss), whereas the low-side mosfet is chosen primarily on the basis of its low drain-to-source on-resistance, 'RDS' (low conduction loss).

One of the most notable features of the synchronous buck topology is that on decreasing the load, it does not enter discontinuous conduction mode as a diode-based (conventional) regulator would. That is because, unlike in a bjt, the current in a mosfet can reverse direction (i.e. it can flow from drain to source or from source to drain). So the inductor current at any given moment can become negative (flowing away from the load) - and therefore "continuous conduction mode" is maintained even if the load current drops to zero (nothing connected across the output terminals of the converter) (see Section 1).

Question---42: In synchronous buck regulators, why do we sometimes use a Schottky diode in parallel with the low-side mosfet, and sometimes not?

Answer---We indicated above that the low-side switch is deliberately driven in such a manner that it changes state only when the voltage across it is very small. That simply means that at turn-off (of the high-side mosfet), the low-side mosfet turns ON a few nanoseconds later; and at turn-on, the low-side mosfet turns OFF just a little before the high-side mosfet starts to conduct. By doing this, we are trying to achieve 'zero voltage (lossless) switching' (ZVS) in the low-side mosfet. We are also trying to prevent "cross-conduction," in which both mosfets conduct simultaneously for a short interval during the transition (which causes a loss of efficiency at best, and possible switch destruction at worst). However, during the brief interval when both mosfets are simultaneously OFF (the "deadtime"), the inductor current still needs a path to follow. Luckily, every mosfet contains an intrinsic 'body diode' within its structure, which allows reverse current to pass through it even if we haven't turned it ON (see Ill. 3), and this provides the necessary path for the inductor current. But the body diode has a basic problem - it's a "bad diode." It does not switch fast, nor does it have a low forward drop. So often, for the sake of a couple of percentage points of improved efficiency, we may prefer not to depend on it, and instead strap a "proper" diode (usually a Schottky) across the low-side mosfet in particular.

Question---43: Why do most synchronous buck regulators use a low-side mosfet with an integrated Schottky diode?

Answer---In theory, we could just select a Schottky diode and solder it directly across the low-side mosfet. But despite being physically present on the board, this diode may be serving no purpose at all! For example, getting the diode to take over the freewheeling current quickly from the low-side mosfet, when the latter turns OFF, requires a good low-inductance connection between the two. Otherwise, the current may still prefer the body diode - for the few nanoseconds before the high-side mosfet turns ON. So this requires that we pay great attention to the PCB layout. But unfortunately, even our best efforts in that direction may not be enough - because of the significant inductive impedance that even small PCB trace lengths, and the internal bond wires of the devices, can present when we are talking about nanoseconds. The way out of this is to use a low-side mosfet with an integrated Schottky diode - that is, within the same package as the mosfet. This greatly reduces the parasitic inductances between the low-side mosfet and the diode, and allows the current to quickly steer away from the low-side mosfet and into the parallel diode during the deadtime preceding the high-side turn-on.

Question---44: What limits our ability to switch a mosfet fast?

Answer---For a switching device (transistor), as opposed to a converter, the time it spends in transit between states is referred to as its "switching speed." The ability to switch fast has several implications, including the obvious minimization of the V-I crossover losses. Modern mosfets, though considered very "fast" compared to bjts, nevertheless don't respond instantly when their drivers change state. That is because, first, the driver itself has a certain non-zero "pull-up" or "pull-down" resistance, through which the drive current must flow to charge or discharge the internal parasitic capacitances of the mosfet and so cause it to change state. In the process, there is a certain delay involved. Second, even if the external resistances were zero, there would still remain parasitic inductances, associated with the PCB traces leading from the gate drivers to the gates, that also limit our ability to force a large gate current to turn the device ON or OFF quickly. And further, hypothetically, even if we achieved zero external impedance in the gate section, there remain internal impedances within the package of the mosfet itself - before we can get to its parasitic capacitances (to charge or discharge them as desired). Part of this internal impedance is inductive, consisting of the bond wires leading from the pin to the die, and part of it is resistive - the latter can in fact be of the order of several ohms. All these factors come into play in determining the switching speed of the device.

Question---45: What is 'cross-conduction' in a synchronous stage?

Answer---Since a mosfet has a slight delay before it responds to its driver stage, though the square-wave driving signals to the high- and low-side mosfets might have no intended "overlap," in reality the mosfets might actually be conducting simultaneously for a short duration. That is called 'cross-conduction' or 'shoot-through.' Even if minimized, it's enough to impair overall efficiency by several percentage points since it creates a short across the input terminals (limited only by various intervening parasitics).

This situation is aggravated if the two mosfets have a significant "mismatch" in their switching speeds. In fact, usually, the low-side mosfet is far more "sluggish" than the high-side mosfet. That is because the low-side mosfet is chosen primarily for its low forward resistance, 'RDS.' But achieving a low RDS requires a larger die size, and this usually leads to higher internal parasitic capacitances, which end up limiting the switching speed.

Question---46: How can we try and avoid cross-conduction in a synchronous stage?

Answer---To avoid cross-conduction, a deliberate delay needs to be introduced between one mosfet turning ON and the other turning OFF. This is called the converter's or controller's 'deadtime.' Note that during this time, freewheeling current is maintained via the diode present across the low-side mosfet.

Question---47: What is 'adaptive dead-time'?

Answer---Techniques for implementing dead-time have evolved quite rapidly as outlined below.

• First Generation (Fixed Delay) - The first synchronous IC controllers had a fixed delay between the two gate drivers. This had the advantage of simplicity, but the set delay time had to be made long enough to cover the many possible applications of the part, and also to accommodate a wide range of possible mosfet choices by customers. The set delay often had to be further offset (made bigger) because of the rather wide manufacturing variations in its own value. However, whenever current is made to flow through the diode rather than the low-side mosfet, we incur higher conduction losses. These losses are clearly proportional to the amount of dead-time, so we don't want to set too large a fixed dead-time for all applications.

• Second Generation (Adaptive Delay) - Usually this is implemented as follows. The gate voltage of the low-side mosfet is monitored, to decide when to turn the high-side mosfet ON. When this voltage falls below a certain threshold, it's assumed that the low-side mosfet is OFF (a few nanoseconds of additional fixed delay may be included at this point), and then the high-side gate is driven high. To decide when to turn the low-side mosfet ON, we usually monitor the switching node in "real time" and adapt to it. The reason is that after the high-side mosfet turns OFF, the switching node starts falling (in an effort to allow the low-side to take over the inductor current). Unfortunately, the rate at which it falls isn't very predictable, as it depends on various undefined parasitics and on the application conditions. Further, we also want to implement something close to zero-voltage switching, to minimize crossover losses in the low-side mosfet. Therefore, we need to wait a varying amount of time, until we have ascertained that the switching node has fallen below the threshold (before turning the low-side mosfet ON). So the adaptive technique allows "on-the-fly" delay adjustment for different mosfets and applications.

• Third Generation (Predictive Gate Drive Technique) - The whole purpose of adaptive switching is to intelligently switch with a delay just large enough to avoid significant cross-conduction, yet small enough that the body-diode conduction time is minimized - and to be able to do that consistently, with a wide variety of mosfets. The "predictive" technique, introduced by Texas Instruments, is often seen by their competitors as "overkill," but for the sake of completeness it's mentioned here. Predictive Gate Drive™ technology samples and holds information from the previous switching cycle to "predict" the minimum delay time for the next cycle. It works on the premise that the delay time required for the next switching cycle will be close to the requirements of the previous cycle. By using a digital control feedback system to detect body-diode conduction, this technology produces the precise timing signals necessary to operate very near the threshold of cross-conduction.

Question---48: What is low-side current sensing?

Answer---Historically, current sensing was most often done during the on-time of the switch. But nowadays, especially in synchronous buck regulators for low output voltage applications, the current is often sensed during the off-time instead.

One reason is that some applications - certain mobile computing applications, for example - now demand a rather extreme down-conversion ratio, say 28 V to 1 V, at a minimum switching frequency of 300 kHz. We can calculate that this requires a duty cycle of 1/28 = 3.6%. At 300 kHz, the switching period is 3.3 µs, and so the required (high-side) switch on-time is about 0.036 × 3.3 µs = 0.12 µs (i.e. 120 ns). At 600 kHz, this on-time falls to 60 ns, and at 1.2 MHz it's 30 ns. Ultimately, that just may not give enough time to turn the high-side mosfet fully ON, "de-glitch" the noise associated with its turn-on transition ('leading edge blanking'), and get the current limit circuit to sense the current fast enough.
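Here is that arithmetic as a small Python sketch:

    # Required high-side on-time for an extreme down-conversion ratio (the arithmetic above).
    v_in, v_o = 28.0, 1.0
    d = v_o / v_in                       # about 3.6%
    for f in (300e3, 600e3, 1.2e6):
        t_on_ns = d / f * 1e9            # on-time = D x switching period
        print(f, round(t_on_ns), "ns")   # -> roughly 120 ns, 60 ns, 30 ns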

Further, at very light loads we may want to be able to skip pulses altogether, so as to maximize efficiency (since switching losses go down whenever we skip pulses). But with high-side current sensing, we are almost forced into turning the high-side mosfet ON every cycle - just to sense the current! For such reasons, low-side current sensing is becoming increasingly popular. Sometimes a current sense resistor is placed in the freewheeling path for this purpose; however, since low-resistance sense resistors are expensive, the forward drop across the low-side mosfet is often used instead.

Question---49: Why do some non-synchronous regulators go into an almost chaotic switching mode at very light loads?

Answer---As we decrease the load, conventional regulators operating in CCM (continuous conduction mode - see Section 1) enter discontinuous conduction mode (DCM). The onset of this is indicated by the fact that the duty cycle suddenly becomes a function of load - unlike in CCM, where the duty cycle depends only on the input and output voltages (to a first order). As the load current is decreased further, the DCM duty cycle keeps decreasing, and eventually many regulators will automatically enter a random pulse-skipping mode. That happens simply because, at some point, the regulator just cannot decrease its on-time any further, as is being demanded of it. So the energy it puts out into the inductor with every on-pulse starts exceeding the average energy (per pulse) required by the load. Its control section literally "gets confused," but nevertheless tries valiantly to regulate, in effect saying - "oops ... that pulse was too wide (sorry, just couldn't help it), but let me cut back on delivering any pulses altogether for some time - hope to compensate for my actions." This chaotic control can pose a practical problem, especially when dealing with current-mode control (CMC). In CMC, the switch current is usually monitored constantly, and that information is used to produce the internal ramp on which the pulse-width modulator (PWM) stage works. So if the switch does not even turn ON for several cycles, there is no ramp for the PWM to work off either.

This chaotic mode is also a variable-frequency mode, with a virtually unpredictable frequency spectrum - and therefore unpredictable EMI and noise characteristics too. That is why fixed-frequency operation is usually preferred in commercial applications - and fixed frequency basically means no pulse-skipping! The popular way to avoid this chaotic mode is to "pre-load" the converter, that is, place some resistors across its output terminals (on the PCB itself), so that the converter "thinks" there is some minimum load always present. In other words, we demand a little more energy than the minimum energy the converter can deliver (before going chaotic).
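For loose perspective on the numbers involved, here is a hedged Python sketch that computes the CCM/DCM boundary current of a buck (the load below which the duty cycle starts pinching off). The chaotic skip mode sets in at a still lighter, application-specific load, so in practice the pre-load is sized empirically rather than from a formula like this:

    # CCM/DCM boundary load current for a buck (a sketch; values illustrative).
    v_in, v_o, l, f = 12.0, 5.0, 22e-6, 500e3
    d = v_o / v_in
    delta_i = (v_in - v_o) * d / (l * f)   # inductor current swing in CCM
    print(delta_i / 2)                     # DCM begins below about 0.13 A here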

Question---50: Why do we sometimes want to skip pulses at light loads?

Answer---In some applications, especially battery-powered applications, the 'light-load efficiency' of a converter is of great concern. Conduction losses can always be decreased by using switches with low forward drops. Unfortunately, switching losses occur every time we actually switch. So the only way to reduce them is by not switching, if that is possible.

A pulse-skipping mode, if properly implemented, will clearly improve the light-load efficiency.

Question---51: How can we implement controlled pulse-skipping in a synchronous buck topology, to further improve the efficiency at light loads?

Answer---In DCM, the duty cycle is a function of the load current. So, on decreasing the load sufficiently, the duty cycle starts to "pinch off" (from its CCM value), and this eventually leads to pulse-skipping when the control runs into its minimum on-time limit. But as mentioned, this skip mode can be fairly chaotic, and it also occurs only at extremely light loads. So one of the ways this is handled nowadays is to not "allow" the DCM duty cycle to pinch off below 85% of the CCM pulse width. More energy is therefore pushed out in each on-pulse than under normal DCM - without waiting to run into the minimum on-time limit of the controller. However, because of this much-bigger-than-required on-pulse, the control will now skip even more cycles (for every on-pulse). At some point, the control detects that the output voltage has fallen too much, and commands another big on-pulse. This forces pulse-skipping in DCM, and thereby enhances the light-load efficiency by reducing the switching losses.

Question---52: How can we quickly damage a boost regulator?

Answer---The problem with a boost regulator is that as soon as we apply input power, a huge inrush current flows to charge up the output capacitor. Since the switch isn't in series with this path, we have no control over the inrush either. So, ideally, we should delay turning our switch ON until the output capacitor has charged up to the level of the input voltage (i.e. the inrush has stopped) - and for this, a soft-start function is highly desirable in a boost. If, while the inrush is still in progress, we turn the switch ON, it will start diverting this inrush into the switch. The problem with that is that in most controllers, the current limit may not even be working for the first 100 to 200 ns after turn-on - this being done deliberately, to avoid false triggering on the noise generated during the switch transition ("leading edge blanking"). So now the huge inrush current gets fully diverted into the switch, with virtually no control, possibly causing failure. One way out is to use a diode connected directly between the input supply rail and the output capacitor (the cathode of this diode being at the positive terminal of the output capacitor), so that the inrush current bypasses the inductor and the boost diode altogether.

However, we have to be careful about the surge current rating of this extra diode. It need not be a fast diode, since it "goes out of the picture" as soon as we start switching (it gets permanently reverse-biased).

Note also that a proper ON/OFF function cannot be implemented on a boost topology as-is. For that, an additional series transistor is required, to completely and effectively disconnect the output from the input.
