Switching-Mode Power Supply (SMPS)--Real-world Issues (part 2)


The Incredible Shrinking Core

Magnetics is a terrible embarrassment to many engineers, writes SMPS-guru in this month's column on power supply design. I suspect they often end up pretending it doesn't really matter ("magnetics-denial"), he says. But dc-dc converter size, it turns out, is more a function of reliability than of switching frequency. Undersizing the core, he reminds us, can have some serious consequences.

Magnetics is a terrible embarrassment to many engineers. I suspect they often end up pretending it doesn't really matter ('magnetics-denial'): "Oh, I just toddle up to the bin and pick any inductor that works."

Considering that the entire movement in switching power conversion over the last decade toward higher and higher frequencies is driven mainly by the burning desire to shrink magnetic components, there must be something wrong with this rather cool, laid-back attitude. It's of course my personal opinion that some engineers often make the problem sound even more complex by spending all their remaining waking hours twiddling with the Nyquist criterion, double-edge modulation, and so on.

To put it in perspective, none of these issues has ever really been a show-stopper in any practical design scenario, nor have they allowed us to eventually reduce the size of the power supply. We note that size must ultimately dovetail with reliability, because if we undersize the core, for example, we will certainly cause core saturation and a fair amount of resulting silicon shrapnel in the lab! We saw that last month. Now consider the equations:

[...]

µ = relative permeability

µe = effective permeability

(MKS units)

Above, I have extracted four key equations from my recent book. I hope to give you a simple insight into the art of reducing the core size. Here we are assuming that core and copper losses are not the limiting factor (as is usually the case with modern geometries and materials), and that the inductor size is simply related to the energy storage requirement, 1/2 × L × Ipeak². In my book, I have introduced a useful variable called the 'z-factor,' defined above, since I found it helps simplify the equations considerably. (Also, see the magnetics on-line seminar.) Here is the logic: from the fourth equation, we see that to keep the inductance fixed as z goes from 1 to 10 (air gap increased), we only need to increase the number of turns N by 10^0.5 = 3.2 times. Therefore, from the second equation we can see that if z went from 1 to 10, but the ampere-turns NI were increased only by a factor of 3.2 (so as to keep L fixed, as we usually want to do), then the operating B-field would be reduced to 1/3rd of its original value.

From the first equation, the energy stored in the core has remained unaltered in the process, though from the third equation we can see that its overload capability (i.e. measured up to a certain saturation flux density BSAT) has increased 10 times. So, any "headroom," as measured from the operating B-field value to the saturation level (BSAT), or from the operating energy storage level to the peak energy handling capability must have increased considerably, even though inductance has been kept a constant in this case.
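The scaling argument above is easy to check numerically. This is a minimal sketch using only the proportionalities described in the text (N scaling as the square root of z for fixed L, B scaling as NI/z, stored energy as B²·z, and overload headroom as z); the z values themselves are purely illustrative:

```python
import math

# Illustrative numbers only: the z-factor (z = mu/mu_e)
# goes from 1 (ungapped core) to 10 (gapped core).
z1, z2 = 1.0, 10.0

# To keep the inductance L fixed, turns must scale as sqrt(z):
n_scale = math.sqrt(z2 / z1)          # 10^0.5 = 3.16x more turns
# The operating B-field scales as N*I/z for a given current:
b_scale = n_scale * (z1 / z2)         # ~1/3 of the original field
# Stored energy goes as B^2 * z, so it is unchanged:
e_scale = b_scale ** 2 * (z2 / z1)    # ~1.0 (unchanged)
# Energy headroom up to BSAT (overload capability) goes as z:
headroom_scale = z2 / z1              # 10x

print(n_scale, b_scale, e_scale, headroom_scale)
```

The point of the sketch: same inductance, same stored energy, one-third the operating flux density, ten times the headroom to saturation.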

All this could translate to much higher field reliability where the converter will likely encounter severe abnormal or transient line/load conditions. However, if all the "bells and whistles" are present in the design of the control circuitry (e.g. feedforward, primary/secondary current limit, duty cycle clamp, and the like), and they serve to protect the converter adequately against any such abnormal conditions, this gives us a great opportunity to select a smaller core for the same power level. In doing so we would essentially be returning to the point of optimum core size, which is defined as that where the peak operating flux density BPEAK is just under BSAT (with current limiting and/or duty cycle clamping present to ensure that BSAT is never exceeded, even for a cycle).

What do we learn here? That by increasing the gap of the core we can move to smaller core sizes. Powdered iron cores, for example, have a distributed air gap, and come in various "effective permeabilities." So actually, lower-permeability materials should in principle always lead to smaller core sizes, as they have a larger air gap in effect. All this is rather counter-intuitive, I admit. The restricting factor is that to use very low permeability materials we need more and more turns, and so we will either run out of window space to accommodate these extra turns, or our copper losses will mount to the extent that the core size becomes a secondary issue.

Now returning to another issue I promised to touch upon in last month's column. High-voltage off-line integrated flyback switcher ICs are available from several vendors, but they are restricted because they usually come only in a family of fixed current limits. So if, for example, we have a 5 A part, the next lower part being a 3 A part, the 5 A part is certainly optimum for peak currents slightly below 5 A. But what if the peak current in our particular application is 4 A? For lack of a suitably matching part, we would now be forced to use a 5 A IC for a 4 A application, but we would still need to size the core for 5 A! We may be able to reduce the copper diameter in going from a 5 A application to a 4 A application, but not the physical size of the inductor, as it must still continue to withstand 5 A, which it would see under sudden step-load changes (or even under normal power-up or power-down).

So how do some such vendors manage to showcase a smaller magnetic component for the 4 A application? By increasing the current ripple ratio 'r'! By the previous equations, it can also be shown that if L is allowed to decrease (fewer turns), the energy storage requirement decreases and so we can reduce the core size. Inductors operated with large current ripple ratios are therefore always smaller, though the problem is that they just transfer the burden to the input/output capacitors (more filtering required). But you may not notice that immediately!
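To see why a larger current ripple ratio shrinks the inductor, note that for a given load current the ripple ratio r fixes L (since the current swing is r times the load current, and the swing equals V × t_on / L, so L is proportional to 1/r), while the peak current is the load current times (1 + r/2). A sketch of the resulting r-dependence of the peak energy 1/2 × L × Ipeak², with all constant factors dropped (the function name is mine):

```python
def rel_energy(r):
    # Relative peak energy 0.5 * L * Ipk^2 as a function of the
    # current ripple ratio r, with L proportional to 1/r and
    # Ipk proportional to (1 + r/2); constant factors dropped.
    return (1.0 / r) * (1 + r / 2) ** 2

# Larger r -> smaller energy storage requirement -> smaller core:
for r in (0.2, 0.4, 1.0):
    print(r, round(rel_energy(r), 2))
```

The monotone fall with r is the whole trick: the smaller inductor simply hands the filtering burden to the input/output capacitors, exactly as the text warns.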

Plain Lucky We Don't Live in a PSpice World!

We have a natural ally in nature, notes SMPS-guru. Design problems occur, he says, when we schedule a confrontation with natural forces. Nature doesn't have "convergence" problems, like PSpice often does. Try simulating the air flow over a heat sink.

We should keep in mind that sometimes all we need to do is sit back and allow nature to do its 'thing.' We have a natural ally in nature. Design problems actually start only when we have somehow managed to adopt what is, in effect, a schematic confrontation with natural forces. Luckily, nature doesn't have 'convergence' problems, like PSpice often does. All this may sound like some vague debating point in an esoteric man vs. machine philosophy panel discussion, but it's actually just plain design common sense. Appreciating these finer aspects of nature can help us succeed with a host of seemingly mundane or challenging engineering pursuits. And it's often faster than getting our grand simulator engine to essentially mimic nature itself.

As a seemingly trivial example, we all know that when an object heats up, the air around it moves upward, trying to cool it down. Eventually, we achieve thermal equilibrium. But have you noticed that, oddly enough, the higher the dissipation, the lower the thermal resistance (expressed in degrees C per watt)? This is because the rising air turns "turbulent" at higher dissipations, in an effort to help us even further. This phenomenon, once understood, is actually exploited in creating special pin fin heatsinks that try to provoke turbulent air flow even at lower dissipations. You can try searching for "Pin Fin Heatsinks" and "Equations of Natural Convection." Carrying our learning to the extreme, miniature fans are sometimes mounted on high-dissipation ICs, blasting air perpendicularly at very close quarters onto the exposed hot surface. This is called 'impingement air flow,' and it leads to a further and really dramatic reduction in thermal resistance. A good read on this technique is available at the ETD Library of Louisiana State University.

Now let's take a typical switching converter. If we apply a voltage across an inductor during the switch on-time, we get a corresponding increment in current from the basic equation V = L dI/dt. But then we turn the switch OFF. Now we will find that a certain voltage automatically appears across the inductor. Its magnitude may be undefined initially (as during initial power-up), but what we can always be sure of is that it's of reverse polarity to the voltage we applied during the on-time. If we think hard, we realize that our contribution as engineers is simply that we managed to create a circuit schematic (or 'topology') where we allowed nature to develop this reverse inductor voltage. But if, for example, in a typical converter we put the diode in the wrong way around (or forget the diode altogether!), we may just be preventing this voltage reversal, causing instantaneous combustion.

From V = L dI/dt, we also see that to get the increment in current during the on-time to exactly equal the decrement, every cycle, we need the quantity 'V multiplied by time' during the on-time to be equal in magnitude to the same quantity during the switch off-time. In fact, that is the fundamental voltseconds law of power conversion. It leads directly to the expression of duty cycle in terms of input and output voltages.
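As an illustration of how the volt-seconds law leads directly to the duty cycle expression, here is a minimal sketch for two common topologies, assuming ideal (lossless) components; the function names are mine:

```python
def buck_duty(vin, vout):
    # Volt-seconds balance for an ideal buck converter:
    # (Vin - Vout) * t_on = Vout * t_off  =>  D = Vout / Vin
    return vout / vin

def boost_duty(vin, vout):
    # Volt-seconds balance for an ideal boost converter:
    # Vin * t_on = (Vout - Vin) * t_off  =>  D = 1 - Vin / Vout
    return 1 - vin / vout

print(buck_duty(12.0, 5.0))   # ~0.417 for a 12 V to 5 V buck
print(boost_duty(5.0, 12.0))  # ~0.583 for a 5 V to 12 V boost
```

In each case the on-time volt-seconds equal the off-time volt-seconds, which is exactly the steady-state "convergence" the column describes.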

But what drove nature to strive to do all this? To put it bluntly, we are lucky we don't live in a 'PSpice world.' Luckily, most natural processes tend to converge in our world and without 'user intervention!' We can foresee that if the current doesn't decrease to exactly the same instantaneous value it had at the start of the cycle, then every cycle there will be a small net increase in current. After a million switching cycles (nowadays that could take just one second!) this 'small' net increase won’t be so 'small' anymore. Ultimately, we would probably never achieve a measurable or stable 'steady state' on the bench. And any 'switch' we may develop will ultimately combust under these escalating currents. Keep in mind that if it’s not current, the switch will certainly fail due to excess voltage, because nature goes all out to help us, even increasing the reverse voltage (if that is schematically possible) to force a 'reset' (i.e. convergence in effect). Too bad if we didn't leave any available doors open to allow nature to step in to help us out here. On the other hand, we would be hard pressed to make a typical PSpice-based converter circuit stabilize without explicitly implementing closed-loop feedback.

Note that the voltage reversal can be considered from the electromagnetic viewpoint as a result of Faraday's law of induced EMF (or Lenz's law). It’s interesting to recognize that not only 'transformer action' would not be possible without this law, but no stable dc-dc inductor-based topology could exist either, because Faraday's law is simply the voltseconds law in another form (or vice versa). Without Faraday, we have no voltseconds law either.

And without this, there would be no switching power conversion for one! But what has all this to do with PSpice? Well, on a more subtle level, and in the same spirit of things, nature also tries to lend an additional helping hand by imparting 'parasitics' to every component we use. These actually help many processes converge eventually (or stabilize), even if we have partially overlooked some crucial design aspect or abnormal operating condition. Yes, these parasitics do seem like a nuisance usually, but they have the potential to even temporarily stabilize an inherently flawed new topology. Of course we usually don't want to operate a converter that depends on parasitics for its functioning.

Though we do something similar when implementing ZVS ('zero voltage switching').

Parasitics often just 'soften' an abnormal or excessive application condition though we may not realize it (that's until we run PSpice!). As a trivial example, the ESR helps limit the inrush current into the input capacitor. We also know that even the small trace inductances leading to the input capacitor can help dramatically reduce shoot-through (cross-conduction) currents in synchronous buck converters.

Let us also consider what happens if we suddenly overload a normal non-synchronous buck converter ... say by placing a dead short on the output. The duty cycle is still way up initially, not having had time to respond. So the converter 'thinks' the output voltage is still high and, since its duty cycle is unchanged, it actually continues to try to deliver the normally required output voltage. But we know for a fact that the actual voltage on the output terminals has been forced to zero by the short. So where did the excess volts disappear to? In fact, the full calculated output voltage momentarily appears across the diode and the dc resistance of the inductor. The current must therefore increase (overload current) such that the following equation is satisfied unequivocally during the initial moments of the short:

Vd + I × DCR = Vo, where Vd is the diode forward drop, and DCR the dc resistance of the inductor. So in a fault condition on the output, the DCR of the inductor and the diode drop both actually help in reducing the overload current. 'Good diodes' (with low forward drop) make the overload currents even higher. Note also that it's not only the fact that we have a diode drop that helps reduce the overload, but the fact that this drop actually increases with increasing current, thus effectively helping out when needed the most. In fact, this effect was belatedly replicated by placing a series resistance (ohmic) term in the PSpice diode model.
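A quick sanity check of this fault equation, with hypothetical numbers of my own choosing (a 5 V output setting, a 0.5 V diode drop, and 50 milliohms of winding DCR):

```python
def overload_current(vout_set, v_diode, dcr):
    # During the first moments of a dead short, the duty cycle has
    # not yet responded, so the converter still "delivers" its set
    # output voltage across the diode drop plus the inductor DCR:
    #   Vd + I * DCR = Vout_set  =>  I = (Vout_set - Vd) / DCR
    return (vout_set - v_diode) / dcr

# Hypothetical 5 V buck, 0.5 V diode drop, 50 milliohm DCR:
print(overload_current(5.0, 0.5, 0.05))  # ~90 A
```

The arithmetic makes the column's point vivid: the "parasitic" diode drop and DCR are the only things standing between the switch and a truly enormous fault current.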

Do contact us if you want to express convergence of views on this topic. But it's also OK even if you want to momentarily explode and vent after reading my little viewpoint. Just as long as we all manage to ultimately stabilize the resulting situation! And quite naturally so!

Why Does the Efficiency of My Flyback Nose-dive?

SMPS-guru's back with a confession: "I learned a lot at that little company I worked for way back when." What he learned is that flyback switchers have problems with leakage inductance, which can come around to bite your tail. Also, a thing or two about PSpice, Andrew Lloyd Webber musicals, consultancies in the dot-com era, and the little Precision Advocate that lives inside us. A December smorgasbord, SMPS-guru's most personal column yet.

Looking back at the six or seven power conversion companies I have worked in so far, I think it's somewhat intriguing to realize that professionals often learn (and contribute) the most in smaller companies. Just a plain coincidence that these companies are also often the ones engineers are least likely to ever want to admit having worked for, many years later! Call it a 'trial by fire' or 'hardening by heat treatment.' Whatever! I remember --- had written a last-page article in some forgotten publication over a decade ago titled "Why Innovations Seem to Come from Smaller Companies." A very pointed and thought-provoking article, one that I glanced over several times over the years. I just recently threw it away while cleaning up after completing my book.

This month however, I decided I was finally going to come to terms with my past in a way, and try to force myself to remember in vivid (and somewhat painful) detail how I learned to deal specifically with leakage inductance many years ago, while working in a rather small outfit making innovative integrated switcher ICs for off-line flyback applications. In doing so, it will also be clear to you, what really hampers the flyback topology itself, at higher output power levels and low output voltages.

But before I get into that, I will take stock of some of the interesting correspondence I received in response to last month's column. In particular, I had an interesting email exchange: "I have this recollection of a passage from an out-of-print guide (I think it was) 'Paper Money' by Adam Smith, in which the author tells of a conversation with a Southern stock-broker - 'The computer is like a dog. Very useful. Wouldn't think of hunting without one. They spot birds and retrieve. But you don't give the gun to the dog'!" We ended up being pretty much on the same page, as he also agreed with my basic sentiment that "PSpice essentially includes all the equations ... like say Kirchhoff's laws. So it does a great job in predicting the final outcome (usually ... !). However, in using it, we engineers therefore tend to forget the actual equations ourselves - which is a curse for any good engineer, since he loses the power of 'optimization' so essential to a good designer ... he just forgets to think! Not the direct fault of the machine though. In a way, he gets blinded by the luxury of a powerful machine ... which 'does' but can't 'think.' I feel he can certainly use it to AID design, not as a substitute for design."

Returning to the flyback topology: working in this particular switcher IC company, I had been trying to come up with a more accurate set of 'quick selection curves' for their next-generation switcher IC family. Their older Excel spreadsheet and the corresponding published efficiency curves were really not holding up very well in actual bench verification.

This seemed to finally give credence to the constant griping of previous customers that the efficiency and max power curves were "unachievable" and "how the hell did you come up with these anyway?" Not that the company was initially really looking to correct these apparent inaccuracies; when the task got assigned to me, the decision was based mainly on some rather astute financial and business sentiments (I didn't say 'Machiavellian').

Actually this story itself is interesting enough to merit a slight detour here, since it keenly epitomizes the touch-and-feel of the entire dot-com era in the Silicon Valley area (while it lasted). The previous applications senior engineer (the one who had created the original Excel spreadsheet and thus indirectly ensured only he understood it fully) had quit suddenly.

He had then thoughtfully set up a lucrative private consulting agency - but not before taking with him a whole lot of stock options from this flyback company ("high flying adored did you believe in your wildest moments, all this would be yours" - Evita).

The consulting activity was actually in parallel to a 'full-time' senior position that he also took up in a rather sleepy company (you guessed it, a 'big' one). I just don't know whether this latter company knew, didn't know, didn't care, thought it was perfectly OK, or even admired any human's ability to 'multiplex' so dramatically. I couldn't do it, for sure! Oh yes, the talented Mr. Ripley was also managing to teach evening EE classes at the local university in his 'spare time'! Several years later, after this 'big' company got swallowed, and then re-swallowed successively, by bigger and bigger companies, it apparently just became too big even for Mr. Ripley, and so one fine day, along with the whole dot-com era, he too got a pink slip. But till that transpired, he was more than willing to come back, again and again, to the previous (and precious) flyback IC company, generating all the efficiency and selection curves for every future product family they desired (essentially by magically reconfiguring his previous spreadsheet, as only he could).

Yes, he was certainly counting on making much more than a 40-hours-per-week exempt employee like me. However, company management may have been on to him, opening the door just a little for him (a few hours per week over a few months), perhaps with the unstated intention of transferring his expert knowledge back in-house, and afterward dumping him. All this while, with some help from him, I had successfully developed far more elaborate Mathcad models which I then used to put out the quick-selection curves for two new product families. In the process I also showed the company what they needed to do to improve their efficiency estimates, especially in regard to leakage inductance. Job done, I too then quit suddenly. I probably left the company with the unenviable choice of either figuring out the older, flawed but simpler Excel program, or running with the more accurate and bench-verified, but horribly complex Mathcad program. I really don't know what they did after that, though I suspect they may have had to play footsie with the other engineer again, at least to make sense of the spreadsheet.

During the bench verification process, I noticed that for 12 V outputs we got a nice fit with the theoretical efficiency estimates on that spreadsheet, but for lower-voltage outputs (5 V, for example) the bench verification went way off the efficiency estimate curves, especially at high loads. Why so? We went over every loss term, often with a microscope, both on the bench and in the program, refining the models more and more, almost to the extent of making the entire Mathcad file completely iterative. (It would take roughly 24-36 hours to generate the final efficiency curves from the moment I hit 'calculate' on a 600 MHz PC. The company had to put three PC workstations in my cubicle to ensure I would get some work done while the simulation was running!) But no luck getting the actual measured efficiency to match the simulation! We were certainly getting much better with each iteration, but still couldn't explain where the remaining couple of efficiency points were going.

I rechecked my program several times, dotting the i's and crossing the t's (the little German living inside of me ever since my Leipzig days) but it looked solid every way I attacked it.

In desperation, I then started poring over some old literature looking for clues, and this one from a very old Philips publication "3C85 Handbook" caught my attention: "Leakage will reflect from secondary side to primary side according to square of turns ratio of the transformer." That was it! We all know it’s standard design practice for any universal-input off-line flyback to keep the reflected output voltage 'VOR' fixed at an ideal of about 105 V.

The VOR is the output voltage multiplied by the turns ratio. So basically, the required turns ratio for a 5 V output is 105/5 = 21.

For a 12 V output the turns ratio is 105/12 = 8.75.

The secondary side leakage (uncoupled) inductance is associated not only with the actual transformer windings, but also with the lead-out terminations and even the PCB traces leading to and returning from the diode and output capacitor. Assume two inches of total secondary side trace length, for example; remembering the rule of thumb of 20 nH/inch, we get about 40 nH of secondary leakage inductance. For a 12 V output, this will reflect to the switch side as an effective leakage of 40 × (8.75)² ≈ 3062 nH, or about 3 µH. This adds to the existing primary side leakage (typically about 10 µH) to give about 13 µH. The associated energy will be dissipated in the zener clamp. However, for a 5 V output, the same 40 nH gets reflected as 40 × (21)² = 17640 nH, or about 18 µH! This gives a total primary side leakage of 10 + 18 = 28 µH! Nearly triple the bare primary leakage we started with. Enough to inflict trauma on the wimpy zener clamp, which was probably expecting a nice sunny day paddling away on the beach, but got hit by a tsunami instead.
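The reflection arithmetic above is easy to script. This sketch assumes, as in the text, a fixed VOR of 105 V, the 20 nH/inch trace rule of thumb, and about 10 µH of intrinsic primary-side leakage (the function and constant names are mine):

```python
VOR = 105.0          # reflected output voltage, V (typical universal-input flyback)
L_PER_INCH = 20e-9   # rule of thumb: ~20 nH per inch of PCB trace

def total_primary_leakage(vout, sec_trace_inches, l_leak_pri=10e-6):
    # Secondary-side leakage reflects to the primary by the square
    # of the turns ratio n = VOR / Vout, then adds to the
    # transformer's own primary-side leakage.
    n = VOR / vout
    l_sec = sec_trace_inches * L_PER_INCH
    return l_leak_pri + l_sec * n ** 2

for vout in (12.0, 5.0):
    print(vout, "V output:",
          round(total_primary_leakage(vout, 2) * 1e6, 2), "uH")
```

Two inches of trace is nearly invisible at 12 V out but dominates the clamp dissipation at 5 V out, which is exactly why the low-voltage efficiency curves refused to fit.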

As soon as I understood this, and learned to correctly estimate and perform an 'in-PCB' measurement of effective leakage (not by just shorting the secondary side pins of the transformer, but by placing thick shorts across the diode and output cap), I modeled it into the Mathcad file. The fit to bench results was almost too good to be true - within 1% over the entire load range. After much more testing, by several engineers in fact, the company finally acknowledged this to be the missing piece of the Flyback jigsaw puzzle, and then published the new product family quick-selection curves I had generated and verified, and also the guidelines on leakage inductance measurement. Of course they didn't want to 'alarm' previous customers by going back and correcting the curves of the previous family (which they now knew were super-optimistic). Plain marketing sense!

It's Not a Straight Line: Computing the Correct Drain to Source Resistance from V-I Curves

Is the V-I curve of a MOSFET switch really a straight line as we imagined? The RDSON is clearly a function of the current through the MOSFET. But with the device alternating between peaks and valleys, what current value do we use? We can do a 'worst-case analysis', based on the highest RDSON (an instantaneous value) along the V-I curve. But is that value really 'worst-case', or is it even worse than 'worst-case'?! Power supply guru SMPS-guru shares some observations on the proper use of RDSON.

Performing an efficiency calculation for a power converter will certainly require knowledge of the drain-to-source on-resistance (hereafter called RDSON) of the switch. In doing so we may refer to the V-I characteristics of the said MOSFET. We will probably be thinking that all we need to do is find the slope - "V/I" - to get the RDSON.

But hold on just a minute! Is the V-I curve really a straight line, as we imagined? If not, the RDSON is clearly a function of the current through the MOSFET. So what is the RDSON we need to take for our calculation? We can do a "worst-case analysis" based on the highest RDSON (slope) along the V-I curve, which will always be found to occur at the highest instantaneous value of current, that is, the peak switching current. But is that value really "worst case," or is it even worse than "worst-case"?! The current through the MOSFET is actually varying every cycle between two values, the peak and the trough. It's certainly not fixed at the peak value (at which we may be finding the "worst-case" slope). We're not interested in finding the worst-case instantaneous value of the RDSON; what we want is the worst-case value over the entire switching cycle. In a switching converter, the RDSON is actually varying smoothly between two values, just as the current is.

By a rather painful analysis, it can be shown that a very close fit to the exact integration-based calculation is obtained simply by (a) finding the RDSON at the extreme current values: the peak and trough, and (b) averaging these two values to get the effective RDSON over the entire cycle. Simple enough! But hold on a minute longer. Look at the published V-I curve (the black part of Fgr. 3).

This represents a typical integrated switcher device, rated for 1.5 A. The device is therefore supposed to function up to 100 degrees C at 1.5 A. But does the curve extend all the way? Not at all! The 100 degrees C curve is mysteriously truncated! No other information is available in the datasheet.

The only way out for us as designers is to try to extrapolate the V-I curve. See the gray part.

We now see that the curve intersects at a whopping drop of 17 V at 1.5 A at 100 degrees C! Maybe that is what the vendor didn't want to point out to us. But at least we can now find the effective RDSON.

However, we could erroneously take the average over the entire range to get

RDSAVG = 17 / 1.5 = 11.3 Ohm

Fgr. 3: The missing half of the Topswitch datasheet curve. Drain current (A) vs. drain voltage (V), extrapolated to the max current.

This estimate is way too optimistic. As mentioned previously, a more correct estimate is the average of the RDS values at the two extreme currents. At the peak value ...

RDSMAX =

We also know that the RDSON-MIN is 10 Ohm from the datasheet since the RDSON is rather typically stated at only 1/10th the maximum current, that is, 10 Ohm at 150 mA in this case.

So the correct effective RDSON is the average of the two

RDS = (RDSMIN + RDSMAX) / 2

Now that we know the RDS is 50% higher than what was indicated to us, we also have a better idea of the conduction loss in the MOSFET.
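The averaging recipe from this section can be sketched in a few lines. The 10 Ohm and 20 Ohm figures below are hypothetical readings of the instantaneous RDS(on) at the trough and peak currents on an extrapolated V-I curve, not datasheet values:

```python
def effective_rdson(rds_at_trough, rds_at_peak):
    # Close fit to the exact per-cycle integration: average the
    # instantaneous RDS(on) slopes at the two current extremes
    # (the trough and the peak of the switching current).
    return (rds_at_trough + rds_at_peak) / 2

# Hypothetical slope readings off an extrapolated V-I curve:
print(effective_rdson(10.0, 20.0))  # 15.0 Ohm
```

Compare this with the naive "average over the whole range" (17 V / 1.5 A = 11.3 Ohm in the text's example): the two-point average is markedly higher, which is the whole warning of this section.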

cont. to part 3 >>
