Switching-Mode Power Supply (SMPS)--Real-world Issues (part 3)




Don't Have a Scope? Use a DMM, Dummy

SMPS-guru's got a gremlin on one of his shoulders; a cherub on the other. The gremlin says, " You call yourself an engineer?! You'll never get this power supply to stabilize. Why don't you just quit?" The cherub says: "Back of the envelope calculations can work. Look what I brought you from Micro Center. Put this little meter to work." Somewhere between computational theory and the simplest hands-on measurement, SMPS-guru has a window on switching regulator efficiency. If only these voices would stop...

Ouch that hurts! Heaping insult on top of a latent injury, and then adding some disgrace for good measure too! And just as I was getting around to finally using the 'Math function' on my $3000 digital storage oscilloscope (DSO) to carefully measure the duty cycle of the switching regulator (which I may add, was strongly requested by you in the first place) ...

And now you come back to claim that the $19.99 Velleman digital multimeter (DMM) is enough to do the job! And even more accurately?! Hmm. Wise guy! Now you know why I have recently started to just hate everything about power!! And you know what!? I think I am going to have to cut and run pretty soon!

Now wait a minute (the voice of reason), relax. Let's get some information first. Just the basic facts. I promise to ease your pain, get you on your feet again. But in return you must understand that we really can't afford to overlook the numbers. For that is where the subtle secrets of power conversion usually reside: in those very 'trivialities' that you were always taught to ignore. "Don't get bogged down with comfortably numbing details (if you ever want to amount to anything in life)," that other voice said! "Try to see the big picture." The DSO? You see that! What do you want with a DMM?

Let's take the buck regulator for starters (the voice of reason replies). The simplified 'textbook equation' for duty cycle is D = VO/VIN. So if we were, say, stepping down from 20 V to 5 V, the calculated duty cycle is D = 5/20 = 0.25. Suppose the load is 2 A. That's an output power of 5 V × 2 A = 10 W.

Now, the center of the ramp portion (average) of the inductor current is always going to be fixed at 2 A for a buck topology, come what may. That follows from Kirchhoff's first law (and believe me, you can't afford to ever get on the wrong side of that guy!).

Then by simple waveform analysis, the average input current (which is the average switch current for this topology) can be calculated from the rectangular waveform of height 2 A with a duty cycle of 0.25. This gives an average input current of 2 A × 0.25 = 0.5 A. So the input power is 20 V × 0.5 A = 10 W. We can see that that is exactly the same as the output power, and therefore we are at 100% efficiency.

Well, what did you expect? (That voice again, the other one.) The very act of using the simplified 'text book equation' for duty cycle was tantamount to assuming 100% efficiency.

Where are the loss terms? We have clearly been guilty of ignoring the forward voltage drops across the switch, the diode, the inductor, the capacitor, besides assuming zero switching losses too!! In fact, as soon as we start to consider the non-ideal characteristics of real components, we will see that the inclusion of each non-ideality contributes a little to the increase in the duty cycle over its baseline 'textbook' value.

Which is precisely why we wanted to know the actual duty cycle in the first place. But without an oscilloscope with 'Math function' capability, are we really stuck? Let's think again. We're expecting the average input current to be 0.5 A. Now let's measure it using the DMM. Suppose we read 0.6 A. Right off the bat, we now know the efficiency is (2 A × 5 V)/(0.6 A × 20 V) = 10/12 = 83.3%. Two extra watts are lost somewhere. That's right: an additional 0.1 A drawn from the input at 20 V is 20 V × 0.1 A = 2 W. So that ties up. But for the buck topology, the center of the current waveform MUST still be the load current.

Therefore the only way the average input current, as calculated from the rectangular waveform, can be 0.6 A, is if the duty cycle has increased to 0.6 A/2 A = 0.3. So now we do know the duty cycle! Look Ma, no DSO! In fact, at this point we can declare completion of our initial assignment, that of using a DMM instead of a DSO to find out the actual duty cycle. But we can take this opportunity to delve a little deeper too. Suppose the diode drop is known to be 0.4 V. The actual duty cycle of the current through it is 1 - 0.3 = 0.7. The estimated loss in it is 2 A × 0.4 V × 0.7 = 0.56 W. However, we know that we are dissipating 2 W. But we have computed that only 0.56 W is being lost in the diode. That leaves 2 - 0.56 = 1.44 W still to be explained. We can keep going in this manner and try to account more and more accurately for all the wasted energy. But we can see that the actual duty cycle is the key to a full efficiency analysis.
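The back-of-envelope arithmetic above fits in a few lines of Python. The 20 V / 5 V / 2 A numbers and the 0.6 A DMM reading are the worked example from the text, and the 0.4 V diode drop is the given estimate:

```python
# Infer buck efficiency and actual duty cycle from one DMM reading,
# using the two facts above: the center of the inductor-current ramp
# equals the load current, and average input current = height x duty.

def buck_from_dmm(v_in, v_out, i_out, i_in_meas):
    p_out = v_out * i_out            # 5 V x 2 A = 10 W
    p_in = v_in * i_in_meas          # 20 V x 0.6 A = 12 W
    efficiency = p_out / p_in        # 10/12 = 83.3%
    duty = i_in_meas / i_out         # 0.6 A / 2 A = 0.30 (textbook was 0.25)
    return efficiency, duty

eff, d = buck_from_dmm(v_in=20.0, v_out=5.0, i_out=2.0, i_in_meas=0.6)
print(f"efficiency = {eff:.1%}, actual duty cycle = {d:.2f}")

# With the actual duty cycle known, the diode conducts for 1 - D:
v_diode = 0.4
p_diode = 2.0 * v_diode * (1.0 - d)  # 2 A x 0.4 V x 0.7 = 0.56 W
print(f"diode loss = {p_diode:.2f} W of the 2 W total loss")
```

The same three inputs (input voltage, output spec, one DMM reading) are all the method ever needs.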

It's also interesting to note that had we used our initial duty cycle estimate of 0.25 to calculate the diode loss, we would have estimated it to be 2 A × 0.4 V × 0.75 = 0.6 W. That is more than the 0.56 W calculated above. So in reality, losses tend to be lower than we may have first thought. Why so? This is just another subtle example of how nature moves in mysterious ways to rescue us, as it tries to create conditions for natural processes to converge.

Do see my Nov 2004 article on this topic. If we are interested in creating a mathematical model of our power supply in gory detail, we should realize that the entire mathematical process needs to be both iterative and convergent too! For example, as described in last month's column, the voltage drop across the switch is a function of current. So suppose the switch voltage drop is a little higher than first estimated, for any reason, like say a higher ambient temperature. Now the switch would dissipate more of the incoming energy (conduction loss term). Therefore, the converter, in an effort to keep delivering the required output power will have to increase the input power - and therefore the input current. But this will cause the switch voltage drop to increase further too, and so the input current will have to increase a little more. So on and so forth, till mathematical convergence occurs.
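That iterative bookkeeping can be mimicked with a tiny fixed-point loop. Here is a sketch for the buck example above, using conduction losses only; the 0.5 V switch drop and 0.4 V diode drop are assumed illustrative values, not measurements:

```python
# Fixed-point sketch of the convergence described above: the switch and
# diode drops raise the duty cycle, which raises the input current, which
# raises the conduction loss, and so on, until the numbers settle.

v_in, v_out, i_out = 20.0, 5.0, 2.0
v_sw, v_diode = 0.5, 0.4              # assumed forward drops (illustrative)
p_out = v_out * i_out

d = v_out / v_in                      # start from the textbook value, 0.25
for step in range(50):
    # switch conducts for D, diode for 1-D, both carrying ~2 A
    p_loss = i_out * (v_sw * d + v_diode * (1.0 - d))
    p_in = p_out + p_loss
    d_new = p_in / (v_in * i_out)     # D = Iin/Iout = (Pin/Vin)/Iout
    if abs(d_new - d) < 1e-9:
        break
    d = d_new

print(f"converged duty cycle = {d:.4f} (textbook value was 0.25)")
```

A few passes through the loop and the duty cycle settles just above 0.27; nature, as noted, does the same thing rather faster.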

Nature converges similarly, only much, much faster! Like almost at the speed of light!?

The number crunching is actually quite different for the non-buck topologies. Try to work through a numerical analysis for any other topology. This article was itself inspired by a rather persistent but nice caller --- he was asking me just such a question, based on the boost topology. Actually, nowadays I do get several Emails and calls from engineers from different parts of the world, seeking advice or help to solve a troublesome and usually non part-specific technical problem. And I am always glad to help, provided of course I have the time and acumen to think it through, and 'get it.' Though sometimes I admit, in my case too, my lips move, but you won't hear what I am saying! Good Luck.

Are We Making Light of Electronic Ballasts?

Way to go, SMPS-guru! Rave reviews of SMPS-guru's guide on switching power supply design identify it as the new "bible," the absolute definitive text on this subject. Planet Analog is proud to have "discovered" him, to have published excerpts, and invites you to read his current column. This one is on the reliability of electronic lighting ballasts in India, where the wiring is frayed, the temperature sizzles, and ownership of fluorescent tubes constitutes bragging rights.

"Power conversion" is a conversion from one form of energy to another. It doesn't preclude the conversion of input energy into light, instead of more conventional load profiles that power supply engineers are generally accustomed to. In fact, I found out the hard way how difficult this area of power conversion can really be. Far more difficult than your average dc-dc converter! My first exposure to electronic ballasts was, it now seems, light years ago (no pun intended), while I was working in the swank and well-equipped central R&D labs in Bombay of one of India's largest electrical manufacturers. That company is to India, probably what Siemens is to Germany - with a huge and diversified market share in almost everything 'electrical.' India, of all the hot, confusing, bustling places on earth, may actually form the best place to see what the brouhaha about ballasts is all about. It’s the perfect testing ground (and potential graveyard) for all electronic ballasts, big and small, the ultimate leveler if ever there was one.

We're aware that electronic ballasts are universally known to make for less stressful working environments, thanks to their steady, flicker-free light, besides other nice features like instant start, the more recent auto-dimming capability, and power-factor correction.

Though electronic ballasts have been available for a dozen years, it’s very surprising that sales have barely started to exceed copper-wound (or 'magnetic') ballasts, which are known to be physically bulkier, heavier, and much less energy efficient. (Touch one and you will surely scream 'yeeeooow.') True, electronic ballasts cost more, but you are supposed to get paid back handsomely in a few years in reduced energy costs. Some governments also continue to contemplate subsidies to help consumers afford such ballasts. But I don't think it has happened yet.
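As a rough sketch of that payback arithmetic: every number below is an assumption for illustration (substitute local prices, tariffs, and usage hours to redo it).

```python
# Rough payback estimate for an electronic vs. magnetic ballast.
# All figures are illustrative assumptions, not survey data.

price_premium = 8.00     # extra cost of the electronic ballast, $
p_magnetic = 13.0        # loss in the magnetic ballast, W
p_electronic = 4.0       # loss in the electronic ballast, W
hours_per_year = 3000    # assumed office usage
tariff = 0.12            # assumed $ per kWh

saved_kwh = (p_magnetic - p_electronic) * hours_per_year / 1000.0
payback_years = price_premium / (saved_kwh * tariff)
print(f"saves {saved_kwh:.0f} kWh/yr, pays back in {payback_years:.1f} years")
```

With these numbers the premium is recovered in roughly two and a half years, which is the "handsome payback in a few years" the argument rests on.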

In a typical office environment, the light bills can amount to 40% of the total energy costs.

There is great need to conserve energy in lighting, much as we are doing with the standby power requirements of appliances. Community-conscious and inherently progressive organizations (like my employer, for example) have installed PIR (passive infra-red, i.e. body-heat) sensors almost everywhere to turn the lights on and off as needed. I suspect they also use electronic ballasts. Remember, PIR sensors do help at night, but clearly can't save energy on their own during the day. For that, electronic ballasts must be used in conjunction.

(As an aside, we may not conserve THAT much energy if engineers like me continue to leave their computers and monitors on the whole day, night, through weekends, and on long breaks, spicing up their screensavers with graphic-intensive witticisms like "go away," "scat," or simply "attending a meeting" refreshing over and over again. I had heard of endless meetings, but a 10-day blinking message can really take the cake).

California, with its high energy costs, is certainly waking up to the potential savings from electronic ballasts this year. See a very useful discussion of ballast sales and impending regulations starting 2005 in the papers at http://www.aboutlightingcontrols.org/education/papers/ballasts.shtml. Another very nice piece on the myths associated with ballasts and tubes can be found here.

Note that the fluorescent tube also responds rather positively, in terms of its own life and performance, when driven at over 20 kHz (as in an electronic ballast), rather than with 60 Hz (as in magnetic ballasts). Tube-replacement costs are thus significantly reduced with electronic ballasts. But, wait! What about the life of the ballast itself? That is another story altogether! Electronic ballasts unfortunately have been plagued by failures. So how do we as traditional power supply engineers provoke incipient failures in power supplies? We increase the stress levels, especially during design phases. Simple burn-ins are actually too kind to the system! To demonstrate an exaggerated burn-in situation, let's take an 'Amazing Race' detour, and parachute down directly into a remote residential area somewhere in the heartland of India.

Oh I know you feel suddenly out of place! Here time stands still. Well, almost! After a moment's rest, you can probably observe that the AC utility lines and the frayed household wiring haven't been replaced for several decades now. But worse! Here we happen to be surrounded by several industrial units using heavy electrical machinery, even during the night.

The mains input in India is officially supposed to be 230 VAC or is it 220 V or 240 V? Even I will never know. In fact, that may be a rather redundant question, especially here, because the voltage is known to drop down to a steady level of almost 120 VAC (no typo here: one hundred and twenty!). More so in the summer months (9 of these in India) when all the fans and air-circulators try to come on at the same time. The standard household incandescent bulb is now seen glowing faintly in the distance, too dim to even see where the TV and fridge lie gasping for breath.

So it may confer minor bragging rights with inquisitive neighbors to possess fluorescent tubes in every room of the house! The sage had mentioned these tubes provide a more acceptable illumination level even at such low input voltages. Provided they work!! To get them to work, magnetic ballasts are literally powerless. An electronic ballast can work in principle, since it’s basically the flyback (buck-boost) topology principle. Aha! Out with the sage! ... I now see a power conversion engineer firmly entrenched here too! And this poor guy has to ensure his circuit design can get the tube to fire, and continue to run at such a low input voltage.

However, we can't be in the business of designing and selling one ballast design for one area and another circuit design for another area or locality. So now let's take the same ballast and move inside an industrial facility (with better local wiring). We’re not surprised to find this is where the maximum usage of fluorescent lighting occurs. Rows upon rows of multi-tube ceiling fixtures, as in all countries. Sounds familiar and encouraging. I too feel almost at home now.

But now we sadly note that the utilities often raise the voltage at the substation end, just to compensate for some arbitrarily calculated ohmic drops across miles and miles of lines. But suppose we happen to be the unlucky business unit sitting 'up close and personal' to the actual distributing substation. The intervening ohmic drops are thus virtually negligible. What we get coming into our facility 24/7/365 is a steady overvoltage of over 270 VAC! But now if we dare to use heavy industrial machinery on our premises, we also know that that can unleash huge inductive spikes back into the mains, coinciding with the solenoids and motors turning off. Like any typical ballast manufacturer in much of the third world (and Eastern Europe, especially the former Soviet bloc), we are faced with the daunting design task of ensuring rock-solid reliability under steady voltage variations of about 100-300 VAC, overlaid with huge spikes.

In fact the relevant qualifying test is usually based on keeping the ballast in operation, and simultaneously applying at the input, the well-known 8/20 µs lightning surge test.

No short-term or long-term damage should ever occur. Note that these line surges occur frequently in these areas (rather than being a rare 'once in a blue moon' type of thing), so we just cannot rely on MOVs (metal-oxide varistors), which have inherent lifetime/wearout issues.

Nor can we rely on TVSs (silicon transient voltage suppressors), because the latter often can't even handle the energy a single 8/20 µs spike throws at them, let alone a succession of spikes arriving at a steady rate per minute.
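A crude order-of-magnitude estimate shows why. Modeling the 8/20 µs current pulse as a triangle of roughly 20 µs width (a rough approximation, not the standard's definition), and assuming a clamp voltage and a peak surge current:

```python
# Why a small TVS struggles with an 8/20 us surge: estimate the energy
# dumped into the clamp. All values are assumed test conditions.

v_clamp = 400.0      # assumed TVS clamping voltage, V
i_peak = 1000.0      # assumed surge peak current, A
t_eff = 20e-6        # effective pulse width, s (triangle approximation)

# Voltage held ~constant at the clamp level, current a triangle:
# energy ~ Vclamp x (area under the current pulse)
energy = v_clamp * 0.5 * i_peak * t_eff
print(f"surge energy into the clamp ~ {energy:.1f} J")
```

A popular 1500 W-class TVS absorbs on the order of a joule or two per pulse, so even a single surge of this size is already several times too much for it, never mind a steady barrage.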

But that's not all! Any ballast in the world must be able to survive the 'deactivated tube' test.

That is where the gas has leaked out, but the heating filaments are still present, and so the circuit keeps trying to start the tube endlessly (at the elevated voltage and frequency needed to cause it to strike). In fact, this particular test killed every ballast we ever tested in Bombay with no exception (except the one I finally designed! You knew I would say that, but it's true!). We burned out every known name-brand ballast we actually imported at that time from the USA, Europe, Singapore, Korea, Japan (you-name-it) into India. We personally performed the last rites. And it was a virtual crematorium.

So India is possibly a great place to hone the design skills for any mains-input ('off-line') power conversion device. Remember that when you set up a design center in Bangalore! This should help take the sting out of it. Next month we will get into the nitty-gritty technicalities, tell you exactly how the simple electronic ballast actually works, and how we ended up enhancing the reliability besides reducing the manufacturing cost by a factor of almost 2, in the course of what was probably the most successful R&D technology-transfer project in that company. Not that they remember me anymore!

Fgr. 4: The Basic Electronic Ballast and the Improvements

More on Designing Reliable Electronic Ballasts

Long, long ago, in a land far away (actually, not so far away with the Internet and 800 numbers), our hero tried to figure out how to keep electronic ballasts from blowing up. The key, writes SMPS-guru in this installment of his power design column, is in the ferrite inductor in series with the lighting tube. While its basic purpose is to limit the current, there is a good deal of resonant frequency energy coming off-line. Your choice to use it for good or evil.

This month I need to fulfill the promise I made about explaining what all we did with electronic ballast technology in India, thousands of years ago (or so it seems).

The most common (and commercially viable) fluorescent ballasts still use bipolar transistors (BJTs), not mosfets. They are also self-oscillating, and therefore need no PWM control IC.

This is actually an advantage. Engineers who have worked with self-oscillating topologies (like the well-known Royer oscillator) know not to underestimate them. They are inherently self-protecting and tend to be very rugged. Short their outputs and the frequency automatically adjusts itself to maintain critical conduction mode, so there is no staircasing of the current or magnetic flux in the inductor.

In the electronic ballast too, there is a ferrite inductor in series with the tube. Its basic purpose is to limit the current, as in conventional copper ballasts. The difference is that the copper ballast needs a large line-frequency choke made of iron/steel, whereas the electronic ballast's choke is much smaller, lighter, and made of ferrite material. The size advantage is exactly what we expect, and invariably achieve, by creating switching action at a very high frequency.

However, one basic difference as compared to a conventional half-bridge switching power supply (which the electronic ballast shown in Fgr. 4 resembles) is that the ballast is actually a resonant topology. The L in series with the tube forms a series-resonant circuit with the two C's of the half-bridge (which are effectively in parallel from the AC point of view). So the current sloshes back and forth, and it makes perfect sense to therefore make the circuit self-oscillate, with the help of base-drive transformers as shown in the oval circle.

Note that when the tube has not fired, that is, when we first apply AC input, the small unnamed capacitor across the tube is the effective series capacitance of the resonant tank circuit. It's the starter cap. On startup, the oscillations are in fact at an even higher frequency than in normal operation. What's more, since there is almost no damping resistance during startup, a high voltage is created across the tube, to get it to fire. We must remember from our high-school EM course that a series LC presents a very low input impedance at its resonant frequency, and at that moment, the voltages across the L and the C can both be very high, though opposite in phase, thereby effectively canceling out as far as the input is concerned.

This is just the opposite of a textbook parallel LC tank circuit, which presents a very high impedance at its resonant frequency, and in which the currents in each component can be very high, though opposite in phase, thus effectively canceling out as far as the input is concerned. We also remember that if we drive such a low-impedance series LC tank circuit at its natural resonant frequency (which is what we really do by using a self-oscillatory scheme), the oscillations build up every cycle; so, though the input voltage remains the same, the currents and the voltages across each reactive component keep building up every cycle. Finally, the tube 'fires,' effectively bypassing the small starter capacitance. Thereafter, the circuit lapses into a more stable, damped, lower-frequency oscillation based on the resonant frequency set by the C's of the half-bridge.
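The two resonances can be put into numbers. The component values below are illustrative assumptions, not the article's design; the point is only that the small starter cap gives a high ignition-phase resonance, and the larger half-bridge capacitance gives a lower run frequency:

```python
import math

# Series-resonant tank of the ballast: L in series with the tube,
# resonating against the small starter cap before the tube fires, and
# against the larger effective half-bridge capacitance afterwards.
# All component values are assumed, for illustration only.

L = 1.0e-3          # series ferrite inductor, H (assumed)
C_start = 6.8e-9    # small starter cap across the tube, F (assumed)
C_run = 47e-9       # effective half-bridge capacitance, F (assumed)

def f_res(L, C):
    """Natural resonant frequency of a series LC: 1/(2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

f_ignite = f_res(L, C_start)   # high, nearly undamped: voltage builds up
f_run = f_res(L, C_run)        # lower, damped by the conducting tube

print(f"ignition resonance ~ {f_ignite/1e3:.0f} kHz, run ~ {f_run/1e3:.0f} kHz")
```

With these assumed values the tube is struck at around 61 kHz and then runs near 23 kHz, consistent with the "over 20 kHz" operation mentioned earlier.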

We can clearly see one major problem already. That is, what if the tube does not fire? This is a real-world possibility, since the seals at the ends of the tube may leak, thus affecting the 'vacuum' inside the tube over a period of time. In this situation, we are expecting to replace the tube, not the ballast! But in a virtually undamped LC circuit, the oscillations will build up every cycle, and eventually the transistors, which see the same current when they turn on, will be destroyed. This is what leads to the 'deactivated tube' test. The tube does not fire and the filaments at the end of the tube are typically of such low resistance, that they really can't damp out the steadily escalating oscillations. Some engineers therefore try to place an additional resistor in series with the small starter capacitance, but this certainly affects the ability to start the tube, especially at lower mains input voltages.

A PTC (a thermistor with a positive temperature coefficient) can be used, but it’s an expensive solution and also has response-time limitations. In the case of the existing ballast design (just before we set to work on it), the previous engineers had tried to circumvent the deactivated tube failures by using more expensive and hefty 'horizontal deflection' transistors (the well-known BU508A). But these have low gain, and they run inefficiently and get hot.

So now heatsinks had to be added. In addition, there was still a need to turn off the ballast after several such unsuccessful attempts to fire the tube. So in came an expensive mechanical thermal overload relay, fastened to the heatsinks. But then they found out that it just failed to act fast enough to protect the transistors, given the high heatsink thermal capacity involved.

 

====

Fgr. 5: Different Toroidal-type Drive Transformers: (BASIC) single toroid, (IMPROVED) double toroid, (BEST) balun; windings labeled T_hi, T_lo, and T_ct, shown in oblique and top views.

====

Our contribution was to use the principle of the flyback converter to recover energy back from the inductor. See the second schematic in Fgr. 5. An additional winding (with turns ratio and polarity carefully worked out) recovered the excess energy and delivered it back to the main input bulk capacitor. But to eventually cease the switching operation, a sense resistor (or diode, in our case) was added that would charge up a capacitor and eventually trigger a small NPN-PNP latch sitting on the base of the lower transistor. Now mains resetting would be required to make the ballast try again. Fail-safe, really.
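A sketch of the flyback-style bookkeeping behind that recovery winding. Every number below is an illustrative assumption; the point is the size of the power that must go somewhere during a failed start, and how the turns ratio sets the clamp level:

```python
# During a failed start, the energy the series inductor stores each cycle
# is returned through an extra winding to the input bulk capacitor
# instead of cooking the transistors. All values are assumed.

L = 1.0e-3          # series ferrite inductor, H (assumed)
i_peak = 1.5        # peak inductor current per failed-start cycle, A (assumed)
f_sw = 60e3         # oscillation frequency while trying to strike, Hz (assumed)
v_bulk = 300.0      # input bulk capacitor voltage, V (assumed)
v_clamp = 450.0     # voltage the inductor is allowed to fly back to, V (assumed)

e_cycle = 0.5 * L * i_peak**2      # energy stored in the inductor per cycle
p_recovered = e_cycle * f_sw       # power steered back into the bulk cap

# Flyback action: the recovery winding conducts once the reflected voltage
# reaches v_bulk, so the (primary : recovery) turns ratio sets the clamp.
n = v_clamp / v_bulk

print(f"recovered ~ {p_recovered:.0f} W; turns ratio ~ {n:.1f} : 1")
```

Tens of watts circulating during a failed start is exactly the kind of energy that, left undamped, destroys the transistors; steering it back to the bulk cap is what makes the scheme survivable.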

Another critical area of improvement was the base drive. Historically, many vendors use a single-inductor approach (see the first schematic in Fgr. 5). However, we should remember that the key to turning ON a BJT efficiently is a little different from the way we want to turn it OFF. In particular, the turn-on must be a little slower (delayed); it has been shown that the actual crossover duration is significantly reduced if, in fact, we don't do a hard turn-on.

On the other hand, for turn-off we do want to create a hard turn-off, yanking the base momentarily, several Volts below the emitter voltage.

The problem with the single-inductor drive is that the turn-on waveform of one transistor is the exact inverse of the turn-off waveform of the other. So there is no possibility of driving them appropriately and differently, and thereby efficiently. The transistors can thus run very hot with the single-inductor base drive. In Fgr. 5, we have 'hi' for the high-side transistor, 'lo' for the low-side, and 'sw' stands for the primary winding, that is, the loop of wire passing through the base-drive current transformer from the switching node. To conquer the limitations of the single-inductor drive, engineers often use the double-toroid approach.

However, if the permeabilities and dimensions of the two toroids are not well-matched, there are again discrepancies during the crossover, resulting in losses. But realizing that what matters is not the absolute permeabilities of the two toroids, only how well they match each other, we started using an innovative 'balun' core to drive the transistors. The advantage is that 'both halves' of the balun are created in the same batch, so though the permeability may have a lot of process tolerance, the two halves are still very well-matched. Besides, since the two halves possess uncoupled inductance, we can create the appropriate turn-on and turn-off waveforms with a little 'wave-shaping' circuit, as shown in Fgr. 6. Baluns, usually made for RF suppression on Ni-Zn ferrite, can be made to order in the more preferred Mn-Zn material. They then require fewer turns and run very cool themselves too.

With these improvements, we could do away with the heatsinks altogether. The transistors would now run cool-to-the-touch, even while free-standing. No thermal trip was required.

The transistors could also now be the cheaper and higher-gain MJE13005. As for the input surge test, we replaced the filter with a smaller differential-mode toroidal filter deliberately made of lossier Ni-Zn ferrite material, and simultaneously increased the input bulk capacitance slightly, for that is the only way to really pass the lightning surge test without MOVs, and so on.

Last, I will pass along an interesting e-mail I received from Pat Rossiter in Denver, describing that such reliability problems exist not just in India, but right here in the US too.

He writes:

Dear SMPS-guru [please circle correct choice]: I read with great interest your article in today's Planet Analog Newsletter. I'm not as technically versed as I'd like to be, but it looks like the ballast that you invented (way to GO!) would be just the ticket for our lights here at Yellow Cab. We're a stone's throw from a substation and our power fluctuates a bit, and in the past month I have replaced 5 ballast packs. The point of this email (and it's certainly about time that I got to it) is: where can I get the ballast packs you described? Eagerly awaiting your illumination on the matter.

With that I put my pen down again for this month.

Fgr. 6: Base-drive Enhancement Circuit

The Organizational Side of Power Management: One Engineer's Perspective

Even with jobs going overseas, the age-old conflict between engineering and marketing has not diminished.

In this departure from diodes, mosfets, and loop currents, SMPS-guru looks at the organizational side of power management projects. The vocal, technology-grounded engineer may too often be the unwanted child in a "King's Court," he worries.

Talking endlessly about quaint base-drive toroids, ticklish lifetime issues with aluminum capacitors, neat though quirky current-sensing techniques, and what have you, has only one purpose in mind: that of advancing technology. But do we, as engineers, usually solidly devoted to our craft, really manage to achieve that to the full extent we hoped and struggled for? Or is it a case of "one step forward" and then "two steps backward" (for reasons beyond our control)? A glass that at best remains tantalizingly half-empty and half-full, all because of a darn leak that we just forgot to factor into our calculations!

But what about the organizational side? Advances in science and technology, especially in power conversion, hinge on a few basic and fairly obvious commonsensical principles. But surprisingly, these are the very ones often overlooked at an organizational level. Let me try to categorize some of my personal observations here - see if you agree:

Communication

A lot can be learned by simply sharing experiences: things that worked and things that didn't.

Why should engineers always end up repeating mistakes and learning the hard way about potential bugs or possible catastrophes that they could've known about beforehand, just by listening and thinking? Couldn't they also take what in fact was already proven to be the best available engineering solution at the time, and develop it further? Creativity has its place, but let's not re-invent the wheel, please! However, when organizations grow, they usually start subdividing, and quite arbitrarily too. So the Power team becomes separate "AC/DC" and "DC/DC" groups, whereas we all know that at the heart of any ac/dc switcher is none other than a dc/dc switcher! At a later stage, the DC/DC group may become "Portable Power" and "Power Management" groups.

But both still use the same topologies, duh! Then sooner or later, Power Management may bifurcate into "high power" and "low power." Does all this imply any radical change in engineering principles? Not really! So the end result is that engineers, the ones who are expected to generate products and revenue in the first place, simply don't run into each other anymore, or get to talk about their experiences, even over the coffee machine.

What's worse: if the assignments are on the unimaginative basis of "one project, one engineer," then even within any such finely divided sub-group, there is almost no sharing of engineering information thereafter.

Yes, I agree: marketing or sales or even Field Application engineers may need to be divided to get more "business focus," but for engineering focus, you actually need to get the engineers together, not drive them apart. Engineers always thrive when they share. No one really benefits in the long run if the pinky, for example, no longer knows what even the middle finger is up to. And power conversion is just too tricky an area to take that chance.

Integrity

Engineers are trained to respect only facts and data! That's the key to their success as engineers: standing behind every robust and brilliant product they create. Unfortunately, that strength also sometimes isolates them from the rest of the crowd. Of course there are people whose legitimate job is to slightly blur the boundaries between fact and fiction, to create nebulous and fuzzy perceptions in others' minds. Sales, Marketing, PR, for example? But conflicts over integrity can turn out to be quite debilitating to an engineer in the long run. Like what to write in a datasheet. Or what can or can't be designed, and how soon.

Or "promote this part number please ... I don't care what you are writing in your App Note if you don't manage to sell my part first and foremost!" In fact, I know a company where the entire Applications Engineering department reports to the person who doubles as the Marketing manager too. That may not have been such a bad idea provided that person was once an engineer, or at least had a thorough grasp of technical details. But that is usually not so. So one can easily envisage a situation where the Marketing person promises the customer the moon and the stars (all at daybreak tomorrow!), and then goes back to drive his engineers to build his (already committed) palace of dreams overnight. Or else!

Resources

Engineers relish challenges. Changes to their daily routine excite them. Don't hand them the humdrum job of simple repetitive power-up testing of 50-odd boards for a customer. That kills their spirit. Yet how often have we seen that, come layoff time, the first people to go are the CAD guys, then the technicians, and then the documentation expert! On paper, the concerned manager can show impressive savings to his superiors. Headcount is what it's all about! But a few months later, engineers are still trying to grapple with ORCAD or Protel just to do their simple PCBs.

And in addition, they have to learn a rather complicated language whose only purpose is to transcribe what they already know and have written, into a format suitable for the datasheet standard used by the company. And yes! Those damn 50 boards are still waiting on the bench! Why couldn't these engineers have been doing what they are best trained to do, and also enjoy the most - you guessed it: engineering!? In fact right now, they may have just become the most highly paid technicians around.

Peer Environment

Technology may never gain a foothold in a "king's court," where you are either rewarded with largesse for being vehemently agreeable, or unceremoniously sentenced to the dark dungeons for the rest of your life. Engineers like to speak out - but usually only when they are sure of their facts and have incontrovertible data to back themselves up. They therefore deserve and need a "peer environment," where they are judged (primarily) by the respect received from their peers, the king be damned (on occasion)! It must be kept in mind that this can really bother the king sometimes! So managers who supervise engineers should be fairly competent at a technical level themselves, and respect data and facts equally. They can't attempt to win a technical argument by pulling rank on their subordinates. Nor should they ever go around, God forbid, trying to subsequently shoot the "emotional and/or disrespectful" engineer down ("that'll teach him"). Surprisingly, that happens more than we dare admit. Not only does the good engineer pay the price, but so does technology in the long run.

