Category: Iacdrive_blog

Experience: Power Supply

My first big one: I had just joined a large corporation’s central R&D in Mumbai (my first job) and I was dying to prove to them that they were really very wise (for hiring me). I spent the first few weeks setting up my first AC-DC power supply. Then one afternoon I powered it up. After a few minutes, as I stared intently at it, there was a thunderous explosion…I was almost knocked over backwards in my chair. When I came to my senses I discovered that the can of the large high-voltage bulk cap had just exploded (in those days 1000uF/400V caps were really big)…the bare metal can had taken off like a projectile and hit me with a thump on the chest through my shirt (the spot was still red hours later). A shower of cellulose and some drippy stuff was all over my hair and face. Plus a small crowd of gawking engineers when I came to. Plus a terribly bruised ego, in case you didn’t notice. Now this is not just a picturesque story. There is a reason why they now have safety vents in aluminum caps (on the underside too), and why they ask you never, ever to apply reverse polarity, even accidentally, especially to a high-voltage Al cap. Keep in mind that an Al Elko is certainly damaged by reverse voltage or overvoltage, but the failure mechanism in both cases is simply excessive heat generation. Philips Components, in older datasheets, used to actually specify that their Al Elkos could tolerate an overvoltage of 40% for maybe a second, I think, with no long-term damage. And people often wonder why I only use 63V Al Elkos as the bulk cap in PoE applications (for the PD). They suggest 100V, and warn me about surges and so on. But I still think 63V is OK here, besides being cheap, and I tend to shun overdesign. In fact I think even ceramic caps can typically handle at least 40% overvoltage by design and test, and almost forever with no long-term effects. I may be wrong here though; double-check that please.
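As a back-of-the-envelope sanity check on that 63V choice, here is a sketch only, using the IEEE 802.3 PSE maximum of 57V and the 40% momentary-overvoltage figure discussed above; verify both numbers against your own parts and standards editions:

```python
# Quick margin check for a 63 V bulk cap in a PoE PD.
# 57 V is the IEEE 802.3 maximum PSE output voltage; the 40% momentary
# overvoltage tolerance is the (old) datasheet claim discussed above --
# confirm both against your own cap vendor's datasheet before relying on them.

V_PORT_MAX = 57.0      # V, maximum PSE output voltage seen at the PD input
V_CAP_RATED = 63.0     # V, rated working voltage of the bulk electrolytic

steady_margin = (V_CAP_RATED - V_PORT_MAX) / V_PORT_MAX
print(f"Steady-state voltage margin: {steady_margin:.1%}")   # ~10.5%

# If the cap really tolerates ~40% overvoltage for a brief surge:
V_SURGE_TOL = 1.40 * V_CAP_RATED                             # momentary, V
print(f"Momentary surge tolerance: {V_SURGE_TOL:.1f} V")
```

With those assumptions the cap runs about 10% below its rating in steady state, and a 40%-for-a-second tolerance would cover transients up to roughly 88 V, which is the arithmetic behind shunning the 100V part.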

Another historic explosion I heard about after I had left an old power supply company; I deny any credit for this one. My old tech, I heard, was trying in my absence to document the stresses in the 800W power supply which I had built and left behind. The front end was a PFC stage with four or five paralleled PFC FETs. I had carefully put in ballasting resistors in the source and gate of each FET separately, and had diligently routed symmetrical PCB traces from the lower node of each sense resistor to ground (two-sided PCB, no ground plane). This was done to ensure no parasitic resonances, and good dynamic current sharing too. There was a method to my madness, it turns out. All that the tech did, when asked to document the current in the PFC FETs, was to place a small loop of wire in series with the source of one of these paralleled FETs. That started a spectacular fireworks display which I heard lasted over 30 seconds (what, no fuse???), with each part of the power supply going up in flames almost sequentially in a domino effect, and a small crowd staring in silence along with the completely startled but unscathed tech (lucky guy). After that he certainly never forgot this key lesson: never attempt to measure FET current by putting a current probe in its source; put it on the drain side. It was that simple. The same unit never exploded after that, just to complete the story.

Maximum permissible value of grounding resistance

For grounding in the US it typically goes like this: the utility transformer has one ground rod. Then from the utility to the building you typically have three phase conductors and one neutral/ground conductor landing on the main panel with the utility meter. At that point we drive a ground rod. We bond the ground rod to the water pipes (generally), and we bond the ground rod to the building steel (generally). Water pipes are generally very well connected to ground, and the building steel makes a nice user ground. With all these connections you typically have a good ground reference. Now, if that utility neutral wire is bad or too small, then you can have a poor ground reference between phases (a common sign of that is flickering lights even when the load is not changing much).

Grounding impedance of the transformer and building ground rods is mainly for voltage stabilization and under normal conditions should have nothing to do with our ground-fault return current. See NEC 250.4(A)(5): “The earth shall not be considered as an effective ground-fault current path.”

Let’s say we have a system with a building transformer and panel-to-ground impedance of 1000 ohms (we built this place on solid rock). Okay, we have a poor 277V reference and we will have flickering lights (that 277V will bounce all over the place). But now, in our system above, if we take a phase wire and connect it to a motor shell, which is also connected to our grounding wire, will the upstream breaker trip? The answer is yes. If our phase-to-ground fault impedance is low we will trip the upstream feeder breaker no matter what the main panel ground rod impedance is. My point here is that it does not matter what our transformer grounding is or what our panel grounding is (the ground rod is not important in this case). The breaker must trip because our circuit is complete between the phase conductor and the transformer wye leg.

As long as we have a utility-transformer-to-panel neutral conductor of proper size to handle our fault current, and we size our grounding conductors properly and connect them properly at each subpanel and each motor in our case, we will apply nearly full phase-to-ground voltage, because our real ground-fault path is from that motor, through the grounding conductor, through our subpanels, to our main panel, then back to the transformer. That ground current must flow through our building grounding conductor to the main panel and back to the transformer through that utility neutral wire, which is connected to the wye leg of the transformer. And it does not matter what the transformer-to-ground-rod connection is. We could remove the transformer-to-ground-rod connection and the main-panel-to-ground-rod connection completely, and we would still be connecting that phase wire, through the motor metal and the grounding conductor, back to the wye leg of that utility transformer, which completes our electrical circuit. Current will flow and the breaker will trip.
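A rough Ohm’s-law sketch of the two candidate return paths makes the point; the metallic loop impedance here is an assumed illustrative value, not a measurement:

```python
# Why the breaker trips regardless of ground-rod resistance:
# compare the metallic fault loop against the earth path.
# All impedance values are illustrative assumptions.

V_PHASE = 277.0   # V, phase-to-neutral voltage (from the example above)

# Path 1: phase conductor + equipment grounding conductor back to the
# transformer wye point -- a low-impedance metallic loop.
Z_LOOP = 0.25     # ohms, assumed total metallic loop impedance
I_fault = V_PHASE / Z_LOOP
print(f"Fault current via grounding conductor: {I_fault:.0f} A")  # 1108 A

# Path 2: through the earth via the 1000-ohm ground rod (solid rock).
Z_ROD = 1000.0    # ohms, the poor rod from the example
I_earth = V_PHASE / Z_ROD
print(f"Current via earth: {I_earth:.3f} A")  # 0.277 A -- will never trip a breaker
```

Over a kiloamp through the metallic loop versus a fraction of an amp through the rod: the earth path is irrelevant to clearing the fault, exactly as NEC 250.4(A)(5) says.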

Power supply prototype failures

I remember my very first power supply. They threw me in the deep end in 1981, building a multi-output 1 kW power supply. I was fresh from college, thought I knew everything, and consumed publications voraciously to learn more. Exciting times.

But nothing prepared me for the hardware trials and tribulations. We built things and they blew up. Literally. We would consume FETs and controllers at an alarming rate. The rep from Unitrode would come and visit and roll his eyes when we told him we needed another dozen controllers since yesterday.

The reasons for failure were all over the map: EMI, heat, layout issues, design issues, bad components (we had some notorious early GE parts; they exited the market shortly afterwards).
Some of the issues took a few days to fix, some of them took weeks. We had two years to get the product ready, which was faster than the computer guys were doing their part, so it was OK.

90% of the failure issues weren’t talked about in any paper, and to this day, most of them still aren’t.

So, fast forward to today, 32 years later. I still like to build hardware – you can’t teach what you don’t practise regularly, so I keep at it.

With all the benefit of 3 decades of knowledge I STILL blow things up. Everything progresses along fine, then I touch a sensitive circuit node, or miss some critical design point, and off it goes. I’m faster now at finding the mistakes, but I still find there are new ones to be made. And when it blows up with 400 V applied, it’s a mess and a few hours to rebuild. Or sometimes you have to start over, if the PCB traces are vaporized.

So my first prototype, while on a PC board, always includes the controller in a socket, because I know I will need that. Magnetics too, when possible; I know I’ll revise them time and again to tweak performance. PC boards will take a minimum of two passes, probably three.

Is it worth building batteries into electric cars?

Energy storage is the issue. Can we make batteries, supercaps, or some other energy storage technology that will allow an electric car to have a range of 300-500 miles? Motors and drives are already very efficient, so there is not much to be gained by improving their efficiency. As for converting the entire fleet of cars to electric, I don’t expect to see this happen any time soon. The USA has more oil than all the rest of the world put together; we probably have enough to last 1000 years. Gasoline and diesel engines work very well for automobiles, trucks, and locomotives. The USA also has a huge supply of coal, which is a lot cheaper than oil. Electricity is cheaper than gasoline for two reasons: coal is much cheaper than oil, and coal-fired power plants have an efficiency of about 50%. Gasoline engines in cars have a thermal efficiency of about 17%, while diesel locomotives have an efficiency of 50%+.
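To make the efficiency argument concrete, here is a rough cost-per-mile sketch using the efficiency figures quoted above together with assumed energy prices; the prices and the 0.30 kWh-per-mile demand are placeholder numbers, so plug in your own:

```python
# Rough cost-per-mile comparison, electric vs gasoline.
# Engine efficiency (17%) is from the text; prices and the per-mile
# energy demand are ASSUMED illustrative values.

GAS_PRICE = 3.50     # $/gallon (assumed)
GAS_ENERGY = 33.7    # kWh of chemical energy per gallon of gasoline
ENGINE_EFF = 0.17    # gasoline car thermal efficiency (from text)
CAR_DEMAND = 0.30    # kWh of useful energy needed per mile (assumed)

ELEC_PRICE = 0.12    # $/kWh at the wall (assumed)
EV_EFF = 0.85        # battery + drive efficiency (assumed)

gas_cost_per_mile = CAR_DEMAND / ENGINE_EFF / GAS_ENERGY * GAS_PRICE
ev_cost_per_mile = CAR_DEMAND / EV_EFF * ELEC_PRICE

print(f"Gasoline: ${gas_cost_per_mile:.3f}/mile")
print(f"Electric: ${ev_cost_per_mile:.3f}/mile")
```

Under these assumptions the electric car comes out several times cheaper per mile, which is mostly the 17% engine efficiency doing the damage; change the assumed prices and the ratio moves, but the efficiency gap remains.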

I don’t believe the interchangeable battery pack idea is workable. Who is going to own the battery packs and build the charging stations? And what happens if you get to a charging station with a nearly dead battery and there is no charged battery available?

Who is going to build the charging stations? The most logical answer is the refueling-station owners, as an added service. The more important question is about ownership of the batteries. Suppose that, as a standard, all batteries are of the same size, shape, connectors, and amp-hour (or kWh) rating, with a finite lifetime, let’s say 1000 recharges; the standard batteries may have an embedded recharge counter. The electric car owners would pay the service charges, plus the cost of the kWh of energy, plus 1/1000 of the battery cost. That way, you pay the cost of a new battery once, when you buy or convert to an electric car, and then you pay the depreciation cost. This means you always own a new battery. The most probable owner of the batteries would be the battery suppliers, or a group or union of them (like a health insurance union). The charging stations collecting the depreciation cost would pass it on to the battery suppliers’ union. Every time a charging station gets a dead battery, or one with its recharge counter full, they would return it to the union and get it replaced with a new one. So, as the owner of an electric car, you don’t need to worry about how old or new the replacement battery you get from the charging station is; you will always get a fully charged battery in exchange. The charging stations get their energy cost plus their service charges, and the battery suppliers get the price of their new battery supplies.
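The billing scheme above reduces to a simple per-swap formula. Here is a minimal sketch with made-up prices; only the 1000-recharge lifetime comes from the text:

```python
# Battery-swap billing model sketch: per swap you pay
# (energy) + (service fee) + (battery cost / rated cycles).
# All prices are made-up illustrative numbers.

BATTERY_COST = 8000.0   # $, price of a new standard pack (assumed)
RATED_CYCLES = 1000     # recharges before the pack is retired (from text)
ENERGY_PRICE = 0.15     # $/kWh (assumed)
SERVICE_FEE = 2.00      # $ per swap (assumed)

def swap_charge(kwh_delivered: float) -> float:
    """Total charge for one battery swap."""
    depreciation = BATTERY_COST / RATED_CYCLES   # $8 per swap in this example
    return kwh_delivered * ENERGY_PRICE + SERVICE_FEE + depreciation

print(f"Swap with a 50 kWh pack: ${swap_charge(50):.2f}")  # 7.50 + 2.00 + 8.00 = $17.50
```

The depreciation term is what funnels back to the suppliers’ union, so over 1000 swaps the union recovers exactly the price of one new pack.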

Buddies, these are just some wild ideas, and I am sure someone will come up with a better and more workable one. And then we will see most of the cars on our roads without any carbon emissions.

Heavily discontinuous mode flyback design

With a heavily discontinuous mode flyback design, the transformer’s ac portion of current can be larger than the dc portion. When a high perm material is used for the transformer core, the required gap can be quite large in order to reach the low composite permeability required while the core size will likely be driven by winding and core loss considerations rather than just simply avoiding saturation. Normally the gap is put in the center leg only (with E type topology cores) in order to minimize the generation of stray fields. However, in designs such as yours (high ac with a high perm core) the needed core gap can lead to a relatively large fringing zone through which foil or solid wire may not pass without incurring excessive, unacceptable loss. Possible solutions are to use Litz wire windings or inert spacers (e.g., tape) around the center leg in order to keep the windings far enough away from the gap (the rule of thumb is 3 to 5 gap lengths, which can eat up a lot of the window area).
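To see how large that gap (and the resulting winding keep-out) can get, here is a rough lumped-gap calculation for an assumed high-perm E core; the target inductance, turns count, and core dimensions are illustrative placeholders, not from any specific design, and fringing is ignored:

```python
# Estimate the center-leg gap needed to hit a target inductance on a
# gapped high-perm core, then apply the "3 to 5 gap lengths" winding
# keep-out rule of thumb. Core numbers are assumed (roughly ETD-size).

from math import pi

MU0 = 4e-7 * pi        # H/m, permeability of free space
L_TARGET = 200e-6      # H, desired primary inductance (assumed)
N = 30                 # turns (assumed)
AE = 125e-6            # m^2, core effective area (assumed)
LE = 92e-3             # m, core effective path length (assumed)
MU_R = 2000            # relative permeability of ungapped ferrite (assumed)

# Total reluctance needed for L = N^2 / R; the gap supplies
# whatever the ungapped core path doesn't.
R_total = N**2 / L_TARGET
R_core = LE / (MU0 * MU_R * AE)
gap = (R_total - R_core) * MU0 * AE    # lumped gap length, fringing ignored

print(f"Required gap: {gap*1e3:.2f} mm")
print(f"Winding keep-out: {3*gap*1e3:.1f} to {5*gap*1e3:.1f} mm from the gap")
```

With these numbers the gap lands around two thirds of a millimeter, so the 3-to-5-gap-length keep-out is a 2-3 mm band, which really does eat a visible slice of a small bobbin’s window, as the text warns.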

It is mainly for these reasons that placing half the gap in an E type core’s outer legs might be worth the trouble of dealing with the magnetic potential between the core halves (and you have seen first hand what trouble an ill designed shield band can be).

To avoid eddy current losses, the shield band should be spaced well away from the outer leg gap, probably 5 gap lengths or more. Also to be a really effective magnetic shield, it should be 3 to 5 gap lengths thick.

Bear in mind that with a high-frequency, high-ac-current inductor design, proximity effects in the winding may become very significant. This is why many of these types of inductors have single-layer windings or windings wound with Litz wire (foil is the worst winding type here). One advantage of an equally gapped E-type core design is that the proximity effect on the windings is significantly less, because there are two gaps in series (a quasi-distributed gapped core design). Not only layer-to-layer but turn-to-turn proximity effects can sometimes be problematic in an ac inductor (or flyback) design. Just as with the gap, these are reduced by adding appropriate spacing, for example making the winding coil loose or winding it bifilar with a non-conductive filament.

Systems friendly enough to diagnose without technicians

How do we make our systems so friendly that they do not need technicians to help diagnose problems? Most of the more obvious answers have been well documented here, but to emphasize these points: it is normally the case that diagnostics and alarms can dominate the amount of code constructed for an application. That is, the amount of code required to fully diagnose an application may be as much as, if not more than, the code required to make the application work in the first place!

I have seen HMIs with diagnostic screens showing an animated version of the Cause & Effects code that allows users to see where the trip condition is. I have also seen screens depicting prestart checks, operator guides, etc., all animated to help the user. Allen-Bradley even has a program viewer that can be embedded on an HMI screen, but you would probably need a technician to understand the code.

From a control system problem perspective, it is inevitable that you will need to use the vendor-based diagnostics to troubleshoot your system. Alarms may indicate that there is a system-related problem, but it is unlikely that you could build a true diagnostics application in your code to indicate the full spectrum of problems you may encounter. For example, if the CPU dies or the memory fails, there is nothing left to tell you what the problem is 🙂

From an application/process problem perspective, if you have to resort to a technician going into the code to determine your problem, then your alarm and diagnostics code is not comprehensive enough! Remember your alarming guidelines: an Operator must be able to deal with an abnormal situation within a given time frame. If the Operator has to rely on a Technician to wade through code, then you really do need to perform an alarm review and build a better diagnostics system.

Motor design

When I was doing my PhD in motor design of reluctance machines with flux assistance (switched reluctance machines and flux-switching machines with magnets and/or permanently energized coils), my supervisor was doing research in the field of sensorless control. It wasn’t the area of my research, but it got me thinking about it all. At the time I had thought (only in my head, as a PhD student’s daydream) that I would initially force a phase (or phases) to deliberately set the rotor into a known position, then start a normal phase-firing sequence to start and operate the motor for a normal load without the need for any form of position detection. All this assumed I had first run the motor from stationary to full speed at normal expected load using a position sensor, so I could link phase firing, rotor position, and timings together to create a “map” which I could then use to program a firing sequence with no position detection at all, but only if I could force the rotor to “park” itself in the same position every time before starting the machine properly. The “map” would carry the information needed to assume that the motor changes speed correctly as it changes firing sequences while accelerating to full speed. But any problem such as an unusual load condition or a fault condition (e.g. a short circuit or open circuit in a phase winding) would render useless such an attempt at control with no form of position detection at all. (The induction machine can be run sensorless on the grid, though the grid supply itself is measured.)
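The park-then-ramp idea can be caricatured in a few lines. This is a toy sketch only: the phase count, dwell times, and linear ramp are all arbitrary assumptions, and a real drive would output gate signals, not prints:

```python
# Toy sketch of "park then open-loop ramp": energize one phase to pull
# the rotor to a known position, then step through a pre-recorded firing
# map with shrinking dwell times as the motor accelerates.
# As noted in the text, this falls apart under unusual loads or faults.

def open_loop_sequence(n_phases=3, steps=12, t_start=0.050, t_end=0.005):
    """Yield (phase_index, dwell_seconds) pairs: park first, then ramp."""
    yield (0, 0.5)                     # park: hold phase 0 to align the rotor
    for k in range(steps):
        # linear dwell ramp from t_start down to t_end (the "map")
        dwell = t_start + (t_end - t_start) * k / (steps - 1)
        yield (k % n_phases, dwell)

for phase, dwell in open_loop_sequence():
    print(f"fire phase {phase} for {dwell*1000:.1f} ms")
```

The whole scheme rests on the park step landing the rotor in the same spot every time; any disturbance after that and the pre-recorded dwell times no longer match the rotor position, which is exactly the weakness described above.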

How/where do we as engineers need to change?

System Design – A well-designed system should provide clear and concise system status indications. Back in the ’70s (yes, I am that old), alarm and indicator panels provided this information in the control room. Device-level indicators further guided the technician toward solving the problem. Today, these functions are implemented in the control room and machine HMI interface. Through the use of input sensor and output actuator feedback, correct system operation can be verified on every scan.

Program (software) Design – It has been estimated that a well-written program is 40% algorithm and 60% error checking and parameter verification. “Ladder” is not an issue: process and machine control systems today are programmed in ladder, structured text, function block, etc. The control program is typically considered intellectual property (IP) and in many cases “hidden” from view. This makes digging through the code impractical.

How/where do we as engineers need to change? – The industry as a whole needs to enforce better system design and performance. This initiative will come from the clients and be implemented by the developers. The cost/benefit trade-off will always be present: developers trying to improve their margins (reduce cost, raise price) and customers demanding more functionality while wanting to pay less. “We as engineers” are caught in the middle, trying to find better ways to achieve the seemingly impossible.

Sensorless control

I am curious about the definition of “sensorless control”. When you talk about sensorless control, do you in fact mean the lack of a physical position sensor, such as a magnet plus vane plus Hall-effect device; i.e. not having a unit whose sole objective is position detection?
Is the sensorless control based around alternative methods of measurement or detection to infer position, using components that have to exist anyway for the machine to function (such as measuring or detecting voltages or currents in the windings)?

I had long ago wondered about designing a motor, fully measuring its voltage and current profiles and phase-firing timings for normal operation (from stationary to full speed at full load) using a position sensor, in order to get the motor working and to determine the best phase-firing sequences and associated voltage/current profiles. I would then program a microprocessor to replicate the entire required profile, attempting to eliminate the need for any sensing or measurement at all (but I concluded it would come very unstuck under any fault condition, or when restarting while the motor was still turning). So in my mind, don’t all such machines require some form of measurement (i.e. some form of “sensing”) to work properly, and so could never be truly sensorless?

A completely sensorless control would be completely open-loop, which isn’t reliable with some motors such as PMSMs. Even if you knew the switching instants for one ideal case, too many “random” variables could influence the system (just think of the initial position), so those firing instants could be inappropriate for other situations.

Actually, induction machines, thanks to their inherent stability properties, can be run truly sensorless (i.e. just connected to the grid, or in V/f control). To be honest, even in the simple grid-connection case there is overcurrent detection somewhere in the grid, which requires some sensing.

It can also be said that the term “sensorless” relates to the electric motor itself. In other words, it means there are no sensors attached to the motor (which does not mean there cannot be sensors in the inverter). In our company we use this second meaning, since it indicates that no sensor connections are needed between the motor and the ECU (inverter).

What is true power and apparent power?

kW is true power and kVA is apparent power. In per-unit calculations the more predominantly used base, which I consider standard, is kVA, the apparent power, because the magnitude of the real power (kW) varies with a changing parameter: the cosine of the displacement angle (power factor) between the voltage and current. Another significant consideration is that transformer ratings are given in kVA, short-circuit magnitudes are expressed in kVA or MVA, and the short-circuit duty of equipment is also expressed in MVA (and thousands of amperes, kA).

In per-unit analysis, the base values are always base voltage in kV and base power in kVA or MVA. Base impedance is derived from the formula (base kV)^2/(base MVA).

The base values for the per unit system are inter-related. The major objective of the per unit system is to try to create a one-line diagram of the system that has no transformers (transformer ratios) or, at least, minimize their number. To achieve that objective, the base values are selected in a very specific way:
a) we pick a common base for power (I’ll come back to this point, whether it should be MVA or MW);
b) then we pick base values for the voltages following the transformer ratios. Say you have a generator with nominal voltage 13.8 kV and a step-up transformer rated 13.8/138 kV. The “easiest” choice is to pick 13.8 kV as the base voltage for the LV side of the transformer and 138 kV as the base voltage for the HV side of the transformer.
c) once you have selected a base value for power and a base value for voltage, the base values for current and impedance are defined (calculated). You do not have a degree of freedom in picking base values for current and impedance.

Typically, we calculate the base value for current as Sbase / ( sqrt(3) Vbase ), right? If you are using that expression for the base value for currents, you are implicitly saying that Sbase is a three-phase apparent power (MVA) and Vbase is a line-to-line voltage. Same thing for the expression for base impedance given above. So, perhaps you could choose a kW or MW base value. But then you have a problem: how to calculate base currents and base impedances? If you use the expressions above for base current and base impedance, you are implicitly saying that the number you picked for base power (even if you picked a number you think is a MW) is actually the base value for apparent power, it is kVA or MVA. If you insist on being different and really using kW or MW as the base for power, you have to come up with new (adjusted) expressions for calculating base current and base impedance.
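Putting numbers to the 13.8/138 kV example above, with an assumed 100 MVA three-phase apparent-power base (a common but arbitrary choice):

```python
# Per-unit base quantities for the 13.8/138 kV example, on an assumed
# 100 MVA three-phase base. Sbase is three-phase apparent power and
# Vbase is line-to-line, matching the expressions in the text.

from math import sqrt

S_BASE = 100e6          # VA, three-phase apparent-power base (assumed)
V_BASE_LV = 13.8e3      # V, line-to-line, LV side of the transformer
V_BASE_HV = 138e3       # V, line-to-line, HV side of the transformer

def bases(v_base: float) -> tuple[float, float]:
    """Derived base current (A) and base impedance (ohms)."""
    i_base = S_BASE / (sqrt(3) * v_base)     # Ibase = Sbase / (sqrt(3) Vbase)
    z_base = v_base**2 / S_BASE              # Zbase = (kV)^2 / MVA, in ohms
    return i_base, z_base

for side, v in (("LV", V_BASE_LV), ("HV", V_BASE_HV)):
    i, z = bases(v)
    print(f"{side}: I_base = {i:.0f} A, Z_base = {z:.2f} ohm")
```

Note there is no degree of freedom left: once Sbase and Vbase are fixed, Ibase and Zbase follow, which is exactly why sneaking in an 80 MW “base” while keeping these formulas silently turns it into an 80 MVA base.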

And, surprise!, you will find out that you need to define a “base power factor” to do so. In other words, you will be forced back into defining a base apparent power. So, no, you cannot (easily) use a kW/MW base. For example, a 100 MVA generator, rated 0.80 power factor (80 MW). You could pick 80 as the base power (instead of 100). But if you are using the expressions above for base current and base impedance, you are actually saying that the base apparent power is 80 MVA (not a base active power of 80 MW).