Category: Iacdrive_blog

How to gain confidence when powering on an SMPS prototype?

I never just apply power to a first prototype to see what happens. Smoke and loud noises are the most likely result, and then all you know is that something wasn't perfect. So how should you test the next prototype sample?

A good idea is to power your control circuit from an external supply first – often something like 12V. Check the oscillator waveform, frequency, gate pulses, etc. If possible, use a second external power supply to apply a voltage to your output. As you slowly increase this voltage, you should see the gate pulses go from maximum to minimum duty cycle as it passes the desired output voltage. If this does not happen, check your feedback path – still without turning the main power on.
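The bench check above can be sketched as a toy model: sweep an externally applied output voltage past the regulation setpoint and watch the commanded duty cycle swing from maximum to minimum. All names, gains, and limits here are illustrative, not taken from any real controller.

```python
# Minimal sketch of the bench check: as an externally applied output
# voltage sweeps past the setpoint, a healthy feedback path should drive
# the commanded duty cycle from its maximum limit down to its minimum.
# All values below are illustrative.

D_MAX, D_MIN = 0.45, 0.02   # duty-cycle limits (illustrative)
V_REF = 12.0                # desired output voltage
GAIN = 5.0                  # error-amplifier gain (illustrative)

def commanded_duty(v_out: float) -> float:
    """Duty cycle a simple proportional error amplifier would command."""
    duty = GAIN * (V_REF - v_out)          # positive error -> more duty
    return max(D_MIN, min(D_MAX, duty))    # clamp to the converter limits

# Sweep the externally applied output voltage, as on the bench:
for v in (10.0, 11.9, 12.0, 12.1, 14.0):
    print(f"V_out = {v:5.1f} V -> duty = {commanded_duty(v):.3f}")
```

If the measured duty cycle on the bench never leaves one of the rails during this sweep, the feedback path is the first place to look.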

If everything looks as expected, remove the external supply from the output but keep the control circuit powered from an external source. Then SLOWLY turn up the main input voltage while using your oscilloscope to monitor the voltage waveforms in the power circuit and a DC voltmeter to monitor the output voltage. Keep an eye on the ammeter on the main power source. If anything suspicious occurs, stop increasing the input and investigate what's happening while the circuit is still alive.

With a light load you should normally expect the output voltage to reach the desired value quickly, at least in a flyback converter. Check that this happens. Then check what happens with a variable load – preferably an electronic load.

If you did not calculate your feedback loop, you will very likely see self-oscillation (normally not destructive). If you don't, use the step-load function of your electronic load to check stability. If you see pronounced ringing after a load step, you still have some work to do in your loop. But feedback and stability is another huge area, which Mr. Ridley has taught us a lot about.
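As a rough way to quantify the ringing: for a roughly second-order response, the fractional overshoot of a load step maps to a damping ratio, and the common rule of thumb PM ≈ 100·ζ gives a ballpark phase margin. The relations below are standard textbook results; the numbers are only illustrative.

```python
import math

def damping_from_overshoot(overshoot: float) -> float:
    """Damping ratio of a second-order system from fractional overshoot
    (e.g. 0.3 for 30%). Standard second-order step-response relation."""
    ln_os = math.log(overshoot)
    return -ln_os / math.sqrt(math.pi**2 + ln_os**2)

def approx_phase_margin_deg(overshoot: float) -> float:
    """Rule of thumb: phase margin ~ 100 * zeta (reasonable for zeta < ~0.6)."""
    return 100.0 * damping_from_overshoot(overshoot)

# A load step that rings with 30% overshoot suggests roughly:
zeta = damping_from_overshoot(0.30)
print(f"zeta ~ {zeta:.2f}, phase margin ~ {approx_phase_margin_deg(0.30):.0f} deg")
```

A 30% overshoot works out to ζ near 0.36 and a phase margin in the mid-30s of degrees – enough to tell you the loop needs more compensation work.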

And yes – the world needs powerful POWER ENGINEERS desperately!

Avoiding the influence of voltage drop

My cable and transformer sizing should give me a maximum of 3% voltage drop, or at worst 6% to 10%. If it is the only equipment on the system, then perhaps you can tolerate 15%. If not, the voltage dip may affect sensitive equipment and lighting.

This is very annoying for office staff: each time a machine starts, the lights dim. It does not matter what standard you quote – I cannot accept 10% to 15%. Make a precise calculation and add a 10% tolerance to be safe.

In most cases this problem comes from cable undersizing, so we end up having to settle for a standard that allows 15% maximum.

Just recently I had to order a transformer and cable change for a project which was grossly undersized.
I have had to redesign the electrical portion of a conveyor and crushing system to bring it into compliance with applicable safety codes. The site was an outdoor mine in Arizona, where ambient temperatures reach 120°F. The electrical calculation and design software did not include any derating of conductor sizes for cable spacing and density within cable trays, the number of conductors per raceway, ambient temperature versus cable temperature rating, etc. Few of the cables had been increased in size to compensate for voltage drop between the power source and the respective motor or transformer loads.

Feeder cables to remote power distribution centers were too small, as voltage drop had not been accounted for in the initial design. The voltage drop should not be greater than 3%, since other factors – alternating loads, system voltage variation, etc. – may push the overall drop to 5%.
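The 3% check is easy to automate. A sketch using the common approximate three-phase drop formula Vd = √3·I·L·(R·cosφ + X·sinφ); the cable data below is made up for illustration:

```python
import math

def three_phase_vdrop_pct(i_amps, length_m, r_ohm_per_km, x_ohm_per_km,
                          v_line, pf=0.85):
    """Approximate line-to-line voltage drop (%) for a three-phase feeder,
    using the common estimate Vd = sqrt(3)*I*L*(R*cos(phi) + X*sin(phi))."""
    l_km = length_m / 1000.0
    sin_phi = math.sqrt(1.0 - pf**2)
    vd = math.sqrt(3) * i_amps * l_km * (r_ohm_per_km * pf
                                         + x_ohm_per_km * sin_phi)
    return 100.0 * vd / v_line

# Illustrative: 120 A over 150 m of cable with R = 0.65 ohm/km,
# X = 0.08 ohm/km on a 480 V system. Values are invented for the example.
pct = three_phase_vdrop_pct(120, 150, 0.65, 0.08, 480)
print(f"Voltage drop ~ {pct:.1f}%  ->  "
      f"{'OK' if pct <= 3.0 else 'resize the cable'}")
```

In this made-up case the drop comes out near 3.9%, over the 3% target – the feeder would need the next larger conductor size.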

The electrical system had to be redesigned with larger cables, transformers, MCCs, etc., as none of the design software factored in the deratings required by the National Electrical Code (NFPA 70) or the Canadian Electrical Code, which references the NEC.
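The derating arithmetic the software omitted is just the multiplication of correction factors. A sketch with factors typical of NEC-style tables (90°C insulation, ~46–50°C ambient, 4–6 current-carrying conductors per raceway); these factors are quoted from memory and should be verified against the actual NEC/CEC tables before use.

```python
# Sketch of the derating arithmetic the design software omitted.
# Factors are typical of NEC-style tables but quoted from memory --
# verify against the current NEC/CEC tables before relying on them.

BASE_AMPACITY = 115.0    # e.g. a 90 C copper conductor rating (illustrative)
AMBIENT_FACTOR = 0.82    # ~120 F (49 C) ambient vs. the 30 C table basis
BUNDLING_FACTOR = 0.80   # 4-6 current-carrying conductors in one raceway

derated = BASE_AMPACITY * AMBIENT_FACTOR * BUNDLING_FACTOR
print(f"Derated ampacity: {derated:.1f} A")  # a big cut from the table value
```

Stacking just these two factors cuts the usable ampacity by about a third, which is exactly the kind of margin the Arizona installation was missing.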

Experience: Power Supply

My first big one: I had just joined a large corporation's central R&D in Mumbai (my first job), and I was dying to prove to them that they had been very wise to hire me. I spent my first few weeks setting up my first AC-DC power supply. Then one afternoon I powered it up. After a few minutes, as I stared intently at it, there was a thunderous explosion… I was almost knocked over backwards in my chair. When I came to my senses, I discovered that the can of the large high-voltage bulk cap had exploded (in those days 1000uF/400V caps were really big). The bare metal can had taken off like a projectile and hit me with a thump on the chest through my shirt (the spot was still very red hours later). A shower of cellulose and some drippy stuff was all over my hair and face. Plus a small crowd of gawking engineers when I came to. Plus a terribly bruised ego, in case you didn't notice.

Now this is not just a picturesque story. There is a reason why aluminum caps now have safety vents (on the underside too), and why you are told never, ever to apply reverse polarity, even accidentally, especially to a high-voltage aluminum cap. Keep in mind that an Al Elko is certainly damaged by reverse voltage or overvoltage, but the failure mechanism in both cases is simply excessive heat generation. Philips components, in older datasheets, used to specify that their Al Elkos could tolerate an overvoltage of 40% for perhaps a second, with no long-term damage. And people often wonder why I use only 63V Al Elkos as the bulk cap in PoE applications (for the PD). They suggest 100V and warn me about surges and so on. But I still think 63V is OK here, besides being cheap, and I tend to shun overdesign. In fact I think even ceramic caps can typically handle at least 40% overvoltage by design and test – almost forever, with no long-term effects. I may be wrong here though; double-check that please.

Another historic explosion I heard about after I had left an old power supply company (I deny any credit for this one). My old tech, I heard, was trying in my absence to document the stresses in the 800W power supply which I had built and left behind. The front end was a PFC stage with four or five paralleled PFC FETs. I had carefully put ballasting resistors in the source and gate of each FET separately, with diligently symmetrical PCB traces from the lower node of each sense resistor to ground (two-sided PCB, no ground plane). This was done to prevent parasitic resonances and to ensure good dynamic current sharing. There was a method to my madness, it turns out. When asked to document the current in the PFC FETs, all the tech did was place a small loop of wire in series with the source of one of these paralleled FETs. That started a spectacular fireworks display which I heard lasted over 30 seconds (what, no fuse???), with each part of the power supply going up in flames almost sequentially, in a domino effect, and a small crowd staring in silence alongside the completely startled but unscathed tech (lucky guy). After that he certainly never forgot this key lesson: never attempt to measure FET current by putting a current probe in its source – put it on the drain side. It was that simple. The same unit never exploded again, just to complete the story.

Maximum permissible value of grounding resistance

For grounding in the US it typically goes like this: the utility transformer has one ground rod. Then from the utility to the building you typically have three phase conductors and one neutral/ground conductor landing on the main panel with the utility meter. At that point we drive a ground rod. We bond the ground rod to the water pipes (generally), and we bond the ground rod to the building steel (generally). Water pipes are usually very well connected to earth, and the building steel makes a good user ground. With all these connections you typically have a good ground reference. Now, if that utility neutral wire is bad or too small, then you can have a poor ground reference between phases (a typical sign is flickering lights even when the load is not changing much).

Grounding impedance of the transformer and building ground rods is mainly for voltage stabilization and, under normal conditions, should have nothing to do with our ground-fault return current. See NEC 250.4(A)(5): "The earth shall not be considered as an effective ground-fault current path."

Let's say we have a system with a building-transformer-and-panel-to-ground impedance of 1000 ohms (we built this place on solid rock). Okay, we have a poor 277V reference and we will have flickering lights (that 277V will bounce all over the place). But now, in our system above, if we take a phase wire and connect it to a motor shell, which is also connected to our grounding wire, will the upstream breaker trip? The answer is yes. If our phase-to-ground fault impedance is low, we will trip the upstream feeder breaker no matter what the main panel ground rod impedance is. My point here is that it does not matter what our transformer grounding or panel grounding is (the ground rod is not important in this case). The breaker must trip because our circuit is complete between the phase conductor and the transformer wye leg.

As long as we have a utility-transformer-to-panel neutral conductor of proper size to handle our fault current, and we size our grounding conductors properly and connect them properly at each subpanel and at each motor, we will apply nearly full phase-to-ground voltage across the fault. Our real ground-fault path runs from that motor, through the grounding conductor, through our subpanels, to our main panel, and then back to the transformer through the utility neutral wire connected to the wye leg. It does not matter what the transformer-to-ground-rod connection is. We could remove the transformer-to-ground-rod connection and the main-panel-to-ground-rod connection completely, and we would still be connecting that phase wire, through the motor metal and the grounding conductor, back to the wye leg of the utility transformer, completing the electrical circuit. Current will flow and the breaker will trip.
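The argument can be put in numbers. A sketch with illustrative loop impedances showing that the metallic fault loop, not the ground rod, determines the fault current:

```python
# Sketch of why ground-rod resistance does not determine fault current:
# the fault loop is phase conductor -> motor frame -> equipment grounding
# conductor -> neutral -> transformer wye point. Impedances are illustrative.

V_PHASE = 277.0        # line-to-neutral voltage

z_phase_cond = 0.05    # ohms, phase conductor (illustrative)
z_ground_cond = 0.08   # ohms, equipment grounding conductor (illustrative)
z_rod = 1000.0         # ohms, ground rod on solid rock (from the example)

# Fault current through the metallic loop (the path that trips the breaker):
i_loop = V_PHASE / (z_phase_cond + z_ground_cond)
# Current that would flow through the earth via the ground rod:
i_rod = V_PHASE / z_rod

print(f"metallic loop: {i_loop:.0f} A  (trips the breaker)")
print(f"earth via rod: {i_rod:.2f} A (useless for tripping)")
```

With these made-up numbers the metallic loop carries thousands of amps while the earth path carries a fraction of an amp – which is exactly why the NEC says the earth is not an effective ground-fault current path.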

Power supply prototype failures

I remember my very first power supply. They threw me in the deep end in 1981, building a multi-output 1 kW power supply. I was fresh from college, thought I knew everything, and consumed publications voraciously to learn more. Exciting times.

But nothing prepared me for the hardware trials and tribulations. We built things and they blew up. Literally. We would consume FETs and controllers at an alarming rate. The rep from Unitrode would come and visit and roll his eyes when we told him we needed another dozen controllers since yesterday.

The reasons for failure were all over the map: EMI, heat, layout issues, design issues, bad components (we had some notorious early GE parts – they exited the market shortly afterwards).
Some of the issues took a few days to fix; some took weeks. We had two years to get the product ready, which was still faster than the computer guys were doing their part, so it was OK.

90% of the failure issues weren’t talked about in any paper, and to this day, most of them still aren’t.

So, fast forward to today, 32 years later. I still like to build hardware – you can't teach what you don't practice regularly, so I keep at it.

With all the benefit of three decades of knowledge, I STILL blow things up. Everything progresses along fine, then I touch a sensitive circuit node, or miss some critical design point, and off it goes. I'm faster now at finding the mistakes, but I still find there are new ones to be made. And when it blows up with 400 V applied, it's a mess and a few hours to rebuild. Sometimes you have to start over, if the PCB traces are vaporized.

So my first prototype, while on a PC board, always has the controller in a socket, because I know I will need that. Magnetics too, when possible – I know I'll revise them time and again to tweak performance. PC boards will take a minimum of two passes, probably three.

Active power losses in electrical motor

The equivalent active power losses during no-load testing of an electrical motor comprise the following:
1. active power losses in the copper of the stator winding, proportional to the square of the no-load current: Pcus = 3 · Rs · I0s²,

2. active power losses in the ferromagnetic core, which depend on frequency and on the magnetic flux density (which in turn depends on voltage):
a) active power losses caused by eddy currents: Pec = (kec · d² · f² · B²)/ρ
b) active power losses caused by hysteresis: Ph = kh · f · B^x (x is the Steinmetz exponent, typically 1.6–2),

3. mechanical power losses, proportional to the square of the angular speed: Pmech = Kmech · ωmech².
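The breakdown above can be sketched numerically, using the standard textbook forms (eddy-current loss ∝ f²·B², hysteresis loss ∝ f·B^x, friction/windage ∝ speed²). All coefficients below are made up purely for illustration.

```python
# Sketch of the no-load loss breakdown, with invented coefficients.
R_S = 0.8       # stator resistance per phase, ohms (illustrative)
I_0S = 4.0      # no-load phase current, A (illustrative)
K_EC = 0.002    # eddy-current coefficient, lumps d^2/rho (illustrative)
K_H = 0.05      # hysteresis coefficient (illustrative)
X = 1.8         # Steinmetz exponent, typically 1.6-2.0
K_MECH = 1e-4   # friction/windage coefficient (illustrative)
F = 50.0        # supply frequency, Hz
B = 1.2         # peak flux density, T
W_MECH = 2 * 3.14159 * F / 2   # mechanical speed, rad/s (4-pole machine)

p_cu = 3 * R_S * I_0S**2       # stator copper loss: 3 * Rs * I0s^2
p_ec = K_EC * F**2 * B**2      # eddy-current loss ~ f^2 * B^2
p_h = K_H * F * B**X           # hysteresis loss ~ f * B^x
p_mech = K_MECH * W_MECH**2    # friction and windage ~ speed^2

print(f"copper {p_cu:.1f} W, eddy {p_ec:.1f} W, "
      f"hysteresis {p_h:.1f} W, mechanical {p_mech:.2f} W")
```

The split also makes the voltage dependence in the comment below concrete: raising the voltage raises B, which raises both core-loss terms.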

Comment:
First, as you can see, the active power losses in the ferromagnetic core of an electrical motor depend on voltage and frequency, so by increasing the voltage you will get higher core losses.

Second, you can't compare two electrical motors with different rated voltages and different rated powers, because the core losses, as noted above, depend on voltage and frequency, while the stator copper losses depend on the square of the no-load current, which differs between motors of different rated power.

Third, when you want to compare the no-load losses of two electrical motors with the same rated voltage and rated power, you need to check the design of both motors. It is possible that one of them has a different kind of winding – perhaps it was damaged in the past and had its windings replaced – which could mean a different electrical design and, consequently, a different no-load current.

Friendly systems that don't need technicians to diagnose

How can we make our systems so friendly that they do not need technicians to help diagnose problems? Most of the more obvious answers have been well documented here, but to emphasize the point: diagnostics and alarms can dominate the amount of code in an application. That is, the code required to fully diagnose an application may be as much as, if not more than, the code required to make the application work in the first place!

I have seen HMIs with diagnostic screens showing an animated version of the cause-and-effects logic that lets users see where the trip condition is. I have also seen screens depicting prestart checks, operator guides, etc., all animated to help the user. Allen-Bradley even has a program viewer that can be embedded on an HMI screen, but you would probably need a technician to understand the code.

From a control-system problem perspective, it is inevitable that you will need the vendor's diagnostics to troubleshoot your system. Alarms may indicate that there is a system-related problem, but it is unlikely that you could build a diagnostics application in your own code that covers the full spectrum of problems you may encounter. For example, if the CPU dies or the memory fails, there is nothing left to tell you what the problem is 🙂

From an application/process problem perspective, if you have to resort to a technician going into the code to determine your problem, then your alarm and diagnostics code is not comprehensive enough! Remember your alarming guidelines: an operator must be able to deal with an abnormal situation within a given time frame. If the operator has to rely on a technician to wade through code, then you really do need to perform an alarm review and build a better diagnostics system.

Motor design

When I was doing my PhD on the design of reluctance machines with flux assistance (switched reluctance machines and flux-switching machines with magnets and/or permanently energized coils), my supervisor was doing research in the field of sensorless control. It wasn't the area of my research, but it got me thinking about it all. At the time I imagined (only in my head, as a PhD student's daydream) that I would first deliberately energize a phase (or phases) to force the rotor into a known position, then start a normal phase-firing sequence to run the motor under a normal load without any form of position detection. All this assumed I had first run the motor from standstill to full speed at the expected load with a position sensor, so I could link phase firing, rotor position, and timings together into a "map". I could then try to use that map to program a firing sequence with no position detection at all – but only if I could force the rotor to "park" in the same position every time before starting the machine properly, with the map assuming the motor changes speed correctly as the firing sequence accelerates it to full speed. But any problem such as an unusual load condition or a fault condition (e.g. a short circuit or open circuit in a phase winding) would render useless such an attempt at control with no position detection at all. An induction machine on the grid, by contrast, runs essentially sensorless, with only grid-side quantities measured.

How/where do we as engineers need to change?

System Design – A well-designed system should provide clear and concise system status indications. Back in the '70s (yes, I am that old), alarm and indicator panels provided this information in the control room, and device-level indicators further guided the technician to the problem. Today, these functions are implemented in a control room and machine HMI interface. Through the use of input-sensor and output-actuator feedback, correct system operation can be verified on every scan.

Program (software) Design – It has been estimated that a well-written program is 40% algorithm and 60% error checking and parameter verification. "Ladder" is not the issue: process and machine control systems today are programmed in ladder, structured text, function block, etc. The control program is typically considered intellectual property (IP) and in many cases "hidden" from view. This makes digging through the code impractical.

How/where do we as engineers need to change? – The industry as a whole needs to enforce better system design and performance. This initiative will come from the clients and be implemented by the developers. The cost/benefit trade-off will always be present: developers trying to improve their margins (reduce cost, raise price) and customers demanding more functionality while wanting to pay less. "We as engineers" are caught in the middle, trying to find better ways to achieve the seemingly impossible.

Sensorless control

I am curious about the definition of "sensorless control". When you talk about sensorless control, do you in fact mean the lack of a physical position sensor, such as a magnet plus vane plus Hall-effect device – i.e., not having a unit whose sole purpose is position detection?
Is sensorless control based on alternative methods of measurement or detection that predict position using components that must exist anyway for the machine to function (such as measuring or detecting voltages or currents in the windings)?

I had long ago wondered about designing a motor and fully measuring its voltage and current profiles and phase-firing timings for normal operation (from standstill to full speed at full load), using a position sensor to get the motor working and to determine the best phase-firing sequences and associated voltage/current profiles. I would then program a microprocessor to replay the entire required profile, attempting to eliminate any sensing or measurement at all (though I concluded it would come badly unstuck under any fault condition, or when restarting while the rotor was still turning). So in my mind, don't all such machines require some form of measurement – i.e. some form of "sensing" – to work properly, and so can never be truly sensorless?

A completely sensorless control would be completely open-loop, which isn't reliable with some motors, such as PMSMs. Even if you knew the switching instants for one ideal case, too many "random" variables could influence the system (just think of the initial position), so those firing instants could be inappropriate in other situations.

Actually, induction machines, thanks to their inherent stability properties, can be run truly sensorless (i.e. just connected to the grid, or under V/f control). To be honest, even in the simple grid-connection case there is overcurrent detection somewhere in the grid, which requires some sensing.
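The V/f case can be sketched in a few lines: voltage magnitude simply tracks the commanded frequency, with no position or speed feedback anywhere in the loop. All numbers below are illustrative.

```python
# Minimal sketch of open-loop V/f control -- the sense in which an
# induction machine can run "sensorless": the voltage command simply
# follows the frequency command, with no rotor feedback of any kind.
# All numbers are illustrative.

V_RATED = 400.0   # V line-to-line
F_RATED = 50.0    # Hz
V_BOOST = 10.0    # low-speed boost to cover stator IR drop (illustrative)

def vf_voltage(f_cmd: float) -> float:
    """Voltage command for a given frequency command (constant V/f law)."""
    v = V_BOOST + (V_RATED - V_BOOST) * (f_cmd / F_RATED)
    return min(v, V_RATED)   # no field weakening in this sketch

for f in (5.0, 25.0, 50.0):
    print(f"f = {f:4.1f} Hz -> V = {vf_voltage(f):5.1f} V")
```

The machine's own torque-slip characteristic provides the stability that makes this open-loop law workable for induction motors; the same law applied to a PMSM can lose synchronism, which is the point made above.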

But it can also be said that the term "sensorless" relates to the motor itself. In other words, it means there are no sensors attached to the motor (which does not mean there cannot be sensors in the inverter). In our company we use this second meaning, since it indicates that no sensor connections are needed between the motor and the ECU (inverter).