Category: Iacdrive_blog

What causes VFD-driven motor bearing current?

There are several things involved, all with varying degrees of impact.

Large machines are, generally speaking, made of pieces (segments), because the stator and/or rotor core circle is too large to manufacture from a single lamination sheet. This leads to some breaks in the symmetry of the magnetic flux path, both in the radial (right angles to the shaft) and axial (parallel to the shaft) directions.

For the most part, the windings of large machines are formed and installed by hand. This too can lead to symmetry issues: the current paths are not identical, which in turn creates some differences in the magnetic flux.

Output waveforms from power electronics are only approximations of true sinusoids. The additional harmonics distort the sinusoidal nature and produce asymmetric changes in the magnetic field strength … which in turn means a non-symmetric flux distribution.

Two other items contribute to potentially damaging bearing currents as well. One of these is the common-mode voltage, which is present (to some degree) in all drives. Essentially this is a signal present at both the drive input and output … I tend to think of it as an offset. It’s not something that traditional grounding addresses, and it can create an elevated potential on the shaft, which then discharges through the bearing path.
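As a rough illustration of that offset, consider a two-level PWM inverter: for every possible switching state, the average of the three pole voltages (the common-mode voltage) is non-zero. This is a minimal sketch, assuming an illustrative 600 V DC bus:

```python
# Enumerate the eight switch states of a two-level inverter and compute
# the common-mode voltage (the average of the three pole voltages).
# VDC is an assumed bus voltage, for illustration only.
from itertools import product

VDC = 600.0
for state in product((0, 1), repeat=3):              # (a, b, c) legs
    poles = [VDC / 2 if s else -VDC / 2 for s in state]
    vcm = sum(poles) / 3.0                           # common-mode voltage
    print(state, "-> Vcm =", vcm, "V")
```

Every PWM edge steps Vcm between ±Vdc/2 and ±Vdc/6; those fast steps are what couple capacitively to the shaft and raise its potential.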

A second item is not related to the presence (or absence) of drives at all; it is related to the mechanical arrangement of the process drive train. For example, a shaft that has a sliding seal (like the felt curtain on a dryer section), or one that turns a blade against a gas or liquid (like a compressor) can generate a static charge at the point of contact. If there is no means of isolating this charge to the portion of the shaft where the sliding is occurring, it can pass through to the motor shaft and thence through the motor bearings.

Lastly, the harmonics in a variable frequency drive’s output waveform are at frequencies significantly higher than line frequency. This requires specific accommodations for grounding, as traditional methods are insufficient due to the high impedance that an ordinary ground path presents at these frequencies.

Constant on-time control

There are three more or less widely used types of constant on-time control.

In the first, the off-time is varied with an error signal. A loop with this type of control has a control-to-output voltage frequency response (or Bode plot, if you prefer) similar to that of constant-frequency voltage-mode control.

In the second, the off-time is terminated by a comparator that monitors the inductor current: when that current goes below a level set by the error signal, the switch is turned on. This control (also called constant on-time valley-current control) has a control-to-output voltage frequency response similar to that of constant-frequency valley-current control. The main difference is that its inner current-control loop does not suffer from the subharmonic instability of the constant-frequency version, so it does not require a stabilizing ramp, and the control-to-output voltage response does not show the half-frequency peaking.

In the third, the off-time is terminated when the output voltage (or a fraction of it) goes below the reference voltage. This control belongs to the family of ripple-based controls, and it cannot be characterized with the usual averaging-based control-to-output frequency response, because the gain is affected by the output ripple voltage itself.
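To make the third (ripple-based) version concrete, here is a minimal time-stepped sketch of a constant on-time buck regulator whose off-time ends when the output dips below the reference. Every component value and the loop structure below are illustrative assumptions, not a reference design:

```python
# Constant on-time, ripple-based buck converter: fire a fixed on-time
# whenever the output voltage falls below the reference (valley sensing).
# Toy model: no soft-start, no current limit, ideal switches.
VIN, VREF = 12.0, 3.3                 # assumed input and reference voltages
L, C, ESR, RLOAD = 2.2e-6, 47e-6, 0.03, 1.0
TON, DT = 0.6e-6, 1.0e-9              # fixed on-time, simulation time step

il, vc, t_on_left = 0.0, 0.0, 0.0     # inductor current, cap voltage, timer
for _ in range(1_000_000):            # simulate 1 ms
    # Output node: vout = vc + ESR*(il - vout/RLOAD), solved for vout
    vout = (vc + ESR * il) / (1.0 + ESR / RLOAD)
    if t_on_left > 0.0:
        vsw, t_on_left = VIN, t_on_left - DT   # switch on, timer running
    else:
        vsw = 0.0                              # freewheeling interval
        if vout < VREF:                        # valley comparator trips:
            t_on_left = TON                    # start the next on-time
    il += (vsw - vout) / L * DT
    vc += (il - vout / RLOAD) / C * DT
print(f"output settles near {vout:.2f} V")
```

Note that the comparator decision rides on the output ripple (mostly the ESR term here), which is exactly why an averaging-based control-to-output response cannot capture this loop’s gain.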

As for hysteretic control, the current-mode version is a close relative of constant on-time valley-current control. The version that uses the output ripple voltage instead of the inductor current ripple to turn the switch on and off (also called a “hysteretic regulator”) is a close relative of constant on-time ripple-based control.

Although ripple-based control loops cannot be characterized with the usual Bode plots, the converters can still be unstable, though not in the sense of the traditional control-loop instability that power-supply engineers are used to. Furthermore, the hysteretic regulator is essentially unconditionally stable. The instabilities with ripple-based control are called “fast-scale” because the frequency of the instability is closely related to the switching frequency (either subharmonic, similar to the inner-loop instability of some current-mode controllers, or chaotic in nature).

The paper I wrote a couple of years ago (“Ripple-Based Control of Switching Regulators—An Overview”) is a good introduction to ripple-based control and discusses some of the stability issues. There are also quite a few papers with detailed analyses on the stability of converters with feedback loops where the ripple content of the feedback signal is significant.

Impedance analyzer

A graphical impedance analyzer with good phase resolution is a must. Some brands have all the bells and whistles, but not the phase resolution necessary to accurately measure high-Q (100+) components over the instrument’s full frequency range (which should extend at least into the low megahertz). Of course the Agilent 4294A fills the performance bill, but with a $40k+ purchase bill it also empties the budget (like similar high-end new models from Wayne Kerr). Used models from Wayne Kerr work very well and can be had for under $10k, but they are very heavy and clunky, with very ugly (but still usable) displays.

Perhaps the best value may be the Hioki IM3570, which works extremely well with superior phase resolution, has a very nice color touch-screen display (with all the expected engineering graphing formats), is compact and lightweight, and costs around $10k new. Its only downside is that its fan is annoyingly loud and does not quiet down while the instrument is idle.

But where should an impedance analyzer rank on the power electronics design engineer’s basic equipment list (and why)?

Beyond the basic lower-cost necessities such as DMMs, bench power supplies, test leads, soldering stations, etcetera, I would rank a good impedance analyzer second only to a good oscilloscope. The impedance analyzer allows one to see all of a component’s secondary impedance characteristics and to directly compare similar components. Often overlooked is the information such an instrument can provide by examining component assemblies in situ on a circuit board; this can be very revealing of hidden but influential layout parasitics.

Equally importantly, an impedance analyzer allows accurate SPICE models to be formulated quickly, so that simulation can be used as a meaningful design tool. Transformer magnetizing and leakage inductances can be measured, as well as inter-winding capacitance and frequency-dependent resistive losses. From these measurements, and with proper technique, a model can be formulated that matches the real component nearly exactly. Not only does this allow power circuits and control loops to be designed initially entirely by simulation (under the judicious eye of experience, of course), it even allows one to effectively simulate the low-frequency end of a design’s EMI performance.
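As one small example of that workflow, the familiar open-circuit/short-circuit measurement pair gives the magnetizing and leakage inductances directly from the imaginary part of the measured impedance. This is a minimal sketch with assumed readings, not real instrument data:

```python
# Derive first-order transformer model inductances from impedance-analyzer
# readings: secondary open -> magnetizing branch; secondary shorted ->
# total leakage referred to the primary. Readings below are assumptions.
import math

def inductance_from_z(z_imag_ohm, f_hz):
    """Series inductance implied by the imaginary part of Z at f."""
    return z_imag_ohm / (2 * math.pi * f_hz)

L_mag = inductance_from_z(z_imag_ohm=62.8, f_hz=10e3)    # -> ~1 mH
L_leak = inductance_from_z(z_imag_ohm=0.628, f_hz=10e3)  # -> ~10 uH
print(f"Lmag = {L_mag*1e3:.2f} mH, Lleak = {L_leak*1e6:.1f} uH")
```

The same idea extends to inter-winding capacitance (measured between isolated windings) and to fitting the frequency-dependent resistive losses from the real part of Z over a sweep.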

FETs in ZVS bridge

I ran into a very serious field failure issue a decade ago due to IXYS FETs used in a phase-shifted ZVS bridge topology. Eventually, the problem was traced to failure of the FETs’ body diodes when the unit operated at higher ambient temperatures.

When FETs were first introduced for use in hard-switching applications, it was quickly discovered that under high di/dt commutating conditions the parasitic bipolar transistor that forms the body diode can turn on, resulting in catastrophic failure (shorting) of the FET. I had run into this issue in the mid ’80s, and if memory serves me correctly, IR was a leader in making their FET body diodes much more robust and capable of hard commutation. Having had this experience with FET commutation failures, and after exhausting other lines of investigation that showed no problem with the operation of the ZVS bridge, I built a tester that could establish an adjustable current through the body diode of the FET under test, followed by hard commutation of that diode.

Room temperature testing showed the suspect FET’s body diode recovery characteristic to be similar to that of what turned out to be the more robust IR FET. Some difference was seen in the diode recovery: the IXYS FET was a bit slower and did show higher recovered charge. However, I was unable to induce a failure in either the IXYS or IR FET at room temperature, even when commutating forward diode currents as high as 20 A.

The testing was then repeated with the devices heated, which proved very informative. The IXYS FETs failed repeatedly at a case temperature around 80C with forward diode currents prior to commutation as low as 5 A. In contrast, the IR devices operated to a 125C case temperature with forward diode currents of 10 A without failure.

This confirmed a high-temperature operating problem of the IXYS FETs associated with the body diode. Changing to the more robust IR devices solved the field failure issue.

Beware when a FET datasheet does not provide body diode di/dt limits at elevated temperature.

A more complete explanation of the FET body diode failure mechanism in ZVS applications can be found in application note APT9804 published by Advanced Power Technology.

I believe FETs can be reliably used in ZVS applications if the devices are carefully selected and shown to have robust body diode commutation characteristics.

Paralleling IGBT modules

I’m not sure the IGBTs would share the current just because they’re paralleled; doesn’t external circuitry (series inductance, resistance, gate resistors) have to force them to do so?

I would be pretty leery of paralleling these modules. As far as the PN diodes go, reverse recovery currents in PN diodes (especially if they are hard-switched to a reverse voltage) are usually not limited by their internal semiconductor operation until they reach “soft recovery” (the point where the reverse current decays); they are usually limited by external circuitry (resistance, inductance, IGBT gate resistance). A perfect example: the traditional diode reverse-recovery measurement externally limits the reversing current to a linear falling ramp by using a series inductance. If you could reverse the voltage across the diode in a nanosecond, you would see an enormous reverse current spike.
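A back-of-envelope sketch of that external limiting, using illustrative numbers: the series loop inductance sets the turn-off di/dt, and the classic triangular-recovery estimate Irr ≈ sqrt(2·Qrr·di/dt) then bounds the reverse peak:

```python
# How external inductance sets diode turn-off di/dt, and the resulting
# reverse-recovery peak. All values are assumed for illustration.
import math

V_BUS = 600.0      # commutating voltage across the loop, V
L_LOOP = 120e-9    # series/loop inductance, H
QRR = 2e-6         # recovered charge, C (datasheet-style figure, assumed)

didt = V_BUS / L_LOOP                  # 5 kA/us linear falling ramp
irr = math.sqrt(2 * QRR * didt)        # triangular-recovery estimate
print(f"di/dt = {didt/1e9:.1f} kA/us, Irr peak = {irr:.0f} A")
```

Shrink L_LOOP toward zero (the “reverse the voltage in a nanosecond” case) and the estimate blows up, which is the enormous reverse spike described above.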

Even though diode dopings are pretty well controlled these days, carrier lifetimes are not necessarily. Since one diode might “turn off” (go into a soft, decreasing reverse-current ramp, where the diode actually DOES limit its own current) before the other, you may end up with all the current going through one diode for at least a little while (the motor will look like an inductor, for all intents and purposes, during diode turn-off). It is probably better to control the maximum diode current externally for each driver.

Paralleling IGBT modules where the IGBT, but not the diode, has a positive temperature coefficient (PTC) is commonly done at higher powers. I personally have never done more than 3 x 600 A modules in parallel, but if you look at things like high-power wind, things get very “interesting”. It is all a matter of analysis, good thermal coupling, symmetrical layout, and current de-rating. Once you get too many modules in parallel, the de-rating gets out of hand without some kind of passive or active element to ensure current sharing; then you know it is time to switch to a higher-current module, or to a higher-voltage, lower-current module for the same power. The relative proportion of switching losses versus conduction losses also has a big part to play.
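As a toy illustration of why a positive temperature coefficient (equivalently, a positive slope resistance in the on-state model) keeps the imbalance bounded, here is a static sharing calculation for two modules modeled as V = V0 + r·I. The numbers are assumptions, not datasheet figures:

```python
# Static current sharing of two paralleled IGBTs, each modeled as a knee
# voltage plus slope resistance. Both see the same collector-emitter drop:
#   V0a + ra*Ia = V0b + rb*Ib, with Ia + Ib = I_TOTAL.
V0 = (1.0, 1.1)       # knee voltages, V (deliberate 0.1 V mismatch)
r = (2e-3, 2e-3)      # slope resistances, ohm
I_TOTAL = 800.0       # total load current, A

Ia = (I_TOTAL * r[1] + (V0[1] - V0[0])) / (r[0] + r[1])
Ib = I_TOTAL - Ia
print(f"Ia = {Ia:.0f} A, Ib = {Ib:.0f} A")   # -> 425 A vs 375 A
```

Double the slope resistances and the split tightens toward even; that self-balancing is what the diode, lacking a PTC, cannot be counted on to provide.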

Voltage transmission & distribution

If you look back over history, you will find that things started out with the early engineers and scientists looking at materials and developing systems that would meet their transmission goals. I recall when drives (essentially AC/DC/AC converters) had an upper limit around 200 to 230 volts. In Edison and Tesla’s day there was a huge struggle between DC and AC, and AC prevailed mainly because it was economical to build AC machines. Systems were built from the available materials and put in operation. Some worked great; some failed. When they failed, they were analyzed and better systems were built. Higher and higher voltages lowered copper content, and therefore cost, as insulators improved. Eventually committees formed, reviewed what worked, and developed standards. Then, by logical induction, it was determined what advances could be made in a cost-effective and reliable manner. A lot of “use this” practice crept in. By this I mean, for example: I worked at a company where one customer bought 3,000 transformers over the course of ten years, and they had a specific size of enclosure they wanted.

Due to the high purchase volume, the cost of that enclosure was low. Other small jobs came through, and this low-cost enclosure was used on them to expedite delivery and keep costs to a minimum. Guess what: that enclosure is now a standard enclosure there, because it was used on hundreds of designs over ten years. Is it the most economical box? Probably not in the pure engineering sense, but changing something that works is seldom a good idea.

Today, voltage levels are being raised to new highs. I read of a project in Germany to run HVDC lines over huge distances. They are working to overcome a problem they foresee: how do you break the circuit economically with HVDC? If you ever put DC through a small contactor, maybe at 600 VDC, you quickly find that the arc on opening melts the contacts. Now, what do you do at 800 kVDC or 1.2 MVDC? What will the cost of the control circuit be at that voltage level? (Edison and Tesla all over again.) And there you have it: my only push for the subject of history to be taught.

Signal processing and communications theory

Coming from a signal processing and communications theory background, but with some experience in power design, I can’t resist the urge to chime in with a few remarks.

There are many engineering methods to deal with sources of interference, including noise from switching converters, and spread spectrum techniques are simply one more tool that may be applied to achieve a desired level of performance.

Spread spectrum techniques will indeed allow a quasi-peak EMC test to be passed when it might otherwise be failed. Is this an appropriate application for this technique?

The quasi-peak detector was developed to provide a benchmark for the psycho-acoustic “annoyance” of interference to analog communications systems (more specifically, predominantly narrowband AM-type communication systems). Spread spectrum techniques that reduce the QP detector reading will almost undoubtedly reduce the annoyance the interference would otherwise have presented to the listener. Thus the intent, to reduce the degree of objectionable interference, is met by the application of spread spectrum. This doesn’t seem at all like “cheating” to me; the proper intent of the regulatory limit is still being met.

On the other hand, as earlier posters have pointed out, spectrum spreading does nothing to reduce the total power of the interference; it simply spreads it over a wider bandwidth. Doing so provides two potential benefits. The most obvious occurs if the victim of the interference is inherently narrowband: spreading the interference beyond the victim’s bandwidth provides an inherent improvement in signal-to-noise ratio. A second, perhaps less obvious, benefit is that the interference becomes more noise-like in its statistics. Noise-like interference is less objectionable to the human ear than impulsive noise, and it should also be recognized that it is less objectionable to many digital transmission systems too.
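A quick numerical sketch of that trade-off (all parameters assumed): dithering the switching frequency of an idealized square wave leaves the total spectral power essentially unchanged while pulling down the tallest spectral line:

```python
# Compare the spectrum of a fixed-frequency square wave with a
# triangularly frequency-dithered one. Sample rate, carrier, deviation
# and modulation rate are illustrative assumptions.
import numpy as np

FS, N = 50e6, 2**18
t = np.arange(N) / FS
f0, dev, fmod = 1e6, 50e3, 1e3          # 1 MHz carrier, +/-5% dither, 1 kHz

tri = 2 * np.abs(2 * ((fmod * t) % 1) - 1) - 1      # triangle in [-1, 1]
phase_fixed = 2 * np.pi * f0 * t
phase_spread = 2 * np.pi * np.cumsum(f0 + dev * tri) / FS

for name, ph in (("fixed", phase_fixed), ("spread", phase_spread)):
    sq = np.sign(np.sin(ph))                        # square switching wave
    spec = np.abs(np.fft.rfft(sq * np.hanning(N)))**2
    print(f"{name}: peak/total = {spec.max()/spec.sum():.3f}")
```

The printed peak-to-total ratio falls for the dithered case while the totals remain essentially equal: a lower quasi-peak reading, the same interference power.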

However, from an information-theoretic perspective the nature of the interference doesn’t matter; only the signal-to-noise ratio matters. Many modern communication systems employ wide bandwidths. Furthermore, they employ powerful adaptive modulation and coding schemes that effectively de-correlate interference sources (making the effect noise-like); these receivers don’t care whether the interference is narrowband or wideband in terms of bit error rate (BER), and they will be affected largely the same by a given amount of interference power (in theory identically, but implementation limitations still leave some gap to the theoretical limits).

It is worth noting, however, that while spectrum-spreading techniques do not reduce the interference power, they don’t make it any worse either. Thus these techniques may (legitimately, I would argue, as per the above) help with passing a test that specifies the CISPR quasi-peak detector, and they should not make performance on a test specifying the newer CISPR RMS+average detector any worse.

It should always be an engineering goal to keep interference to a reasonable minimum, and I would agree that it is aesthetically most satisfying (and often cheapest and simplest) to achieve this objective by reducing the interference at the source (a wide definition covering aspects of SMPS design from topology selection to PCB layout and beyond). However, the objective of controlling noise at the source shouldn’t eliminate alternative methods from consideration in any given application.

There will always be the question of how good is good enough and it is the job of various regulatory bodies to define these requirements and to do so robustly enough such that the compliance tests can’t be “gamed”.

1:1 ratio transformer

A 1:1 ratio transformer is primarily used to isolate the primary from the secondary. In small-scale electronics it keeps noise and interference picked up on the primary from being transmitted to the secondary. In critical care facilities it can be used as an isolation transformer to isolate the grounding of the supply (primary) from the critical grounding system of the load (secondary). In large-scale applications it is used as a 3-phase delta/delta transformer to isolate the grounded source system (primary) from the ungrounded system of the load (secondary).

In a delta-delta system, equipment grounding is achieved by installing grounding electrodes with a grounding resistance of not more than 25 ohms, as required by the National Electrical Code. From the grounding electrodes, grounding conductors are run with the feeder-circuit and branch-circuit raceways up to the equipment, where the enclosures and non-current-carrying parts are grounded (bonded). This scheme is predominant in installations where most of the loads are motors, such as industrial plants, and on shipboard installations, where the systems are mostly delta-delta (ungrounded). On ships, the hull becomes the grounding electrode. Electrical installations like these have ground fault monitoring sensors to detect accidental line-to-ground connections to the grounding system.

Self Excited Induction Generator (SEIG)

The output voltage and frequency of a self excited induction generator (SEIG) are totally dependent on the system to which it is attached.

The fact that it is self-excited means there is no field control and therefore no voltage control. Instead, the residual magnetism in the rotor is used in conjunction with carefully chosen capacitors at its terminals to form a resonant condition that mutually assists the build-up of voltage, limited by the saturation characteristics of the stator. Once this balance point is reached, any normal load will cause the terminal voltage to drop.
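For a rough feel of the capacitor selection, a common starting point is to match the capacitive reactance to the unsaturated magnetizing reactance near the rated frequency. The machine data below is assumed purely for illustration:

```python
# Per-phase excitation capacitance for SEIG self-excitation, from the
# resonance condition 1/(w*C) = w*L_mag. L_MAG is an assumed value.
import math

F = 50.0         # target electrical frequency, Hz
L_MAG = 0.25     # unsaturated magnetizing inductance per phase, H

w = 2 * math.pi * F
C = 1 / (w**2 * L_MAG)
print(f"C = {C*1e6:.0f} uF per phase")   # -> ~41 uF
```

In practice the value is then refined against the machine’s saturation curve, since it is saturation that fixes where the voltage build-up actually settles.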

The frequency is totally reliant on the speed of the rotor, so unless there is a fixed-speed or governor-controlled prime mover, the load will see a frequency that changes with the prime mover and drops off as the load increases.

The above characteristics are what make SEIGs less than desirable for isolated/standalone operation IF steady, well-regulated AC power is required. On the other hand, if the output is going to be rectified to DC, a SEIG can be used. Many of these undesirable “features” go away if the generator is attached to the grid, which supplies steady voltage and frequency signals.

The way around all these disadvantages is a doubly fed induction generator (DFIG). In addition to the stator connection to the load, the wound rotor is supplied with a varying AC field whose frequency is tightly controlled through smart electronics, so that a relatively fixed, controllable output voltage and frequency can be achieved despite the varying speed of the prime mover and the load. However, the cost of the wound-rotor induction machine plus the sophisticated control/power electronics is much higher than that of other forms of variable-speed/voltage generation.
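A quick sketch of what the rotor-side converter must do: inject the slip frequency, f_rotor = f_grid - (P/120)*rpm, which passes through zero at synchronous speed and reverses phase sequence above it. The pole count and speeds below are assumed examples:

```python
# Rotor excitation frequency required by a DFIG at various shaft speeds.
F_GRID, POLES = 50.0, 4                       # assumed grid and machine
for rpm in (1200, 1500, 1800):                # sub-, at-, super-synchronous
    f_rotor = F_GRID - (POLES / 120.0) * rpm  # negative -> reversed sequence
    print(f"{rpm} rpm -> {f_rotor:+.0f} Hz rotor excitation")
```

The converter only has to process the slip power, which is why a DFIG gets away with a fractionally rated rotor converter despite full control of the output.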

Differences of Grounding, Bonding and Ground Fault Protection?

Grounding (or Earthing) is intentionally connecting something to the ground. This is typically done to help dissipate static charge and lightning energy, since the earth is a poor conductor of electricity unless the voltage and current are high.

Bonding is the intentional interconnection of conductive items in order to tie them to the same potential plane, and this is where folks confuse it with grounding/earthing. The intent of bonding is to ensure that if a power circuit faults to the enclosure or device, there will be a low-impedance path back to the source so that the upstream overcurrent device(s) will operate quickly and clear the fault before a person is seriously injured or killed, or a fire originates.

Ground Fault Protection is multi-purpose; I will stay in the Low Voltage (<600 volts) arena.

One version, seen in most locations with low-voltage (220 or 120 volts to ground) utilization, is typically a 5-7 mA device that ensures the current flowing out on the hot line comes back on the neutral/grounded conductor; this again protects personnel from electrocution when in a compromised, lower-resistance condition.

Another version is Equipment Ground Fault Protection, used for resistive heat tracing or items like irrigation equipment; the trip levels here are around 30 mA and are aimed more at preventing fires.

The final version is found on larger commercial/industrial power systems operating at over 150 volts to ground/neutral (380Y/220 and 480Y/277 are a couple of typical examples) and, at least in the US and Canada, where the incoming main circuit-interrupting device is rated at least 1000 amps (it’s not a bad idea at lower ratings, it’s just not mandated). Here it is used to ensure that a downstream fault is cleared to avoid fire conditions or a ‘Burn Down’ event, since there is sufficient residual voltage present that the arc can be kept going rather than self-extinguishing.
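The personnel-protection version boils down to a residual-current comparison between the outgoing and returning conductors. Here is a toy sketch using the thresholds mentioned above (the function and currents are illustrative):

```python
# Residual-current principle: trip when hot and neutral currents differ
# by more than the threshold (5 mA personnel, ~30 mA equipment).
def ground_fault(i_hot_a, i_neutral_a, trip_ma=5.0):
    """True when the residual current exceeds the trip threshold."""
    residual_ma = abs(i_hot_a - i_neutral_a) * 1000.0
    return residual_ma > trip_ma

print(ground_fault(10.000, 10.000))              # balanced -> False
print(ground_fault(10.006, 10.000))              # 6 mA leakage -> True
print(ground_fault(10.020, 10.000, trip_ma=30))  # 20 mA, equipment -> False
```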

In the Medium and High Voltage areas, Ground Fault Protection is really just protective relaying that monitors the phase currents and operates on an imbalance above a certain level, which is normally up to the system designer to determine.