Category: Iacdrive_blog

Determine coefficient of grounding

Determination of the required grounding impedance is based on the coefficient of grounding, which is the ratio of the maximum phase voltage on the phases not exposed to the fault to the line voltage of the power network:

kuz=(1/(sqrt(3)))*max{|e(-j*2*π/3)+(1-z)/(2+z)|; |e(+j*2*π/3)+(1-z)/(2+z)|}
z=Z0e/Zde

where:

kuz - coefficient of grounding,
z - ratio of the equivalent zero-sequence impedance to the equivalent direct (positive)-sequence impedance, both seen from the fault location,
Z0e - equivalent zero-sequence impedance seen from the fault location,
Zde - equivalent direct-sequence impedance seen from the fault location.

From this, the following conclusions can be drawn:
if kuz = 1, the power network is ungrounded, because Z0e → ∞; this is a consequence of having more (auto)transformers with ungrounded neutral points than with grounded neutral points (when kuz = 1 there are no (auto)transformers with a grounded neutral point at all),
if kuz ≤ 0.8, the power network is grounded, because Z0e is close to Zde; this is a consequence of having more (auto)transformers with grounded neutral points than with ungrounded neutral points.
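As a quick numerical check of the formula above, the coefficient can be evaluated in a few lines of Python (the function name and the sample values of z are my own illustrations):

```python
import cmath
import math

def grounding_coefficient(z):
    """Coefficient of grounding kuz for a given ratio z = Z0e/Zde
    (z may be complex)."""
    a = cmath.exp(-1j * 2 * math.pi / 3)
    b = cmath.exp(+1j * 2 * math.pi / 3)
    t = (1 - z) / (2 + z)
    return (1 / math.sqrt(3)) * max(abs(a + t), abs(b + t))

# Z0e = Zde (solidly grounded network): kuz = 1/sqrt(3) ≈ 0.577 ≤ 0.8
print(grounding_coefficient(1))
# Z0e → ∞ (ungrounded network): kuz approaches 1
print(grounding_coefficient(1e9))
```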

Fault current in grounded power networks is higher than in ungrounded networks. On the other hand, in ungrounded networks the phases not exposed to the fault experience overvoltages, so their insulation can be seriously damaged or, in the best case, age faster than the design provides for; this is the main reason for grounding power networks.
The coefficient of grounding is very important when selecting the insulation of lightning arresters and the breaking capacity of breakers, for two reasons:
1. in grounded power networks the insulation level is lower than in ungrounded networks,
2. in grounded power networks the short-circuit current is higher than in ungrounded networks.

Hysteretic controller

We can see that the hysteretic controller is a special case of other control techniques. For example, “sliding mode control” usually uses two state variables to determine one switching variable (switch ON or OFF), so the hysteretic controller is a special case of “1-dimensional” sliding mode. In general, there are many techniques under the name of “geometric control” that can be used to prove the stability of a general N-state system under a given switching rule, so I believe some of these techniques could be applied to prove the stability of the hysteretic controller, although I have not tried this myself. The book “Elements of Power Electronics” by Krein discusses this in Chapter 17.

But I can talk more about one technique that I have used and in my opinion is the most general and elegant technique for non-linear systems. It is based on Lyapunov stability theory. You can use this technique to determine a switching rule to a general circuit with an arbitrary number of switches and state variables. It can be applied to the simple case of the hysteretic controller (i.e. 1 state variable, 1 switching variable) to verify if the system is stable and what are the conditions for stability. I have done this and verified that it is possible to prove the stability of hysteretic controllers, imposing very weak constraints (and, of course, no linearization needed). In a nutshell, to prove the system stable, you have to find a Lyapunov function for it.
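Before moving on, a minimal numerical sketch of the basic window comparator may help. This is plain simulation, not the Lyapunov analysis; the plant is an illustrative first-order model and all values are assumptions:

```python
def simulate_hysteretic(v_ref=5.0, band=0.1, v_on=12.0, v_off=0.0,
                        tau=1e-3, dt=1e-6, steps=20000):
    """Bang-bang (window comparator) control of a first-order plant
    dx/dt = (u - x)/tau: switch ON below (v_ref - band), OFF above
    (v_ref + band)."""
    x, on = 0.0, True
    trace = []
    for _ in range(steps):
        if x > v_ref + band:
            on = False
        elif x < v_ref - band:
            on = True
        u = v_on if on else v_off
        x += (u - x) / tau * dt
        trace.append(x)
    return trace

trace = simulate_hysteretic()
# After the start-up transient, x oscillates inside the hysteresis window
print(min(trace[10000:]), max(trace[10000:]))
```

The state converges to the window and stays there without any linearization, which is the behaviour the stability proof formalizes.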

Where this can be expanded is in going beyond a simple window comparator for hysteretic control.

#1) Control bands (switching limits) can be made variable and also part of a loop, especially if one wants to guarantee a nearly fixed frequency.

#2) Using a latch or double latch after the comparator(s), one can define (remember) the state and add operations such as fixed Ton or Toff periods for additional time control… this permits the “voltage boost” scenario you previously said could not be done. It also prevents the common “chaos” operation and noise susceptibility that others experience with simpler circuits.

#3) Additional logic can keep multiphase topologies locked to a system clock, allowing them to compete very well with typical POL buck regulators for high-end processors that require high di/dt response.

Time- or state-domain control systems such as this can have great advantages over typical topologies. Short of complete predictive processing, there is really no faster control method with a quicker load response, and predictive processing can itself also be applied to hysteretic control.

What causes VFD driven motor bearing current?

There are several things involved, all with varying degrees of impact.

Large machines are – generally speaking – made of pieces (segments) because the circle for the stator and/or rotor core is too large to manufacture from a single sheet. This leads to some breaks in the magnetic flux path symmetry, both in the radial (right angles to the shaft) and axial (parallel to the shaft) directions.

For the most part, the windings of large machines are formed and installed by hand. This too can lead to symmetry issues, as the current paths are not identical which in turn will create some differences in the magnetic field flux.

Output waveforms from power electronics are only approximations of true sinusoids. The presence of additional harmonics distorts the sinusoidal nature and results in changes that are not symmetric in the magnetic field strength … which in turn means a non-symmetric flux distribution.

Two other items contribute to potentially damaging bearing currents as well. One of these is the Common Mode Voltage which is present (to some degree) in all drives. Essentially this is a signal that is present at both the drive input and output … I tend to think of it as an offset. It’s not something that traditional grounding addresses, and can create an elevated potential in the shaft which then discharges through the bearing path.

A second item is not related to the presence (or absence) of drives at all; it is related to the mechanical arrangement of the process drive train. For example, a shaft that has a sliding seal (like the felt curtain on a dryer section), or one that turns a blade against a gas or liquid (like a compressor) can generate a static charge at the point of contact. If there is no means of isolating this charge to the portion of the shaft where the sliding is occurring, it can pass through to the motor shaft and thence through the motor bearings.

Lastly, the frequency of the variable frequency drive harmonics in the output waveform is significantly higher than line frequency. This requires specific accommodations for grounding, as traditional methods are insufficient due to the relatively high impedance of a conventional ground path at these frequencies.

Constant on-time control

There are three different more or less widely used types of constant on-time control. The first one is where the off-time is varied with an error signal. A loop with this type of control has a control-to-output voltage frequency response (or Bode plot if you prefer) similar to that of the constant-frequency voltage-mode control. The second one is where the off-time is terminated with a comparator that monitors the inductor current, and when that current goes below a level set by the error signal, the switch is turned on. This control (also called constant on-time valley-current control) has a control-to-output voltage frequency response similar to the constant-frequency valley-current control. The main difference is that its inner current-control loop does not suffer from the subharmonic instability of the constant-frequency version, so it does not require a stabilizing ramp and the control-to-output voltage response does not show the half-frequency peaking. The third version is where the off-time is terminated when the output voltage (or a fraction of it) goes below the reference voltage. This control belongs to the family of ripple-based controls and it cannot be characterized with the usual averaging-based control-to-output frequency response, for the reason that the gain is affected by the output ripple voltage itself.
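A minimal sketch of the second variant (constant on-time valley-current control) may make the mechanism clearer. The inductor current is modeled as two fixed slopes; the function name and all values are illustrative assumptions:

```python
def cot_valley_current(i_valley=1.0, t_on=2e-6,
                       slope_up=0.8e6, slope_dn=0.4e6,
                       dt=1e-8, t_end=1e-4):
    """Constant on-time valley-current control: whenever the inductor
    current decays to the valley level set by the error signal, the
    switch turns ON for a fixed t_on. Slopes are in A/s."""
    n_on = round(t_on / dt)          # on-time in simulation steps
    i, on, count = i_valley, True, n_on
    peaks = []
    for _ in range(int(t_end / dt)):
        if on:
            i += slope_up * dt
            count -= 1
            if count == 0:           # fixed on-time has elapsed
                on = False
                peaks.append(i)
        else:
            i -= slope_dn * dt
            if i <= i_valley:        # valley comparator fires
                on, count = True, n_on
    return peaks

peaks = cot_valley_current()
# Every peak sits at i_valley + slope_up * t_on; no stabilizing ramp needed
print(len(peaks), min(peaks), max(peaks))
```

Note that cycle-by-cycle the peak is pinned by the fixed on-time, which is why this scheme avoids the subharmonic instability of the constant-frequency version.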

As for the hysteretic control, the current-mode version is a close relative of the constant on-time valley-current-control. The version that uses the output ripple voltage instead of the inductor current ripple for turning on and off the switch (also called “hysteretic regulator”) is a close relative of the constant on-time ripple-based control.

Although ripple-based control loops cannot be characterized with the usual Bode plots, the converters can still be unstable, though not in the sense of the traditional control-loop instability that power-supply engineers are used to. (The hysteretic regulator itself is essentially unconditionally stable.) The instabilities with ripple-based control are called “fast-scale” because the frequency of the instability is closely related to the switching frequency (either subharmonic, similar to the inner-loop instability of some current-mode controllers, or chaotic in nature).

The paper I wrote a couple of years ago (“Ripple-Based Control of Switching Regulators—An Overview”) is a good introduction to ripple-based control and discusses some of the stability issues. There are also quite a few papers with detailed analyses on the stability of converters with feedback loops where the ripple content of the feedback signal is significant.

Impedance analyzer

A graphical impedance analyzer with good phase resolution is a must. Some brands have all the bells and whistles, but not the phase resolution necessary to accurately measure high-Q (100+) components over the instrument’s full frequency range (which should extend at least into the low megahertz). Of course the Agilent 4294A fills the performance bill, but with a $40k+ purchase bill it also empties the budget (like similar high-end new models from Wayne Kerr). Used models from Wayne Kerr work very well and can be had for under $10k, but they are very heavy and clunky, with very ugly (but still usable) displays.

Perhaps the best value may be the Hioki IM3570, which works extremely well with superior phase resolution, has a very nice color touch screen display (with all the expected engineering graphing formats), is compact and lightweight, and costs around $10k new. Its only downside is that its fan is annoyingly loud and does not reduce its noise output during instrument idle.

But where should an impedance analyzer rank on the power electronics design engineer’s basic equipment list (and why)?

Beyond the basic lower cost necessities such as DMMs, bench power supplies, test leads, soldering stations, etcetera, I would rank a good impedance analyzer second only to a good oscilloscope. The impedance analyzer allows one to see all of a component’s secondary impedance characteristics and to directly compare similar components. Often overlooked is the information such an instrument can provide by examining component assemblies in situ in a circuit board assembly. Sometimes this can be very revealing of hidden, but influential layout parasitics.

Equally importantly, an impedance analyzer allows accurate SPICE models to be quickly formulated so that simulation can be used as a meaningful design tool. Transformer magnetizing and leakage inductances can be measured, as well as inter-winding capacitance and frequency-dependent resistive losses. From these measurements, and with proper technique, a model can be formulated that nearly exactly matches the real component. Not only does this allow power circuits and control loops to be initially designed entirely by simulation (under the judicious eye of experience, of course), but it even allows one to effectively simulate the low-frequency end of a design’s EMI performance.
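As an example of the kind of model extraction described above, here is a sketch of a simple three-element inductor model (series L and R shunted by winding capacitance). The component values are illustrative, the sort one would fit to an analyzer sweep:

```python
import math

def inductor_z(f, L=100e-6, r_s=0.15, c_p=8e-12):
    """Impedance of a three-element inductor model: series L and R,
    shunted by the inter-winding capacitance c_p."""
    w = 2 * math.pi * f
    z_lr = complex(r_s, w * L)            # series branch
    z_c = complex(0.0, -1.0 / (w * c_p))  # parallel capacitance
    return z_lr * z_c / (z_lr + z_c)

# Self-resonant frequency of the model, where |Z| peaks
f_srf = 1 / (2 * math.pi * math.sqrt(100e-6 * 8e-12))
print(f_srf)                   # about 5.6 MHz
print(abs(inductor_z(1e3)))    # inductive region: |Z| close to 2*pi*f*L
print(abs(inductor_z(f_srf)))  # very large near self-resonance
```

Sweeping such a model against the measured curve is exactly how the secondary impedance characteristics (self-resonance, winding capacitance) get captured for simulation.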

Simulation interpretation in automation industry

Related to the “automation industry”, there are generally three different interpretations of what simulation is:
1) Mechanical Simulations – via various solid-modeling tools and CAD programs; tooling, moving mechanisms, end-effectors… are designed with 3D visualization, connecting the modules to prevent interference, check mass before actual machining…
2) Electronics Simulations – this type of simulation relates either to the manufacturers of “specific instrumentation” used in the automation industry (ultrasonic welders, laser marking systems,…) or to the designers of circuit boards.
3) Electrical & Controls Simulations.
A) Electrical Schematics, from main AC disconnect switch, down to 24VDC low amps for I/O interface.
Simulation tools allow easy determination of the system’s required amperage, fuse sizes, wire gauges, compliance with standards (CE, UL, cUL, TUV…)…
B) Logic Simulations, HMI interface, I/O exchange, motion controls…
a) If you want any kind of meaningful simulation, get in the habit of “modular ladder logic” design. That is, don’t design your ladder as one continuous huge program that runs the whole thing; simulating that type of program is almost impossible in every case. Break the logic down into sub-systems, or even down to stand-alone mechanisms (pick & place, motor starter…); simulating and troubleshooting that scenario is fairly easy.
b) When possible, besides the automated run mode of the machine or system, build “manual mode logic” for it as well. Then, via physical push-buttons or the HMI, you should have “step forward” and “step back” controls for every physical movement or action.
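The manual-mode idea in (b) can be sketched as a simple step sequencer. The step names here are hypothetical, and in practice this logic would live in the PLC’s ladder program rather than Python:

```python
class ManualStepper:
    """Manual-mode sketch: every physical action is a named step and the
    operator jogs through them with step-forward/step-back controls."""
    STEPS = ["extend_arm", "close_gripper", "raise_arm",
             "rotate_to_place", "lower_arm", "open_gripper", "retract_arm"]

    def __init__(self):
        self.index = -1          # -1 means home position, nothing executed

    def step_forward(self):
        """Execute the next step; return its name, or None at the end."""
        if self.index < len(self.STEPS) - 1:
            self.index += 1
            return self.STEPS[self.index]
        return None

    def step_back(self):
        """Reverse the last executed step; return its name, or None."""
        if self.index >= 0:
            undone = self.STEPS[self.index]
            self.index -= 1
            return undone
        return None

s = ManualStepper()
print(s.step_forward())  # extend_arm
print(s.step_forward())  # close_gripper
print(s.step_back())     # close_gripper (reversed)
```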

Simulating the integrity of the “ladder logic program” and all the components and interfaces will be a breeze if things are done meticulously upfront.

Spread spectrum of power supply

Having led design efforts for very sensitive instrumentation with high-frequency A/D converters of greater than 20 bits of resolution, my viewpoint is mainly concerned with the noise in the regulated supply output. In these designs the fairly typical 50 mV peak-to-peak noise is totally unacceptable, and some customers cannot stand even 1 uVrms of noise at certain frequencies. While spread spectrum may help the power supply designer, it may also raise havoc for the user of the regulated output. The amplitude of the switching spikes (input or output), as some have said, is not reduced by dithering the switching frequency.

Sometimes locking the switching so that, in time, it does not interfere with the circuits using the output can help. Some may think this is cheating, but as was said, it is very difficult to get rid of most 10 MHz noise, and this extreme difficulty applies to many of the harmonics above 100 kHz. (Beginners who think that switching 20 to 100 times above the LC filter corner will reduce the switching noise by a factor of 40 to 200 are sadly wrong: once you pass 100 kHz many capacitors and inductors have parasitics that make it very hard to get high attenuation in one LC stage, and often there is no room for more. More inductors often introduce more losses as well.) We should reduce all the noise we can and then use other techniques as necessary. With spread spectrum becoming more popular we may soon see regulation of a supply’s total noise output as well.
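The parasitic limitation mentioned in the parenthetical can be seen numerically. Here is a sketch of a single LC stage whose capacitor has ESR and ESL; all component values are illustrative assumptions:

```python
import math

def lc_attenuation_db(f, L=10e-6, C=10e-6, esr=0.02, esl=2e-9):
    """Transfer magnitude of a single LC low-pass stage whose capacitor
    has ESR and ESL. Above the capacitor's self-resonance the attenuation
    flattens out near 20*log10(esl/L) instead of improving at
    40 dB/decade."""
    w = 2 * math.pi * f
    z_l = complex(0.0, w * L)
    z_c = complex(esr, w * esl - 1.0 / (w * C))
    return 20 * math.log10(abs(z_c / (z_l + z_c)))

for f in (100e3, 1e6, 5e6, 10e6):
    print(f"{f/1e6:5.2f} MHz: {lc_attenuation_db(f):6.1f} dB")
```

With these values the attenuation stops improving in the low-megahertz range, which is exactly why one LC stage cannot deliver the textbook 40 dB/decade up to 10 MHz.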

One form of troublesome noise is common-mode noise coming out of the power inputs to the supply. If it is present on the power input, it is very likely also present in the “regulated” output power if that output is floating. Here, careful design of the switching magnetics and care in the layout can minimize this noise enough that filters may be able to keep the residual within acceptable limits. Ray discusses some of this in his class, but many non-linear managers frequently do not think it reasonable or necessary for the power supply design engineer to be involved in the layout or location of copper traces. Why not? The companies that sell the multi-$100k software told their bosses the software automatically optimizes and routes the traces.

Spread spectrum is a tool that may be useful to some but not to all. I hope the sales pitch for those control chips do not lull unsuspecting new designers into complacency about their filter requirements.

Voltage transmission & distribution

If you look back over history you will find how things started out, from the early engineers and scientists looking at materials and developing systems that would meet their transmission goals. I recall when drives (essentially AC/DC/AC converters) had an upper limit around 200 to 230 volts. In Edison and Tesla’s day there was a huge struggle between DC and AC, and AC prevailed mainly because it was economical to build AC machines. Systems were built from available materials and put into operation. Some worked great; some failed. When they failed, they were analyzed and better systems were built. Higher and higher voltages lowered the copper content, and therefore cost, as insulators improved. Eventually committees formed, reviewed what worked, and developed standards. Then, by logical induction, it was determined what advances could be made in a cost-effective and reliable manner. A lot of “use this” practice crept in. By this I mean, for example: I worked at a company where one customer bought 3,000 transformers over the course of ten years. They had a specific size of enclosure they wanted.

Due to the high-volume purchase, the cost of that enclosure was low. Other small jobs came through, and this low-cost enclosure was used on them to expedite delivery and keep costs to a minimum. Guess what: that enclosure is now a standard enclosure there, because it was used on hundreds of designs over ten years. Is it the most economical box? Probably not in the pure engineering sense, but changing something that works is seldom a good idea. Today, voltage levels are being raised to new highs. I read of a project in Germany to run HVDC lines over huge distances. They are working to overcome a problem they foresee: how do you break the circuit economically with HVDC? If you ever put DC through a small contactor, at maybe 600 VDC, you quickly find that the arc on opening melts the contacts. Now, what do you do at 800 kVDC or 1.2 MVDC? What will the cost of the control circuit be to manage this voltage level? (Edison and Tesla all over again.) And there you have it, my one push for the subject of history to be taught.

Signal processing and communications theory

Coming from a signal processing and communications theory background, but with some experience in power design, I can’t resist the urge to chime in with a few remarks.

There are many engineering methods to deal with sources of interference, including noise from switching converters, and spread spectrum techniques are simply one more tool that may be applied to achieve a desired level of performance.

Spread spectrum techniques will indeed allow a quasi-peak EMC test to be passed when it might otherwise be failed. Is this an appropriate application for this technique?

The quasi-peak detector was developed with the intention to provide a benchmark for determining the psycho-acoustic “annoyance” of an interference on analog communications systems (more specifically, predominantly narrow band AM type communication systems). Spread spectrum techniques resulting in a reduced QP detector reading will almost undoubtedly reduce the annoyance the interference would have otherwise presented to the listener. Thus the intent was to reduce the degree of objectionable interference and the application of spread spectrum meets that goal. This doesn’t seem at all like “cheating” to me; the proper intent of the regulatory limit is still being met.

On the other hand, as earlier posters have pointed out, the application of spectrum spreading does nothing to reduce the total power of the interference but simply spreads it over a wider bandwidth. Spreading the noise over a wider bandwidth provides two potential benefits. The most obvious benefit occurs if the victim of the interference is inherently narrowband. Spreading the spectrum of the interference beyond the victim bandwidth provides an inherent improvement in signal to noise ratio. A second, perhaps less obvious, benefit is that the interference becomes more noise like in its statistics. Noise like interference is less objectionable to the human ear than impulsive noise but it should also be recognized that it is less objectionable to many digital transmission systems too.
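A small numerical experiment makes the point concrete: dithering the switching frequency leaves the total interference power unchanged but lowers the spectral peak. All parameters here (carrier, dither rate and depth) are illustrative:

```python
import numpy as np

fs, n = 1_048_576, 1 << 16                 # sample rate (Hz), FFT length
t = np.arange(n) / fs
f0, f_mod, depth = 100_000, 1_000, 0.05    # carrier, dither rate, ±5 %

# Fixed-frequency square wave (small phase offset avoids exact zeros)
fixed = np.sign(np.sin(2 * np.pi * f0 * t + 0.1))

# Spread-spectrum version: sinusoidally dithered instantaneous frequency,
# integrated (cumulative sum) to obtain the phase
f_inst = f0 * (1 + depth * np.sin(2 * np.pi * f_mod * t))
phase = 2 * np.pi * np.cumsum(f_inst) / fs
spread = np.sign(np.sin(phase + 0.1))

spec_fixed = np.abs(np.fft.rfft(fixed))
spec_spread = np.abs(np.fft.rfft(spread))

# Same total power, but the spectral peak drops substantially
print(np.mean(fixed**2), np.mean(spread**2))
print(spec_spread.max() / spec_fixed.max())
```

A narrowband victim centered on the carrier sees only the energy near its bandwidth, hence the SNR benefit; a wideband victim integrates the whole spectrum and sees no improvement.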

However, from an information-theoretic perspective the nature of the interference doesn’t matter; only the signal-to-noise ratio matters. Many modern communication systems employ wide bandwidths. Furthermore, they employ powerful adaptive modulation and coding schemes that effectively de-correlate interference sources (making the effect noise-like); these receivers don’t care whether the interference is narrowband or wideband in terms of bit error rate (BER), and they will be affected largely the same by a given amount of interference power (in theory identically, but implementation limitations still leave some gap to the theoretical limits).

It is worth noting however that while spectrum spreading techniques do not reduce the interference power they don’t make it any worse either. Thus these techniques may (I would argue legitimately as per above) help with passing a test which specified the CISPR Quasi-Peak detector and should not make the performance on a test specifying the newer CISPR RMS+Average test any worse.

It should always be an engineering goal to keep interference to a reasonable minimum and I would agree that it is aesthetically most satisfying (and often cheapest and most simple) to achieve this objective by somehow reducing the interference at source (this is a wide definition covering aspects of SMPS design from topology selection to PCB layout and beyond). However, the objective to control noise at the source shouldn’t eliminate alternative methods from consideration in any given application.

There will always be the question of how good is good enough and it is the job of various regulatory bodies to define these requirements and to do so robustly enough such that the compliance tests can’t be “gamed”.

1:1 ratio transformer

A 1:1 ratio transformer is primarily used to isolate the primary from the secondary. In small-scale electronics it keeps the noise/interference picked up on the primary side from being transmitted to the secondary. In critical-care facilities it can be used as an isolation transformer to isolate the primary grounding of the supply from the critical grounding system of the load (secondary). In large-scale applications it is used as a 3-phase delta/delta transformer to isolate the grounding of the source system (primary) from the ungrounded system of the load (secondary).

In a delta-delta system, equipment grounding is achieved by installing grounding electrodes with a grounding resistance of not more than 25 ohms, as required by the National Electrical Code. From the grounding electrodes, grounding conductors are run with the feeder-circuit and branch-circuit raceways up to the equipment, where the enclosures and non-current-carrying parts are grounded (bonded). This scheme is predominant in installations where most of the loads are motors, such as industrial plants, or on shipboard installations, where the systems are mostly delta-delta (ungrounded). In ships, the hull becomes the grounding electrode. Electrical installations like these have ground-fault monitoring sensors to detect accidental line-to-ground connections to the grounding system.