Category: Iacdrive_blog

Why are BLDC motors quieter compared to induction motors?

If you are referring to the acoustic noise generated at or around the PWM frequency: there are more laminations in an induction machine, and this may account for some of the difference.

I also don’t know what the relative power difference is between the BLDC and the induction machine. If you are comparing a 5 hp BLDC with a 100 hp induction machine, then you can bet that the PWM frequency of the BLDC is likely above the audible range while the PWM frequency of the induction inverter is well within it.

These are just a few reasons that you may find subtle differences between the two. There are many factors and more information is needed to really help understand your specific situation.
I also believe some of the answers out there are simply sophomoric and unprofessional. My statement is based on the general rule that there is greater surface area between the laminations of a squirrel-cage induction machine than there is in a BLDC machine. Of course, if you have a thin-lamination, long-stack-length BLDC design, there may be an argument that such a motor, compared with a typical induction machine of the same power, has a similar lamination surface-to-surface area. It is these laminations moving due to eddy currents at the PWM frequency that cause the audible noise.

A BLDC machine can come with very small inductance, which requires a higher PWM frequency; if you compare the two motors together with their controllers, that alone may cause a difference.
If you build two motors with the exact same mechanical shape and electrical parameters, they should sound very close. Conversely, you can build two induction machines from two different vendors to the same electrical spec and they will not sound the same.

Are variable speed drives harmful to motors?

A variable speed drive switches very fast, which imposes a high dv/dt on the motor. How often do we face problems caused by VSDs? How harmful are the common-mode currents in the windings and other parts of the motor due to high dv/dt? Do we see winding insulation failure? How much is motor life reduced? Also, is filtering the voltage at the output of the inverter a common or applicable practice in the field?

The waveforms from the inverter are not good for the motor… they make the motor run hot and less efficient… and all of the above…
In-line filters to reduce harmonics are a must in many cases…
Depending on power levels you can have in-line reactors for CM and DM, or balanced-bridge methods for CM… There are methods of harmonic canceling with reactors, called harmonic blockers, where you arrange the 3-phase windings in such a manner as to cancel certain harmonics… not all harmonics will be blocked, usually only in grouping intervals… you need to be aware of which harmonics are your worst offenders…

Mostly in medium- and high-voltage motor drives, the very fast change of voltage can induce high capacitive currents inside the motor, with harmful results.
One way to reduce this negative effect is to increase the number of voltage steps (levels) so that the dv/dt decreases proportionally (dt = turn-on switching time, dv = one voltage step). The most popular approach is an SVPWM (space-vector PWM) NPC (neutral-point-clamped) multilevel frequency converter. Line L-C filters are also used for EMC.
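The proportionality claimed above can be sanity-checked with a few lines of arithmetic. The bus voltage and switching time below are illustrative assumptions, not figures from any particular drive:

```python
# Per-step voltage and dv/dt for an N-level converter (illustrative numbers only).
def step_voltage(v_dc, levels):
    """Each switching edge traverses one step of the DC bus: Vdc / (levels - 1)."""
    return v_dc / (levels - 1)

def dv_dt(v_dc, levels, t_rise):
    """dv/dt of one switching edge as seen at the motor terminals (V/s)."""
    return step_voltage(v_dc, levels) / t_rise

v_dc = 6000.0    # hypothetical 6 kV DC bus
t_rise = 200e-9  # assumed 200 ns device switching time

for n in (2, 3, 5):
    print(f"{n}-level: step = {step_voltage(v_dc, n):.0f} V, "
          f"dv/dt = {dv_dt(v_dc, n, t_rise) / 1e9:.1f} kV/us")
```

Going from a 2-level to a 5-level converter cuts the per-edge dv/dt by a factor of four for the same device switching speed.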

The first step in any filter analysis is knowing what harmonic vectors you're dealing with.
Mathcad is a great tool for modeling the PWM modulation with the sub-carrier and generating the harmonic matrices and vectors. I usually go above the 100th harmonic in some analyses, then repeat this over the operating range of the motor. You then pick your worst-case operating point, and now you have a matrix to work with. Summing the harmonic magnitudes will give you an idea of how much garbage you're feeding your motor windings.
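Mathcad is named above; the same bookkeeping can be sketched in Python with NumPy. This is a toy naturally-sampled sine-triangle model with an assumed carrier frequency and modulation index, not the author's worksheet:

```python
import numpy as np

# Toy naturally-sampled sine-triangle PWM; carrier and modulation index are assumed.
f_out, f_carrier, m = 50.0, 2000.0, 0.8
fs = 1_000_000                           # simulation sample rate
t = np.arange(0, 1.0, 1 / fs)            # 1 s of data -> 1 Hz FFT resolution

ref = m * np.sin(2 * np.pi * f_out * t)              # sinusoidal reference
tri = 2 * np.abs(2 * ((f_carrier * t) % 1) - 1) - 1  # triangle carrier in [-1, 1]
pwm = np.where(ref > tri, 1.0, -1.0)                 # per-unit leg voltage

spectrum = np.abs(np.fft.rfft(pwm)) / len(t) * 2     # amplitude spectrum
harmonics = spectrum[[n * int(f_out) for n in range(1, 101)]]  # 1st..100th harmonic

print(f"fundamental: {harmonics[0]:.3f} pu")                    # close to m = 0.8
print(f"sum of harmonics 2..100: {harmonics[1:].sum():.3f} pu")
```

Sweeping `m` and `f_out` over the drive's operating range and keeping the worst-case vector is the same procedure as described above.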

They could be harmful because of high-frequency currents and voltages that are not economical to eliminate.
But this weakness is negligible compared to the benefits provided, which are very comprehensive. The harmful harmonics are controlled by standards, so in order to improve harmonic characteristics, we need improved standards.

Lighting control panel to distribution board

There are a couple of construction differences which may be present, depending on the style of “lighting control panel”.

First, a distribution board typically has poly-phase branch breakers with the intention of feeding either other sub-panels or large loads — such as a motor with a motor controller.

A lighting control panel will have mostly single-pole breakers with phase-to-neutral branch circuits feeding lighting circuitry. There is the added possibility of having either ‘smart’ breakers or integral contactors included on the branch circuits to allow for a control means for area lighting beyond local control of an individual fixture/small group of fixtures, such as an office or conference room.

In general:
1. Identify the final branch circuits and estimate the load rating of each.
2. Apply an adequate utilization/diversity factor if applicable (depends on the application).
3. Ensure the load is balanced across the 3 phases as far as possible.
4. For fluorescent light fixtures, arrangement of the fixtures with respect to the phases (R-Y-B) is necessary to mitigate flicker/glare and stroboscopic effects.
5. The cable from the DB to the LCP can then be sized, the rating of the protective devices selected, and the type of CB(s) chosen to suit the type of lighting fixtures.
6. Verify that the voltage drop is within the prescribed limit; otherwise select the next standard cable size.
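The voltage-drop verification in the last step might be sketched as follows. The mV/A/m figures and the 3% limit are illustrative placeholders; substitute your cable catalogue data and the limit your code of practice prescribes:

```python
# Voltage-drop check via the common mV/A/m method.
# Cable data below are illustrative placeholders, not catalogue values.
def volt_drop(mv_per_a_m, current_a, length_m):
    """Voltage drop in volts for a given cable, load current and route length."""
    return mv_per_a_m * current_a * length_m / 1000.0

def pick_cable(current_a, length_m, v_supply, limit_pct, cables):
    """cables: list of (size_mm2, rating_a, mv_per_a_m), smallest first.
    Returns the first size meeting both ampacity and voltage-drop limits."""
    for size, rating, mvam in cables:
        vd = volt_drop(mvam, current_a, length_m)
        if rating >= current_a and vd / v_supply * 100.0 <= limit_pct:
            return size, vd
    return None  # go to a larger size than listed

cables = [(1.5, 17.5, 29.0), (2.5, 24.0, 18.0), (4.0, 32.0, 11.0)]
print(pick_cable(20.0, 20.0, 230.0, 3.0, cables))  # -> (4.0, 4.4)
```

Here a 2.5 mm² cable carries the current but fails the 3% drop limit over 20 m, so the next standard size is selected, exactly as step 6 prescribes.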

To summarize: a distribution board typically has poly-phase branch breakers intended to feed either other sub-panels or large loads, while a lighting control panel (which is also a type of distribution panel) will have mostly single-pole breakers with phase-to-neutral branch circuits feeding lighting circuitry.

Designing Gate drivers for IGBT

Q:
When designing gate drivers for IGBTs, how reliable are gate driver ICs? There are a lot of gate driver ICs available on the market now. For example, I am using the hybrid IC M57962L for driving IGBTs in a 3-phase inverter application. The peak output current of this hybrid IC is 5 A, and its data sheet says it can drive IGBTs up to 200 A, 1200 V, with many other features.

For an initial design at a lower power rating the configuration is working fine. But before going to a higher power rating, I want to be sure about the reliability of gate driver ICs in general.
Is it advisable to design gate drivers using commercially available ICs, or to go for a design that includes a gate drive transformer? What issues may arise when using driver ICs?

A:
I’ve seen and developed designs using these hybrid gate drives quite successfully, with long-term field reliability, in applications requiring 800 V to 1.25 kV voltage isolation in power conversion products for the semiconductor capital equipment market. Powerex offers various isolated drivers like the M57962L – my personal favorite is the VLA-502, which also contains the isolated DC/DC converter used to power the isolated gate drive electronics.

There are only two problems that I remember in the last 10 years with these types of commercial drivers – and both, if I remember correctly, were with the stand-alone DC/DC converter intended to be used with the stand-alone isolated driver. One problem was a voltage isolation issue from primary to secondary inside the DC/DC switcher. Powerex acknowledged the problem and upgraded the design; I simply do not recall the part numbers involved. The second problem was with how the isolated VEE rail was established – the monopolar output of the DC/DC converter was offset negative and ground-referenced with a zener diode, and when the IGBT gate became active at high frequency (25 kHz for that particular application), the gate charge was high enough to sag the negative supply rail against the zener shunt.

Bottom line: Use a good isolated DC/DC converter, with solid VCC and VEE regulated outputs. The isolated drivers themselves are solid in my experience – a nice, simple solution with typically better rise and fall times than gate drive transformers. They also have the added benefit of being capable of holding positive or negative DC bias if the application requires it.
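The rail-sag failure mode described above is easy to sanity-check: the average current that repetitive gate charging pulls from the supply is roughly gate charge times switching frequency, and a zener shunt must be able to sink that on top of its own bias. A minimal sketch with a hypothetical gate charge:

```python
def avg_gate_current(q_gate, f_sw):
    """Average supply current drawn by repetitively charging a gate: Qg * fsw (A)."""
    return q_gate * f_sw

q_g = 600e-9   # hypothetical 600 nC total gate charge for a large IGBT
f_sw = 25e3    # 25 kHz, as in the failure described above
print(f"average gate-charge current: {avg_gate_current(q_g, f_sw) * 1e3:.1f} mA")  # 15.0 mA
```

If the zener's standing bias current is comparable to or smaller than this, the negative rail will sag exactly as described.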

Difference between PLC and DDC system

PLC stands for Programmable Logic Controller. It is hardware that includes a processor, I/O modules, counters, function blocks, timers, etc. The I/Os are analogue, digital, or both. A PLC can be configured to suit the application and programmed in a logical manner using one of the programming languages such as Statement List, Ladder Diagram, etc. The real-time interaction between the inputs and the resulting outputs through the program logic (including PID) gives the entire control system. A DDC (Direct Digital Control) system, I believe, is software/a system that uses only digital signals for control, and a PLC/PC/server/central unit may constitute an integral part of such a system.

Harmonic current

I hate to call them harmonic currents. They do submit to Fourier analysis, but you are probably dealing with AC-to-DC power supplies. If you look at the current pulses, you will see that each pulse is about 1-2 milliseconds in duration, in alternating directions. If you sum these in the neutral, there is the appearance of what looks like 180 Hz in the neutral. If you use different-sized power supplies on each phase, you can see that it is just the addition of the three phases. So the neutral current, when you have non-power-factor-corrected power supplies, is the sum of the three phase currents. Unless the current waveforms overlap, there is no cancellation of current in the neutral; hence the neutral current is the sum of the phase currents. The reason for this is the rectifier diodes at the front of the power supply and the size of the DC storage capacitor relative to the DC load on it. The general rule of thumb is about 800 to 1000 microfarads per amp of current drawn from the capacitor.
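The "looks like 180 Hz in the neutral" observation can be reproduced in a toy simulation (assumed pulse width and unit amplitudes, not measured data): model each phase as 1.5 ms pulses of alternating sign centered on its voltage peaks, sum the three phases in the neutral, and inspect the spectrum:

```python
import numpy as np

fs, f_line = 100_000, 60.0            # sample rate and US line frequency
t = np.arange(0, 1.0, 1 / fs)         # 1 s of data -> 1 Hz FFT resolution

def phase_current(shift, width_s=1.5e-3, amp=1.0):
    """Non-PFC supply draw: pulses of alternating sign centered on voltage peaks."""
    v = np.sin(2 * np.pi * f_line * t - shift)
    return amp * np.sign(v) * (np.abs(v) > np.cos(np.pi * f_line * width_s))

# Three phases 120 degrees apart; the pulses are narrow enough not to overlap.
i_n = sum(phase_current(k * 2 * np.pi / 3) for k in range(3))

spec = np.abs(np.fft.rfft(i_n))
print(f"dominant neutral frequency: {np.argmax(spec[1:]) + 1} Hz")  # -> 180 Hz
```

Because the six pulses per cycle do not overlap, the neutral carries every pulse, and its dominant component sits at three times the line frequency.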

Realize that the extra heating in three-phase delta-wye transformers is due to the extra circulating current in the primary delta causing excessive heating of the primary conductor. The world calls transformers designed to deal with this "K-factor" transformers. Let the world of electrical engineers bury all this simple stuff behind the maze of Fourier analysis: change the incoming voltage slightly and your Fourier analysis is garbage. The issue here is switches and storage caps, not some magical mathematical garbage.

By the way, if someone wanted to use the wire-sizing guidelines of the National Electrical Code in the US to size wire for a 100% power-supply load, the neutral wire would be 8 gauge sizes larger than the phase conductors. People need to start demanding PFC power supplies. Fix a switching problem with switches.

Transmission line low voltages and overload situations

Q: I want to know not just what the surge impedance loading (SIL) is, but also its relevance to improving the stability and reliability of a power network, especially an existing one with various degrees of low voltage and overload.

A: The surge impedance loading gives you an easy way of determining whether your transmission line operates as a net reactor (loaded above SIL, absorbing reactive power, which external sources must then supply) or as a net capacitor (loaded below SIL, supplying reactive power). Line loadability is generally constrained by three limits:
(1) thermal limitation
(2) line-voltage-drop limitation
(3) steady-state-stability limitation

In contrast with the line-voltage-drop limitation, the steady-state-stability limitation has been discussed quite extensively in the technical literature.

However, one important point is rarely made or given proper emphasis: the stability limitation should take the complete system into account, not just the line alone. This has been a common oversight which, for the lower-voltage lines generally considered in the past, has not led to significant misinterpretations concerning line loadability.

At higher voltage classes such as 765 kV and above, the typical levels of equivalent system reactance at the sending and receiving end of a line become a significant factor which cannot be ignored in determining line loadability as limited by stability considerations, so surge impedance loading plays a fundamental role in reliability and stability.

Neutral current is less than phase current?

In a balanced 3-phase system with pure sine waves, the neutral current is zero, ideally.
If there is phase imbalance, it shows up in the neutral, so check for imbalance.

The other major cause of high neutral currents is full wave rectification, where the current of each phase is flowing only at its peak voltage. In this case, the neutral current can be as high as three times the phase currents, theoretically.

If you can see the frequency of the neutral current, line frequency currents indicate imbalance. Current due to full wave rectification is high in third harmonics, so it may show mostly 3 x line frequency, or be a ratty square wave at 3 x line frequency.

High neutral currents, and some resulting fires, are largely responsible for the adoption of power factor correction requirements. If your loads are balanced and PFC-corrected, you should not have neutral currents.

The neutral current (In) is the vector sum of the phase currents. If, say, the Y phase is unloaded, the three phases are effectively decoupled and Iy = 0, so In = Ir + Ib (vector sum). Depending on the amount of loading, the nature of the loads, and their respective power factors, a variety of possibilities for the neutral current magnitude and phase arise, which may include cases where In is higher than a phase current.
The statement "neutral current is usually less than phase currents" is naive and not universal.

Nonlinear loads (i.e. rectifiers as Ed mentioned above) draw significant harmonic current. In many cases the current Total Harmonic Distortion (THD) is >100%. In a 3-phase, 4-wire system, the triplen harmonic currents (3, 9, 15, 21…) sum in the neutral wire because they are all in-phase. This is why the neutral current can be much higher than the phase currents even on an otherwise balanced load application. If you can put a current probe on the neutral and look at the waveform – you can see how much fundamental vs. harmonic current there is.
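The triplen summation is easy to demonstrate numerically. The 80% third-harmonic content below is an assumed figure for a heavily nonlinear load, chosen only to make the effect visible:

```python
import numpy as np

t = np.linspace(0, 1 / 50, 2000, endpoint=False)  # one 50 Hz cycle
w = 2 * np.pi * 50

def phase_i(shift, h3=0.8):
    """Phase current: fundamental plus a large 3rd harmonic (assumed ~80% THD)."""
    return np.sin(w * t - shift) + h3 * np.sin(3 * (w * t - shift))

i_n = phase_i(0) + phase_i(2 * np.pi / 3) + phase_i(4 * np.pi / 3)

rms = lambda x: float(np.sqrt(np.mean(x ** 2)))
print(f"phase RMS  : {rms(phase_i(0)):.3f}")  # fundamental + 3rd harmonic
print(f"neutral RMS: {rms(i_n):.3f}")         # fundamentals cancel, 3rds add
```

The 120-degree shifts triple when multiplied by the harmonic order 3, so the third harmonics of all three phases land in phase and add arithmetically in the neutral, while the fundamentals cancel, leaving a neutral RMS well above the phase RMS on a perfectly balanced load.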

What is the surge impedance load

The surge impedance loading (SIL) of a line is the real power loading at which the line's net reactive power is zero: the reactive power generated by the line's shunt capacitance exactly cancels the reactive power absorbed by its series inductance. You can calculate it by dividing the square of the line-to-line voltage by the line's surge (characteristic) impedance.

Transmission lines can be considered as a very large number of small series inductances, each with a small capacitance to earth, connected in series. Whatever voltage drop occurs due to the inductance gets compensated by the capacitance. If this compensation is exact, you have surge impedance loading, and no voltage drop occurs for an infinite length, or for a finite length terminated by an impedance of this value (the SIL load) – a lossless line assumed! The impedance of such a line can be shown to be sqrt(L/C). If the capacitive compensation is more than required, which may happen on an unloaded EHV line, then you get a voltage rise at the far end – the Ferranti effect. Although covered in many books, it always remains an interesting discussion.

The capacitive reactive power associated with a transmission line increases directly as the square of the voltage and is proportional to line capacitance and length.

Capacitance has two effects:

1 Ferranti effect
2 rise in the voltage resulting from capacitive current of the line flowing through the source impedances at the terminations of the line.

SIL is the surge impedance loading, calculated as (kV × kV) / Zs; the units are megawatts.

Where Zs is the surge impedance. Be aware: the surge impedance is one thing, and the surge impedance loading is something very different.
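Both quantities can be computed in a few lines. The per-metre L and C below are assumed, typical-order values for an overhead EHV line, not data for a specific circuit:

```python
import math

def surge_impedance(l_per_m, c_per_m):
    """Surge (characteristic) impedance of a lossless line: sqrt(L/C), in ohms."""
    return math.sqrt(l_per_m / c_per_m)

def sil_mw(kv_line_to_line, z_surge):
    """Surge impedance loading: (kV)^2 / Zs, directly in MW."""
    return kv_line_to_line ** 2 / z_surge

L, C = 0.97e-6, 11.5e-12   # H/m and F/m, assumed typical-order overhead-line values
zs = surge_impedance(L, C)
print(f"Zs  = {zs:.0f} ohm")
print(f"SIL = {sil_mw(400.0, zs):.0f} MW for a 400 kV line")
```

Note the distinction made above: Zs comes out in ohms and depends only on line geometry, while SIL comes out in megawatts and also depends on the operating voltage.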

Snubber circuit for IGBT Inverter in high frequency applications

Q:
First I carried out experiments with a single IGBT (IRGPS40B120UP, TO-247 package, 40 A rating) without a snubber, connected to a resistive load; the load current was 25 A at 400 V DC, and I kept it on continuously for 20 min. Then I switched the same IGBT at 10 kHz without a snubber, and the IGBT failed within 1 min. Then I connected an RC snubber across the IGBT (same model) and switched at 10 kHz; the load current was gradually increased and kept at 10 A. This time the IGBT didn't fail. So snubber circuits are essential when we go to higher switching frequencies.

What are the general guidelines for snubber circuit design in high frequency applications?

A:
You are not looking closely enough at the whole system. My first observation is that you are using the slowest-speed silicon available from IR. Even though 10 kHz is not fast, have you calculated/measured your switching losses? The second and bigger observation is that you think your circuit is resistive. If your circuit were purely resistive, a snubber would have no effect. The whole purpose of a snubber is to deal with the energy stored in the parasitic inductive elements of your circuit. Without understanding how much inductance your circuit has, you can't begin designing a cost- and size-effective snubber.

To echo one of Felipe's thoughts, you need to know the exact purpose of the snubber. Is it to slow down the dV/dt at turn-off, or is it to limit the peak voltage? Which of these you are trying to minimize, together with your final switching frequency, will dictate which snubber topology works best for you. The reason so many snubber configurations exist is that different applications require different solutions. I have used snubbers in various configurations up to 100 kHz.
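For completeness, one widely used starting point (a sketch of the generic ring-frequency method, not a claim about the original poster's circuit): measure the turn-off ringing frequency, parallel a known capacitor and measure the shifted frequency, back out the parasitic L and C, then choose R ≈ sqrt(L/C) and a snubber capacitor a few times the parasitic capacitance:

```python
import math

def parasitics_from_ring(f0, f1, c_add):
    """Estimate parasitic L and C from the turn-off ring frequency f0 and the
    lower frequency f1 measured after paralleling a known capacitor c_add."""
    ratio = (f0 / f1) ** 2            # (f0/f1)^2 = (C_par + C_add) / C_par
    c_par = c_add / (ratio - 1.0)
    l_par = 1.0 / ((2 * math.pi * f0) ** 2 * c_par)
    return l_par, c_par

def rc_snubber(l_par, c_par, c_factor=3.0):
    """Starting point: R matches the surge impedance, C a few times C_par."""
    return math.sqrt(l_par / c_par), c_factor * c_par

# Hypothetical bench numbers: 25 MHz ring drops to 12.5 MHz with 1 nF added.
l, c = parasitics_from_ring(25e6, 12.5e6, 1e-9)
r, c_sn = rc_snubber(l, c)
print(f"L_par ~ {l * 1e9:.0f} nH, C_par ~ {c * 1e12:.0f} pF, "
      f"R ~ {r:.1f} ohm, C_sn ~ {c_sn * 1e9:.1f} nF")
```

The resulting R and C are only a first cut; snubber dissipation (roughly C·V²·f per transition) still has to be checked against the switching frequency before committing to the design.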