Category: Iacdrive_blog

Paralleling IGBT modules

I’m not sure why the IGBTs would share the current just because they’re paralleled, unless external circuitry (series inductance, resistance, gate resistors) forces them to do so.

I would be pretty leery of paralleling these modules. As far as the PN diodes go, reverse recovery currents in PN diodes (especially if they are hard switched to a reverse voltage) are usually not limited by their internal semiconductor operation until they reach “soft recovery” (the point where the reverse current decays). They are usually limited by external circuitry (resistance, inductance, IGBT gate resistance). A perfect example: the traditional diode reverse recovery measurement test externally limits the reversing current to a linear falling ramp by using a series inductance. If you could reverse the voltage across the diode in a nanosecond, you would see an enormous reverse current spike.
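To make the “externally limited” point concrete, here is a minimal back-of-the-envelope sketch in Python. All values are assumed placeholders, not data from any particular module: the commutation di/dt is set by the bus voltage and the loop inductance, and a textbook triangular-recovery approximation then gives the peak reverse-recovery current.

```python
# First-order illustration (not a device model): the external series/loop
# inductance, not the diode itself, sets the commutation di/dt and hence the
# peak reverse-recovery current. Qrr and L values below are assumed.

import math

V_bus = 600.0      # volts forced across the loop during commutation (assumed)
L_loop = 100e-9    # series/loop inductance in henries (assumed)
Q_rr = 20e-6       # diode stored charge in coulombs (assumed, datasheet-style)

di_dt = V_bus / L_loop                    # commutation ramp set by the external L
I_rr_peak = math.sqrt(2.0 * Q_rr * di_dt) # triangular-recovery approximation

print(f"di/dt   = {di_dt/1e6:.0f} A/us")
print(f"Irr(pk) ~ {I_rr_peak:.0f} A")

# Halving L_loop doubles di/dt and raises Irr by ~sqrt(2): the "enormous
# spike" you would see if you could reverse the voltage in a nanosecond.
```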

Even though diode dopings are pretty well controlled these days, carrier lifetimes are not necessarily. Since one diode might “turn off” (go into a soft, decreasing reverse-current ramp, where the diode actually DOES limit its own current) before the other, you may end up with all the current going through one diode for at least a little while (the motor will look like an inductor, for all intents and purposes, during the diode turn-off). Probably better to control the max diode current externally for each driver.

Paralleling IGBT modules where the IGBT, but not the diode, has a positive temperature coefficient (PTC) is commonly done at higher powers. I personally have never done more than 3 x 600 A modules in parallel, but if you look at things like high-power wind, things get very “interesting”. It is all a matter of analysis, good thermal coupling, symmetrical layout, and current de-rating. Once you get too many modules in parallel, the de-rating gets out of hand without some kind of passive or active element to ensure current sharing. Then you know it is time to switch to a higher-current module, or to a higher-voltage, lower-current one for the same power. The relative proportion of switching losses versus conduction losses also has a big part to play.
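As a toy illustration of why the de-rating grows with module count, here is a rough static current-sharing sketch in Python. The piecewise-linear on-state parameters and their mismatch are assumed for illustration only, not taken from a datasheet.

```python
# Rough static current-sharing sketch for two paralleled IGBTs, modelling each
# on-state as Vce = V0 + r*Ic (piecewise-linear). Parameter spread is assumed.

def share_two(I_total, V0_1, r_1, V0_2, r_2):
    """Split I_total between two devices that see the same collector-emitter voltage."""
    I1 = (I_total * r_2 + V0_2 - V0_1) / (r_1 + r_2)
    return I1, I_total - I1

# Device 1 slightly "hotter" electrically: lower knee voltage (assumed mismatch)
I1, I2 = share_two(I_total=1000.0, V0_1=0.95, r_1=1.3e-3, V0_2=1.05, r_2=1.3e-3)
print(f"Device 1: {I1:.0f} A, Device 2: {I2:.0f} A")  # ~538 A vs ~462 A

# A positive temperature coefficient (r rising with temperature) pushes this
# split back toward balance, which is why PTC devices parallel more gracefully;
# the ~8% imbalance here is the kind of spread that drives current de-rating.
```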

Power converter trend

The trend toward lower losses in power converters is not apparent in all applications of power converters. It is also not apparent that the converter solution, and its losses, for a given market will end up the same. In terms of the market shift that you mention, Prof., the answer is probably that each market is splitting into a lower-efficiency and a higher-efficiency solution.

From my limited view, the reason for this is the effort and time required to do the low-loss development. The early developers of low-loss converters are now ahead, and those that were slower may never catch them. This gap is widening in a number of converter markets, with both higher-loss and lower-loss offerings continuing to be used and sold. The split does not line up with levels of development or with geography.

Some markets already have very efficient solutions, others have less efficient ones, and others had high-loss solutions that customers accepted. For some markets the path to lower-loss converters is not yet clear, and in some the requirement may never actually become real.

It does seem that there is a real case to make for any power converter market splitting in two as the opportunities presented by lowering the power loss are taken.

All low loss converters present significant challenges and are all somewhat esoteric.

For me, power supply EMI control consists of designing filtering for differential-mode and common-mode conducted emissions. The differential-mode filtering attenuates the primary-side lower-frequency switching-current fundamental and harmonic frequencies. The common-mode filtering provides a low-impedance return path for the high-frequency noise currents resulting from the high dV/dt transitions present on the power semiconductors (switching MOSFET drain, rectifier cathodes) during switching. These noise currents ring at high frequencies as they oscillate in the uncontrolled parasitic inductance and capacitance of their return-to-source path. Shortening and damping this return path allows the high-frequency noise currents to return locally instead of via the measurement copper bench and the conducted-EMI current or voltage (LISN) probe, as well as providing a better-damped ringing frequency. Shortening this return path has the added benefit of decreasing radiated emissions. In addition, proper layout of the power train, minimizing the loop area of both the primary-side and secondary-side switching currents, minimizes the associated radiated emissions.

When I mentioned the criticism of resonant-mode converters as related to the challenges of EMI filtering, I was referring to the additional differential-mode filtering required. For example, if you take a square-wave primary-side current waveform and analyze the differential frequency content, the fundamental magnitude will be lower and there will be higher-frequency components, as compared to a purely resonant approach at the same power level. It is normally the lower-frequency content that has to be filtered differentially.

Given these differences, the additional EMI-filtering volume and cost of the resonant approach may pose a disadvantage.
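For what it’s worth, the harmonic-content argument can be checked numerically. The sketch below (Python; assumed idealized waveforms carrying the same RMS current) compares the low-order harmonics of a square-wave primary current against a purely sinusoidal one.

```python
# Quick numerical check of the harmonic-content argument: a square-wave primary
# current and a sinusoidal (resonant) current with the same RMS value are
# compared via FFT. Waveform amplitudes are illustrative only.

import numpy as np

N = 4096
t = np.arange(N) / N                 # one fundamental period, normalised
I_rms = 1.0                          # same RMS current for both cases (assumed)

i_square = I_rms * np.sign(np.sin(2 * np.pi * t))       # unit square wave has RMS = 1
i_sine = I_rms * np.sqrt(2) * np.sin(2 * np.pi * t)     # sine needs sqrt(2) peak for RMS = 1

def harmonics(x, n_max=9):
    spec = np.fft.rfft(x) / N
    return [2 * abs(spec[k]) for k in range(1, n_max + 1)]  # peak amplitude, harmonics 1..n_max

print("harmonic  square   sine")
for k, (hs, hr) in enumerate(zip(harmonics(i_square), harmonics(i_sine)), start=1):
    print(f"{k:>8d}  {hs:6.3f}  {hr:6.3f}")

# Expected: square-wave fundamental ~1.27 (4/pi) vs sine ~1.41 (sqrt 2), i.e. a
# lower fundamental for the square wave, but odd harmonics (3rd, 5th, ...) that
# the pure sine does not have - consistent with the filtering argument above.
```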

Conditional stability

Conditional stability, I like to think about it this way:

The ultimate test of stability is knowing whether the poles of the closed loop system are in the LHP. If so, it is stable.

We get at the poles of the system by looking at the characteristic equation, 1 + T(s). Unfortunately, we don’t have the math available (except in classroom exercises); we have an empirical system that may or may not be reducible to a mathematical model. For power supplies, even when they can be reduced to a model, it is approximate and just about always has significant deviations from the hardware. That is why measurements persist in this industry.

Nyquist came up with a criterion for making sure that the poles are in the LHP by drawing his diagram. When you plot the vector diagram of T(s), it must not encircle the -1 point.

Bode realized that the Nyquist diagram was not good for high gain, since it plots the magnitude on a linear scale, so he came up with his Bode plot, which is what everyone uses. The Bode criterion only says that the phase must be above -180 degrees when the gain crosses 0 dB. There is nothing that says the phase can’t dip below -180 degrees before the gain reaches 0 dB.

If you draw the Nyquist diagram of a conditionally stable system, you’ll see it doesn’t surround the -1 point.
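If it helps, here is a small numerical illustration (Python, with an assumed loop gain chosen only for the example, not taken from any real supply): the phase dips well below -180 degrees while the gain is still high, yet every closed-loop pole from 1 + T(s) = 0 sits in the LHP, which for this open loop is equivalent to the Nyquist plot not encircling -1.

```python
# A minimal numerical illustration of conditional stability, using an assumed
# loop gain T(s) = K*(1 + s/wz)^2 / (s*(1 + s/wp)^2); the numbers are chosen
# only to make the phase dip below -180 deg well before crossover.

import numpy as np

K, wp, wz = 1e9, 1e2, 1e4   # assumed example values, not from any real supply

# Bode-style sweep: where does the phase sit while the gain is still high?
w = np.logspace(1, 6, 2000)
T = K * (1 + 1j*w/wz)**2 / (1j*w * (1 + 1j*w/wp)**2)
gain_db = 20*np.log10(np.abs(T))
phase_deg = np.degrees(np.unwrap(np.angle(T)))

i = np.argmin(phase_deg)
print(f"worst-case phase {phase_deg[i]:.0f} deg at {w[i]:.0f} rad/s "
      f"while the gain is still {gain_db[i]:.0f} dB")
ic = np.argmin(np.abs(gain_db))
print(f"crossover near {w[ic]:.0f} rad/s with phase {phase_deg[ic]:.0f} deg")

# Closed-loop poles from the characteristic equation 1 + T(s) = 0, i.e.
# s*(1 + s/wp)^2 + K*(1 + s/wz)^2 = 0, written out as a cubic in s.
poly = np.array([1/wp**2,
                 2/wp + K/wz**2,
                 1 + 2*K/wz,
                 K])
poles = np.roots(poly)
print("closed-loop poles:", np.round(poles, 1))
print("stable:", np.all(poles.real < 0))   # all in the LHP => no -1 encirclement
```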

If you like, I can put some figures together. Or maybe a video would be a good topic.

All this is great of course, but it’s still puzzling to think of how a sine wave can chase itself around the loop, get amplified and inverted, phase shifted another 180 degrees, and not be unstable!

Having said all this about Nyquist, it is not something I plot in the lab. I just use it as an educational tool. In the lab, in courses, or consulting for clients, the Bode plot of gain and phase is what we use.

How to suppress chaotic operation in a DCM flyback at low load

I would like to share these tips with everybody.
A current-mode-controlled flyback converter always becomes unstable at low load due to the unavoidable leading-edge current spike. It is not normally dangerous, but as a design engineer I don’t like to look at it or listen to it.

Here are three useful, unpatented tips.

First tip:
• Insert a low-pass filter, say 1 kΩ + 100 pF, between the current sense resistor and the CS input of your control IC.
• Split the 1 kΩ into two resistors: R1 toward the FET and R2 toward the control IC, with R1 << R2.
• Insert 0.5–1 pF between the drain and the R1/R2 junction. This can be made as a layer-to-layer capacitor in the PCB; it does not have to be a specific value.
• Adjust R1 until the spike at the R1/R2 junction is cancelled.
You will see that the current spike is always proportional to the negative drain voltage step at turn-on. Once adjusted, the cancellation always follows the voltage step, and you sometimes achieve miracles with it. Cost = one resistor. (Some rough numbers are sketched below.)
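Here is a rough feel for the numbers behind the first tip; only the 1 kΩ/100 pF values come from the tip itself, while the spike amplitude and width are assumed for illustration.

```python
# Rough numbers for the first tip, treating the leading-edge spike as a short
# rectangular pulse into the 1 kOhm / 100 pF low-pass filter. Spike amplitude
# and width are assumed for illustration only.

import math

R_filt = 1e3        # ohms, total of R1 + R2 from the tip
C_filt = 100e-12    # farads
tau = R_filt * C_filt
f_c = 1 / (2 * math.pi * tau)
print(f"filter corner ~{f_c/1e6:.1f} MHz, tau = {tau*1e9:.0f} ns")

V_spike, t_spike = 0.5, 100e-9          # assumed 0.5 V, 100 ns leading-edge spike
V_out_peak = V_spike * (1 - math.exp(-t_spike / tau))
print(f"spike after the RC filter: ~{V_out_peak*1e3:.0f} mV of {V_spike*1e3:.0f} mV")

# The filter alone only knocks the spike down part-way, which is why the small
# drain-coupled capacitor into the R1/R2 junction (injecting a step of the
# opposite sign, proportional to the drain voltage step) is worth the trouble.
```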

Second tip:
Having the low-pass filter from the first tip, add a small fraction of the gate driver output voltage to the current sense input, say 0.1 V, by inserting a large resistor from ‘Drive Out’ to ‘CS input’. The added, low-pass-filtered step voltage will more or less conceal the current spike. You should reduce your current sense resistor accordingly. Cost = one resistor.
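A possible sizing sketch for the second tip follows; the drive voltage and target offset are assumed values, and the 1 kΩ/100 pF filter is carried over from the first tip.

```python
# Sizing sketch for the second tip: a large resistor from the gate-drive output
# to the CS pin adds a small, low-pass-filtered offset that hides the leading
# edge spike. Drive voltage and target offset are assumed values.

V_drive = 12.0        # gate driver high level (assumed)
V_offset = 0.1        # desired offset at the CS pin, per the tip
R_filter = 1e3        # R1 + R2 of the existing low-pass filter
C_filter = 100e-12

# DC divider: Vdrive -> R_big -> CS node -> (R2 + R1) -> sense resistor (~0 ohm)
R_big = R_filter * (V_drive / V_offset - 1)
tau = (R_big * R_filter / (R_big + R_filter)) * C_filter
print(f"R_big ~ {R_big/1e3:.0f} kOhm, offset rise time constant ~ {tau*1e9:.0f} ns")

# With ~0.1 V added to the CS pin, the sense resistor should be reduced so the
# same peak-current limit is kept, as the tip notes.
```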

Third tip:
In a low-power flyback, you sometimes just need an RC network or an extra capacitor from drain to a DC point, either to reduce overshoot or to reduce noise. Connect the RC network or the capacitor to source, not to ground or Vcc. If you connect it to ground or Vcc, you will measure the added discharge current peak in the current sense resistor. Cost = nothing – just knowledge.

All tips can be used individually or combined => Less need for pre-load resistors on your output.

Right Half Plane Pole

Very few know about the Right Half Plane Pole (not an RHP zero) at high duty cycle in a DCM buck with current-mode control. Maybe because it is not really a problem.
It is said that this instability starts above 2/3 duty cycle – I think that must be with a resistive load. If loaded with a pure current source, it starts above 50% duty cycle.

Here is a little down-to-earth explanation:
If you run a buck converter at high duty cycle but in DCM, it probably works fine and is completely stable. Then imagine you suddenly open the feedback loop, leaving the peak current constant and unchanged. The duty cycle will then rush either back to 50% or up to 100% if possible. You now have a system with a negative output resistance: if the output voltage goes up, the output current will increase.

You can see it by drawing some triangles on a piece of paper: a steady-state DCM current triangle with an up-slope longer than the down-slope and a fixed peak value. Now, if you imagine that the output voltage rises, you can draw a new triangle with the same peak current. The up-slope will be longer and the down-slope shorter, but the sum of the two times will be longer than in the steady-state case. The new triangle therefore has a larger area than the steady-state triangle, which means a higher average output current. So a higher output voltage generates a higher output current if the peak current is constant. Loaded with a current source, it is clear that this is an unstable system, like a flip-flop, and it starts becoming unstable above 50% duty cycle.
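If you prefer numbers to triangles, the same argument can be run in a few lines. All values below are assumed, chosen only so the converter stays in DCM across the sweep.

```python
# Numerical version of the triangle argument: in a DCM buck with a fixed peak
# current, does the average output current rise or fall when Vout rises?

def i_out_avg(V_in, V_out, I_pk, L, T_s):
    """Average output current per switching period for one DCM triangle."""
    t_up = L * I_pk / (V_in - V_out)   # current rising at (Vin - Vout)/L
    t_down = L * I_pk / V_out          # current falling at Vout/L
    return 0.5 * I_pk * (t_up + t_down) / T_s

V_in, I_pk, L, T_s = 48.0, 2.0, 10e-6, 10e-6   # assumed example values
dV = 0.01
for V_out in (12.0, 24.0, 36.0):               # below, at, and above Vin/2
    slope = (i_out_avg(V_in, V_out + dV, I_pk, L, T_s)
             - i_out_avg(V_in, V_out - dV, I_pk, L, T_s)) / (2 * dV)
    print(f"Vout = {V_out:4.1f} V  ->  dI_out/dV_out = {slope:+.4f} A/V")

# Expected: negative slope below Vin/2 (a normal, positive output resistance),
# roughly zero at Vin/2, positive above it - the "negative output resistance" /
# flip-flop behaviour described above with a current-source load.
```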

However, when you close the feedback loop, the system is (conditionally) stable and the loop gain is normally so high at the RHP Pole frequency that it requires a huge gain reduction to make it unstable.

It’s like riding a bike. A bike has two wheels and can tilt to either side – it is a system with a low-frequency RHPP, like a flip-flop. If you stand still, it will certainly tilt to the left or to the right, because you have no way to adjust your balance back. But if you are moving, you have a system with feedback where you can immediately correct any imbalance by turning the handlebars. As we know, this system is stable unless you have drunk a lot of beers.

SCADA & HMI

SCADA will have a set of KPIs that are used by the PLCs/PACs/RTUs as standards to compare against the readings coming from the intelligent devices they are connected to, such as flowmeters, sensors, pressure gauges, etc.

HMI is a graphical representation of your process system that is provided with the KPI data and receives the readings from the various devices through the PLCs/PACs/RTUs. For example, you may be using a PLC with 24 I/O blocks connected to various intelligent devices that cover part of your water treatment plant. The HMI software provides the operator with a graphical view of the treatment plant that you customize so that your virtual devices and actual devices are synchronized with the correct I/O blocks in your PLC. So, when an alarm is triggered, instead of the operator receiving a message that the 15th I/O block on PLC 7 failed, the operator could see that the pressure gauge in a boiler reached its maximum safety level, triggering a shutdown and awaiting operator approval for restart.
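A toy sketch of that mapping idea in Python; the tag names, addresses, and limits are entirely hypothetical.

```python
# Toy illustration of the point above: the HMI layer maps raw PLC I/O addresses
# to named process points so an alarm reads as plant equipment, not as an I/O
# block number. Tag names and addresses are entirely hypothetical.

TAG_MAP = {
    ("PLC7", 15): {
        "tag": "BOILER_01_PRESSURE",
        "description": "Boiler 1 steam pressure",
        "units": "bar",
        "alarm_high": 12.0,
        "alarm_text": "Boiler 1 pressure reached maximum safety level - "
                      "shutdown initiated, awaiting operator approval to restart",
    },
}

def render_alarm(plc, io_block, value):
    point = TAG_MAP.get((plc, io_block))
    if point is None:
        return f"{plc} I/O block {io_block} in alarm (unmapped point)"
    if value >= point["alarm_high"]:
        return f"{point['description']}: {value} {point['units']} - {point['alarm_text']}"
    return f"{point['description']}: {value} {point['units']} (normal)"

print(render_alarm("PLC7", 15, 13.2))   # operator sees the process, not the I/O block
```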

Here is some more info I got from my colleague who is the expert on the HMI market; this is a summary from the scope of his last market study, which is about a year old.

HMI software ranges in complexity from a simple PLC/PAC operating interface upward; as plant systems have evolved, HMI functionality and importance have evolved as well. HMI is an integral component of a Collaborative Production Management (CPM) system; put simply, you can define that as the integration of Enterprise, Operations, and Automation software into a single system. Collaborative Production Systems (CPS) require a common HMI software solution that can visualize the data and information required at this converged point of operations and production management. HMI software is the bridge between your automation systems and your operations management systems.

An HMI software package typically performs functions such as process visualization and animation, data acquisition and management, process monitoring and alarming, management reporting, and database serving to other enterprise applications. In many cases, an HMI software package can also perform control functions such as basic regulatory control, batch control, supervisory control, and statistical process control.

“Ergonometrics,” where improved ergonomics help improve KPI and metric results, requires deploying the latest HMI software packages. These offer the best resolution to support 3D solutions and visualization based on technologies such as Microsoft Silverlight. Integrating real-time live video into HMI software tools provides another excellent opportunity to maximize operator effectiveness. Live video provides a “fourth dimension” for intelligent visualization and control solutions. Finally, the need for open and secure access to data across the entire enterprise drives the creation of a single environment where these applications can coexist and share information. This environment requires the latest HMI software, capable of providing visualization and intelligence solutions for automation, energy management, and production management systems.

Automation engineering

Automation generally involves taking a manufacturing, processing, or mining process that was previously done with human labor and creating equipment/machinery that does it without human labor. Often, in automation, engineers will use a PLC or DCS with standard I/O, valves, VFDs, RTDs, etc. to accomplish this task. Control engineering falls under the same umbrella in that you are automating a process, such as controlling the focus of a camera or maintaining the speed of a car with the gas pedal; but often you are designing something like the autofocus on a camera or the cruise control on an automobile, and you oftentimes have to design the controls using FPGAs or circuits and components entirely of the engineering team’s own design.
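For the cruise-control example, here is a minimal sketch of what that control-engineering work looks like in Python; the vehicle model and PI gains are assumed, purely illustrative.

```python
# A minimal sketch of the cruise-control example mentioned above: a PI
# controller holding a vehicle's speed against drag. The vehicle model and
# gains are assumed, not from any real product.

def simulate_cruise(v_set=27.0, t_end=60.0, dt=0.1):
    m, c_drag = 1200.0, 0.6          # assumed vehicle mass (kg) and quadratic drag coeff
    kp, ki = 800.0, 120.0            # assumed PI gains
    v, integral = 20.0, 0.0          # start below the set speed
    for _ in range(int(t_end / dt)):
        error = v_set - v
        integral += error * dt
        force = max(0.0, min(kp * error + ki * integral, 4000.0))  # throttle-limited
        accel = (force - c_drag * v * v) / m
        v += accel * dt
    return v

print(f"speed after 60 s: {simulate_cruise():.2f} m/s (set point 27.00 m/s)")
```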

When I first started, I started on the DCS side. Many of the large continuous-process industries only let chemical engineers like myself anywhere near the DCS. The EEs landed the instruments and were done. The attitude was that you had to be a process engineer before you became a controls engineer. In the PLC world it was the opposite; the EEs dominated. Now it doesn’t line up along such sharp lines anymore. But there are lots of people doing control/automation work who are clueless when it comes to understanding the process. When that is the case, it is crucial they are given firm oversight by someone who does understand it.

On operators, I always tell young budding engineers to learn to talk to operators, with a little advice: do not discount their observations just because their analysis of the cause is unbelievable; their observations are generally spot on. Someone designing a control system must be able to think like an operator, understand how operators behave, and anticipate how they will use the control system. This is key to a successful project. If the operators do not like or understand the control system, they will kill the project. This is different from understanding how a process works, which is also important.

Electronic industry standards

You know standards for the electronic industry have been around for decades, so each of the interfaces we have discussed does have a standard. Those standards may be revised but will still be used by all segments of our respective engineering disciplines.

Note, for example, that back in the early 1990s many big companies (HP, Boeing, Honeywell …) formed a standards board and developed software standards (basic recommendations) for programming flight systems. It was not the government; it was the industry that took on the effort. The recommendations are still used. So an effort like this first needs a meeting of the minds in the industry.

Now we have plenty of standards on the books for the industry: RS-422, RS-232, 802.1 … and the list goes on and on. The point is that most companies conform to the standards that were the preferred method when their product was developed.

In the discussion I have not seen what the top preferred interfaces are. I know that in many of the developments I have been involved in, we ended up using protocol converters: RS-232 to 802.3, RS-422 to RS-485, and so on. That’s the way it has been in control systems, monitoring systems, launch systems, and factory automation. And in a few projects no technology existed for the interface layer, and we had to build it from scratch. Note the evolution of the ARPANET to Ethernet to the many variations that are available today.

So for the short haul, if I wanted to be more competitive I would use multiple interfaces on my hardware, say USB, wireless, and RS-422, at least for new developments. With the advancement of PSoCs and other forms of programmable logic, interface solutions are available to the engineer.

Start the interface standards with the system engineers and a little research on the characteristics of the many automation components; select the ones that comply with the goals, and the ones that don’t will eventually become obsolete. If anything, work on some system standards. If the customer is defining the system, lend him a systems engineer and make the case for the devices your system or box can support; if you find your product falls short, build a new version. Team with other automation companies on projects and learn from each other. It’s easy to find reasons why you can’t succeed because of product differences, so break down the issues into manageable objectives and solve one issue at a time. As they say, divide and conquer.

Industrial automation process

My statement “the time it takes to start or stop a process is immaterial” is somewhat out of context. The complete thought is “the time it takes to start or stop a process is immaterial to the categorization of that process into either the continuous type or the discrete type,” which is how this whole discussion got started.

I have the entirely opposite view of automation. “A fundamental practice when designing a process is to identify bottlenecks in order to avoid unplanned shutdowns”.

Don’t forget that the analysis should include the automatic control system. This word of advice is pertinent to whichever “camp” you choose to join.

Just as you have recognized the strong analogies and similarities between “controlling health care systems” and “controlling industrial systems”, there are strong analogies between so-called dissimilar industries as well, between the camp which calls itself “discrete” and the camp which waves the “continuous” flag.

Your concern about the time it takes to evaluate changes in parameter settings for your cement kiln is a topic involving economic risks, which could include discussions of how to mitigate those risks, such as modeling the process virtually for testing and evaluation rather than playing with the real-world process. This is applicable to both “camps”.

The challenge of starting up or shutting down your cement kiln is the same challenge as starting up or shutting down a silicon crystal reactor or a wafer processing line in the semiconductor industry. The time scales may be different, but the economic risks may be the same, if not greater, for the electronics industry.

I am continuously amazed at how I can borrow methods from one industry and apply them to another. For example, I had a project controlling a conveyor belt at a coal mine which was 2.5 miles long: several million pounds of belting, not to mention the coal itself! The techniques I developed for tracking the inventory of coal on that belt laid the basis for the techniques I used to track the leading and trailing edges of bread dough on a conveyor belt 4 feet long. We used four huge 5 kV motors and VFDs at the coal mine compared to a single 0.75 HP, 480 VAC VFD at the bakery, and startups/shutdowns were orders of magnitude different, but the time frame was immaterial to what the controls had to do and to the techniques I applied to do the job.
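Here is a sketch of that belt-tracking bookkeeping in Python; the belt length, speed, and batch labels are made up, and the same idea applies whether the belt carries coal or bread dough.

```python
# Sketch of the belt-tracking idea described above: integrate belt speed each
# scan to advance the position of material edges along the belt. Values are
# made up for illustration.

def track_belt(belt_length_m, speed_profile, dt, load_events):
    """Advance tracked material edges along the belt; return edges still on it.

    speed_profile: iterable of belt speeds (m/s), one per scan
    load_events:   dict of scan_index -> label for material loaded at the tail end
    """
    edges = []                                   # (label, position from tail in metres)
    delivered = []
    for scan, v in enumerate(speed_profile):
        edges = [(label, pos + v * dt) for label, pos in edges]   # dead-reckon by speed
        delivered += [label for label, pos in edges if pos >= belt_length_m]
        edges = [(label, pos) for label, pos in edges if pos < belt_length_m]
        if scan in load_events:
            edges.append((load_events[scan], 0.0))
        # a real system would re-anchor on a sensor (belt scale, photo-eye) to
        # bound the integration drift
    return edges, delivered

speeds = [2.0] * 300                             # 300 scans at 2 m/s, dt = 1 s
on_belt, off_belt = track_belt(belt_length_m=400.0, speed_profile=speeds, dt=1.0,
                               load_events={0: "batch A", 150: "batch B"})
print("still on belt:", on_belt)
print("delivered:", off_belt)
```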

I once believed that I needed to be in a particular industry in order to feel satisfied in my career. What I found out is that I have a passion for automation which transcends the particular industry I am in at the moment and this has led to a greater appreciation of the various industrial cultures which exist and greater enjoyment practicing my craft.

So these debates about discrete vs. continuous don’t affect me in the least. My concern is that the debates may impair other more impressionable engineers from realizing a more fulfilling career by causing them to embrace one artificial camp over the other. Therefore, my only goal of engaging in this debate is to challenge any effort at erecting artificial walls which unnecessarily drive a damaging wedge between us.

Home automation concept

The concept of home automation on a global scale is a good one. How to implement such a technology on a global scale is an interesting problem, or I should say a set of issues to be resolved. Before global acceptance can be accomplished, home automation may need a strategy that starts with a look at companies that have succeeded in getting global acceptance of their products.

If we look at which companies have the most products distributed around the world, Intel is one of them. What’s interesting is that this company has used automation in its fabs for decades. This automation has allowed it to produce products faster and cheaper than the rest of the industry. The company continues to invest in automation and in the ability to evolve with technology and management. We have many companies that compete on the world stage; I don’t think many of them distribute as much product. So to make home automation accepted on a global scale, the industry and its factories have to evolve to compete. That mission can be accomplished by adopting a strategy that updates the automation in their factories, stops using products that were developed in the 1970s (another way of saying COTS), and progresses to current and new systems. A ten-year-old factory may be considered obsolete if the equipment inside is as old as the factory.

Now for cost: when I think of PLCs or commercial controllers, I see COTS products that may be using obsolete parts that are no longer in production, or old boards, so I see higher manufacturing cost and reduced reliability. Many procurement people evaluate risk in a way that rates older boards lower in risk for the short term, which is not a good evaluation for the long term. The cost is a function of how much product can be produced at the lowest cost and how efficient and competitive the company producing it is. So time is money. The responsibility for cost lies with the company and its ability to produce a competitive product, not with the government.

Now on to control systems and safety: if the automation system is used in the house, safety has to be a major consideration. I know that at Intel fabs, if you violate any safety rule you won’t be working at that company long. To address safety, the product must conform to the appropriate standards. Safety should be a selling point for home automation. Automation engineers should understand and remember that safety is one of the main considerations for an engineer. If someone gets hurt or killed because of a safety issue, the first person looked at is the engineer.

Now, 30% energy savings in my book is not enough; 35 to 40 percent should be the goal. Solar cells have improved, but they are most effective in the southwestern US. Stirling engines are 1960s designs and use rare gases such as helium, which may not be a renewable resource. Wind generators need space and are electromechanical, so reliability and maintenance need improving.

Now on to the interface standards: most modern factories that produce processors use the Generic Equipment Model (GEM) standard, and it works well. As far as what and when to use a standard interface: one box produced by one company may use RS-422 where another company may use RS-485, so the system engineer should resolve these issues before detailed design starts. Check with the IEEE, or you may be able to find the spec at everyspec.com; it is a good place to look for some of the specs needed.

So I conclude: many issues exist, and when they are broken down, home automation is viable. It needs a concerted effort and commitment from at least the companies and management that produce automation products, and a different model for manufacturing and growing the home systems.

Home automation with a focus on energy savings as a goal is a good thing. We have a lot of work to make that happen.