Category: Iacdrive_blog

Conditional stability

Here is how I like to think about conditional stability:

The ultimate test of stability is knowing whether the poles of the closed loop system are in the LHP. If so, it is stable.

We get at the poles of the system by looking at the characteristic equation, 1 + T(s). Unfortunately, we don’t have the math available (except in classroom exercises); we have an empirical system that may or may not be reducible to a mathematical model. For power supplies, even if they can be reduced to a model, it is approximate and almost always has significant deviations from the hardware. That is why measurements persist in this industry.

Nyquist came up with a criterion for making sure that the poles are in the LHP by drawing his diagram: when you plot the vector diagram of T(s), it must not encircle the -1 point.

Bode realized that the Nyquist diagram was not good for high gain, since it plots the magnitude on a linear scale, so he came up with the Bode plot, which is what everyone uses. The Bode criterion only says that the phase must be above -180 degrees when the gain crosses 0 dB. There is nothing that says the phase can’t dip below -180 degrees before the 0 dB crossover.

If you draw the Nyquist diagram of a conditionally stable system, you’ll see that it does not encircle the -1 point.
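If you want to see this numerically, here is a minimal sketch in Python (numpy/matplotlib). The loop gain T(s) = 1e4·(1 + s/20)² / ((1 + s)³ (1 + s/500)²) is an invented textbook-style example, not a measured power-supply loop; it is just shaped so the phase dips below -180 degrees while the gain is still well above 0 dB and then recovers before crossover:

```python
# Minimal sketch of a conditionally stable loop gain (illustrative transfer
# function, not a measured power-supply loop):
#   T(s) = 1e4 * (1 + s/20)^2 / ((1 + s)^3 * (1 + s/500)^2)
# Its phase dips below -180 deg while the gain is still well above 0 dB, yet
# the Nyquist curve does not encircle the -1 point, so the closed loop is stable.
import numpy as np
import matplotlib.pyplot as plt

w = np.logspace(-1, 3, 5000)                 # rad/s
s = 1j * w
T = 1e4 * (1 + s/20)**2 / ((1 + s)**3 * (1 + s/500)**2)

gain_db = 20 * np.log10(np.abs(T))
phase   = np.degrees(np.unwrap(np.angle(T)))

# Report the conditional-stability region and the phase margin at crossover.
dip = (phase < -180) & (gain_db > 0)
xover = np.argmin(np.abs(gain_db))           # sample nearest the 0 dB crossover
print("phase < -180 deg while gain > 0 dB between "
      f"{w[dip][0]:.1f} and {w[dip][-1]:.1f} rad/s")
print(f"crossover ~{w[xover]:.1f} rad/s, phase margin ~{180 + phase[xover]:.0f} deg")

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

# Bode view (what we actually measure in the lab).
ax1.semilogx(w, gain_db, label="gain (dB)")
ax1.semilogx(w, phase, label="phase (deg)")
ax1.axhline(-180, color="r", ls=":")
ax1.axhline(0, color="k", ls=":")
ax1.set_xlabel("rad/s"); ax1.legend()

# Nyquist view, zoomed near the -1 point (the low-frequency part of the
# curve extends far off-scale to the left and comes back without enclosing -1).
ax2.plot(T.real, T.imag)
ax2.plot(T.real, -T.imag, "--")
ax2.plot(-1, 0, "rx", ms=10)
ax2.set_xlim(-5, 1); ax2.set_ylim(-3, 3)
ax2.set_xlabel("Re"); ax2.set_ylabel("Im")
plt.tight_layout(); plt.show()
```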

If you like, I can put some figures together. Or maybe a video would be a good topic.

All this is great of course, but it’s still puzzling to think of how a sine wave can chase itself around the loop, get amplified and inverted, phase shifted another 180 degrees, and not be unstable!

Having said all this about Nyquist, it is not something I plot in the lab. I just use it as an educational tool. In the lab, in courses, or consulting for clients, the Bode plot of gain and phase is what we use.

How to suppress chaotic operation in a DCM flyback at low load

I would like to share these tips with everybody.
A current-mode-controlled flyback converter always becomes unstable at low load due to the unavoidable leading-edge current spike. It is not normally dangerous, but as a design engineer I don’t like to look at it or listen to it.

Here are three useful, unpatented tips.

First tip:
• Insert a low-pass filter, say 1 kohm + 100 pF, between the current sense resistor and the CS input of your control IC.
• Split the 1 kohm into two resistors: R1 to the FET and R2 to the control IC, with R1 << R2.
• Insert 0.5–1 pF between the drain and the R1/R2 junction. This can be made as a layer-to-layer capacitor in the PCB. It does not have to be a specific value.
• Adjust R1 until the spike at the R1/R2 junction is cancelled.
You will see that the current spike is always proportional to the negative drain voltage step at turn-on. Once adjusted, the cancellation always follows the voltage step, and you sometimes achieve miracles with it. Cost = one resistor.

Second tip:
Keeping the low-pass filter from the first tip, add a small fraction of the gate driver output voltage to the current sense input, say 0.1 V, by inserting a large resistor from ‘Drive Out’ to ‘CS input’. The added low-pass-filtered step voltage will more or less conceal the current spike. You should reduce your current sense resistor accordingly. Cost = one resistor.
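To put rough numbers on the first two tips, here is a small back-of-the-envelope sketch. The 12 V gate-drive level is an illustrative assumption (the 0.1 V offset target is from the tip above), and the exact values will depend on your controller:

```python
# Rough sizing sketch for the first two tips. The gate-drive amplitude (12 V)
# is an illustrative assumption; the 0.1 V offset target is from the tip above.
import math

# Tip 1: low-pass filter between the sense resistor and the CS pin.
R_filter = 1e3        # ohms (R1 + R2)
C_filter = 100e-12    # farads
f_corner = 1 / (2 * math.pi * R_filter * C_filter)
print(f"CS filter corner: {f_corner/1e6:.1f} MHz")   # ~1.6 MHz

# Tip 2: inject a small fraction of the gate-drive voltage into the CS pin.
# Seen from the CS pin, the injection resistor and the ~1 kohm filter
# resistance form a divider (the sense resistor itself is negligible).
V_drive  = 12.0       # volts, assumed gate-drive high level
V_offset = 0.1        # volts of added offset at the CS pin
R_inject = R_filter * (V_drive / V_offset - 1)
print(f"Injection resistor from Drive Out to CS: ~{R_inject/1e3:.0f} kohm")  # ~119 kohm
```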

Third tip:
In a low-power flyback, you sometimes just need an RC network, or just an extra capacitor, from the drain to a DC point, either to reduce overshoot or to reduce noise. Connect the RC network or the capacitor to the source, not to ground or Vcc. If you connect it to ground or Vcc, you will measure the added discharge current peak in the current sense resistor. Cost = nothing – just knowledge.

All tips can be used individually or combined => Less need for pre-load resistors on your output.

Right Half Plane Pole

Very few know about the Right Half Plane Pole (not an RHP zero) at high duty cycle in a DCM buck with current-mode control. Maybe because it is not really a problem.
It is said that this instability starts above 2/3 duty cycle – I think that must be with a resistive load. If loaded with a pure current source, it starts above 50% duty cycle.

Here is a little down-to-earth explanation:
If you run a buck converter at a high duty cycle but in DCM, it probably works fine and is completely stable. Then imagine you suddenly open the feedback loop, leaving the peak current constant and unchanged. The duty cycle will then rush either back to 50% or up to 100% if possible. You now have a system with a negative output resistance – if the output voltage goes up, the output current will increase.

You can see it by drawing some triangles on a piece of paper: a steady-state DCM current triangle with an up-slope time longer than the down-slope time and a fixed peak value. Now, if you imagine that the output voltage rises, you can draw a new triangle with the same peak current. The up-slope time will be longer and the down-slope time will be shorter, but the sum of the two times will be longer than in the steady-state case. The new triangle therefore has a larger area than the steady-state triangle, which means a higher average output current. So a higher output voltage generates a higher output current if the peak current is constant. Loaded with a current source, it is clear that this is an unstable system, like a flip-flop, and it starts becoming unstable above 50% duty cycle.
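The triangle argument can be put into a few lines of arithmetic. Here is a sketch with made-up values (Vin = 20 V, L = 10 µH, 1 A peak, 10 µs period); it just evaluates the average current of the DCM triangle at a fixed peak current, once below Vin/2 and once above:

```python
# Sketch of the triangle argument: in DCM with the peak current held fixed,
# the average output current RISES when the output voltage rises (negative
# incremental output resistance) once Vout > Vin/2, i.e. once the up-slope
# time exceeds the down-slope time (more than 50% of the conduction interval).
# Vin, L, Ipk and the switching period are arbitrary illustrative values.
Vin, L, Ipk, T = 20.0, 10e-6, 1.0, 10e-6   # V, H, A, s

def i_out_avg(Vout):
    t_up   = L * Ipk / (Vin - Vout)        # time to ramp up to Ipk
    t_down = L * Ipk / Vout                # time to ramp back down to zero
    return Ipk * (t_up + t_down) / (2 * T) # triangle area / switching period

for Vout in (6.0, 14.0):                   # one case below Vin/2, one above
    dI = i_out_avg(Vout + 0.1) - i_out_avg(Vout)
    sign = "rises" if dI > 0 else "falls"
    print(f"Vout = {Vout:4.1f} V: Iout = {i_out_avg(Vout):.3f} A, "
          f"and Iout {sign} when Vout rises")
```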

However, when you close the feedback loop, the system is (conditionally) stable and the loop gain is normally so high at the RHP Pole frequency that it requires a huge gain reduction to make it unstable.

It’s like riding a bike. A bike has two wheels and can therefore tilt to either side – it is a system with a low-frequency RHPP, like a flip-flop. If you stand still, it will certainly tilt to the left or to the right, because you have no way to adjust your balance back. But if you are moving, you have a system with feedback where you can immediately correct an imbalance by turning the handlebars. As we know, this system is stable unless you have drunk a lot of beers.

Experience: Flyback

My first SMPS design was a multiple-output flyback. This was in 1976, when there were no PWM controllers. So I used a 556 (one half as a ~30 kHz oscillator, the other half as the PWM generator) plus a 3904 NPN whose VBE was the reference and which also provided gain for the error-amp function. I haphazardly wound the windings on a 25 mm toroid. It rang like a tank circuit. I quickly abandoned that transformer, and after a year and many hours on the bench, I had a production-grade SMPS.
Since it went into a private-aircraft weather reader system, I needed an exterior SMPS, which was a buck converter. I used an LM105 linear regulator with positive feedback to make it oscillate (from one of National’s app notes). It worked, but I soon learned that the electrolytic capacitors lost all of their capacitance at -25 deg C. It later worked with military-grade capacitors.

I had small hills of dead MOSFETs and their directly attached controllers. When the first power MOSFETs emerged in 1979, I blew up so many that I almost wrote them off. They had some real issues with D-S voltage overstress. They have come a long way since.

As far as very wide-range flyback converters go, please dig up AN1327 on the ONSEMI website. It describes a control strategy (fixed off-time, variable on-time) and the transformer design.
The precursor to that was a 3 W flyback that drove 3 floating gate-drive circuits and had an input range of 85 VAC to 576 VAC. It was for a 3-phase industrial motor drive. The toughest area was the transformer. To meet the isolation requirements of UL and IEC, it would have required a very large core and bobbin plus a lot of tape. The PCB was 50 mm x 50 mm and 9 mm thick. A magnetics designer named Jeff Brown from Cramerco.com is now my magnetics God. He designed me a custom core and bobbin, 10 mm high on basically an EF15-sized core. The 3-piece bobbin met all of the spacing requirements without tape. The customer was expecting a 2–3 tier product offering for the different voltage ranges, but instead we could offer just one. They were thrilled.

It can be done; watch your breakdown voltages, spacings, and RMS currents. I found that around 17–20 watts is about the practical limit for an EF40 core before the transformer RMS currents get too high.

Experience: Design

I tell customers that at least 50% of the design effort is the layout and routing, done by someone who knows what they are doing. Layer stackup is very critical for multiple-layer designs. Yes, a solid design is required, but the perfect design goes down in flames with a bad layout. Rudy Severns said it best in one of his early books: you have to “think RF” when doing a layout. I have followed this philosophy for years with great success. Problems with a layout person who wants to run the auto-router or doesn’t understand analog layout? No problem: you, as the design engineer, do not have to release the design until it is to your satisfaction.

I have had Schottky diodes fail because the PIV was exceeded: circuit inductance caused just enough of a very high frequency ring (very hard to see on a scope) to exceed the PIV. Know your circuit Z’s; keep your traces short and fat.

I have fixed a number of problems associated with capacitor RMS current ratings on AC-DC front ends. Along with this is the peak inrush current for a bridge rectifier at turn-on and, in some cases, during steady state. The unit can be turned on at the 90-degree phase angle into a capacitive load. This must be analyzed with assumptions for the input resistance, and/or an inrush current limiting circuit must be added.
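As a back-of-the-envelope example of that turn-on analysis (all values are illustrative, and diode drops and line inductance are ignored), the first current peak is roughly the instantaneous line voltage divided by whatever resistance is in the path:

```python
# Back-of-the-envelope worst-case inrush estimate for a bridge rectifier
# feeding a discharged bulk capacitor, switched on at the 90-degree point.
# All values are illustrative assumptions; diode drops and line inductance
# are ignored.
import math

V_rms    = 230.0                 # line voltage, V rms
R_source = 0.3                   # source/wiring/fuse resistance, ohms
R_esr    = 0.2                   # bulk capacitor ESR, ohms

V_peak = math.sqrt(2) * V_rms    # instantaneous voltage at the 90-deg point
I_peak = V_peak / (R_source + R_esr)
print(f"first-cycle peak inrush with no limiter: ~{I_peak:.0f} A")   # ~650 A

# With, say, a 5-ohm NTC thermistor (cold resistance) added in series:
print(f"with 5-ohm NTC: ~{V_peak / (R_source + R_esr + 5.0):.0f} A")
```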

A satellite power supply had 70 degrees of phase margin on the bench with a resistive load, but oscillated on the real load. Measuring the loop with the AP200 on the real load showed the phase margin was zero. Test the power supply on the real load before going to production, and then test a random sampling during the life of the product.

I used MathCAD for designs until the average models came out for SMPS. Yes, the equations are nice to see and work with, but they are just models nonetheless. I would rather have PSpice do the math while I pay attention to the models used and the overall design effort. Creating large closed-form equations is fraught with pitfalls, trapdoors, and landmines. Plus, hundreds of pages of MathCAD, which I have done, are hard to sell to the customer during a design review (most attendees drift off after page 1). The PSpice schematics are more easily sold, and then modified as needed, with better understanding all around.

Home automation concept

The concept of home automation on a global scale is a good one. How to implement such a technology on a global scale is an interesting problem, or I should say a set of issues to be resolved. Before global approval can be accomplished, home automation as a product may need a strategy that starts with a look at companies that have succeeded in getting global approval of their products.

If we look at which companies have the most products distributed around the world, we see that Intel is one of them. What’s interesting is that this company has used automation in its fabs for decades. This automation has allowed it to produce its products faster and cheaper than the rest of the industry. The company continues to invest in automation and in the ability to evolve with technology and management. We have many companies that compete on the world stage; I don’t think many of them distribute as much product. So to compete at a level that makes home automation accepted, and to accomplish global acceptance, the industry and the factories have to evolve. That can be accomplished by adopting a strategy that updates the automation in their factories, stops using products that were developed in the 1970s (another way of saying COTS), and progresses to current and new systems. A ten-year-old factory may be considered obsolete if the equipment inside is as old as the factory.

Now for cost: when I think of PLCs or commercial controllers, I see a COTS product that may be using obsolete parts that are no longer in production, or old boards. So I see higher manufacturing cost and a reduction in reliability. Many procurement people evaluate risk in a way that may rate older boards lower in risk for the short term, which is not a good evaluation for the long term. The cost is a function of how much product can be produced at the lowest cost and how efficient and competitive the company that produces the product is. So time is money. The responsibility for cost lies with the company and its ability to produce a competitive product, not with the government.

Now on to control systems and safety: if the automation system is used in the house, safety has to be a major consideration. I know that at Intel fabs, if you violate any safety rule you won’t be working at that company long. To address safety, the product must conform to the appropriate standards. Safety should be a selling point for home automation. Automation engineers should understand and remember that safety is one of the main considerations for an engineer. If someone gets hurt or killed because of a safety issue, the first person looked at is the engineer.

Now, a 30% energy saving in my book is not enough; 35 to 40 percent should be the goal. Solar cells have improved, but they are most effective in the southwest US. Stirling engines are 1960s designs and use rare gases such as helium, which may not be a renewable resource. Wind generators need space and are electromechanical, so reliability and maintenance need improving.

Now on to the interface standards: most modern factories that produce processors use the GEM (Generic Equipment Model) standard, and it works well. As far as what and when to use a standard interface, one box produced by one company may use RS-422 where another company may use RS-485, so the system engineer should resolve these issues before detailed design starts. Check with the IEEE, or you may be able to find the spec at everyspec.com, which is a good place to look for some of the specs needed.

So I conclude: many issues exist, but when broken down, home automation is viable. It needs a concerted effort and commitment from at least the companies and management that produce products for automation, and a different model for manufacturing and growing the home systems.
Home automation with a focus on energy savings as a goal is a good thing. We have a lot of work to make that happen.

PPE (Personal Protective Equipment)

When I think of using PPE as a controls engineer, I think
about electrical shock and arc-flash safety in working with electrical devices.

The PPE (Personal Protective Equipment) requirements for working on live electrical
equipment are making commissioning, startup, and tuning of electrical
control systems awkward and cumbersome. We are at a stage where the use of PPE
is now required but practice has not caught up with the requirements. While
many are resisting this change, it seems inevitable that we will need to wear
proper PPE when working on any control panel with exposed voltages of
50 volts or more.

With many electrical panels not labeled for shock and arc-flash hazard levels,
the default PPE requires a full (Category 2+) suit in most cases, which is very
awkward indeed. What can we do to allow us to work on live equipment in a safe
manner that meets the now not so new requirements for shock and arc-flash
safety?

Increasingly, the thinking is to design our systems for shock and arc-flash
safety. Historically, low voltage (less than 50 volts), 120 VAC, and 480 VAC power
were often placed in the same control enclosure. While this is cost effective,
it is now problematic when wanting to do work on even the low-voltage area of
the panel. The rules do not appear to allow distinguishing one area of a panel as
safe while another is unsafe; the entire panel is either one or the other. One
could attempt to argue this point, but wouldn’t it be better to just design our
systems so that we are clearly on the side of compliance?

Here are my thoughts to improve electrical shock and arc flash safety by
designing this safety into electrical control panels.

1. Keep the power components separate from the signal level components so that
maintenance and other engineers can work on the equipment without such hazards
being present. That’s the principle. What are some ideas for putting this into
practice?

2. Run as much as possible on 24 VDC. This would include the PLCs
and most other panel devices. A separate panel would then house only these shock-
and arc-flash-safe electrical components.

3. Power Supplies could be placed in a separate enclosure or included in the
main (low voltage) panel but grouped together and protected separately so that
there are no exposed conductors or terminals that can be reached with even a
tool when the control panel door is opened.

4. Motor controls running at anything over 50 volts should be contained in a
separate enclosure. Try remoting the motor controls away from the power devices
where possible. This includes putting the HIM (keypad) modules for a VFD
(Variable Frequency Drive), for example, on the outside of the control panel, so
the panel does not have to be opened. Also, using traditional MCC (Motor
Control Center) enclosures is looking increasingly attractive to minimize the
need for PPE.

For example, a “finger safe” design does not meet the requirements for arc-flash
safety. Also, making voltage measurements to check for power is considered one
of the most hazardous activities, if not the most hazardous, as far as arc flash goes.

OPC drivers advantage

A few years back, I had a devil of a time getting some OPC Modbus TCP drivers to work with Modbus RTU-to-TCP converters. The OPC drivers could not handle the 5-digit RTU addressing. You need to make sure the OPC driver you try actually works with your equipment; try before you buy is definite here. Along with some of the complications, like dropped connections due to minor network glitches (a real headache and worth a topic all its own), is the ability to use tag pickers and the like. The best thing to happen to I/O addressing is the use of data objects in the PLC and HMI/SCADA. The other advantage OPC can give you is the ability to get more quality information on your I/O. Again, check before you buy. In my experience, the only protocol worse than Modbus in the quality-info department is DDE, and that is pretty well gone. This still does not help when the Modbus slave reports stale data as if it were fresh. No I/O driver can sort that out; you need a heartbeat.

A shout-out to all you equipment manufacturers putting Modbus RTU into equipment because it’s easy: PLEASE BUILD IN A HEARTBEAT that us integrators can monitor, so we can be sure the data is alive and well.

Also, while you try before you buy, you want your HMI/SCADA to be able to tell the difference between Good Read, No Read, and Bad Read, particularly with an RTU network.
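Here is a sketch of the kind of heartbeat check I mean. The register address, poll rate, and the read_register call are placeholders you would map onto your own driver or OPC tags:

```python
# Sketch of a heartbeat/staleness check for a Modbus (or any polled) slave.
# The idea: the slave increments a counter register every scan; if the value
# stops changing, the rest of the data is stale even though reads "succeed".
# read_register stands in for your actual driver/OPC read call, and the
# register address and timeout are illustrative assumptions.
import time
from typing import Callable, Optional

def monitor_heartbeat(read_register: Callable[[int], Optional[int]],
                      hb_address: int = 4000,
                      poll_s: float = 1.0,
                      stale_after_s: float = 5.0) -> None:
    last_value = None
    last_change = time.monotonic()
    while True:
        value = read_register(hb_address)      # None models a "No Read"
        now = time.monotonic()
        if value is None:
            print("NO READ: comms failure")
        elif value != last_value:
            last_value, last_change = value, now
            print(f"heartbeat = {value}: data alive")
        elif now - last_change > stale_after_s:
            print("GOOD READ but heartbeat frozen: data is STALE")
        time.sleep(poll_s)
```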

High voltage power delivery

You already know from your engineering that higher voltages result in lower operational losses for the same amount of power delivered. The bulk capacity of 3000 MW obviously has a great influence on the investment costs; it determines the voltage level and the required number of parallel circuits. Higher DC voltage levels have become more feasible for bulk power projects (such as this one), especially when the transmission line is more than 1000 km long. So, on the economics, investment costs for 800 kV DC systems have been much lower since the ’90s. Aside from the reduction of overall project costs, HVDC transmission lines at higher voltage levels require less right-of-way. Since you will also require fewer towers, as discussed below, you will also reduce the duration of the project (at least for the line).
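To put rough numbers on the "higher voltage, lower losses" point: for the same 3000 MW, the line current and the I²R loss scale down quickly with voltage. The ±500 kV comparison level and the conductor loop resistance below are illustrative assumptions:

```python
# Rough illustration of why higher transmission voltage cuts losses for the
# same delivered power. The 3000 MW figure is from the post; the comparison
# voltage and the loop resistance of the line are illustrative assumptions.
P = 3000e6                         # delivered power, W

for label, V_pole_to_pole in (("+/-500 kV bipole", 1000e3),
                              ("+/-800 kV bipole", 1600e3)):
    I = P / V_pole_to_pole         # line current, A
    R = 10.0                       # assumed loop resistance of the line, ohms
    loss = I**2 * R                # ohmic loss, W
    print(f"{label}: I = {I:.0f} A, I^2*R loss = {loss/1e6:.0f} MW "
          f"({100*loss/P:.1f} % of delivered power)")
```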

Why DC and not AC? From a technical point of view, there are no special obstacles to higher DC voltages, whereas maintaining stable transmission can be difficult over long AC lines. The thermal loading capability is usually not decisive for long AC transmission lines because of the limitations imposed by reactive power consumption. The power transmission capacity of HVDC lines, by contrast, is mainly limited by the maximum allowable conductor temperature in normal operation. However, the converter station cost is high and offsets the gain from the cheaper transmission line. Thus a short line is cheaper with AC transmission, while a longer line is cheaper with DC.
One criterion to be considered is the insulation performance which is determined by the overvoltage levels, the air clearances, the environmental conditions and the selection of insulators. The requirements on the insulation performance affect mainly the investment costs for the towers.

For the line insulation, air clearance requirements are more critical with EHVAC due to the nonlinear behavior of the switching overvoltage withstand. The air clearance requirement is a very important factor for the mechanical design of the tower. The mechanical load on the tower is considerably lower with HVDC because fewer sub-conductors are required to meet the corona noise limits. Corona rings will always be significantly smaller for DC than for AC due to the lack of capacitive voltage grading of DC insulators.

With EHVAC, the switching overvoltage level is the decisive parameter; the required air clearances at different system voltages are set by switching overvoltage levels in the range of 1.8 to 2.6 p.u. of the phase-to-ground peak voltage. With HVDC, the switching overvoltages are lower, in the range of 1.6 to 1.8 p.u., and the air clearance is often determined by the required lightning performance of the line.

How do generator designers determine the power factor?

The generator designers have to determine the winding cross-section area and the current density (A/mm²) to satisfy the required current, and they have to determine the required total flux and the flux variation per unit time per winding to satisfy the voltage requirement. Then they have to determine how the primary flux source will be generated (excitation), and how the required mechanical power can be transmitted into the electromechanical system at the appropriate speed for the required frequency.
In all the above, we can have parallel paths of current, as well as of flux, in all sorts of combinations.
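As a small illustration of those two sizing steps, the standard EMF equation and a current-density check look like this; all of the numbers are made up for illustration, not a real machine design:

```python
# Illustrative sketch of the two sizing steps described above: flux for the
# required voltage (standard EMF equation) and copper area for the required
# current. All numbers are made-up examples, not a real machine design.
import math

# Voltage side: per-phase EMF, E_rms = 4.44 * f * N * Phi * k_w
f   = 50.0          # Hz
N   = 120           # series turns per phase (assumed)
k_w = 0.93          # winding factor (assumed)
E   = 6350.0        # required phase voltage, V (11 kV line-to-line class)
Phi = E / (4.44 * f * N * k_w)          # required flux per pole, Wb
print(f"flux per pole ~ {Phi:.3f} Wb")

# Current side: conductor cross-section from an assumed current density
S       = 20e6      # machine rating, VA (assumed)
I_phase = S / (3 * E)                   # phase current, A
J       = 4.0       # allowed current density, A/mm^2 (assumed)
area    = I_phase / J                   # required copper area, mm^2
print(f"phase current ~ {I_phase:.0f} A, conductor area ~ {area:.0f} mm^2")
```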

1) All ordinary AC power depends on electrical induction, which basically is flux variations through coils of wire. (In the stator windings).
2) Generator rotor current (also called excitation) is not directly related to Power Factor, but to the no-load voltage generated.
3) The reason for operating near unity Power Factor is rather that it gives the most power per ton of materials used in the generating system, and at the same time minimises the transmission losses (see the sketch after this list).
4) Most Generating companies do charge larger users for MVAr, and for the private user, it is included in the tariff, based on some assumed average PF less than unity.
5) In some situations, synchronous generators have been used simply as VAr compensators, with zero power factor. They are much simpler to control than static VAr compensators, can be varied continuously, and do not generate harmonics. Unfortunately, they have a higher maintenance cost.
6) When the torque from the prime mover exceeds a certain limit, it can cause pole slip. The limit at which that happens depends on the available flux (from the excitation current) and the stator current (from/to the connected load).
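Here is the quick numeric illustration of point 3 promised above; the 100 MVA rating and 100 MW load are made-up numbers:

```python
# Quick illustration of point 3: with a fixed MVA (stator-current) rating,
# real power delivered is S * PF, and for a given real power the line current
# and therefore the I^2*R transmission loss grow as 1/PF and 1/PF^2.
# The 100 MVA rating and 100 MW load are made-up illustrative numbers.
S_rating = 100e6     # generator MVA rating (thermal/current limit)
P_load   = 100e6     # real power to deliver, W

for pf in (1.0, 0.9, 0.8):
    p_available = S_rating * pf            # usable real power per machine
    rel_current = 1.0 / pf                 # line current for P_load, relative
    rel_loss    = rel_current ** 2         # I^2*R loss, relative to unity PF
    print(f"PF {pf:.1f}: usable power {p_available/1e6:.0f} MW, "
          f"line loss x{rel_loss:.2f} vs unity PF")
```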