Category: Iacdrive_blog

How/where do we as engineers need to change?

System Design – A well-designed system should provide clear and concise system status indications. Back in the 70’s (yes, I am that old), alarm and indicator panels provided this information in the control room, and device-level indicators further guided the technician to the problem. Today, these functions are implemented in control room and machine HMI interfaces. Through the use of input sensor and output actuator feedback, correct system operation can be verified on every scan.
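To make the idea concrete, here is a minimal sketch (my illustration, not from the original post) of per-scan feedback verification; the point names and scan structure are hypothetical stand-ins for whatever the PLC platform provides:

    # Sketch: compare each commanded output with its feedback contact and
    # latch a fault if they disagree for more than `timeout_scans` scans.
    def verify_scan(outputs_commanded, feedback_read, timeout_scans, fault_counts):
        faults = []
        for point, commanded in outputs_commanded.items():
            if feedback_read[point] == commanded:
                fault_counts[point] = 0          # agreement: reset the counter
            else:
                fault_counts[point] += 1         # disagreement: count scans
                if fault_counts[point] > timeout_scans:
                    faults.append(point)         # drive an alarm / HMI indicator
        return faults

    cmds, fb, counts = {"conveyor": True}, {"conveyor": False}, {"conveyor": 0}
    for _ in range(4):                           # four scans with bad feedback
        faulted = verify_scan(cmds, fb, timeout_scans=3, fault_counts=counts)
    print(faulted)                               # ['conveyor']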

Program (software) Design – It has been estimated that a well-written program is 40% algorithm and 60% error checking and parameter verification. “Ladder” is not an issue: process and machine control systems today are programmed in ladder, structured text, function block, etc. The control program is typically considered intellectual property (IP) and in many cases is “hidden” from view. This makes digging through the code impractical.
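As a hedged illustration of that 40/60 split (my sketch, not from the post; the ramp-rate limiter and its limits are hypothetical):

    # Sketch: a tiny algorithm wrapped in the parameter verification that
    # tends to dominate well-written control code.
    def command_speed(setpoint_rpm, current_rpm, max_rpm=1800.0, max_step_rpm=50.0):
        # --- error checking and parameter verification (the "60%") ---
        if not isinstance(setpoint_rpm, (int, float)):
            raise TypeError("setpoint must be numeric")
        if not 0.0 <= setpoint_rpm <= max_rpm:
            raise ValueError(f"setpoint {setpoint_rpm} outside 0..{max_rpm} rpm")
        if not 0.0 <= current_rpm <= max_rpm:
            raise ValueError("feedback out of range - sensor fault?")
        # --- the algorithm itself (the "40%") ---
        step = max(-max_step_rpm, min(max_step_rpm, setpoint_rpm - current_rpm))
        return current_rpm + step

    print(command_speed(1200.0, 1000.0))   # 1050.0: ramps 50 rpm per call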

How/where do we as engineers need to change? – The industry as a whole needs to enforce better system design and performance. This initiative will come from the clients and be implemented by the developers. The cost/benefit trade-off will always be present: developers trying to improve their margins (reduce cost, raise price) and customers demanding more functionality while willing to pay less. “We as engineers” are caught in the middle, trying to find better ways to achieve the seemingly impossible.

Sensorless control

I am curious about the definition of “sensorless control”. When you talk about sensorless control, do you in fact mean the lack of a physical position sensor, such as a magnet plus vane plus Hall-effect device, i.e. not having a unit whose sole objective is position detection?
Is the sensorless control instead based on alternative methods of measurement or detection to predict position, using components that have to exist anyway for the machine to function (such as measuring or detecting voltages or currents in the windings)?

I had long ago wondered about designing a motor and fully measuring its voltage and current profiles and phase-firing timings for normal operation (from stationary to full speed, full load), using a position sensor to get the motor working and to determine the best phase-firing sequences and associated voltage/current profiles. I would then program a microprocessor to replicate the entire required profile, attempting to eliminate the need for any sensing or measurement at all. (I concluded it would come very unstuck under any fault condition, or when restarting while the motor was still turning.) So in my mind, don’t all such machines require some form of measurement (i.e. some form of “sensing”) to work properly, and so can never be truly sensorless?
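For what it is worth, here is a minimal sketch of that record-then-replay idea (entirely my illustration; the profile and the `fire` callback are hypothetical). It also shows why it comes unstuck: nothing in the loop can notice a wrong initial position, a changed load, or a still-spinning rotor:

    # Sketch: firing instants captured during commissioning (with a position
    # sensor) are played back blindly, with no sensing at run time.
    recorded_profile = [        # (time_s, phase) pairs from the commissioning run
        (0.000, "A"), (0.010, "B"), (0.020, "C"), (0.030, "A"),
    ]

    def replay(profile, fire):
        # Open-loop playback: any mismatch between the recorded conditions
        # and reality silently de-synchronizes the motor.
        for t, phase in profile:
            # on real hardware: wait until time t, then...
            fire(phase)

    replay(recorded_profile, lambda ph: print("fire phase", ph))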

A completely sensorless control would be completely open-loop, which isn’t reliable with some motors like PMSMs. Even if you knew the switching instants for one ideal case, too many “random” variables could influence the system (just think of the initial rotor position), so those firing instants could be inappropriate for other situations.

Actually, induction machines, thanks to their inherent stability properties, can be run truly sensorless (i.e. just connected to the grid, or in V/f control). To be honest, even in the simple grid-connection case there is overcurrent detection somewhere in the grid, which requires some sensing.
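A minimal sketch of what such a sensorless V/f command looks like (my illustration; the ratings and boost value are assumed, not from the post):

    # Sketch: open-loop volts-per-hertz scaling for an induction machine.
    V_RATED = 400.0   # line-to-line volts at rated frequency (assumed)
    F_RATED = 50.0    # rated frequency, Hz (assumed)
    V_BOOST = 10.0    # low-speed boost covering stator resistance drop (assumed)

    def vf_command(f_demand_hz):
        # Keep V/f (i.e. stator flux) roughly constant; no shaft sensor used.
        f = max(0.0, min(f_demand_hz, F_RATED))
        return min(V_RATED, V_BOOST + (V_RATED - V_BOOST) * f / F_RATED)

    print(vf_command(25.0))   # 205.0 V at half speed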

It can also be said that the term sensorless relates to the electric motor itself. In other words, it means there are no sensors “attached” to the motor (which does not mean there cannot be sensors in the inverter). In our company we use this second meaning, since it indicates that no sensor connections are needed between the motor and the ECU (inverter).

What is true power and apparent power?

kW is true power and kVA is apparent power. In per-unit calculations the predominant base, which I consider standard, is the kVA, the apparent power, because the magnitude of the real power (kW) is variable: it depends on a changing parameter, the cosine of the angle of displacement (power factor) between the voltage and the current. Also significant is that transformer ratings are based in kVA, short-circuit magnitudes are expressed in kVA or MVA, and the short-circuit duty of equipment is also expressed in MVA (and thousands of amperes, kA).

In per-unit analysis, the base values are always base voltage in kV and base power in kVA or MVA. Base impedance is derived by the formula (base kV)^2 / (base MVA).

The base values for the per-unit system are inter-related. The major objective of the per-unit system is to create a one-line diagram of the system that has no transformers (transformer ratios) or, at least, minimizes their number. To achieve that objective, the base values are selected in a very specific way:
a) we pick a common base for power (I’ll come back to whether it should be MVA or MW);
b) then we pick base values for the voltages following the transformer ratios. Say you have a generator with nominal voltage 13.8 kV and a step-up transformer rated 13.8/138 kV. The “easiest” choice is to pick 13.8 kV as the base voltage for the LV side of the transformer and 138 kV as the base voltage for the HV side of the transformer.
c) once you have selected a base value for power and a base value for voltage, the base values for current and impedance are defined (calculated). You do not have a degree of freedom in picking base values for current and impedance.

Typically, we calculate the base value for current as Sbase / (sqrt(3) * Vbase), right? If you are using that expression, you are implicitly saying that Sbase is a three-phase apparent power (MVA) and Vbase is a line-to-line voltage. The same goes for the expression for base impedance given above.

So, perhaps you could choose a kW or MW base value. But then you have a problem: how do you calculate base currents and base impedances? If you use the expressions above, you are implicitly saying that the number you picked for base power (even if you think of it as MW) is actually the base value for apparent power, i.e. kVA or MVA. If you insist on being different and really using kW or MW as the base for power, you have to come up with new (adjusted) expressions for calculating base current and base impedance.
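A short sketch of that bookkeeping, using the 13.8/138 kV example above (Sbase taken as three-phase MVA and Vbase as line-to-line kV, per the expressions just given):

    # Sketch: base current and base impedance follow from Sbase and Vbase.
    from math import sqrt

    S_BASE_MVA = 100.0                          # common apparent-power base
    V_BASE_KV = {"LV": 13.8, "HV": 138.0}       # follows the transformer ratio

    for side, v_kv in V_BASE_KV.items():
        i_base_ka = S_BASE_MVA / (sqrt(3) * v_kv)   # base current in kA
        z_base_ohm = v_kv ** 2 / S_BASE_MVA         # base impedance in ohms
        print(f"{side}: Ibase = {i_base_ka:.3f} kA, Zbase = {z_base_ohm:.2f} ohm")
    # LV: Ibase = 4.184 kA, Zbase = 1.90 ohm
    # HV: Ibase = 0.418 kA, Zbase = 190.44 ohm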

And, surprise! You will find out that you need to define a “base power factor” to do so. In other words, you are forced back into defining a base apparent power. So, no, you cannot (easily) use a kW/MW base. For example, take a 100 MVA generator rated at 0.80 power factor (80 MW). You could pick 80 as the base power (instead of 100). But if you are using the expressions above for base current and base impedance, you are actually saying that the base apparent power is 80 MVA (not that the base active power is 80 MW).
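To put numbers on the “base power factor” point (my arithmetic, using the generator example above and its 13.8 kV rating as Vbase):

    Sbase = Pbase / pf_base = 80 MW / 0.80 = 100 MVA
    Ibase = Sbase / (sqrt(3) * Vbase) = 100 MVA / (sqrt(3) * 13.8 kV) ≈ 4.18 kA

So picking “80” as the base and using the standard expressions quietly treats it as 80 MVA; recovering the intended 80 MW base forces you to define pf_base, which is just a base apparent power in disguise.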

PMBLDC motor in MagNet

You can build it all in MagNet using the circuit position-controlled switch. You will have to use motion analysis in order to use position-controlled switches. You can also use the back-EMF information to find what the optimal position of the rotor should be with respect to the stator field. The nice thing about motion analysis is that even if you do not have the rotor in the proper position, you can set the reference at start-up.

Another way of determining that position is to find the maximum torque with constant current (with the right phase relationship between phases, of course) and plot torque as a function of rotor position. The peak will correspond to the back-EMF waveform information.
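A minimal sketch of that sweep (my illustration; `solve_torque` is a hypothetical stand-in for a MagNet static solve at constant stator current):

    # Sketch: step the rotor through one electrical cycle at constant current,
    # record torque, and take the angle of the peak.
    def find_alignment(solve_torque, steps=36):
        best_angle, best_torque = 0.0, float("-inf")
        for k in range(steps):
            angle = 360.0 * k / steps            # electrical degrees
            torque = solve_torque(angle)         # one static solve per step
            if torque > best_torque:
                best_angle, best_torque = angle, torque
        return best_angle, best_torque

    from math import sin, radians
    print(find_alignment(lambda a: sin(radians(a))))   # (90.0, 1.0) for a toy curve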

If you want to examine the behavior of the motor with an inverter, then another approach works very well. There are two approaches you can use with MagNet: 1) co-simulation, and 2) reduced-order models. The former can be used with MATLAB, with Simulink or SimPowerSystems, and runs both MATLAB and MagNet simultaneously; the module linking the two systems allows two-way communication, hence sharing information. The latter requires that you get the System Model Generator (SMG) from Infolytica. The SMG will create a reduced-order model of your motor, which can then be used in MATLAB/Simulink or any VHDL-AMS capable system simulator; a block to interpret the data file is required and is available when you get the SMG. Reduced-order models are very interesting since they can simulate the motor very accurately and hook up to complex control circuits.

Transformer uprating

I once uprated a set of 3 × 500 kVA 11/0.433 kV ONAN transformers to 800 kVA simply by fitting bigger radiators. This was with the manufacturer’s blessing (the units were not hermetically sealed, and there were significant logistical difficulties in changing the transformers, so this was an easy option). The limiting factor was not the cooling but the magnetic saturation of the core at the higher rating. All the comments about uprating the associated equipment are relevant, particularly on the LV side; the increase in HV amps is minimal. Pragmatically, if you can keep the top-oil temperature down you will survive for at least a few years. Best practice, of course, is to change the transformer!

It is true that you can overload your transformer, say to 125%, 150% or even greater, for a certain length of time, but every instance of that overloading condition degrades the life of your transformer winding insulation. Overload your transformer and you shorten the life of your winding insulation. The oil temperature indicated on the temperature gauge of the transformer is much lower than the hotspot temperature of the winding, which is the critical issue when considering the life of the insulation. Transformers rated around 300 kVA most probably do not even have a temperature-indicating gauge. The main concern is how effectively you can lower the hotspot temperature so that it does not significantly take away some of the useful life of your transformer winding insulation.
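To give a feel for the numbers, here is a sketch using the IEEE C57.91-style aging-acceleration factor for 65 °C-rise insulation (reference hotspot 110 °C); treat the model choice as my assumption, not something from the post:

    # Sketch: relative insulation aging rate vs. winding hotspot temperature.
    from math import exp

    def aging_acceleration(hotspot_c):
        # >1: insulation ages faster than nominal; <1: slower.
        return exp(15000.0 / 383.0 - 15000.0 / (hotspot_c + 273.0))

    for t_c in (98, 110, 120, 130):
        print(f"hotspot {t_c} C -> relative aging x{aging_acceleration(t_c):.2f}")
    # roughly: x0.28, x1.00, x2.71, x6.98 - aging about doubles every ~7 C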

AM & FM radio

For AM & FM radio and some data communications, adding the QP filter makes sense.
Now that broadband, wifi, and data communications of all sizes & flavours exist, any peak noise is very likely to cause interruptions & loss of data integrity, and all systems are being ‘cost reduced’, ensuring that they will be more susceptible to noise.
I can understand the reasons for the tightening of the regulations.
BUT it links in to the other big topic of the moment – the non-linearity of managers.
William is obviously his own manager – I bet if his customer were to ask him to spend an indefinite amount of time fixing all the root causes to meet the spec perfectly, without any additional cost, it would be a different matter.

Unfortunately, for most of us the realities of supervisors wanting projects closed & engineering costs minimized mean we have to be careful in the choice of phrasing.
Any suggestion that one prototype is ‘passing’ can suddenly be translated to ‘job finished’, & even in our case, where the lab manager mostly understands, his boss rarely does & the accountant above him – not at all.

It gets worse than that – at the beginning of a project (RFQ) the question is “how long will EMC take to fix?”, with the expectation of a deterministic answer; the usual response of a snort of derision & “how long is a piece of string?” generally translates to two weeks, & once set in stone it becomes a millstone (sorry, mile-stone).

We already have a number of designs that, while not intentionally using dithering, do use boundary-mode PFC circuits, which automatically force the switching frequency to vary over the mains cycle. These may become problematic under some future variation of the wording of the EMC specs.
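For readers unfamiliar with why boundary-mode PFC self-dithers, a rough sketch (all values are illustrative assumptions): with a fixed on-time, the off-time tracks the rectified mains, so the switching frequency sweeps over every half cycle.

    # Sketch: instantaneous switching frequency of a boundary-mode boost PFC.
    from math import pi, sin

    V_OUT = 400.0    # boost output voltage (assumed)
    V_PK = 325.0     # 230 Vrms mains peak (assumed)
    T_ON = 5e-6      # fixed on-time, seconds (assumed)

    def f_switch(theta_rad):
        v_in = abs(V_PK * sin(theta_rad))
        v_in = min(v_in, 0.999 * V_OUT)            # boost needs Vout > Vin
        t_off = T_ON * v_in / (V_OUT - v_in)       # inductor current ramps to zero
        return 1.0 / (T_ON + t_off)

    for deg in (5, 30, 60, 90):
        print(f"{deg:2d} deg -> {f_switch(deg * pi / 180) / 1e3:.0f} kHz")
    # sweeps from ~186 kHz near the zero crossing down to ~38 kHz at the crest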

While I have a great deal of sympathy for the design-it-right-first-time approach, the bottom line for any company is: it meets the requirement (today) – sell it!!

Simulation interpretation in automation industry

Related to the “automation industry”, there are generally 3 different interpretations of what simulation is:
1) Mechanical Simulations – Via various solid-modeling tools and CAD programs, tooling, moving mechanisms, end-effectors, etc. are designed with 3D visualization: connecting the modules to prevent interference, checking mass before actual machining, and so on.
2) Electronics Simulations – This type of simulation relates either to the manufacturers of “specific instrumentation” used in the automation industry (ultrasonic welders, laser-marking systems, etc.) or to the designers of circuit boards.
3) Electrical & Controls Simulations.
A) Electrical Schematics, from the main AC disconnect switch down to the 24 VDC low-amp I/O interface.
Simulation tools allow easy determination of the system’s required amperage, fuse sizes, wire gauges, and accordance with standards (CE, UL, cUL, TUV, etc.).
B) Logic Simulations: HMI interface, I/O exchange, motion controls, etc.
a) If you want to have any kind of meaningful simulation, get in the habit of “modular ladder logic” circuit design. This means: don’t design your ladder as one continuous, huge program that runs the whole thing; simulating that type of program is almost impossible in every case. Break the logic down into sub-systems, or maybe even down to stand-alone mechanisms (pick & place, motor starter, etc.); simulating and troubleshooting this scenario is fairly easy (see the sketch below).
b) When possible, besides the automated run mode of the machine or system, build “manual mode logic” for it as well. Then, via physical push-buttons or the HMI, you should have “step forward” & “step back” for every physical movement or action.

Simulating the integrity of the “ladder logic program” and all the components and interfaces will be a breeze if things are done meticulously up front.
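As a language-neutral illustration of that modular structure (a Python sketch of my own, standing in for modular ladder subroutines; all tag names are hypothetical):

    # Sketch: each mechanism is its own routine with defined I/O, so it can be
    # simulated and stepped in isolation instead of as one huge program.
    def motor_starter(io):
        # classic seal-in rung: start latches the coil, stop breaks it
        io["starter_coil"] = (io["start_pb"] or io["starter_coil"]) and not io["stop_pb"]

    def pick_and_place(io):
        if io["part_present"] and io["arm_home"]:
            io["gripper_close"] = True

    def main_scan(io, manual_mode=False):
        # one PLC-style scan; in manual mode, HMI step-forward/step-back bits
        # would gate each physical action instead of the automatic sequence
        motor_starter(io)
        if not manual_mode:
            pick_and_place(io)

    io = {"start_pb": True, "stop_pb": False, "starter_coil": False,
          "part_present": True, "arm_home": True, "gripper_close": False}
    main_scan(io)
    print(io["starter_coil"], io["gripper_close"])   # True True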

Spread spectrum of power supply

Having led design efforts for very sensitive instrumentation with high-frequency A/D converters of greater than 20 bits of resolution, my viewpoint is mainly concerned with the noise in the regulated supply output. In these designs the fairly typical 50 mV peak-to-peak noise is totally unacceptable, and some customers cannot stand 1 µVrms of noise at certain frequencies. While spread spectrum may help the power supply designer, it may also raise havoc for the user of the regulated output. The amplitude of the switching spikes (input or output), as some have said, is not reduced by dithering the switching frequency. Sometimes locking the switching to an instant where it does not interfere with the circuits using the output can help; some may think this is cheating, but as was said, it is very difficult to get rid of most 10 MHz noise, and that extreme difficulty applies to many of the harmonics above 100 kHz.

(Beginners who think that being 20 to 100 times above the LC filter corner will reduce the switching noise by 40 to 200 times are sadly wrong: once you pass 100 kHz, many capacitors and inductors have parasitics, making it very hard to get high attenuation in one LC stage, and often there is no room for more. More inductors often introduce more losses as well.)

We should be reducing all the noise we can and then use other techniques as necessary. With spread spectrum becoming more popular, we may soon see regulation of its total noise output as well.
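To illustrate the parenthetical point above, here is a rough sketch (my own, with illustrative component values) of how a capacitor’s parasitic inductance (ESL) caps the attenuation of a single LC stage at high frequency:

    # Sketch: ideal 2nd-order LC attenuation vs. the same filter with
    # capacitor ESL (crude impedance-ratio estimate, valid well above the corner).
    from math import pi, log10

    L_FILT = 10e-6   # filter inductor, H (assumed)
    C_FILT = 10e-6   # filter capacitor, F (assumed)
    ESL = 5e-9       # capacitor parasitic inductance, H (assumed)

    def attenuation_db(f_hz):
        w = 2 * pi * f_hz
        z_l = w * L_FILT
        z_c_ideal = 1.0 / (w * C_FILT)
        z_c_real = abs(w * ESL - 1.0 / (w * C_FILT))   # series-resonant branch
        return 20 * log10(z_l / z_c_ideal), 20 * log10(z_l / z_c_real)

    for f in (100e3, 1e6, 10e6):
        ideal, real = attenuation_db(f)
        print(f"{f/1e6:5.2f} MHz: ideal {ideal:5.1f} dB, with ESL {real:5.1f} dB")
    # at 10 MHz the ideal ~112 dB collapses to ~66 dB - one stage stops helping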

One form of troublesome noise is common-mode noise coming out of the power inputs to the power supply. If it is present on the power input, it is very likely also present in the “regulated” output power if it is floating. Here, careful design of the switching power magnetics and care in the layout can minimize this noise enough that filters may be able to keep the residual within acceptable limits. Ray discusses some of this in his class, but many non-linear managers frequently do not think it is reasonable or necessary for the power supply design engineer to be involved in the layout or location of copper traces. Why not? The companies that sell the multi-$100K+ software told their bosses the software automatically optimizes and routes the traces.

Spread spectrum is a tool that may be useful to some but not to all. I hope the sales pitch for those control chips does not lull unsuspecting new designers into complacency about their filter requirements.

Voltage transmission & distribution

If you look back over history you will find how things started out, from the early engineers and scientists looking at materials and developing systems that would meet their transmission goals. I recall when drives (essentially AC/DC/AC converters) had an upper limit around 200 to 230 volts. In Edison and Tesla’s day there was a huge struggle over picking DC or AC, and AC prevailed mainly because it was economical to make AC machines. Systems were built based on available materials and put into operation. Some worked great, some failed. When they failed, they were analyzed and better systems were built. Higher and higher voltages lowered copper content, and therefore cost, as insulators improved. Eventually committees formed, reviewed what worked, and developed standards. Then, by logical induction, it was determined what advances could be made in a cost-effective and reliable manner. A lot of “use this” practice crept in. By this I mean, for example: I worked at a company where one customer bought 3,000 transformers over the course of ten years, and they had a specific size of enclosure they wanted.

Due to the high-volume purchase, the cost of the enclosure was low. Other small jobs came through, and this low-cost enclosure was used on them to expedite delivery and keep costs to a minimum. Guess what: that enclosure is now a standard enclosure there, because it was used on hundreds of designs over ten years. Is it the most economical box? Probably not in the pure engineering sense, but changing something that works is seldom a good idea. Today, they are raising voltage levels to new highs. I read of a project in Germany to run HVDC lines over huge distances, and they are working to overcome a problem they foresee: how do you break the circuit economically with HVDC? If you ever put DC through a small contactor, maybe at 600 VDC, you quickly find that the arc on opening melts the contacts. Now, what do you do at 800 kVDC or 1.2 MVDC? What will the cost of the control circuit to break this voltage level be? (Edison and Tesla all over again.) And there you have it – my only push for the subject of history to be taught.

Signal processing and communications theory

Coming from a signal processing and communications theory background, but with some experience in power design, I can’t resist the urge to chime in with a few remarks.

There are many engineering methods to deal with sources of interference, including noise from switching converters, and spread spectrum techniques are simply one more tool that may be applied to achieve a desired level of performance.

Spread spectrum techniques will indeed allow a quasi-peak EMC test to be passed when it might otherwise be failed. Is this an appropriate application for this technique?

The quasi-peak detector was developed with the intention of providing a benchmark for determining the psycho-acoustic “annoyance” of interference on analog communications systems (more specifically, predominantly narrowband AM-type communication systems). Spread spectrum techniques resulting in a reduced QP detector reading will almost undoubtedly reduce the annoyance the interference would otherwise have presented to the listener. Thus the intent was to reduce the degree of objectionable interference, and the application of spread spectrum meets that goal. This doesn’t seem at all like “cheating” to me; the proper intent of the regulatory limit is still being met.

On the other hand, as earlier posters have pointed out, the application of spectrum spreading does nothing to reduce the total power of the interference; it simply spreads it over a wider bandwidth. Spreading the noise over a wider bandwidth provides two potential benefits. The most obvious benefit occurs if the victim of the interference is inherently narrowband: spreading the spectrum of the interference beyond the victim bandwidth provides an inherent improvement in signal-to-noise ratio. A second, perhaps less obvious, benefit is that the interference becomes more noise-like in its statistics. Noise-like interference is less objectionable to the human ear than impulsive noise, and it should also be recognized that it is less objectionable to many digital transmission systems too.
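A back-of-the-envelope sketch of the narrowband-victim case (my numbers, purely illustrative): the total interference power is unchanged, but only the fraction falling inside the victim’s bandwidth matters.

    # Sketch: in-band interference power before/after spectrum spreading.
    from math import log10

    P_TOTAL_W = 1e-6      # total interference power - spreading does not change it
    BW_SPREAD_HZ = 1e6    # occupied bandwidth after spreading (assumed)
    BW_VICTIM_HZ = 9e3    # narrowband AM-type victim channel (assumed)

    in_band_w = P_TOTAL_W * min(1.0, BW_VICTIM_HZ / BW_SPREAD_HZ)
    gain_db = 10 * log10(BW_SPREAD_HZ / BW_VICTIM_HZ)
    print(f"in-band power {in_band_w:.2e} W, victim sees ~{gain_db:.1f} dB less")
    # ~20.5 dB improvement for the narrowband victim; zero for a wideband one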

However, from an information-theoretic perspective, the nature of the interference doesn’t matter; only the signal-to-noise ratio matters. Many modern communication systems employ wide bandwidths. Furthermore, they employ powerful adaptive modulation and coding schemes that effectively de-correlate interference sources (making their effect noise-like); these receivers don’t care whether the interference is narrowband or wideband in terms of bit error rate (BER), and they will be affected largely the same by a given amount of interference power (in theory identically the same, but implementation limitations still leave some gap to the theoretical limits).

It is worth noting, however, that while spectrum-spreading techniques do not reduce the interference power, they don’t make it any worse either. Thus these techniques may (I would argue legitimately, as per the above) help with passing a test that specifies the CISPR quasi-peak detector, and they should not make the performance on a test specifying the newer CISPR RMS+Average detector any worse.

It should always be an engineering goal to keep interference to a reasonable minimum, and I would agree that it is aesthetically most satisfying (and often cheapest and simplest) to achieve this objective by reducing the interference at the source (a wide definition covering aspects of SMPS design from topology selection to PCB layout and beyond). However, the objective of controlling noise at the source shouldn’t eliminate alternative methods from consideration in any given application.

There will always be the question of how good is good enough and it is the job of various regulatory bodies to define these requirements and to do so robustly enough such that the compliance tests can’t be “gamed”.