Category: Iacdrive_blog

Industrial automation process

My statement, “the time it takes to start or stop a process is immaterial,” is somewhat out of context. The complete thought is “the time it takes to start or stop a process is immaterial to the categorization of that process into either the continuous type or the discrete type,” which is how this whole discussion got started.

I have the entirely opposite view of automation. “A fundamental practice when designing a process is to identify bottlenecks in order to avoid unplanned shutdowns”.

Don’t forget that the analysis should include the automatic control system. This word of advice is pertinent to whichever “camp” you choose to join.

Just as you have recognized the strong analogies and similarities between “controlling health care systems” and “controlling industrial systems,” there are strong analogies between so-called dissimilar industries as well, including between the camp which calls itself “discrete” and the camp which waves the “continuous” flag.

Your concern about the time it takes to evaluate changes in parameter settings for your cement kiln is a topic involving economic risks, and it could include discussions of how to mitigate those risks, such as modeling the process virtually for testing and evaluation rather than experimenting with the real-world process. This is applicable to both “camps”.

The challenge of starting up/shutting down your cement kiln is the same as the challenge of starting up/shutting down a silicon crystal reactor or wafer processing line in the semiconductor industry. The time scales may be different, but the economic risks may be the same, if not greater, for the electronics industry.

I am continuously amazed at how I can borrow methods from one industry and apply them to another. For example, I had a project controlling a conveyor belt at a coal mine which was 2.5 miles long: several million pounds of belting, not to mention the coal itself! The techniques I developed for tracking the inventory of coal on this belt laid the basis for the techniques I used to track the leading and trailing edge of bread dough on a conveyor belt 4 feet long. We used four huge 5 kV motors and VFDs at the coal mine compared to a single 0.75 HP, 480 VAC VFD at the bakery, and startups/shutdowns were orders of magnitude different, but the time frame was immaterial to what the controls had to do and to the techniques I applied to do the job.
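The belt-tracking technique described above can be sketched in code. This is only an illustrative model, not the actual mine or bakery system: it assumes a sampled belt-speed signal and a presence sensor at the load point, and integrates speed to move material runs along the belt. All names and units are hypothetical.

```python
from collections import deque

class BeltTracker:
    """Track leading/trailing edges of material runs on a conveyor.

    Illustrative sketch only: assumes a belt-speed signal in m/s sampled
    at a fixed interval and a presence sensor at the load point.
    Positions are metres from the load point toward the discharge end.
    """

    def __init__(self, belt_length_m):
        self.belt_length_m = belt_length_m
        self.segments = deque()   # (trailing_edge_m, leading_edge_m) runs
        self.loading = False      # was material present on the last scan?

    def update(self, speed_m_s, dt_s, material_at_load_point):
        advance = speed_m_s * dt_s
        # Everything already on the belt moves forward by `advance`.
        self.segments = deque((s + advance, e + advance)
                              for s, e in self.segments)
        if material_at_load_point:
            if self.loading:
                # Still loading: trailing edge stays pinned at the load point.
                _, e = self.segments[-1]
                self.segments[-1] = (0.0, e)
            else:
                # A new run just started under the sensor.
                self.segments.append((0.0, advance))
                self.loading = True
        else:
            self.loading = False
        # Discard runs that have fully passed the discharge end.
        while self.segments and self.segments[0][0] >= self.belt_length_m:
            self.segments.popleft()

    def inventory_m(self):
        """Metres of belt currently covered with material."""
        return sum(min(e, self.belt_length_m) - max(s, 0.0)
                   for s, e in self.segments)
```

The same position-integration logic works whether the belt is 2.5 miles or 4 feet long; only the numbers change.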

I once believed that I needed to be in a particular industry in order to feel satisfied in my career. What I found out is that I have a passion for automation which transcends the particular industry I am in at the moment and this has led to a greater appreciation of the various industrial cultures which exist and greater enjoyment practicing my craft.

So these debates about discrete vs. continuous don’t affect me in the least. My concern is that the debates may keep other, more impressionable engineers from realizing a more fulfilling career by causing them to embrace one artificial camp over the other. Therefore, my only goal in engaging in this debate is to challenge any effort at erecting artificial walls which unnecessarily drive a damaging wedge between us.

Home automation concept

The concept of home automation on a global scale is a good one. How to implement such a technology on a global scale is an interesting problem, or rather a set of issues to be resolved. Before global approval can be accomplished, home automation may need a strategy that starts with a look at companies that have succeeded in getting global approval of their products.

If we look at which companies have the most products distributed around the world, we see that Intel is one of them. What’s interesting is that this company has used automation in its fabs for decades. This automation has allowed it to produce its products faster and cheaper than the rest of the industry, and the company continues to invest in automation and in the ability to evolve with technology and management. We have many companies that compete on the world stage, but I don’t think many of them distribute as much product. So to make home automation accepted and to accomplish global acceptance, the industry and its factories have to evolve to compete. That mission can be accomplished by adopting a strategy that updates the automation in their factories: stop using products that were developed in the 1970s (another way of saying COTS) and progress to current and new systems. A ten-year-old factory may be considered obsolete if the equipment inside is as old as the factory.

Now for cost: when I think of a PLC or commercial controller, I see a COTS product that may be using obsolete parts that are no longer in production, or old boards. So I see higher manufacturing cost and a reduction in reliability. Many procurement people evaluate risk in a way that may rate older boards lower in risk for the short term, which is not a good evaluation for the long term. The cost is a function of how much product can be produced at the lowest cost and how efficient and competitive the company that produces the product is. So time is money. The responsibility for cost lies with the company and its ability to produce a competitive product, not with the government.

Now on to control systems and safety: if the automation system is used in the house, safety has to be a major consideration. I know at Intel fabs, if you violate any safety rule you won’t be working at that company long. To address safety, the product must conform to the appropriate standards. Safety should be a selling point for home automation. Automation engineers should remember that safety is one of the main considerations for an engineer: if someone gets hurt or killed because of a safety issue, the first person looked at is the engineer.

Now, a 30% energy saving in my book is not enough; 35 to 40 percent should be the goal. Solar cells have improved, but they are most efficient in the southwest US. Stirling engines are 1960s designs and use rare gases such as helium, which may not be a renewable resource. Wind generators need space and are electromechanical, so reliability and maintenance need improving.

Now on to interface standards: most modern factories that produce processors use the GEM (Generic Equipment Model) standard, and it works well. As for what and when to use a standard interface: one box produced by one company may use RS-422 where another company may use RS-485, so the system engineer should resolve these issues before detailed design starts. Check with the IEEE, or you may be able to find the spec at everyspec.com, which is a good place to look for some of the specs you need.

So I conclude that many issues exist, but when they are broken down, home automation is viable; it needs a concerted effort and commitment from at least the companies and management that produce automation products, and a different model for manufacturing and growing the home systems.

Home automation with a focus on energy savings as a goal is a good thing. We have a lot of work to make that happen.

PPE (Personal Protective Equipment)

When I think of using PPE as a controls engineer, I think
about electrical shock and arc-flash safety in working with electrical devices.

The PPE (Personal Protective Equipment) requirements for working on live electrical
equipment are making commissioning, startup, and tuning of electrical
control systems awkward and cumbersome. We are at a stage where the use of PPE
is now required, but practice has not caught up with the requirements. While
many are resisting this change, it seems inevitable that we will need to wear
proper PPE when working on any control panel with exposed voltages of
50 volts or more.

With many electrical panels not labeled for shock and arc-flash hazard levels,
the default PPE requirement is a full (Category 2+) suit in most cases, which is very
awkward indeed. What can we do to allow us to work on live equipment in a safe
manner that meets the now not-so-new requirements for shock and arc-flash
safety?

Increasingly, the thinking is to design our systems for shock and arc-flash
safety. Typically, low voltage (less than 50 volts), 120 VAC, and 480 VAC power
were often placed in the same control enclosure. While this is cost effective,
it is now problematic when wanting to work on even the low-voltage area of
the panel. The rules do not appear to allow distinguishing one area of a panel as
safe while another is unsafe: the entire panel is either one or the other. One
could attempt to argue this point, but wouldn’t it be better to just design our
systems so that we are clearly on the side of compliance?

Here are my thoughts to improve electrical shock and arc flash safety by
designing this safety into electrical control panels.

1. Keep the power components separate from the signal level components so that
maintenance and other engineers can work on the equipment without such hazards
being present. That’s the principle. What are some ideas for putting this into
practice?

2. Run as much as possible on 24 VDC. This would include the PLCs
and most other panel devices. A separate panel would then house only these shock-
and arc-flash-safe electrical components.

3. Power supplies could be placed in a separate enclosure, or included in the
main (low-voltage) panel but grouped together and protected separately so that
there are no exposed conductors or terminals that can be reached, even with a
tool, when the control panel door is opened.

4. Motor controls running at anything over 50 volts should be contained in a
separate enclosure. Try remoting the motor controls away from the power devices
where possible. This includes putting the HIM (keypad) module for a VFD
(Variable Frequency Drive), for example, on the outside of the control panel so
the panel does not have to be opened. Also, using traditional MCC (Motor
Control Center) enclosures is looking increasingly attractive as a way to minimize the
need for PPE.

Note that a “finger safe” design does not meet the requirements for arc-flash
safety. Also, making voltage measurements to check for power is considered one
of the most hazardous activities, if not the most hazardous, as far as arc-flash goes.

OPC drivers advantage

A few years back, I had a devil of a time getting some OPC Modbus TCP drivers to work with Modbus RTU-to-TCP converters. The OPC drivers could not handle the 5-digit RTU addressing. You need to make sure the OPC driver you try actually works with your equipment; “try before you buy” definitely applies here. Along with some of the complications, like dropped connections due to minor network glitches (a real headache and worth a topic all its own), there is the question of tag pickers and the like. The best thing to happen to I/O addressing is the use of data objects in the PLC and HMI/SCADA. The other advantage OPC can give you is the ability to get more quality information on your I/O. Again, check before you buy. In my experience, the only protocol worse than Modbus in the quality-information department is DDE, and that is pretty well gone. None of this helps when a Modbus slave reports stale data as if it were fresh: no I/O driver can sort that out, so you need a heartbeat.

A shout-out to all you equipment manufacturers putting Modbus RTU into equipment because it’s easy: PLEASE BUILD IN A HEARTBEAT that us integrators can monitor, so we can be sure the data is alive and well.

Also, while you try before you buy, you want your HMI/SCADA to be able to tell the difference between Good Read, No Read, and Bad Read, particularly with an RTU network.
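The heartbeat idea can be monitored on the master side. The sketch below is illustrative, not from any particular OPC server or driver: it assumes a `read_heartbeat` callable that polls a register the slave increments continuously, and flags the data as stale when the value stops changing, even though every read still “succeeds”.

```python
import time

class HeartbeatMonitor:
    """Detect stale Modbus data via a changing heartbeat register.

    `read_heartbeat` is an assumed callable returning the current value
    of a register the slave increments continuously. If the value stops
    changing for `timeout_s`, the data is declared stale.
    """

    def __init__(self, read_heartbeat, timeout_s=5.0, clock=time.monotonic):
        self.read_heartbeat = read_heartbeat
        self.timeout_s = timeout_s
        self.clock = clock
        self.last_value = None
        self.last_change = clock()

    def is_alive(self):
        value = self.read_heartbeat()
        now = self.clock()
        if value != self.last_value:
            # Heartbeat moved: remember the value and when it changed.
            self.last_value = value
            self.last_change = now
        return (now - self.last_change) <= self.timeout_s
```

The same check works regardless of which driver does the reads; only the polling callable changes.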

High voltage power delivery

You already know from your engineering that higher voltages result in lower operational losses for the same amount of power delivered. The bulk capacity of 3000 MW obviously has a great influence on the investment costs; it determines the voltage level and the required number of parallel circuits. Higher DC voltage levels have become more feasible for bulk power projects (such as this one), especially when the transmission line is more than 1000 km long. On the economics, investment costs for 800 kV DC systems have come down considerably since the ’90s. Aside from the reduction of overall project costs, HVDC transmission lines at higher voltage levels require less right-of-way. Since you will also require fewer towers, as we will see below, you will also reduce the duration of the project (at least for the line).
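The loss argument can be put in rough numbers. This is a back-of-the-envelope sketch, not a line design: the 10-ohm pole resistance is an arbitrary assumption, and a real comparison would also resize conductors and bundles.

```python
def bipole_line_loss_mw(power_mw, pole_voltage_kv, pole_resistance_ohm):
    """I²R line loss for a bipolar HVDC link.

    Each pole carries P / (2·V_pole); with kA and ohms, kA²·ohm comes
    out directly in MW. Both pole conductors dissipate.
    """
    current_ka = power_mw / (2 * pole_voltage_kv)
    return 2 * current_ka ** 2 * pole_resistance_ohm

# Same 3000 MW over the same hypothetical 10-ohm pole conductors:
loss_800 = bipole_line_loss_mw(3000, 800, 10.0)   # +/-800 kV
loss_500 = bipole_line_loss_mw(3000, 500, 10.0)   # +/-500 kV
# Loss scales as 1/V², so +/-800 kV cuts line loss to (500/800)² of the +/-500 kV figure.
```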

Why DC and not AC? From a technical point of view, there are no special obstacles to higher DC voltages. Maintaining stable transmission can be difficult over long AC transmission lines, and the thermal loading capability is usually not decisive for long AC lines due to limitations from reactive power consumption. The power transmission capacity of HVDC lines, by contrast, is mainly limited by the maximum allowable conductor temperature in normal operation. However, the converter station cost is high and will offset the gain from the reduced cost of the transmission line. Thus a short line is cheaper with AC transmission, while a longer line is cheaper with DC.
One criterion to be considered is the insulation performance which is determined by the overvoltage levels, the air clearances, the environmental conditions and the selection of insulators. The requirements on the insulation performance affect mainly the investment costs for the towers.

For the line insulation, air clearance requirements are more critical with EHVAC due to the nonlinear behavior of the switching overvoltage withstand. The air clearance requirement is a very important factor in the mechanical design of the tower. The mechanical load on the tower is considerably lower with HVDC due to the smaller number of sub-conductors required to meet the corona noise limits. Corona rings will always be significantly smaller for DC than for AC due to the lack of capacitive voltage grading on DC insulators.

With EHVAC, the switching overvoltage level is the decisive parameter; typical required air clearances at different system voltages correspond to switching overvoltage levels between 1.8 and 2.6 p.u. of the phase-to-ground peak voltage. With HVDC, the switching overvoltages are lower, in the range of 1.6 to 1.8 p.u., and the air clearance is often determined by the required lightning performance of the line.

How do generator designers determine the power factor?

The generator designers will have to determine the winding cross-section area and specific current per mm² to satisfy the required current, and the required total flux and flux variation per unit time per winding to satisfy the voltage requirement. Then they will have to determine how the primary flux source will be generated (excitation), and how the required mechanical power can be transmitted into the electro-mechanical system at the appropriate speed for the required frequency.
In all the above, we can have parallel paths of current, as well as of flux, in all sorts of combinations.

1) All ordinary AC power depends on electrical induction, which basically is flux variations through coils of wire. (In the stator windings).
2) Generator rotor current (also called excitation) is not directly related to Power Factor, but to the no-load voltage generated.
3) The reason for operating near unity Power Factor is rather that it gives the most power per ton of materials used in the generating system, and at the same time minimises the transmission losses.
4) Most generating companies do charge larger users for MVAr; for the private user, it is included in the tariff, based on some assumed average PF less than unity.
5) In some situations, synchronous generators have been used simply as VAr compensators, with zero power factor. They are much simpler to control than static VAr compensators, can be varied continuously, and do not generate harmonics. Unfortunately, they have higher maintenance costs.
6) When the torque from the prime mover exceeds a certain limit, it can cause pole slip. The limit when that happens depends on the available flux (from excitation current), and stator current (from/to the connected load).
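Point 3 above can be illustrated with a quick calculation: for the same delivered power, line current grows as 1/PF, so I²R transmission loss grows as 1/PF². The power and voltage below are arbitrary example numbers, not from the discussion.

```python
def line_current_a(p_kw, v_ll_kv, pf):
    """Three-phase line current in amps: I = P / (sqrt(3) * V_LL * PF).

    With P in kW and V line-to-line in kV, the kilo factors cancel
    and the result is directly in amps.
    """
    return p_kw / (3 ** 0.5 * v_ll_kv * pf)

# Delivering 1000 kW at 11 kV line-to-line (hypothetical feeder):
i_unity = line_current_a(1000, 11, 1.0)
i_low = line_current_a(1000, 11, 0.8)
loss_ratio = (i_low / i_unity) ** 2   # I²R loss grows as 1/PF²
```

At PF 0.8 the same delivered power incurs 1/0.8² ≈ 1.56 times the line loss of unity power factor, which is why operating near unity minimises transmission losses.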

Induction machines testing

Case: We tested 3 different machines under no-load conditions.
The 50 HP and 3 HP machines are the ones which behave abnormally when we apply 10% overvoltage; the third machine (7.5 HP) reacts normally under the same condition.
What we mean by abnormal behavior is that the input power of the machine increases dramatically under only 10% overvoltage, which is not the case with most induction machines. This can be seen in the numbers given below.

50 HP, 575V
Under 10% overvoltage:
Friction & Windage Losses increase 0.2%
Core loss increases 102%
Stator Copper Loss increases 107%

3 HP, 208V
Under 10% overvoltage:
Friction & Windage Losses increase 8%
Core loss increases 34%
Stator Copper Loss increases 63%

7.5 HP, 460V
Under 10% overvoltage:
Friction & Windage Losses decrease 1%
Core loss increases 22%
Stator Copper Loss increases 31%

Until now, we couldn’t diagnose the exact reason that pushes those two machines to behave in such a way.
Answer: A few other things I have not seen (yet) include the following:
1) Are the measurements of voltage and current being made by “true RMS” devices or not?
2) Actual measurements for both current and voltage should be taken simultaneously (with a “true RMS” device) for all phases.
3) Measurements of voltage and current should be taken at the motor terminals, not at the drive output.
4) Measurement of output waveform frequency (for each phase), and actual rotational speed of the motor shaft.

These should all be done at each point on the curve.

The reason for looking at the phase relationships of voltage and current is to ensure the incoming power is balanced. Even a small voltage imbalance (say, 3 percent) may result in a significant current imbalance (often 10 percent or more). This unbalanced supply will lead to increased (or at least unexpected) losses, even at relatively light loads. Also – the unbalance is more obvious at lightly loaded conditions.
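The unbalance figure referred to above is usually computed per NEMA MG-1: the maximum deviation from the average, as a percentage of the average. A minimal sketch:

```python
def voltage_unbalance_pct(v_ab, v_bc, v_ca):
    """Percent voltage unbalance, NEMA MG-1 style:
    100 * (max deviation from the average) / average."""
    avg = (v_ab + v_bc + v_ca) / 3.0
    return 100.0 * max(abs(v - avg) for v in (v_ab, v_bc, v_ca)) / avg

# Example readings (volts, line-to-line): a 9 V sag on one phase
# of a nominal 460 V system is already about a 2% unbalance.
unbalance = voltage_unbalance_pct(460.0, 467.0, 450.0)
```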

As noted above, friction and windage losses are speed dependent: the approximate relationship is with the square of speed.

Things to note about how the machine should perform under normal circumstances:
1. The flux densities in the magnetic circuit are going to increase proportionally with the voltage. This means +10% volts means +10% flux. However, the magnetizing current requirement varies more like the square of the voltage (+10% volt >> +18-20% mag amps).
2. Stator core loss is proportional to the square of the voltage (+10% V >> +20-25% kW).
3. Stator copper loss is proportional to the square of the current (+10% V >> +40-50% kW).
4. Rotor copper loss is independent of voltage change (+10% V >> +0 kW).
5. Assuming speed remains constant, friction and windage are unaffected (+10% V >> +0 kW). Note that with a change of 10% volts, it is highly likely that the speed WILL actually change!
6. Stator eddy loss is proportional to square of voltage (+10% V >> +20-25% kW). Note that stator eddy loss is often included as part of the “stray” calculation under IEEE 112. The other portions of the “stray” value are relatively independent of voltage.
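Rules 2 and 3 above can be turned into a quick screening check. The sketch below compares the measured core-loss increases against the square-law expectation and flags machines that exceed it by a wide margin; the 15-point margin is an arbitrary heuristic, not a standard acceptance limit.

```python
def expected_increase_pct(dv_pct):
    """Square-law expectation: +10% V -> about +21% for a V²-proportional loss."""
    return ((1.0 + dv_pct / 100.0) ** 2 - 1.0) * 100.0

def looks_saturated(measured_core_pct, dv_pct=10.0, margin_pct=15.0):
    """Flag a machine whose measured core-loss rise beats the square-law
    expectation by more than `margin_pct` points (heuristic threshold,
    suggesting saturation or core damage rather than normal behavior)."""
    return measured_core_pct > expected_increase_pct(dv_pct) + margin_pct

# Core-loss increases measured in the no-load tests above:
flags = {name: looks_saturated(core_pct)
         for name, core_pct in [("50 HP", 102), ("3 HP", 34), ("7.5 HP", 22)]}
# Only the 50 HP machine (+102% vs the ~21% expected) stands far outside.
```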

Looking at your test results, it would appear that the 50 HP machine:
a) is very highly saturated,
b) has damaged/shorted laminations,
c) has a different grade of electrical steel (compared to the other ratings),
d) has damaged stator windings (possibly from operation on the drive, particularly if it has a very high dv/dt and/or high common-mode voltage characteristic), or
e) some combination of any/all of the above.

One last question: are all the machines rated for the same operating speed (measured in RPM)?

Active power losses in electrical motor

Equivalent active power losses during an electrical motor’s no-load test include the following losses:
1. active power losses in the copper of the stator winding, which are proportional to the square of the no-load current: Pcus=3*Rs*I0s^2,

2. active power losses in the ferromagnetic core, which depend on frequency and on the magnetic flux density (which depends on voltage):
a) active power losses caused by eddy currents: Pec=(kec*d^2*f^2*B^2)/ρ (d is the lamination thickness, ρ the resistivity)
b) active power losses caused by hysteresis: Ph=kh*f*B^x (x is the Steinmetz exponent)

3. mechanical power losses, which are proportional to the square of the angular speed: Pmech=Kmech*ωmech^2,
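For reference, these components can be summed numerically. The sketch below uses the standard textbook forms (eddy loss proportional to d²f²B²/ρ, hysteresis proportional to f·B^x with a Steinmetz exponent around 1.6 to 2); all the k coefficients are machine-specific constants, assumed here for illustration.

```python
def no_load_losses_w(r_s, i_0, k_ec, k_h, d, f, b, rho, k_mech, omega,
                     steinmetz_x=1.6):
    """Sum the no-load loss components listed above.

    All k_* values are machine-specific constants (assumed inputs).
    """
    p_cu = 3.0 * r_s * i_0 ** 2                      # stator copper: 3*Rs*I0^2
    p_ec = k_ec * d ** 2 * f ** 2 * b ** 2 / rho     # eddy currents
    p_h = k_h * f * b ** steinmetz_x                 # hysteresis, B^x
    p_mech = k_mech * omega ** 2                     # friction and windage
    return p_cu + p_ec + p_h + p_mech
```

Because the core terms carry f and B (hence voltage), raising the test voltage raises the core loss, which is the dependence discussed in the comment below.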

Comment:
First, as you can see, the active power losses in the ferromagnetic core of an electrical motor depend on voltage and frequency, so by increasing the voltage you will get higher active power losses in the ferromagnetic core of the motor.

Second, you can’t compare two electrical motors with different rated voltages and different rated powers, because the active power losses in the ferromagnetic core, as I have already said above, depend on voltage and frequency, while the active power losses in the copper of the stator windings depend on the square of the no-load current, which is different for motors of different rated power.

Third, when you want to compare the no-load active power losses of two electrical motors with the same rated voltage and rated power, you need to check the design of both motors. It is possible that one of them has a different kind of winding: maybe it was damaged in the past and its windings had to be replaced, which could result in a different electrical design and, as a consequence, a different no-load current.

Friendly systems without technician diagnosis

How do we make our systems so friendly that they do not need technicians to help diagnose problems? Most of the more obvious answers have been well documented here, but to emphasize these points: it is normally the case that diagnostics and alarms dominate the amount of code constructed for an application. That is, the amount of code required to fully diagnose an application may be as much as, if not more than, the code required to make the application work in the first place!

I have seen HMIs with diagnostic screens showing an animated version of the Cause & Effects code that allows users to see where the trip condition is. I have also seen screens depicting prestart checks, operator guides, etc., all animated to help the user. Allen-Bradley even has a program viewer that can be embedded on an HMI screen, but you would probably need a technician to understand the code.

From a control system problem perspective, it is inevitable that you would need to use the vendor-based diagnostics to troubleshoot your system. Alarms may indicate that there is a system-related problem, but it is unlikely that you could build a true diagnostics application in your code to indicate the full spectrum of problems you may encounter. For example, if the CPU dies or the memory fails, there is nothing left to tell you what the problem is 🙂

From an application/process problem perspective, if you have to resort to a technician going into the code to determine your problem, then your alarm and diagnostics code is not comprehensive enough! Remember your alarming guidelines: an operator must be able to deal with an abnormal situation within a given time frame. If the operator has to rely on a technician to wade through code, then you really do need to perform an alarm review and build a better diagnostics system.
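One common diagnostics pattern that helps here is a “first-out” capture: latch whichever trip condition appeared first, so the operator sees the root cause rather than the flood of consequential alarms. A minimal sketch (condition names are illustrative):

```python
class FirstOutLatch:
    """Latch the first trip condition seen ("first-out" capture)."""

    def __init__(self):
        self.first_out = None

    def scan(self, conditions):
        """Call once per scan with {name: tripped?}; returns the latched cause."""
        if self.first_out is None:
            for name, tripped in conditions.items():
                if tripped:
                    self.first_out = name
                    break
        return self.first_out

    def reset(self):
        """Operator reset after the trip has been investigated."""
        self.first_out = None
```

In a PLC this is typically a handful of latch rungs per trip group; on the HMI, the latched cause is what gets highlighted on the Cause & Effects screen.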

Motor design

When I was doing my PhD in motor design of reluctance machines with flux assistance (switched reluctance machines and flux switching machines with magnets and/or permanently energised coils), my supervisor was doing research in the field of sensorless control. It wasn’t the area of my research, but it got me thinking about it all. At the time I had thought (only in my head, as a PhD student daydream) that I would initially force a phase (or phases) to deliberately set the rotor into a known position, and then start the normal phase firing sequence to start and operate the motor under a normal load without any form of position detection. All this assumed I had first run the motor from stationary to full speed, at the normal expected load, with a position sensor, so I could link phase firing, rotor position, and timings together to create a “map” which I could then use to re-program a firing sequence with no position detection at all; but only if I could force the rotor to “park” itself in the same position every time before starting the machine properly, the “map” carrying the information needed to assume the motor changes speed correctly as it changes firing sequences while accelerating to full speed. But any problem, such as an unusual load condition or a fault condition (e.g. a short circuit or open circuit in a phase winding), would render useless such an attempt at control with no form of position detection at all. The induction machine, by contrast, runs sensorless on the grid, with only its electrical quantities being measured.