Category: Iacdrive_blog

Motor starting time to reach full speed

This is not easily answered, since there are many variables at play that affect the starting time. For a large medium-voltage motor, it is recommended that a motor starting analysis be performed so that proper control and protection of the motor can be set. The motor manufacturer is a good place to start for a motor data sheet and torque curves; those should give you some good starting-point data. Such an analysis can provide inrush current, voltage dip, and starting time.

The time that any motor takes to run up will depend on the actual load on the shaft. In broad terms, the larger the load (relative to the rated output), the longer it will take to run up. I would have expected 2-2.5 MW motors to be manufactured to run on 10-11 kV and DOL (direct-on-line). The start-up times of these motors would typically be between 45 seconds (no load) and 3 or 4 minutes (depending on the type and magnitude of the load).
I also tend to agree that if the feed valve is shut, the motor will not initially see a significant load and should run up quite quickly.

I would start with the Te time constant of the motor as the worst-case starting time. If you intend to let your motor live long, you should design its protection to avoid starting times longer than Te, or even close to it. As for the specific application, it is always trial and error, but the guiding principle should be: start at minimum load and increase it gently (some motor protection relays guard the rate of load increase).
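
For a first-cut estimate before a full starting study, you can integrate the net accelerating torque over speed: t = integral of J dw / (Tmotor(w) - Tload(w)). Here is a minimal Python sketch; the inertia, motor torque curve, and load curve are assumed, illustrative stand-ins, not data for any of the motors discussed above, so substitute the manufacturer's curves.

import numpy as np

# Assumed illustrative data; replace with the manufacturer's curves.
J = 1500.0                            # total inertia (motor + load), kg*m^2
w_full = 0.985 * 1500 * 2*np.pi/60    # target speed, rad/s (1.5% slip, 1500 rpm)

w = np.linspace(1e-3, w_full, 2000)          # speed grid, rad/s
T_motor = 12e3*(1.8 - 0.8*(w/w_full)**2)     # crude stand-in motor torque, N*m
T_load = 6e3*(w/w_full)**2                   # fan-type load torque, N*m

# t_start = integral of J/(T_motor - T_load) dw, by the trapezoidal rule
integrand = J/(T_motor - T_load)
t_start = np.sum(0.5*(integrand[1:] + integrand[:-1])*np.diff(w))
print(f"Estimated run-up time: {t_start:.1f} s")

If the net torque approaches zero anywhere on the speed grid, the integral blows up, which is exactly the real-world case of a motor that cannot accelerate its load past that speed.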

Popularization of SPICE

I am currently writing a bullet-point history of the popularization of SPICE in the engineering community. The emphasis is on the path SPICE has taken to arrive on the most engineering desktops. Because of this emphasis, my history begins with the original Berkeley SPICE variants, continues on to PSpice (its limited but free student version made SPICE ubiquitous) and culminates with LTspice (because, at over three million downloads, it has reached many more users than all other SPICE variants combined).

I have contacted Dr. Laurence Nagel (the father of Berkeley SPICE) and Mike Engelhardt (LTspice) in order to verify the accuracy of the historical account (haven’t had a chance to fold in Dr. Nagel’s corrections yet), but I am lacking solid information about the beginnings of PSpice (I don’t even know who the technical founders of MicroSim were). Ian Wilson was an early technical V.P. Also, I am not sure what the PSpice acronym means. (Seems to me that it started out as uPspice?)

Here is what I have recently found about PSpice (more info appreciated):

User’s Guide to PSpice, Version 4.05, January 1991
From Chapter 1: INTRODUCTION, Section 1.1 Overview, starting with paragraph 2 (page 3):

“PSpice is a member of the SPICE family of circuit simulators. The programs in this family come from the SPICE2 circuit simulation program developed at the University of California at Berkeley during the early 1970’s. The algorithms of SPICE2 were considerably more powerful and faster than their predecessors. The generality and speed of SPICE2 led to its becoming the de facto standard for analog circuit simulation. PSpice uses the same numeric algorithms as SPICE2 and also conforms to the SPICE2 format for input and output files. For more information on SPICE2, see the references listed in section 13.2.1.4 (page 427), especially the thesis by Laurence Nagel.

“PSpice, the first SPICE-based simulator available on the IBM-PC, started being delivered in January of 1984.

“Convergence and performance is what sets PSpice apart from all the other ‘alphabet’ SPICEs. Many SPICE programs became available on the IBM-PC around mid-1985, after Microsoft released their FORTRAN compiler version 3.0. For the most part, these SPICEs are little modified from the U.C. Berkeley code. Using benchmark circuits, we find that PSpice runs anywhere from 1.3 to 30 times faster than our imitators. In the area of convergence, PSpice has a two-year lead in improving convergence and a customer base that is larger than all of the other SPICE vendors combined (including those SPICEs offered for workstations and mainframes). This larger customer base provides more feedback, sooner, than any other SPICE program is likely to receive.”

From Chapter 1: INTRODUCTION, Section 1.4 Standard Features, last paragraph (page 7):

“PSpice, version 3.00 (Dec. 1986) and later, is a complete re-write of the simulator into the ‘C’ programming language. It is not a version of SPICE3, from U.C. Berkeley, which is also written in ‘C’. MicroSim has overhauled the data structures and code, however the analog simulation algorithms are similar and the numeric results are consistent with SPICE2 and SPICE3. Having the simulator re-written in ‘C’ allows faster development, allowing our team to reliably modify and extend the simulator in several directions at once.”

From the January 1987 Newsletter: PSpice went from version 2.06 (Fortran) to version 3.00 (C). Speed increased by 20%. PSpice 3.01 (Dec 86) introduced the non-linear Jiles and Atherton core model.

From the April 1987 Newsletter: PSpice 3.03 (Apr 87) introduced ideal switches.

From the July 1991 Newsletter: PSpice announced Schematics at the June 1991 Design Automation Conference. (Became available when PSpice 5.0 shipped in July 91?)

High starting torque, synchronous motor, induction motor or DC motor?

It depends on much more than the simple requirements listed of high starting torque and variable speed. What kind of application is it for? Is it on an automobile (where you have DC already) or in a factory? Do you have the budget and/or space for a variable frequency drive? A synchronous servo motor gives great dynamic control and great starting torque per volume, but its speed range is limited (unless you use field weakening to push past the limit set by the back EMF). Servo motors are also the most expensive, due to their position sensors and more intelligent drives.

With a proper soft-start drive you can go with an induction motor, but it depends; if the power is small, you could also use a stepper motor. As others have said, though, a DC series motor's starting torque is high.
DC series motors have high starting torque, but induction motors have a wide range of speed control. So if a DC motor is used, you can use DC drives, although they are expensive, and DC motors are harder to maintain than AC motors because of commutation problems.

A DC series motor would provide both the high starting torque and adjustable speed, BUT beware that DC motors have high maintenance costs and also require AC-DC conversion. You could use other available options, e.g. double-wound induction motors, depending upon your requirements.

But today, there is no application where you cannot apply AC motors, asynchronous or synchronous. If the motor and the associated power electronics are correctly rated, you can have any starting torque you want.

The typical application of DC series motors was in locomotives. That technology was replaced by AC motors some 20 years ago. The latest generation of high-speed trains uses synchronous, permanent magnet motors.

Simulator history

Power electronics has always provided a special challenge for simulation. As Hamish mentioned above, among the problems encountered are inductor cutsets and capacitor loops that lead to numerical instability in the simulation matrices.

In the 80s, Spice ran so slowly that it was not an option unless you wanted to wait hours or days for results, and frequently it failed to converge anyway. It was never intended to handle the large swings of power circuits and, coupled with the numerical problems above, was just not a feasible approach.

Ideal-switch simulations were used with other software to get rid of many of the nonlinearities of devices that slowed simulation down, but Spice really hated ideal switches as it would try to converge on the infinite slope edges.

Three universities started writing specialized software for converter simulation to address these shortcomings of Spice. Virginia Tech had COSMIR, which I helped write with a grad student; Duke University had the program which later became Simplis; and the University of Lowell had their program, the name of which I don't recall (anyone remember?).

All of these programs started before Windows came along, and they were fast and efficient. With Windows, the programming overhead of maintaining programs like these moved beyond the scope of what university research groups in power electronics could handle. Only the Duke program survived, with Ron Wong leading the effort at a private company. The achievements of Simplis are remarkable, but it is a massive effort to keep this program going for a relatively small marketplace (power supply companies are notoriously cheap, so the potential market does not get realized), and that keeps the price quite high. If you can afford it, you should have this program.

Spice now runs at a reasonable pace on the latest PCs, so it is back in the game. LTspice is leading the charge because it is free, and the models are relatively rugged. Now that speed is less of a factor, you can put real switches in, and Spice can handle them in a reasonable amount of time. (Depending on your definition of “reasonable”.)

PSIM was another ideal-switch simulator, and it eliminated the convergence headaches that plagued all the other programs by not having a convergence loop at all. You just cut the step size down to get the accuracy you needed. This worked fine for exploring power stages and waveforms, but was not good for fast transient feedback loops. As the digital controller people quickly realized, the resolution on the PWM output needed to avoid numerical oscillations is very fine, and PSIM couldn't handle that without slowing down too much.
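
To illustrate the fixed-step, ideal-switch approach, here is a minimal Python sketch of a buck converter simulated with explicit Euler steps: no Newton iteration and no convergence checks, with accuracy and PWM resolution set purely by the step size. All component values are assumed, illustrative ones, and continuous conduction is assumed to keep the sketch short.

# Assumed illustrative buck converter values.
Vin, L, C, R = 12.0, 47e-6, 100e-6, 2.0   # volts, henries, farads, ohms
fsw, duty = 100e3, 0.5                    # switching frequency, duty cycle
dt = 1e-8                                 # fixed step: 1000 steps per period
t_end = 2e-3

iL, vC = 0.0, 0.0
for k in range(int(t_end/dt)):
    sw_on = (k*dt*fsw) % 1.0 < duty       # ideal PWM comparator on a ramp
    v_sw = Vin if sw_on else 0.0          # ideal switch/diode node voltage
    # Explicit Euler state update; no convergence loop needed
    iL += (v_sw - vC)/L * dt
    vC += (iL - vC/R)/C * dt

print(f"Output after {t_end*1e3:.0f} ms: {vC:.2f} V (ideal average {duty*Vin:.1f} V)")

The price is the one described above: to resolve, say, a 10-bit digital PWM edge, dt must shrink proportionally, and the run time grows with it.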

When I left Virginia Tech, I felt the bulk of the industry needed a fast simulation and design solution so engineers did not have to add to their burdens with worrying about convergence and other problems. This is a hardware-driven field, and we all have our hands full dealing with real life blowups that simulation just doesn’t begin to predict.

I have observed in teaching over the years that engineers in a hurry to get to the hardware have very little tolerance for waiting for simulation. If you are building a well-known topology, about 2 seconds is as long as they will wait before they become impatient.

This is the gap that POWER 4-5-6 plugs. The simulation is practically instantaneous, and the program has no convergence issues so you design and simulate rapidly before moving to a breadboard. It is intended for the working engineer who is under severe time pressure, but would like some simulation to verify design integrity.

Power electronics design

If you are interested in power electronics design at the board or system level, I would recommend LTspice (note the correct spelling) by far above all the others. In addition to being superb for IC design (Linear Tech uses LTspice to design all their own ICs), it also has been specifically designed to run board level, switched mode simulations.

Because of its robust, excellent performance and because it is available at zero cost, LTspice has become the de facto standard SPICE, with by far more engineers using it than any other flavor of SPICE. LTspice allows 100 percent transportability and work sharing, i.e., anyone, even those who have not been previous users, can open your files and run your simulations (the free download is well under 10 MB, installs very quickly and is very system friendly: no cookies, messy registry alterations, or scattering of installation folders; removal, if you so choose, is easy and complete).

Like most versions of SPICE today, LTspice has a fine user interface, but that feature should be low on your list. Schematic entry is NOT where you will be spending most of your time when doing serious design work. Beyond a point, desktop eye-candy does nothing to help you understand your design and see its flaws and weaknesses (in fact, too many layers of hand-holding can just get in the way of that).

Personally, I never breadboard a design anymore until it has proven itself in LTspice (unlike with a breadboard, a simulated circuit’s internals are ALL easily viewable – a great boon for understanding tricky operation). For me, first hardware is always a complete layout (and matches the simulation every time). Of course, the old axiom “garbage-in, garbage-out” very much applies, which means I often spend a lot of upfront time verifying (and modifying and/or making) models to match their components’ data sheets. In fact, I would recommend doing that as a very worthwhile exercise and as something that should impress a potential employer.

When developing a design in SPICE, you will want to spend your time debugging your design, not your simulation or your simulator; therefore it is worthwhile to learn what a simulator needs to run smoothly (with LTspice, all that means is that the input has to be realistic). It took years of working with simulators and a lot of sweat and aggravation before the keys to problem-free simulations gradually crept into my understanding:

1. If possible, make all nonlinear circuit elements be functionally continuous with continuous derivatives (this is not possible for some component behaviors), and

2. *always* craft your simulations so that the nonlinear bits become linear at high frequencies (this is always possible). Nonlinear devices should never be stiff voltage sources. They should be Nortonized and shunted with small capacitances such that the capacitances (which are linear elements) dominate at small time steps.

3. Always verify that the building blocks of your simulation behave realistically (GIGO).

Follow these guidelines and you will never see the “time step too small” message (I have never met a simulation that couldn’t be made to run well). Note that many (if not most) vendor supplied models fail to meet these guidelines and will give you nothing but headaches if you try to use them “as is.”
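
To make guideline 1 concrete, here is a minimal Python sketch of one common trick: a diode characteristic whose exponential is continued linearly above an assumed knee voltage, so that both the value and the first derivative are continuous and the model becomes linear at large signals. The parameter values are illustrative assumptions, not taken from any particular device model.

import numpy as np

# Assumed illustrative diode parameters.
IS, N_VT = 1e-14, 0.0259    # saturation current (A), emission coeff * Vt (V)
V_KNEE = 0.8                # above this, continue the exponential linearly

def diode_current(v):
    v = np.asarray(v, dtype=float)
    i_exp = IS*(np.exp(np.minimum(v, V_KNEE)/N_VT) - 1.0)
    g_knee = IS*np.exp(V_KNEE/N_VT)/N_VT    # slope of the exponential at the knee
    # Same value and same slope at V_KNEE, so the derivative is continuous
    return np.where(v <= V_KNEE, i_exp, i_exp + g_knee*(v - V_KNEE))

print(diode_current([0.3, 0.7, 1.2]))   # smooth everywhere, linear beyond 0.8 V

Guideline 2 is the netlist-level counterpart: express the nonlinearity as a current source (Norton form) and shunt it with a few picofarads so that, at very small time steps, the linear capacitor dominates.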

Calculate current setting of overcurrent relay

You can calculate the current setting of an overcurrent relay using the following expressions:

Isetting ≥ (ks * Imaxopam) / (a * pi)
Imaxopam = kam * Imaxoptr

where:

Isetting - current setting of the overcurrent relay
ks - safety coefficient
Imaxopam - maximum operational current under which the overcurrent relay should not operate
a - reset (return) coefficient of the overcurrent relay (0.85-0.95)
pi - current transformer ratio
kam - coefficient accounting for the simultaneous self-starting of all asynchronous (induction) motors in the network after fault clearance (1-6)
Imaxoptr - maximum operational current of the power transformer

Having already explained the meanings of all these quantities, I would like to underline the difference between Imaxopam and Imaxoptr.
Imaxoptr is the maximum operational current of the power transformer under normal conditions, while Imaxopam is the maximum operational current after a fault has been cleared, and it includes the effect of the simultaneous self-starting of all induction motors in the network. A fault in the network causes a significant voltage drop, which decelerates all the induction motors in the network. After the fault is cleared, all the motors in the parts of the network that remain energized re-accelerate at the same time; this is significantly different from the normal situation, where motors start one by one. Because of this, the current after fault clearance is not the same as the current before the fault: Imaxopam is higher than Imaxoptr. Since there is no fault under these conditions, the relay setting must be above this value, and the relay must not operate under these conditions.

After calculating the current setting of the overcurrent relay, you need to check the sensitivity coefficient of the relay using the following expression:

ksens = Ifmin / (Isetting * pi)

where:

ksens - sensitivity coefficient of the overcurrent relay
Ifmin - minimum fault current (single-phase-to-earth fault or two-phase fault)
Isetting - current setting of the overcurrent relay
pi - current transformer ratio

The sensitivity coefficient should be greater than or equal to 1.5 for a fault at the opposite busbars (the busbars without the overcurrent relay), or greater than or equal to 1.2 for a fault at the end of the longest feeder starting from those busbars. The time setting should be selected so that the overcurrent relay waits for the other protection on the feeders (for example, distance protection) to act first.
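
Here is a minimal Python sketch of both calculations, using assumed, purely illustrative numbers (the coefficients and currents are not from any particular installation):

# Assumed illustrative inputs.
ks = 1.2             # safety coefficient
a = 0.9              # relay reset (return) coefficient, 0.85-0.95
kam = 2.5            # motor self-starting coefficient, 1-6
ct_ratio = 2000/5    # current transformer ratio (the "pi" above)
Imaxoptr = 800.0     # max operational current of the transformer, A primary
Ifmin = 6000.0       # minimum fault current, A primary

Imaxopam = kam * Imaxoptr                    # current after fault clearance
Isetting = ks * Imaxopam / (a * ct_ratio)    # relay setting, secondary amps
ksens = Ifmin / (Isetting * ct_ratio)        # sensitivity coefficient

print(f"Isetting = {Isetting:.2f} A secondary")
print(f"ksens = {ksens:.2f} (need >= 1.5 at opposite busbars, >= 1.2 at feeder end)")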

Insulating resistance measurement

Please remember that insulation resistance (IR) measurement and the associated polarization index (PI) test are just two of the many tools used for insulation system integrity analysis. Their value and repeatability depend on the environmental conditions at the time they are taken; as mentioned, temperature, humidity, and contamination all affect the reading.

The baseline figure should be obtained either at the factory or during initial commissioning (under factory-like conditions). Performing commissioning in the rain, on dirty surfaces, or in high humidity may result in low values for both dry-type and oil-filled equipment. A low reading in itself does not indicate insulation so bad that the machine cannot be returned to service.

The bottom line, lacking baseline or other data, would be this assessment:
1. The machine was running, and it was running OK before the test.
2. The leakage current at operating voltage will be V/R; the heat loss will therefore be I^2*R (see the short worked example after this list). Is that OK, or does it warrant some corrective measure?
3. PI may approach 1; is that OK or not? Is this truly an indication of wet insulation, or is the resistance merely low while the machine will still be OK when energized, as per 2 above?
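
As a worked example of item 2 (the voltage and resistance are assumed, illustrative numbers):

# Assumed: a 4160 V machine with a measured IR of 100 megohms.
V = 4160.0               # operating voltage, V
R = 100e6                # measured insulation resistance, ohm

I_leak = V / R           # leakage current at operating voltage, A
P_loss = I_leak**2 * R   # heat dissipated in the insulation, W (= V^2/R)

print(f"Leakage: {I_leak*1e6:.1f} uA, heat loss: {P_loss*1e3:.0f} mW")

Here the loss is a fraction of a watt spread over the whole insulation system, which is why a single low IR number, by itself, does not condemn the machine.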

IR and PI measurements, along with capacitance bridge / dissipation factor tests, power factor tests, and others, are performed to ensure insulation integrity for maintenance and commissioning.

If cables and equipment have gone through routine maintenance, it is good practice to perform these tests, making sure no grounds are left on before energizing.

Please read “A Stitch in Time” by Megger.

By itself, it is just a test. Interpreted in context, the test is meaningful.

Excitation system in generator

The excitation system requires a very small fraction of the total power being generated. If we could simply increase the excitation (a very small amount of power) and increase the generator’s real power output, the world’s energy problems would be solved, because we would have a perpetual motion machine.

In the case of a generator connected to a large grid, the generator will inject any desired amount of power into the grid if its prime-mover is fed the desired power (plus a small additional amount of power to take care of losses). This is true, regardless of the total load on the grid, because the generator’s output is an extremely small fraction of the total grid power, and it alone cannot make drastic changes to the grid’s frequency.

Normally, the load varies by a very small fraction of the total grid power. If the load increases, the frequency of the entire grid (including the generator in question) lowers a very small amount, generally less than one-hundredth of one Hz. The frequency slew (that is, the rate-of-change of frequency) is very low, because there is a massive amount of energy that is stored as the kinetic energy of the rotors of all of the generators. At this point, nothing needs to be done; the system simply runs a little faster or slower.
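
The swing equation puts a number on how low that slew is. A back-of-the-envelope Python sketch, with assumed, illustrative values for the grid size and aggregate inertia:

# Assumed illustrative grid: 500 GW of online generation, aggregate H = 4 s.
f0 = 50.0       # nominal frequency, Hz
S_sys = 500e9   # total rating of online generation, VA
H = 4.0         # aggregate inertia constant, s (stored kinetic energy / rating)
dP = 1e9        # sudden 1 GW load step, W

# Initial rate of change of frequency from the classic swing equation
rocof = dP * f0 / (2 * H * S_sys)   # Hz/s
print(f"Initial ROCOF for a 1 GW load step: {rocof*1e3:.1f} mHz/s")

About 12.5 mHz/s for a full gigawatt step: the kinetic energy stored in all those rotors really does make the frequency move slowly.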

Over time, as the load changes a greater amount, the frequency moves further from the nominal frequency (50 Hz or 60 Hz). When the difference between the actual frequency and the nominal frequency becomes greater than about 0.01 Hz, action is taken to make changes to the output of the grid’s generators.

The specific action may be determined by the regulating authority (for instance, a power pool in the US), and it is usually based on economics, subject to other constraints. If the load has increased (and the frequency is less than the nominal frequency), the generators that have the lowest incremental cost of power will be asked to increase their output, or, if all generators are near their limits, new generators (with the lowest incremental cost) are asked to come on line. It's important to note that a generator's limit is usually 80% or 90% of its rating. The 10% or 20% of unused capacity is the system's “spinning reserve”, which is used to maintain grid stability during sudden, large power variations.

The same thing happens with a generator connected only to its load or a weak grid with just a few other generators. However, because there is relatively little kinetic energy stored in the rotors of the one or few generators, the change in frequency associated with a load change is much greater, so frequency variations are much greater and corrective actions may not be implemented before the frequency varies by more than a few Hz.

Phase rotation errors

Phase rotation errors are not as rare as they ought to be. I've seen more than one building with a systematic phase rotation error. This can be prevented by carefully following the color coding system (brown-orange-yellow for 480 volt and black-red-blue for 208 volt systems in the US, for example) and tagging feeders at both ends to assure proper connections.

To check for proper phase rotation sequencing (ABC and not ACB) you can use a phase rotation meter. Without one, you can bump a three-phase motor that should be correctly connected and see if it turns in the right direction; if it's wrong, reverse any two phase wires between the source and the distribution equipment. However, if you have a tie breaker and intend to operate the secondaries of two transformers in parallel by closing it, that is not good enough. Both transformer distribution networks have to be connected correctly on all three phases. You have to check the voltage across each corresponding pair of terminals on the tie breaker and be certain they are all about zero volts. If you don't, and there is an error, closing the tie breaker (if that is possible at all; some electronic breakers may lock you out) will result in a phase-to-phase bolted fault that can severely damage your distribution equipment. Phase rotation errors are invariably the result of incompetent installation, inadequate specifications for feeder identification, and inadequate inspection.
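
If you can capture simultaneous waveform samples (with a scope or a power quality recorder), the rotation check can also be done in software. A hypothetical Python sketch; the sample rate, frequency, stand-in waveforms, and angle tolerance are all assumed:

import numpy as np

# Assumed: simultaneous samples of phases A and B over three cycles.
fs, f = 10000.0, 60.0
t = np.arange(int(3*fs/f)) / fs
va = np.sin(2*np.pi*f*t)                 # stand-ins; use real captures
vb = np.sin(2*np.pi*f*t - 2*np.pi/3)     # here B lags A by 120 degrees

def phase(x):
    # Phase of the fundamental via correlation with sine/cosine references.
    s = np.sum(x*np.sin(2*np.pi*f*t))
    c = np.sum(x*np.cos(2*np.pi*f*t))
    return np.arctan2(c, s)

# In an ABC sequence, phase B lags phase A by 120 degrees.
lag_ab = (phase(va) - phase(vb)) % (2*np.pi)
print("Sequence:", "ABC" if abs(lag_ab - 2*np.pi/3) < 0.3 else "ACB")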

There are times when the phase rotation error is made on the primary side of the transformer. If this happens, it can be compensated for by reversing the phase rotation on the secondary side. This is less desirable, but it will work. If you have multiple phase rotation errors in the same distribution network, you have quite a mess to clean up; it will be time consuming and expensive to track them all down and be certain you have eliminated them. False economies from cutting corners on the initial installation of substations and distribution equipment will necessitate very expensive and inconvenient repairs. If it is not corrected, you risk severe damage to three-phase load equipment.

Moving data around within memory of an individual PLC

The first question would have to be: why do you want to do it? If the data already exists in one location that is accessible by all parts of the program, why would you use up more PLC memory with exactly the same data?

Well, there are a couple of candidate reasons. One might be recipe data. You have an area of memory with a set of stored recipes for different products, and at an appropriate moment you want to copy a specific recipe from the storage area to the working area. The first thing to be said about that is that if your recipes are at all complex, and you have a requirement for a significant number of different recipes, then PLC memory is probably not the right place to be storing them. The ultimate, these days, of course, is that recipes are created by techies on PCs away from the production area, in nice quiet, comfortable labs or whatever, and are stored on a SQL server. Only the recipe for today's actual production run gets transferred to the PLC. But there are some applications with a limited number of different recipes where the recipes themselves are quite simple, and there it can be reasonable to store the recipes in PLC memory.

A second reason for copying memory areas within the same PLC is for procedures, sub-routines or whatever. But again, these days, all PLC languages have some sort of in-built facility for procedures – what Rockwell uniquely call Add On Instructions, what everyone else calls UDFBs – user defined function blocks. In any case, the point is that these facilities usually make all that memory management stuff transparent to the programmer. You just configure the UDFB and call it as required. The compiler takes care of all the memory data moves for you.

Another reason for copying memory, related to the previous one, is a technique much used by PLC programmers where they use an area of memory as a 'scratchpad'. They copy some unprocessed data to the scratchpad area, all of the operations on the data are performed in the scratchpad, and at the end they copy the processed data back again. Again, it is questionable how much this technique is actually required these days; I would suggest that in most cases there is probably a better way using a UDFB. But I have seen some programmers who routinely include a scratchpad area within any UDFBs they define.