tcpdump

Alarm conditions and thresholding

6 posts in this topic

Hi all, I would like to know something. Between the PLC and the Master Station (MS), which more commonly determines whether a signal is in an alarm state? E.g. if pressure is above, say, 45 kPa, does the PLC itself determine that it is in alarm and then either interrupt the MS to notify it, or does the MS poll the PLC to find the alarm condition? Or perhaps the PLC only captures the pressure, and at the next poll by the MS the value of 55 kPa is compared against 45 kPa, so it is really the MS that is determining alarm states? All I want to know is which is more common, based on the SCADA systems that you have worked on.
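To make the two alternatives concrete, here is a minimal sketch of one MS poll cycle. Everything in it is invented for illustration (the tag names and the poll_plc() helper are not from any real driver):

```python
# Hedged sketch of the two models in the question above.
PRESSURE_SETPOINT_KPA = 45.0

def ms_poll_cycle(poll_plc):
    regs = poll_plc()  # one poll of the PLC's registers

    # Model 1: the PLC compared the pressure to 45 kPa itself and exposed
    # the result as a discrete alarm bit; the MS just reports it.
    if regs["pressure_alarm_bit"]:
        print("high pressure alarm (decided in the PLC)")

    # Model 2: the PLC only captured the raw value; the MS compares it to
    # the setpoint at poll time, so the MS owns the alarm decision.
    if regs["pressure_kpa"] > PRESSURE_SETPOINT_KPA:
        print(f"high pressure alarm (decided in the MS): {regs['pressure_kpa']} kPa")

# e.g. ms_poll_cycle(lambda: {"pressure_alarm_bit": False, "pressure_kpa": 55.0})
```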

If you're dealing with a system where the MS (PC) sends the alarm, there isn't much difference between the two conditions you stated. Whether the exact alarm is determined in the PLC or the MS is largely up to the PLC programmer. "Severity" is something else that most software packages include, but you could implement your own severity scheme at the PLC level. In both cases the SCADA software is probably polling/subscribing to the PLC registers at some regular interval. The tough part about event-driven communication is generalizing the handler on the receiving PC; I can't think of an application that works this way that isn't custom written. It turns out that a polled system, with a driver that implements caching and so on, works well for most applications. It may be chatty on the network, but it is functional.
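Roughly, such a driver boils down to a loop like this hypothetical one (read_registers() and on_change() are placeholders, not a real driver API):

```python
import time

# Rough sketch of a polled driver with a value cache. The cache turns a
# chatty poll into something that looks event-driven to the application.

def poll_loop(read_registers, on_change, period_s=0.5):
    cache = {}
    while True:
        for tag, value in read_registers().items():
            if cache.get(tag) != value:   # change detected against the cache
                on_change(tag, value)     # fires only on changes, like an event
                cache[tag] = value
        time.sleep(period_s)              # chatty on the wire, but functional
```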

Depends on how much reliance you place on the MS and the comms system. If 55 kPa means the plant goes burp if the MS is offline, and the operators have time to hit the big red button and run, then the alarm is OK in the MS. If at 55 kPa the plant goes bang, then I suspect alarm handling is better off in the PLC.

I have tried to implement alarms as much as possible on the SCADA side of things. This works well in a process environment. In a discrete manufacturing environment, however, it falls apart. The basic problem is event capturing: fast events which the PLC may react to may be missed entirely by the SCADA. There are really only two solutions. The first is to poll the PLC from the SCADA system at a high rate, eating up network bandwidth needlessly, especially when the PLC may be reacting to events that occur in under 10-20 milliseconds. The second is to do the event capturing and alarming within the PLC, with the results interpreted and handled at the SCADA level.

As an example, we had a problem where a fan was periodically shutting down inexplicably in a large annealing furnace... essentially a "process" system, so the discrete manufacturing concern is completely out of the picture. Any one of about a dozen possible interlocks could have been the culprit, but the event was so fast that the SCADA system missed it entirely every time. Once I switched over to doing the actual alarm handling at the PLC level, with the SCADA system simply providing pretty pictures and support, it became very obvious that there was a CT (current transformer) with a loose connection that was tripping the fan controls. The loose connection was acting up for only 10-20 milliseconds every few hours, but this was enough to cause the problem.
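In Python-ish pseudocode, the PLC-side capture amounts to something like the sketch below. It is a stand-in for what would really be a few rungs of ladder or some structured text, and all the names are invented: any interlock that drops out, even for 10-20 ms, is latched with a timestamp until the SCADA acknowledges it, and the earliest timestamp identifies the "first-out" culprit.

```python
import time

class FirstOutCapture:
    def __init__(self):
        self.trips = {}  # interlock name -> time it first dropped out

    def scan(self, interlock_states, scada_reset=False):
        """Called every PLC scan with {name: healthy_bool} for each interlock."""
        if scada_reset:
            self.trips.clear()                       # operator acknowledged
        for name, healthy in interlock_states.items():
            if not healthy and name not in self.trips:
                self.trips[name] = time.monotonic()  # latched at scan speed

    def first_out(self):
        # The interlock that tripped first, for the SCADA to display.
        return min(self.trips, key=self.trips.get) if self.trips else None
```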

Thanks for the information, guys. Just for your info, the SCADA system I work on has most of the alarms generated in the PLC. The MS, which polls the PLCs for information, can then collect data and generate its own alarms on a more systemic basis. For example, if the PLC at the power generator is giving low power alarms, then the MS can send an instruction to the PLC at the motor to stop running so as to preserve backup battery power. Does anyone else use a more systemic approach to control, rather than localised and independent PLCs controlling their own domains?
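A rough sketch of that kind of MS-level rule, with read_tag()/write_tag() and the tag names invented as stand-ins for the real comms layer (the point is simply that only the MS sees both PLCs at once):

```python
def run_system_rules(read_tag, write_tag):
    # Data collected from the generator's PLC on the last poll.
    low_power = read_tag("generator_plc", "low_power_alarm")

    # Systemic action: shed the motor load to preserve backup battery power.
    if low_power:
        write_tag("motor_plc", "run_command", False)
```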

As you create larger systems, you also become more vulnerable to communication failures. It becomes harder to shut down a portion of the system for maintenance without taking out the entire system. And communication speed becomes a serious bottleneck. For example, consider a large, high-speed assembly line: if it is truly high speed, communication speed is the limiting factor, and localized PLCs are needed purely to achieve subsecond reaction rates.

If nothing else, you could probably put in one very large PLC with all Ethernet I/O, or huge wiring runs doing all the I/O locally, and achieve the same thing for perhaps $20K-$50K, compared to using 4 or 5 little Omron PLCs for under $1000 each. I don't know about you, but I'd rather be troubleshooting the Omrons than sorting through wiring cabinet after wiring cabinet with prints that are never, ever up to date anyway.

In a large steam plant, on the other hand, it is possible to have one PLC for a cooling tower, another for the boiler and recuperator, perhaps one for each burner or bank of burners, and so forth. But since you are essentially running one very large piece of equipment, and reaction rates of around 1 second are plenty fast, there's really no reason to have a dozen PLCs. It becomes much more complicated to design inter-PLC communication to deal with system-wide changes and control over the process. This is a case where one large PLC, often with lots of distributed I/O modules, makes the most sense. If you've ever tried troubleshooting inter-PLC communication with intermittent problems, you can very easily appreciate why a single large PLC is easier to work with.

On the other hand, it only takes one "oops" that faults a processor or causes some unintended consequence in that same steam plant while doing online edits to convince everyone that the system needs to be divided, either logically, physically, or both. This is where maintainability becomes a big issue. Taking that same example, if there's an accident and a communication line gets destroyed, you've lost control if it's all one PLC. If you had a bunch of little ones, you could perhaps put everything in one area in "manual" or "semi-auto" control and drive it all temporarily until the required repairs are made. Sometimes, if you plan for these things, you can do it "on the fly" (see the sketch at the end of this post).

I had a situation a couple of years ago where the SCADA/HMI was very tightly integrated into the melt shop of a large foundry. For all practical purposes, this is a necessity (calculations and a database interface for chemistry control; a very large distributed control system). It was linked via a fiber backbone designed as a star network. The fiber network was accidentally damaged and we lost the link to the system. My first action was to take a laptop up to the area we had lost and connect it to the PLC with the PLC programming software. It was a bit tricky, but in about 5 minutes we were able to crudely keep the system running without shutting down the casting shop or causing any other hiccups. The next step was to take one of the redundant servers from the server stack and physically set it up in the melt shop. After setting up a bunch of PCs to operate without regular network support (no DNS, DHCP, etc.), we had everything up and running more or less "normally" in about 90 minutes. That gave us the 3-4 hours needed to get a new fiber cable in, pull the new run, terminate the ends, and switch everything over to the new cable.

Using your example of one very large system, we would have lost thousands of dollars and perhaps been shut down for a week. There would be "failsafes", but fail-safe does not necessarily mean that the equipment will be 100% operational after something as simple as a broken fiber (or a failed switch)... it just means that nobody gets hurt as the equipment shuts down.

Another example is a large kiln for cement or lime. The "front" and "back" ends are physically located about 1000 feet apart, and frequently there may be 3 or more kilns side by side. The most logical setup for these operations is one PLC for the "front" end and one for the "back" end of each kiln. It takes roughly 3-5 days to start up or shut down, and a "hot idle" situation is a serious issue not taken lightly at all. It is simply not practical to make even small PLC modifications except during the shutdowns that occur once every year or two. If you had one big PLC running all the kilns, maintenance would be effectively impossible in many cases.

An in-between case would be, say, 4 small manufacturing lines in parallel, each consisting of about 100-200 I/O points. You could go for 4 individual PLCs, or perhaps buy two larger PLCs and configure them as a redundant pair. This adds complication to the system overall but could allow you to survive some types of hardware failures. It would also allow you to temporarily "split" the 4 production lines into 3 lines running the "old" software and one line running a new experimental version.

The "old" DCS model is that you have a single large distributed "DCS" system that may encompass, say, an entire refinery or some other sort of production plant. The "reaction rate" of the DCS could be 10 seconds. The DCS runs dozens of PLCs by reading information and adjusting setpoints at each of the PLCs; the PLCs in turn run the localized equipment and carry out the marching orders from the DCS. In those days a PLC didn't even have a PID instruction and couldn't (easily) run a totalizer, so a DCS was almost a requirement. These days the PLCs have absorbed almost every function of a DCS, and the rest have been absorbed by SCADA/HMI systems. The result is that there are very few operating DCS systems left, although the new breed of "MES" systems looks a lot like the DCS model.

The PLCs are changing, too. The new Allen-Bradley ControlLogix PLCs allow you to have multiple "tasks", each (potentially) independent of the others, with its own faults, etc. You can also very easily have multiple processors which share I/O, memory, etc. It is much easier to think of a ControlLogix system conceptually as one very large system with multiple "compute modules".
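As promised above, a rough sketch of the "keep running standalone when comms die" idea, in Python purely for readability; in practice this would be logic in each local PLC, and every name and value below is invented rather than taken from the actual system. The MS bumps a heartbeat register on every poll; if the local controller stops seeing it change, it falls back to a safe local setpoint instead of the remote one.

```python
import time

HEARTBEAT_TIMEOUT_S = 5.0
LOCAL_FALLBACK_SETPOINT = 45.0   # invented safe value

class SetpointSelector:
    def __init__(self):
        self.last_beat = None
        self.last_change = time.monotonic()

    def select(self, ms_heartbeat, ms_setpoint):
        now = time.monotonic()
        if ms_heartbeat != self.last_beat:       # MS is alive and writing
            self.last_beat = ms_heartbeat
            self.last_change = now
        if now - self.last_change > HEARTBEAT_TIMEOUT_S:
            return LOCAL_FALLBACK_SETPOINT       # standalone/manual mode
        return ms_setpoint                       # normal supervisory mode
```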

