Pierre

1784-U2CN

6 posts in this topic

Thinking of setting up an RSLinx Gateway with 4 of those connections to 4 separate ControlNet networks. It should be able to serve the data quickly to other users through OPC. But can I use 4 x 1784-U2CN on 1 PC? The AB sales rep says "It should, I don't see why not." But then again, they have said many things like this in the past, and before I spend 8K... Has anybody done this? PS: Just like James Bond said, "Their word is not enough," or something like that.

Cannot comment on the 1784-U2CN, but have you considered a 1756-A7 rack with a 1756-PA72 power supply? Place four 1756-CNB modules and a 1756-ENBT into that rack and you can use RSLinx Gateway with Ethernet from the PC and bridge through the rack to all 4 ControlNet networks. List price on this setup is $8300, but I can say it works.

In fact, what we are trying to do is get away from Ethernet. There are 45,000 tags read by 10 PC's (in the plant, with RSView)... IT is now coming in with their superb ideas... and the system slows down too much. We already have RSLinx Gateway with OPC transfers, but this is still slow... I am looking into having a Gateway through ControlNet... talking to the PLC's in the production area... and another Gateway talking to the PLC's through the ENET sidecar, which they all have... the ENET would be reserved for the IT people... let them mess around and slow down their network... while we surf fast on the Gateway with 3 ControlNet links... Any suggestions? Thanks

Now this is a horse of a different color than what I understood from your original post. You have a plant of PLC's and 10 PC's, each running RSView, gathering data from the PLC's. You also have RSLinx Gateway doing OPC from the RSView stations to get data for other troubleshooting and SCADA work. The problem is that your data update/response speed is sluggish and poor. This calls for a network bandwidth analysis. At first blush, you probably want to set up a central data collector PC and serve data from it to all your RSView stations and other systems that need it.

Based on this, when you said "45,000 tags", my first guess would be RS-Linx. I'd still check everything else, but I'd start there. It's easy to check: open the task manager and look at the process load for RS-Linx. If it is getting much more than 5-10% CPU usage, that is your problem. Watch it for several minutes, because the processor load often fluctuates quite a bit with RS-Linx. RS-Linx is well known to have scaling issues beyond about 5,000 to 10,000 tags, and you are at 4-10 times that number. Throwing a faster or multicore processor at it won't help. The answer is not Controlnet. If anything, since you are going from a 10 or 100 Mbps network which scales linearly with bandwidth (assuming it is configured correctly) to a 5 Mbps network that does not scale, you are heading for a disaster. If everything on your current network is being done right, Ethernet is definitely not your enemy. Switching to Controlnet is going to be a very expensive solution to something that is probably not the issue. It's something else. First, let's address the bandwidth issue head on. When most controls folks think of "slow", serial communications comes to mind. In a serial world, bandwidth = speed; more bandwidth = more speed. So first let's do a quick check on whether raw bandwidth is really an issue. In Ethernet networks, bandwidth is almost never an issue. Let me repeat that: bandwidth is not an issue on Ethernet networks. OK, I can tell you don't believe me... Let's calculate the raw bandwidth you would actually need. This is a deliberately unrealistic worst case. You didn't tell me what the tags are, so I'll assume the worst: double-precision floats, which are 8 bytes.
At 45,000 tags, and assuming we are operating just below the speed of average human reaction times (350 ms), we need to push all 45,000 tags 3 times a second (every 333 ms), for a total bandwidth requirement of 8 bytes x 8 bits/byte x 45,000 tags x 3 scans/second / 1,000,000 = 8.64 Mbps. This is already beyond the capability of a single Controlnet (5 Mbps raw bandwidth), but a 100 Mbps Ethernet network should be pretty much just loafing along with not even 10% of the bandwidth in use. The protocols you are referring to (PCCC, Ethernet/IP, and OPC) are not even close to optimal in terms of bandwidth compared to the likes of, say, iSCSI, which can push hundreds of megabits per second; but these are the same protocols you'd be using across Controlnet anyways, so this isn't going to fix anything. If anything, if bandwidth is the issue, Controlnet would make it worse, since it is only 5 Mbps. With current Ethernet hardware running at 100 Mbps, the size of the packets is almost immaterial, and this is even more true at 1 Gbps (where the overhead actually exceeds data transmission time). Most of the communication delays (and bandwidth) are limited by the overhead of the communication itself: transmitter startup delays, synchronization pulses, and physical hardware switching times for the packets to pass through the internal switch fabric. On the software side, data payload is also not critical. The majority of the CPU cycles are spent looking at the packet headers and determining what to do with the packet, rather than on processing the data payload. Thus the focus should be on packets per second, NOT bits per second, which are less important in the overall picture. Let me repeat that: your primary focus should be on optimizing the number of packets per second; bandwidth should not be your primary concern. My ridiculous example shows that even with numbers totally blown out of proportion, bandwidth is not a limiting factor in your system.
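The arithmetic above can be sanity-checked in a few lines of Python. This is just a sketch of the post's own worst-case figures (45,000 tags, 8-byte doubles, 3 scans per second); the function name is mine.

```python
# Worst-case raw bandwidth check: 45,000 tags of 8-byte doubles
# pushed 3 times per second, ignoring all protocol overhead.

def raw_bandwidth_mbps(tags, bytes_per_tag=8, scans_per_sec=3):
    """Raw payload bandwidth in Mbps (bits per second / 1,000,000)."""
    return tags * bytes_per_tag * 8 * scans_per_sec / 1_000_000

need = raw_bandwidth_mbps(45_000)
print(f"required: {need:.2f} Mbps")                    # 8.64 Mbps
print(f"exceeds ControlNet (5 Mbps)? {need > 5}")      # True
print(f"share of 100 Mbps Ethernet: {need / 100:.1%}") # 8.6%
```

Even this deliberately pessimistic number is under a tenth of a 100 Mbps link, while already exceeding ControlNet's raw capacity.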
Something else is, and that something else is going to involve a variety of sources. If I had to point to one single issue, I'd guess it's RS-Linx. However, that's not the only culprit. So let's go through the whole issue of optimizing your communication. The second argument that you might have heard about Controlnet vs. Ethernet is that Controlnet (Arcnet) is "deterministic" whereas Ethernet is "nondeterministic". About the only thing deterministic about Arcnet is that its purveyors are hell-bent on selling $1200 NIC cards vs. $200 NIC cards. They tried to make this argument to the IT community in the 80's. Today, you'd have a hard time even purchasing an Arcnet card for a PC. I don't know if a PCI Arcnet card was ever even produced. Allen Bradley is regurgitating this old myth. The argument bounced around academic circles in the IEEE committees when Arcnet and Ethernet hatched. Unfortunately, the issue was settled long ago, but the urban myth lives on. Ethernet can be just as nondeterministic or deterministic as you want to make it. My vote is deterministic. The vote of every network engineer out there is for deterministic, but they aren't lining up to rip out their Cisco equipment and put in Arcnet. So you really need to examine the source of this myth. When Ethernet started, it was a hub-based architecture, or connected by coaxial cables. In fact, electrically, Arcnet (err, Controlnet) and Ethernet were identical: same physical cabling standards. The way the cabling was used was quite a bit different. From a hardware perspective, all the nodes on the coaxial or hub-connected cabling are on a wired "radio link": all nodes hear the packet broadcast from any other node on the same connection. This means that communication is limited only by the propagation delay of the physical cabling. However, there is no "FDMA" or other means of allowing each receiver to listen to more than one packet at a time.
If anything, both Ethernet and Arcnet are TDMA: each packet uses almost the full cable bandwidth, and timing is used to keep packets from garbling each other. When any node on the network sends a packet, all other nodes have to remain silent to hear the packet. With Controlnet, this was handled by scheduling. You can use the entire capacity of the network, which is 5 Mbps, because every little millisecond of time is accounted for. Scheduling means deterministic. With Ethernet, it was more of a "Wild West" scenario. A transmitter first listened to make sure no other nodes were transmitting and then fired away! This introduced a delay in communication compared to Arcnet. Occasionally, two nodes would attempt to send a packet at the same time due to propagation delays. When this happened, nobody would receive the packet. Because of the electrical characteristics of Ethernet, the transmitters would detect the collision too and reschedule the packet for retransmission at a (random) later time. On Arcnet, latency is always high regardless of bandwidth, because all nodes have their time slots pre-allocated. You can adjust this somewhat (with the NUT), but in the end, latency on a lightly loaded Arcnet network is high (but adjustable). With Ethernet, latency was almost nonexistent on a lightly loaded network; latency was simply a non-issue compared to Arcnet. In fact, in the bad old days of 10 Mbps Ethernet you might have to buy a switch, which was a fairly expensive piece of hardware, but it was always easy to scale up Ethernet: just add enough switches when the latency gets bad. As the amount of traffic on Arcnet increased, latency remained fixed, until eventually the NUT would have to be updated. So effectively, if you look at the overall picture, latency on Arcnet was pretty close to linear with the amount of traffic, which is what you expect: 2 Mbps worth of traffic should have twice the latency of 1 Mbps worth of traffic. Ethernet was different. First, there's no NUT.
Second, packet delays were somewhat random (jittery). Third, due to the nature of the collision-based "scheduling", as bandwidth increased, latency did not go up linearly. In fact, it went up exponentially! At roughly 1/3 of the raw bandwidth, or 3 Mbps, hub-based Ethernet in theory goes to infinite latency. In practice, Ethernet was limited to about 1-2 Mbps before the latency became completely unmanageable. The solution for Ethernet came with Ethernet switches. With a switch, each port on the switch is electrically isolated from the others. Collisions do not happen "across" the switch, only with the nodes on a given connection on each port. At the time, switches were slow and very expensive. Two things happened to change this. First, volume increased. As more switches were sold, manufacturing costs decreased and so did the prices. Second, the original switches were microprocessor based. Latencies across switches were horrible. Eventually, switches were designed based on ASIC's. These switch designs operate at or near "wire speed". Switch latency these days is very close to the propagation delay of a wire, making the latency difference between hubs and switches meaningless. As switches came down in price, CAT 3 (and later CAT 5) cabling displaced coaxial cable. Twisted pair cabling is much cheaper to manufacture, but you can't (easily) make "Tees" with twisted pair cabling so it can only be used if there are direct connections between devices and hubs or switches. Once the price of switches came down, in combination with CAT 3 (later, CAT 5) cabling it became cost effective to build a 100% switch-based network. Now the size of the collision domains (number of nodes that can interfere with each other) dropped to 2 (the switch and the device connected to it). Almost all devices and switches today also support full duplex mode (except older PLC-5's and SLC's). 
If there is only one device connected to a switch port and no hubs, then the two devices can send and receive 2 packets at the same time (one going in each direction) with no collisions at all. The size of the collision domain is reduced to 1: collisions cannot occur, and the switches and devices can transmit at full speed in both directions with no interference at all. The resulting full duplex Ethernet network is deterministic. The packet delay is now linear with the bandwidth, and it is deterministic: knowing the packets per second and the network latency tells you the delay of any packet. About the only argument that can still be made is for motion control: there is still some timing jitter because the scheduling of the packet is not controlled. The slight random delay in when a packet gets transmitted through the various buffers in the system can affect the sub-millisecond requirements of a motion control system. This can be overcome by measuring the delays (via IEEE 1588). When the network traffic is HMI traffic, however, this argument is meaningless. With that out of the way... PLC's are not really designed to handle much in excess of perhaps 1000 packets per second for the older generations. Even ControlLogix is specified to top out at about 5,000 packets per second, and it's really not a good idea to even get close to that limit (you will notice). Bandwidth is a factor, but packets are the major consideration. PC's can usually handle 10K-20K packets per second, and a friend of mine who routinely pushes the limits running a regional telecom pushes PC's out to 50K-60K packets per second running some low end protocols such as DHCP and DNS using BSD or Linux-based software (Windows is too slow and too unreliable for these applications). If you look at file I/O as an example, however, PC's can easily handle several tens of megabits per second or more. The overall limitation isn't the data itself.
It's the overhead involved in processing individual packets rather than the payloads. You hit this brick wall well before bandwidth utilization gets over 1-2% as shown in the task manager, which only measures Mbps. If you are experiencing a slow control network, the first step should be to get the obvious bottlenecks out of the way in terms of hardware/software. Then spend all your time focusing on reducing the number of packets per second. PLC protocols by nature are designed to operate with very small packets (often 100 bytes or less), and lots of them. In terms of hardware... First, check all your hardware. The goal is to make the network a non-issue. Check if you have any hubs in your Ethernet network. If you have any, remove them and throw them in the trash can. Hubs are limited to 3 Mbps at most, with capacity for nearly infinite latency and lots and lots of timing jitter due to collisions. In these days of sub-$200 unmanaged industrial DIN-mount switches with cut-through switching (which matters at 10 Mbps; it isn't even needed at 100 Mbps), hubs are only useful if you are too cheap to buy a managed switch for troubleshooting. If you can't afford to get rid of them, you can't afford to maintain a 45,000 tag system either. They should NEVER be found in an operating production system. Second, check all your cabling. It should all be either CAT 5, CAT 5E, or CAT 6. If you find any flat telephone cable (CAT 3), cut it up and throw it away too. Do not use CAT 3 for anything other than phone lines, and even then, CAT 5 is cheaper and superior in every way. CAT 3 is limited to 3 Mbps at best, and even then, you're really pushing what it is capable of. Cisco switches in particular perform very badly with this cabling. If you can, buy CAT 5E, which is just CAT 5 with the specs tightened up. Don't waste your money on CAT 6. It is supposed to be "future proof" but it isn't: the next higher speed (10 Gbps) that needs a better cable than CAT 5E runs on CAT 7.
There are no network standards developed to use the extra bandwidth of CAT 6. Third, check all your connection speeds. If you have older PLC-5's or SLC's, most of them are only capable of half duplex 10 Mbps, and that's just the way it is. At over $10K for a new PLC-5/40E (which does support 100 Mbps full duplex Ethernet), it is less expensive to upgrade to a 1756-L61 processor. Effectively, there's nothing you can do about the SLC's and PLC-5's in your system except live with them until it comes time to replace them entirely. But you will still get almost no collisions if you connect them to a switch with CAT 5E cabling, and the PLC's themselves aren't capable of handling 10-20K packets per second anyways, so even if you had a newer PLC-5, you're not going to get much more throughput than you can out of an older one. Beyond those, however, all your connections should be 100 Mbps full duplex minimum, and 1 Gbps where practical. I try to always run 100 Mbps to individual PC's except the server, and run 1 Gbps between switches whenever practical, except for "local" switches that are doing things like forming the backbone of a distributed I/O network where I'm never going to let the bandwidth get even close to 10 Mbps. Check to make sure that you aren't funnelling everything through a 10 Mbps link somewhere. I know I said earlier that bandwidth doesn't matter. It really doesn't, but your backbones should still be significantly larger than your individual links. It minimizes the effect of "cross traffic", such as if you upload/download new software to a PLC or pull up your E-mail from the plant floor. Even though your HMI I/O is low bandwidth, sharing a link with a high bandwidth data transfer can still mess up a carefully balanced system, at least temporarily, and bandwidth is relatively cheap to fix.
I like to run 1 Gbps fiber or copper links for the backbone because the extra switch cost is not that much, and 100 Mbps drops to everything except servers that can really utilize a full gigabit connection. I strongly suggest that you separate your controls network from your IT network. There are a couple of ways of doing this. The simplest way is to just put in your own switches and maintain your own Ethernet network. Provide exactly ONE connection between the IT network and the controls network. If you can trust the IT people not to screw up their configurations (a simple network analysis tool such as Wireshark will verify whether you can), then you can use a switch for that. Otherwise, you need some heavy duty protection, and you need to look into using either a router or, better yet, a firewall. Hirschmann makes an excellent and very easy to configure industrial network firewall. I'm much less impressed with pretty much anything Cisco makes except for office grade equipment, and that includes the relabelled stuff they are selling under the Allen Bradley name. The other way of separating office and industrial LAN's would be to use a VLAN. If you do this, pay particular attention to how the switches are configured. I highly suggest that you personally learn about Cisco IOS and look at the configuration settings on the Cisco switches and verify all the settings, even if you don't control them. One screwed up setting from IT can completely ruin your chances of having a successful system. Among other things, you need bandwidth guarantees: either use QoS (make sure your control network has priority over office traffic) or hard bandwidth reservations. Since IT thinks their PC's are more important, you may be forced into buying and maintaining your own network hardware, if for no other reason than that you can't play well together.
I've been able to successfully play well with IT in most cases simply because I control something they need/want (more resources in terms of money and manpower for maintaining network hardware). So usually we have been able to come to a mutually beneficial arrangement, although it usually means that I end up responsible for maintaining the wiring while they maintain the switch software settings (per mutually agreed upon specs), and I end up stuck with a lot of Cisco hardware on the controls network. IT guys are territorial, and your E-mail says you are too. Usually I can get them to agree to a common specification for settings, allocation of IP addresses, etc. I will foot the bill for a few switches if I get to dictate some hardware specs (port speeds and network layout), unless they start getting ridiculous and want to put stackable $10K 3560 switches on everything. I let them argue endlessly about whether to run fiber or copper, which model of Cisco switch to buy, and whether to use GBIC or SFP ports. I just ignore all that as long as they don't go crazy putting fiber to everything from the pole to the coffee maker (at about $350 per port), or doing anything that makes maintenance on off hours impossible (I threaten to make them sleep with their crackberries and answer 3 AM phone calls any time there's a PC/network issue). I set the requirements for QoS and bandwidth on my equipment. If I have an I/O network, I run my own switches and make exactly one uplink back to the IT network/backbone. By the way, I'm a troll in the IT world. Me carry big, mean, brutish bag of dirty, rusty Ethernet bridge fixin tools. Me got keys to everything. Me got project dollars and maintenance resources. Me get dirty fixing and pulling cable, and me no care. IT spend all day playing WoW at work, me no care. Troll pay for high speed porn service. Troll pay for crackberry fix. Troll cover for IT. Troll go to bat for IT. Me want good Ethernet bridge. Me want solid Ethernet bridge.
Me want best troll bridge in business. Good bridge, troll happy. Production happy, troll happy. Production not happy, maintenance not happy. Maintenance not happy, troll NOT HAPPY. Troll can't fix, you help fix. You no help fix bridge, troll VERY UNHAPPY. If troll too unhappy, troll take toys away, and you fix bridge. Troll drag you through dirt fixing cable. Troll fix broken switch...troll use screw driver and wire to break switch more. Troll still unhappy, troll call you at 3 AM. Troll call boss on Sunday at 2 AM. Troll call wife when you at bar. Unhappy troll gives keys to IT doors to production and maintenance. Unhappy troll report WoW and porn. If troll mad and you no fix bridge, troll builds new bridge, makes production happy, throws IT off old bridge, and keeps both. Mad trolls very, very bad. No make trolls mad. The biggest concern, and the reason for keeping your networks logically or physically separated (other than if you need to grow warts and turn into a troll), is controlling packet broadcasting. Some protocols like to send out everything as a broadcast packet and can cause a significant packet load. VoIP phones in particular can be very bad about this. If IGMP is not configured correctly and you are using ControlLogix PLC's and Ethernet-based I/O, even Ethernet/IP can be very bad about this. Even "low bandwidth" protocols like DHCP can become monsters. Imagine the storm of DHCP packets that every office PC in the plant causes when the power goes out and comes back on, for instance. Suddenly very innocuous office PC's flood the network with DHCP packets, which can spill over into your controls network and screw it up, even if the controls network never lost power. Also watch out for cross-traffic. A classic example: my office is about 6 miles from the main plant. We use a dedicated point-to-point wireless link to the main plant for all traffic. We've been noticing lots of trouble lately.
Finally, we narrowed it down to a newly installed security camera that had its bandwidth setting set to "unlimited". The camera was eating up ALL the wireless bandwidth. Fixing it was just a matter of adjusting that setting. We also set up a separate, bandwidth-limited VLAN, so that if something like this happens again, a badly configured camera may cause problems, but it won't interfere with the other traffic. You will want the same kinds of protection for PLC traffic vs. PC traffic. So...just be aware that this problem exists. Physically or logically separating your network into distinct areas isn't just for that favorite IT word, "security". It's to deal with the very real and practical problem of making sure that traffic goes where it is intended to go. This addresses the hardware issues. Now for software issues. First up, RS-Linx. RS-Linx is NOT, I repeat, NOT scalable. It chokes once you get to around 10K-20K tags, and you are WAY beyond that limit. You can visibly tell this if you check your task manager for processor usage. If RS-Linx is sucking down more than 5-10% CPU on a single core, then you have gone beyond the tag limit of RS-Linx. Running multiple drivers won't help you. Architecturally, you are done. So before you do much else, you need to switch to a 3rd party OPC server. Of the ones that are out there, Matrikon has scalability issues too, but not as bad as RS-Linx. I suggest you go to Kepware. They are well known for their scalability and performance. That's step one. In fact, if you take nothing else away from what I'm saying: I'm about 80% certain that RS-Linx is the biggest bottleneck in your system. Controlnet isn't making this go away, and neither is Ethernet. Only a scalable OPC server is a viable solution. Step two. You need one, or perhaps two (for redundancy), OPC servers running as data aggregators.
If you are running two of them, carefully think this out and come up with a solution that comes as close as possible to appearing as a single aggregator on the network, such as installing redundancy software (Kepware's RedundancyMaster or Matrikon's Redundancy Broker) on all of your RS-View 32 clients. I'd prefer to just virtualize everything, but this is a much more complicated subject. Do NOT poll your PLC's directly from each RS-View 32 PC. If you do this, it will be SLOW. PLC's simply cannot handle the load of both serving out packets through a dozen connections AND doing what they are meant to do. Older PLC's are capable of around 1000-2000 packets per second at most. ControlLogix can get to about 5000 packets per second (double that with an EN2T). A typical PC with just a 100 Mbps Ethernet card is easily capable of doing 20,000 packets per second or more. My friend in the telecom business routinely pushes pretty much stock PC's used as DHCP or DNS servers running BSD or Linux to 50,000 packets per second or more without any sort of hardware assistance (such as a TOE NIC card). So as long as the software is written properly, you need just one PC (or perhaps two at most) gathering data from your various PLC's. Those PC's in turn can handle dozens if not hundreds of PC's requesting data. The OPC server PC's need to be dedicated to this use so that you don't see unpredictable slowdowns or other things happening that have nothing to do with your PLC communications. Do NOT run RS-View 32 on them and try to use them as operator stations. Don't use them for anything else unless you monitor them carefully to verify that they can handle the extra load, such as acting as a BOOTP or name server or some other relatively low bandwidth, innocuous task. And be sure that this "minor" task isn't heavy on bandwidth, and that it doesn't grow in terms of bandwidth and CPU power down the road to become a major task, either.
I don't recommend using them as file servers, running RS-Asset Centre, web servers, databases, or anything else that is or has the potential to be either a heavy bandwidth or heavy CPU application. If you do this, you will introduce unpredictable delays and slowdowns into your system either now or at some point in the future. I also don't recommend running virus checkers or backup software on this machine for the same reason because both are notorious for causing problems. In fact I'd even consider running the whole thing off a fanless, diskless industrial PC and booting off a CompactFlash card with Windows Embedded so that I can prevent writes to the file system and treat it essentially as a "brick". It is after all more of a piece of equipment than a "PC". Once configured, all of your tags are dynamically created at run time on an OPC server anyways (changes occur as you upgrade the PLC software, not the OPC server). The diskless, fanless PC would push your MTBF times out to the point where the redundant OPC server might actually result in lower MTBF's. The one exception to the data aggregator rule would be for instance if your plant is very easy to isolate. For instance if you have two distinctly different production areas, say area A and area B, that do not in general share data, then this calls for two different data aggregators and two different tag databases. There can be a common historian shared between them, but it is best to set up two totally different OPC servers. If you do so however, remember to avoid duplicating data on both OPC servers at all costs. The best way to do this is to configure each PLC once on one of the two OPC servers. Do not configure PLC's to be monitored by multiple OPC servers, or you will easily fall right back into the trap of duplicating data. Redundancy with OPC servers is not easy. Kepware offers "RedundancyMaster" but this requires an extra OPC client (server) on each client PC. Matrikon calls theirs Redundancy Broker. 
I don't know which HMI you are using, but Wonderware allows you to configure multiple paths for redundancy as a built-in function. If you use the Kepware solution, you will be facing roughly $1300 per HMI client. At around 10-20 HMI's, running some sort of virtualization system to support redundancy becomes more cost effective than using the multiple OPC "client" solution that Kepware and Matrikon sell. OK, now let's get down to bandwidth optimization. First off, you do not need to scan a temperature sensor every 100 ms, generating a minimum of 10 packets/second. You need to go through all of your tags and look first at what the scan rates are set to. You need to establish a few different traffic classes and set everything accordingly. I suggest that parameter settings that rarely change get bumped down to maybe 3-5 seconds, thermocouples and other "slow" readings get bumped down to 1-2 seconds, totalizers and data tables that are informational and not used for control get bumped down to 3-5 seconds, and so forth. Another category would be operator push buttons, set points, and alarms. These rarely change more than once a minute, but when they do change, it is nice to have very fast responses. On-demand updates can reduce the otherwise high polling rates required for these types of values. Keep in mind that although humans can visibly notice flicker up to around 10 ms, and they can recognize images well enough to read values in most cases at around 30-50 ms, human reaction speeds (round trip: from recognizing an image to pushing a button, for instance) are limited to around 350 ms, and even baseball players have reaction times only in the neighborhood of about 250 ms. For something that needs to appear smooth, such as an animation of moving machinery, you are probably stuck with a polling rate of around 100 ms, and these are the fastest polling rates required. For anything that an operator is going to react to with an itchy trigger finger, 350 ms is plenty fast.
For anything else, you should be considering the speed of the process. And see below about handling polling rates for things like push buttons, which can often be the slowest updates of all depending on how the HMI handles writes vs. reads. You also need to look at data table organization. First off, go find all the dead and unused tags and delete them. This sounds easier than it is, but pretty much any system of the size you have described has as much as 30-50% of its tags unused. Purging them gives you an immediate benefit in terms of maintenance and bandwidth, and makes all the succeeding steps much easier. I suggest you do it in conjunction with the rest of the optimizations, however, since you are going to be spending some time looking at how each of your tags (or groups of tags) is actually used. A data packet is usually good for around 500-1000 bytes. So if you have, say, 2000 booleans stored in an array of DINT's or a single UDT on a ControlLogix PLC, or if you have all your bits in a single data table such as B3:0 to B3:100 on a PLC-5 or SLC, and the scan rates are the same, you will get ONE data packet to transfer all those bits at once. If you have them all defined as separate tags, or located in different data tables or widely spaced apart (N7:1, N7:1000) on a PLC-5 or SLC, you will get multiple data packets. The bandwidth (other than packet overhead, which is definitely not insignificant) is similar. But remember, it's packets per second that kill latency. So you need to organize your data for optimal data transfers. HMI-related tags should be clumped together to help the OPC server use as few packets as possible to do its work. If you clump things together like this, you'd be astounded at the kind of response time you can get even with thousands of tags. You also need to consider on-demand data transfers vs. polling. This is important both between your data aggregator and your HMI's, and between the data aggregator and the PLC's.
Polling means that you get periodic updates of the data REGARDLESS of whether any data has actually changed or not. On-demand means that the data source sends updated data ONLY when the data changes; this is also called change-of-state on ControlLogix PLC I/O. This update method can get rid of 95% of the traffic very easily. For instance, when you only have say 5-10 alarms per second out of a total of perhaps 2000 alarms, you get 5-10 updates per second instead of 2000 updates per second for the exact same alarm data. Similarly, pretty much any tag that an operator interacts with often gets updated only once every few minutes at most. With the possible exception of MSG write blocks (which generate THREE packets per MSG and require a lot of upkeep on your part), it is not possible to do on-demand data transfers with PLC-5's and SLC's. Polling is the order of the day there, but you can still use on-demand transfers between the data aggregator server and the HMI's.

On-demand can also give mediocre results if you have, say, a UDT with several analog tags. Imagine for instance a UDT with 100 thermocouple readings in it plus a bunch of other data related to the thermocouples (such as the min/max readings). Individually, each thermocouple may only update very slowly. Let's imagine that you have 1000 thermocouple tags plus mins and maxes, and that on average each thermocouple changes value only once every 10 seconds. If they are configured as individual tags with polling every second, you'd get 3000 packets per second (3000 tags including the readings plus mins and maxes). If they are configured as individual on-demand tags, you'd get roughly 1000/10 = 100 packets per second on average, 10% of the individual polled rate (the mins/maxes only update once on startup).
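The thermocouple arithmetic above can be written out explicitly; this just reproduces the numbers from the example, it is not measuring anything real:

```python
# 1000 thermocouple readings, each with a min and max tag (3000 tags
# total), each value changing on average once every 10 seconds.
N_TC = 1000
TAGS = 3 * N_TC            # reading + min + max per thermocouple
CHANGE_PERIOD_S = 10

# Individual tags polled every second: one packet per tag per poll.
polled_individual = TAGS / 1.0

# Individual on-demand tags: only changed values generate traffic;
# mins/maxes settle after startup, so only the readings count.
on_demand_individual = N_TC / CHANGE_PERIOD_S

print(polled_individual)     # 3000 packets/second polled
print(on_demand_individual)  # 100 packets/second on demand
```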
If they are all loaded into a single UDT updated on demand, then the update rate will be a multiple of 100 packets per second: although individually each thermocouple changes value only once every 10 seconds, as a lot, approximately 100 of them are changing per second (1000 tags / (10 seconds/tag) = 100/second), and the UDT contains 3000 tags in total, so it will require several packets to transfer that much data each time. If they are loaded into a single UDT and polled once per second, you will get the number of packets required to transfer all 3000 tags, once per second; this is likely to be around 3 to 6 packets per second at most. So in this instance, the combination of both a UDT and polling outperforms on-demand data transfers.

Don't be afraid of PLC-assisted data reduction techniques, either. For instance, suppose you have something similar to a batch system: 100 different widgets, with the PLC changing settings depending on which widget is being produced. The easiest way (short of setting up a batch system in the HMI) is to create a data table in the PLC for all 100 widgets. If there are 10 different parameter settings, then you end up with 1000 data tags in the table if you access it directly from the PLC. However, if you have a parameter screen with the 10 parameters, a widget number, and load/save buttons, that is just 13 tags. You can implement the load/save functions in the PLC and only allow the HMI to access the data table indirectly, with the PLC doing the appropriate data table copy commands based on the load/save buttons. This PLC-assisted solution reduces the number of tags in this case from 1000 to 13 (a 98.7% reduction). Using an HMI-based batch function gives similar reductions, but makes the PLC dependent on the HMI for operation (depending on your plant, this may or may not be desirable). Another very common use for this is overcoming problems with momentary push buttons on HMI's.
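The recipe idea can be sketched in a few lines. This is a toy model of the PLC's scan logic, not ladder code: the HMI only ever touches the 10 working parameters, a widget number, and load/save flags (13 tags), while the 100 x 10 recipe table stays inside the PLC. All names here are made up for illustration.

```python
# Toy model of PLC-assisted recipe load/save.
N_WIDGETS, N_PARAMS = 100, 10
recipe_table = [[0] * N_PARAMS for _ in range(N_WIDGETS)]  # lives in the PLC
working = [0] * N_PARAMS                                   # HMI-visible tags

def plc_scan(widget_no, load_req, save_req):
    # stands in for the COP instructions the PLC would execute
    global working
    if load_req:
        working = list(recipe_table[widget_no])
    if save_req:
        recipe_table[widget_no] = list(working)

hmi_tags = N_PARAMS + 3          # 10 params + widget number + load + save
direct_tags = N_WIDGETS * N_PARAMS
print(hmi_tags, direct_tags)     # 13 tags vs. 1000 tags
```

The HMI never sees `recipe_table` at all; the copy happens entirely inside the PLC scan, which is what buys the 98.7% tag reduction.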
An HMI is, generally speaking, event-based: it does something based on an event. A PLC, on the other hand, is scan-based. The trouble is that the two are not necessarily 100% in synchronization. One place this rears its ugly head is with HMI-based push buttons, which from time to time tend to "stick". There are roughly 3 ways to handle momentary push buttons on HMI's:

1. When the button is pushed, write a 1 to the PLC. When it is released, write a 0.
2. When the button is pushed, write a 1 to the PLC and start a timer. When the timer expires, write a 0.
3. When the button is pushed, write a 1 to the PLC. When the PLC recognizes a 1 on the input, it responds to the command and also writes a 0 back into the input (an OTU instruction without a corresponding OTL, because the HMI is doing the OTL).

Items 1 and 2 have timing and communication issues. If the "1" packet gets delayed for some reason (not necessarily a PLC-related delay), the "0" may catch up to it and overwrite the "1" before the PLC even sees the "1" show up. Conversely, if for some reason the "0" packet fails due to a communication error, the button may get "stuck". Either case is bad, and for these reasons I recommend always configuring HMI push buttons with version #3. Comm errors result in a "no harm, no foul" situation (the PLC just may not recognize the button). In addition, you can often configure the HMI scan rate extremely slow, since usually HMI writes are done out of band (not dependent on polling rates), or in the worst case around 350 ms to 2 seconds depending on the device, since the operator does not expect "instant" responses. At least in the case of OPC itself, reads are handled either by polling or "on demand" updates; writes are always handled "out of band" with a special write command sent on demand (never polled) at the moment it is required.

Now, there are some tools to help you out. OPC servers have diagnostics that show you how your data is being organized and polled.
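Method #3 is easy to see in a toy simulation. This is Python standing in for ladder logic: `hmi_press()` plays the HMI's OTL write, and `plc_scan()` plays one PLC scan with the OTU that clears the bit.

```python
# Toy simulation of push-button method #3: the HMI latches the bit,
# the PLC acts on it and unlatches it. A lost "1" write simply leaves
# the bit at 0 -- "no harm, no foul", and the button can never stick.
button = 0
actions = []

def hmi_press():
    global button
    button = 1                 # HMI does the OTL (writes a 1)

def plc_scan():
    global button
    if button:
        actions.append("jog")  # respond to the command once
        button = 0             # OTU: PLC clears its own input

hmi_press()
plc_scan()
plc_scan()                     # next scan: bit already cleared, no repeat
print(actions, button)         # ['jog'] 0
```

Note there is no path in which a delayed or lost packet leaves the command latched: either the "1" arrives and the PLC consumes it exactly once, or nothing happens at all.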
Each PLC also publishes similar information, though you might have to look at packet totals and use a stopwatch and a calculator to estimate it by hand. Each PC is also equipped with the Task Manager (keep your OPC server below 25% of total CPU usage as a general rule of thumb), as well as network and CPU usage graphs. You can also use Wireshark (www.wireshark.org), which is recommended by Allen Bradley for network analysis, to troubleshoot Ethernet networks. There are two ways to use it. First, you can load it onto an existing PC and sniff all traffic in/out of that PC. For hardware devices like PLC's, you have two choices: you can place a hub between the PLC and the switch (this is the only time I can recommend using a hub) and plug a laptop into another hub port to sniff packets, or you can use a managed switch and configure port mirroring to create a diagnostic port to sniff packets into Wireshark. Try to target under 2000-3000 packets per second on your server for data traffic, and shoot for under 500 packets per second for PLC's.

This probably sounds like an awful lot of work, and it is. It works out much better if you establish these things ahead of time and develop your network specifically for scalability, so that you don't end up in this situation in the first place. Of course it's easy to say that, but when you are working with perhaps 1000-2000 tags, worrying about data optimization is the furthest thing from your mind. It's only once you get to 5000-10,000 tags that data optimization suddenly becomes a concern, and it usually happens when you go from isolated networks to a merged network with a historian (data logger) that everything comes to a screeching halt and all the sins of the past must be dealt with.

Edited by paulengr
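The "stopwatch and calculator" estimate from the post is just two readings of a transmitted-packet counter divided by the interval. The counter values here are made-up examples:

```python
# Estimate a PLC's packet rate from two readings of its packet counter,
# taken 60 seconds apart (values are hypothetical).
def packet_rate(count_start, count_end, seconds):
    return (count_end - count_start) / seconds

rate = packet_rate(1_204_551, 1_231_551, 60)
print(f"{rate:.0f} packets/second")  # 450 -- under the 500/second target
```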
Thanks, paulengr. I read this and printed it; now I must read it again and again to reply properly. You provided a very detailed answer and I thank you again for it. Loved the way you wrote about Trolls :) I will give a detailed description of what has been going on and what the next steps are. Pierre D.