paulengr

MrPLC Member
  • Content count: 1416
  • Joined
  • Last visited

Community Reputation

44 Excellent

About paulengr

  • Rank
    Propeller Head

Profile Information

  • Gender Male
  • Location North Carolina
  • Country United States

  1. Anyone have any experience doing this? Details of the specific scenario: I work for a large mine in the U.S. (roughly 7 miles long by about the same width). Our power comes in at 230 kV, is stepped down to 23 kV, and is distributed over open ACSR from there. At the point of use, we usually step down to 4160 or 2300 V, rarely going directly to 480 V because of the large frame motors in common use there. The area is flat as a pancake except for the "canyons" that we build ourselves. We do about 60% of our power generation ourselves (cogen). All of the equipment, including substations, is mobile and constantly gets rearranged as the mine advances.

Trying to run a wired network in the mine is traditionally futile. About 20 years ago they went down the wireless route. Needless to say, this by itself is slowly becoming more and more difficult to maintain as the mine grows ever larger. Granted, there's technically no difference between running power lines and communication lines, but for whatever reason the mine production folks will destroy control power without hesitation compared to power distribution lines. It probably has something to do with the fact that one causes downtime and the other can also kill you.

I've used powerline communication before. Years ago I handled a college radio station (FM carrier current box). My house still has powerline modems in it to eliminate the wiring and the hassle of trying to find a decent wireless bridge back to the wireless router from fixed equipment (TV, game machines). And I've used the pilot carrier utility stuff on occasion.

I'm aware of the big "ARRL vs. everyone else" fight. The ARRL hasn't made a technically solid case (and I've run into far too many ham operators myself who were bleeding all over an adjacent site and claiming they were "within FCC rules" despite running such high power that they blasted their way through a 10.7 MHz crystal IF filter with harmonics from their transmitters). Fortunately, the site is so geographically isolated that from a practical point of view we don't have to deal with this. I'm also aware that the way the FCC rules are currently written, you more or less have to be a "utility," though I'm not sure if the site qualifies. Regardless, because of geography the site won't have a chance of causing interference in the first place. I've done some searching, and it's hard to find manufacturers/distributors because it seems like everyone is scurrying for cover from the ARRL and similarly litigious groups. Does anyone have experience with this stuff? Any pointers on where to look to determine whether it's a viable option?
  2. 9030 communication problems

    When you work in a mine and have around 25 electricians, these things get used (and occasionally abused) a lot, so you go through one once in a while. If I could find a $300-apiece laptop that had a real serial port, real USB, almost no hard drive (no need), and perhaps two Ethernet ports (though one is OK), and that supported VMware tightly enough that I could just dump a Windows image onto it as needed (probably just boot from a high speed USB), I'd be happy. We need a commodity troubleshooting tool in this business. I'd want VMware so that I could "fix" anything that gets messed up and so that I could continue to support Windows XP instead of Vista or 7.
  3. Check the Knowledgebase on Rockwell's web site. Also Google Lynn Linse.
  4. 1794 Flex I/O Oddity

    One confirmed oddity to watch out for: all analog modules are OK with an ASB. All digital modules are OK, too. If you mix analog and digital, watch out. The first slot next to the ASB can be digital or it can be empty; it CANNOT be analog. There is some sort of timing issue that causes the ASB module to go goofy and the config lights to flash.
  5. Not familiar with that particular printer; most of these things are very much like an Epson/Star Micronics printer. Here are some suggestions (note that I believe your problem is item #1):

     1. Make sure your cable is wired properly. A printer is normally wired as DCE; a PC (or PLC) is almost always wired as DTE. So if you have it wired so that the PLC and PC will communicate normally (two DTE devices) and then try to use the printer, it won't work. A more valid test would be to connect the PC directly to the printer. The difference is in the way the transmit/receive wires are connected; if you get this wrong, the printer will be unresponsive. You can try swapping pins 2 and 3 yourself, or simply buy an "RS-232 Mini Tester" from B&B Electronics (www.bb-elec.com). Cost is $59; value is priceless for what it does. Connect it to each device in turn, verify what the transmit and receive lights indicate, and wire appropriately. Any light (green or red) means the corresponding pin is a transmitter; no light means it's a receiver. All RS-232C signals (except grounds) come in pairs, and googling for RS-232C will quickly locate a diagram. Wire transmitters to receivers: receivers don't talk to other receivers, and transmitters don't talk to other transmitters. This simple logic tells you how to connect ALL signal pins correctly.
     2. Make sure all the DIP switches are set right. If the baud/parity/start/stop bit settings are wrong, the printer will simply ignore the bad data in most cases.
     3. Similarly, check the "flow control" switch. How you set this and how you wire your cabling is important; if you use hardware flow control, the problem will be identical to item #1. In practice I find this is also the reason that most off-the-shelf "null modem adapters," "gender changers," etc. are nothing but trouble. I've found it more reliable to build your own cable if you are doing anything other than a straight-through cable. RS-232C has been a standard since 1969, but for some reason the further we get from that date, the poorer the knowledge base seems to get, and even the cable builders don't seem to understand anything anymore.
     4. Similarly, there are a number of other DIP switches that may prevent you from getting any output at all, based on things like paper misalignment. For instance, DSW3-5, if set, will prevent the printer from doing anything until an external signal triggers it. You may want to disable/turn off as much of this stuff as possible until you get the basic control working.
     5. I'm unfamiliar with Sato (it looks like Star Micronics or Epson). With Star, Epson, and Zebra, depending on what mode you are in, you have to send a CR/LF at the end of a line to get the printer to react, and some printers require an "end of page" or similar signal before they do anything. Read the entire manual (all manuals), and try the examples first before going on to set everything up. A quick test from a PC, as sketched below, takes the guesswork out of items 2, 3, and 5.
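To illustrate that quick test, here is a minimal sketch using Python and the pyserial library. The port name, line settings, and test string are assumptions; match them to whatever the printer's DIP switches are actually set to.

```python
# Minimal serial printer sanity test -- a sketch, assuming pyserial
# (pip install pyserial). Port name and settings are placeholders;
# they must match the printer's DIP switch configuration.
import serial

port = serial.Serial(
    "COM1",                       # hypothetical port; /dev/ttyUSB0 etc. on Linux
    baudrate=9600,                # match the baud DIP switches
    bytesize=serial.EIGHTBITS,    # match data bits
    parity=serial.PARITY_NONE,    # match parity
    stopbits=serial.STOPBITS_ONE,
    rtscts=False,                 # disable hardware flow control while testing (item 3)
    timeout=2,
)

# Many printers buffer a line until they see CR/LF (item 5), so send it explicitly.
port.write(b"SERIAL TEST LINE\r\n")
port.close()
```

If nothing prints with known-good settings and flow control off, suspect the cable wiring (item 1) before anything else.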
  6. First, look at the BTR data: word 5, bits 0-3, indicate whether the scaling/input data is valid. Second, don't necessarily believe what RSLogix 5 is telling you. Look at words 1-4 for the actual RAW data being output to the DAC (digital-to-analog converter) on the card; I've found numerous cases where the wizard (the display in RSLogix 5) is dead wrong. Third, be careful about just what numbers you put into the max/min values. You can't put just ANYTHING in there; they have to be right, and if I read the card manual correctly, the card is limited to +/-9999, depending on whether you are in BCD or binary mode. There's no point in using anything but 2's complement binary on a PLC-5 (the BCD stuff was for PLC-2/3 compatibility). Fourth, read over the calibration procedure in the user manual (literature.rockwellautomation.com, search for 1771-OFE). If the software side looks correct for the input data you've entered but the ranges are all off, or you can't get the outputs to change even with the defaults (0 and 4095, or +/-4095, depending on whether you are using 4-20 mA or +/-10 V), make sure someone hasn't recalibrated the card incorrectly.
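For anyone scripting a check of that BTR image offline, here is a hedged sketch in Python. The word layout (raw DAC data in words 1-4, validity flags in word 5, bits 0-3) follows the description above; the helper name and the bit sense are my assumptions, so verify against the 1771-OFE manual before trusting it.

```python
# Hedged sketch: decode the 1771-OFE BTR words described above. The layout
# (raw DAC data in words 1-4, validity flags in word 5, bits 0-3) follows
# the post; the bit sense (0 = valid) is an assumption -- check the manual.
def decode_ofe_btr(btr_words):
    """btr_words: list of 16-bit ints copied from the BTR data file."""
    raw_dac = btr_words[1:5]       # raw values sent to the DAC, channels 1-4
    flags = btr_words[5]
    valid = [(flags >> ch) & 1 == 0 for ch in range(4)]
    return raw_dac, valid

# Made-up sample image: four channel words plus a clean status word.
print(decode_ofe_btr([0, 2048, 1024, 0, 4095, 0b0000]))
```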
  7. 9030 communication problems

Sounds all well and good. Good luck finding one to meet this specification. It's getting harder all the time because, unfortunately, Bill Gates published some stupid standard back in the days of "Plug-and-Play" (another hardware debacle that lives on) saying that, according to Microsoft, RS-232C is officially obsolete and USB is the new standard. This happened back in the 1990s. PC manufacturers listened: although at first USB was somewhat more expensive to support (the USB chips were initially much more expensive than a UART), the power supply requirement is almost nonexistent (same supply you have for a disk/CD/DVD/floppy drive), so it made the overall port cheaper to manufacture, and they began shipping without RS-232C about 10 years ago, since the only people still asking for it were crusty old industrial controls people (a niche market to say the least). Today you cannot buy a PC with real serial ports from HP, Dell, or any other major manufacturer. Like it or not, we now live in a world where we are required to deal with the vagaries of the USB/serial converter. Even the Panasonic Toughbooks don't come with one any more. Now, my knowledge of this stuff comes from living THROUGH the plug-and-play transition (and UARTs, and the ISA bus, and so forth...I've been working with this stuff since the late 1970s). My engineering degree was specifically in analog electronics, so I'm very aware of the board-level designs, but I went "native" somewhere along the way and started working on the industrial controls side rather than sticking with hardware manufacturing. We've got over 2 dozen 90/30s where I work, and this specification is very hard to meet. Periodically I have to buy a couple of different serial adapters, test them, and then buy a bunch more. About once every 6-12 months, whatever USB/serial adapter worked before is obsoleted. Again...I'm nursing these things along until they can all be replaced. I've already managed to get rid of 5 of them in about a year and will be getting rid of 2 or 3 more this year. GE Fanuc has obsoleted one of the 16-bit output cards and all but one PLC model, which helps tremendously in convincing management that these things are eventually going to bite us. Also note that not all GE Fanuc power supplies are compatible with all GE Fanuc PLCs. You can also get some background information on these things by looking at information on Toyo PLCs; 90/30s, at least originally, were rebranded Toyo PLCs. Any product with no differentiating features, like a serial/USB adapter, also means that in a free market society the profit margin drops to effectively zero (2-3%)...so nobody wants to sell these things because there's no money in it. Standards can be a good thing, and obviously commodity products command very low prices, which is good for the buyer but a horrible situation for the supplier, especially one that is supposed to be delivering 10%+ annually to the stockholders. Mass production only gets you so far. Hence there is a huge market incentive to make USB/serial adapters cheap by cutting corners. Unfortunately, there are no labelling standards either, so there's no way to tell from the packaging which ones properly implement the standards and which ones are junk. There are three problems with these things: cutting corners on the EIA/TIA specification, cutting corners on the USB specification, and cutting corners on the USB specification by the laptop manufacturers themselves.
The 1969 EIA RS-232C spec says that a voltage of 3 to 15 volts (either positive or negative) is a valid signal. Anything greater than 15 volts is invalid (although the spec allows open-loop voltages as high as 25 volts to overcome long cable distances), and anything below +/-3 volts is invalid. The spec also says 20 kbps maximum, but obviously nobody follows that either. In practice, "true" (as in following the specification to the letter) systems were originally built to use +/-12 volts. However, in the days of the ISA bus architecture (the original "PC" bus), the power supplies were fairly large. They had a standard 4-pin connector that worked on almost any kind of disk drive: ground, +5 volts, -5 volts, and +12 volts (unregulated). That same power was available on the motherboard as well (I don't recall what was available on the ISA bus itself). So the first step toward cutting corners was that PC manufacturers started implementing RS-232C COMPATIBLE ports. It wasn't a real RS-232C-specification port; it only used +/-5 volts instead of +/-12 volts. So long cables were out. This also became something of a necessity because cable capacitance limited speeds, especially as UART chips began supporting speeds well above the 20 kbps specification. Still, at least +/-5 volts was within the specification range, although it became a problem with higher speeds and/or long cables (cable capacitance puts pressure on the power supply and/or transmitter chip to drain the capacitance faster when switching). That was then. USB provides +5 volts on the bus, so there are two choices. Either put in a DC-DC converter and hope that the upstream USB power supply provides enough power to produce the required +/-5 volts with enough margin left to drive a (hopefully short) cable and an RS-232C-compatible receiver, or hope that the receiving device is very, very tolerant of bad RS-232C implementations and will accept something close to +/-1 volt, which most will no matter what the specification says. Now, the DC-DC converter costs money and makes USB/serial adapters a two-chip solution. The very cheap USB adapter simply uses a resistive bridge to output +/-2.5 volts by splitting the 5 volt supply and doesn't need a DC-DC converter at all. So out goes the +/-3 volt specification, except in the very rare ones whose makers find that customer complaints and returns are unacceptably high. Such an adapter will act erratically if the cable lengths aren't right, or the cable is cheap (too thin), or there are too many devices on the USB bus, or something like that. But most users will not notice (hopefully). And since at this point nobody is really following 1969 EIA RS-232C any more, you can always assign blame to someone else, or set up a customer service number in India where the customers will just give up when they figure out that there is no real support. In addition, since there is no true "UART" any more, and no stiff power supply, the actual signals get very mushy and don't have sharp corners. It becomes progressively harder for the receiver to detect an edge in the signal transitions. So even if the voltages are correct, it might still not work well, because the manufacturer decided that big, fat electrolytic (or, even better, metal film) capacitors don't fit well inside a little dongle case, and plastic (and capacitors) cost money. Shaving a few cents off by undersizing the capacitor means the signal shape suffers too.
Of course, GE (Toyo) 90/30 PLCs are very finicky about this stuff and do NOT LIKE CHEAP USB/RS-232 ADAPTERS. So that's your first hint why things don't work very well...cutting corners on the electrical side of the adapter. With a breakout box and an oscilloscope, all of these ridiculous corner-cutting maneuvers become painfully obvious. But even if you never look, the basic architectural decisions above should explain why so many controls guys keep suggesting that you just go find yourself a 1980s-vintage "luggable" Compaq somewhere that still runs DOS and not bother trying to upgrade for at least another 50 years or so. By then, one of three things will have happened: they will have retired and won't have to concern themselves with it anymore, Microsoft will have obsoleted USB, or PLC manufacturers will have given in and put USB chips in their equipment. The second problem is that the USB standard has some standard devices defined, among them RS-232 (serial port adapters). For some unknown reason, a lot of vendors choose simply not to follow the standard to the letter on the software side. That seems to work OK for a relatively modern Windows program that relies 100% on the software libraries provided by Microsoft. Old DOS software (LM90), on the other hand, never had hand-holding from Windows libraries; all of those programs are designed to talk to a UART directly. The original UART was the National Semiconductor 8250, used in the original IBM PCs. One of the problems with the original chip was that if the software didn't service the port quickly enough, a character was lost when the next one came in. So the UART chip had the capability of sending a signal to make the software stop what it was doing and read the port: the interrupt. However, there were never any really formal standards on which interrupts did what, only a series of informal ones. So there were all kinds of interrupt conflict issues and strange DIP switch settings on cards that were forever confusing PC technicians and consumers alike. This is the reason the PNP (plug-and-play) standard was brought into existence in the first place: instead of cards just picking an interrupt, the PNP interface allowed them to negotiate for one in software. The second major innovation in UARTs was the National Semiconductor 16550, which had a 16 character buffer; many variants were produced after it. There were essentially just 2 interrupts available in the default PC architecture, supporting either dual UART chips or two separate cards. As the number of serial ports for special applications grew, more chips were mapped onto the exact same interrupt. Each piece of software in turn would check for the card it was looking for; if there was no new data available, it would pass control to the next piece of software in a daisy chain until everything was checked. Needless to say, that was taking an awful timing risk, especially as serial port speeds increased, that a byte would get missed here or there, so the 16550's buffer essentially fixed that problem permanently. The important detail here is that regardless of who made it, the memory architecture and the interrupts were almost a standard, and anyone deviating from it had all kinds of compatibility issues. When USB came along, implementing the NS 16550 architectural interface was considered critical, so the serial devices of the mid/late 1990s emulated the 10-15 year old NS 16550 interface and everything still worked as expected.
DOS was alive and well, and when Windows 95 came along later (with PNP), it still supported the same old standards. When USB came along, the drivers that Microsoft originally produced simply emulated the 16550 UART interface in software. DOS still worked as expected under DOS emulation, and Windows worked as expected without any need for the DOS compatibility layer, where the interrupts and the memory mapping were no longer needed. Today, of course, "nobody" uses DOS anymore, so there is far less need for maintaining absolute, strict compatibility with the 8250/16550 hardware interface. So corners get cut here and there. Perhaps some of the status bits don't get implemented, or the interrupt function works on only one of the two available interrupts. Cut a corner here, cut a corner there. By now nobody gets giant 1000-page chip specification catalogs on paper anymore; everything is PDF, and it is doubtful any of it is translated cleanly into Hindi or Mandarin. Companies farm out their software development and buy "IP" UART, USB, or other implementations. Nobody selling the actual hardware has ever read the specifications they are implementing. "Testing" consists of plugging it into a PC and a modem on the bench...that's the test bench. Nobody in the testing world tests it with an oscilloscope (too much money) or a GE Fanuc 90/30 PLC, because they didn't even develop the firmware...they bought it. The software and hardware real estate gets just a bit smaller. Snip a corner here, cut another one there. EIA doesn't do round robins or testing, so there's no danger of anyone actually checking compatibility, especially when it is being marketed to Walmart. Unfortunately, USB/serial port adapters do not come with labels (hopefully standards-based) that tell you what percentage of the UART and USB serial-adapter standards they follow. They also don't tell you what the actual output voltages are. If they did, or if there were PC magazines that cared enough to review these things, we wouldn't be in this situation. Then again, PC magazine reviewers are more interested in the bells and whistles that don't follow standards than in how "compatible" things are, so they are definitely not interested in reviewing USB/RS-232C adapters. So you have no idea, when you buy one, how closely it implements either standard, except by testing. Finally, if you haven't noticed lately, laptop power supplies are getting smaller and lower power all the time. Knowing that the 5 volt spec is also somewhat weak (long cables = lower voltage), device manufacturers have also cheaped out on the power supplies, so even if everything is right with the USB adapter, you might still have a cheap port in the laptop that doesn't work. I know the usual recommendation is to eliminate all the hubs, and that sounds good, but often the hub is internal to the laptop! So sometimes adding a powered hub solves the problem, and sometimes not. Some of the ports on a particular PC/laptop may be different from the others, and simply moving the device to a different port may fix the problem. I know that at this point it sounds like the original suggestion (finding a 1990s or earlier vintage PC or laptop that still works) is just about the only viable option. But I can tell you that I've had good luck with three sources: 1. Keyspan in general seems to produce decent USB/serial adapters. That's a general rule, not always accurate.
I've had a few stinkers lately, though, and I'm no longer sure how good that recommendation is. 2. B&B Electronics markets communications gear to the electrical/controls world, and I've had good luck buying USB/serial adapters from them. They get lots of feedback from other industrial controls customers and have a pretty liberal return policy, so they've effectively done the "test a few devices" step for you before they try to sell something...returns cost them money and credibility. 3. Call Siemens about their drives. Why Siemens? Because they are a very large company, they still respect their customers enough to have real customer support, and their drives all require serial port connections (at least the 6RA70s that I buy). They have a list of recommended serial port adapters.
  8. This case is talking about an output card. You need to write to it basically continuously (or alternate BTW/BTR), so the above information doesn't matter. Since there's nothing being passed back (other than status), you could theoretically delete the BTRs. That said, there are certain cards that do NOT take kindly to being continuously reprogrammed and either act erratically or eventually crash on you with no explanation whatsoever. We went through PILES of cards on one machine before figuring this out. The best way to set up your BTRs/BTWs to avoid these kinds of problems is to use the wizards to insert the instructions, at least initially, even if you then move them to where you want them: the wizards know when to initialize on first scan and when not to. In practice, unless it was one of the "bad" cards, for read cards I tended to slow the update rate down to once every 30-60 seconds. The maintenance department had figured out that you CAN do a RIUP (remove and insert under power) on a 1771 chassis, and it worked often enough without ill effects (a fried card, chassis, CPU, or power supply) that it became standard practice before I started working there. They had unfortunately already conditioned production to the idea that this was a good idea, so all I could do was wait for something unfortunate to happen (damage that took down the PLC anyway). It never did, so the program needed to be written to support RIUP whether it was OK or not (it's not) on a PLC-5.
  9. I hope this is not correct, as it destroys the concept of code reuse, especially if you have a lot of little AOI utility functions...which should become more common, because that's what you typically see with Siemens programmers. I would strongly encourage you to direct your inquiry to AB's tech support so that, if there really is a bug here, they will hopefully find it and fix it.
  10. Honeywell PLC and DCS

    Honeywell is VERY hostile toward system integrators; prepare for virtually no third-party support. Honeywell only wants you using their in-house people. They also lock you out of a lot of things even though you bought them. Virtually all of their PLCs use some form of function block, but not IEC function block. Their DCSs are ridiculously overpriced (try $700K US for a basic system before adding I/O). They mix HMI, PLC, and historian code: difficult to separate, and difficult to keep any semblance of separation of concerns. It's all based on either third-party or consumer-grade hardware. They advertise heavy use of redundancy partly because they use crappy hardware and try to compensate by allowing it to fail. Licensing is very obnoxious: they charge per I/O point rather than per PLC or something like that, and they charge it annually! There is NO comparison to Siemens. And I'm not even a Siemens fan.
  11. First off, again: you can use Ignition from Inductive Automation. It's free, and no protocol writing is needed. There is also C++ and Java code floating around on the internet that implements both DF1 and PCCC, which lets you use the native protocol built into your PLC without writing any ASCII code on the PLC at all, and no protocol code on either side. This puts you way ahead of the game in terms of getting things done; you need only develop your screens and work out how to get them to show up on the iPad. Working with an iPad is somewhat difficult in that it's a closed architecture...you will need some kind of app to talk to the PC side, because the iPad doesn't support Flash or Java. I suppose you could remote-connect via VNC or some such and do screen scraping. Low-end technology, but at least it works. I strongly suggest going that way. Writing protocols and protocol handlers is a nontrivial affair. I've been doing them off and on, when absolutely necessary, over the last 20 years, and I consider myself pretty good at it. I wrote some of the code in some of the KA9Q TCP libraries in the 1980s, as well as the first available Usenet/Citadel BBS gateway, back when the "standard" PC was 8086 and DOS based. In spite of having lots of experience doing it, I'm recommending not doing it. It looks easy to write protocol handlers, but it's not. You've basically got to approach the whole problem as if anything that can go wrong in the communication stream probably will, so 75%+ of your code is simply cross-checks and validation. The code turns into pure spaghetti logic if you don't approach it in layers with data encapsulation, each layer doing one piece of the total package, and you will run into all sorts of kludges if you don't follow a strict layer methodology, which is tough because how you build your layers is touchy. For instance, say you have an "escape sequence" to protect "special" start/stop characters, which is common, and a start/stop character to delineate a packet. If you make the inner loop handle the escape-translation logic, it can "eat" a start/stop signal if the escape gets triggered inadvertently. So you protect that with a timeout. But the timeout can conflict with the next outer loop, which looks for the start/stop signals. So ultimately the easiest way is to push escape translation down to the command-interpretation level, but then you have escape translation redundantly in almost every command that takes a parameter. See...it gets very messy, very quickly (there's a minimal sketch of the framing layer at the end of this post). Many protocols get screwed up in the design process, even with experts. Hence the reason we are on IPv4 (or v6...notice v5 never came into existence) and IGMP version 1, 2, or 3 (all incompatible); those are major internet protocols, and even the experts took several tries to get them right. The BSD TCP/IP stack, which is actually quite simple in itself and is the "library" behind virtually all major operating systems (Mac, Linux, and Windows all use it, although Microsoft took pains to make Winsock look like anything but BSD 4.2 or 4.3, or probably 4.4 by now), is currently at version 4.4 last I heard, so obviously it took several tries to get even something that important right. If you want to do it yourself, I'd recommend getting a copy of the code I mentioned above.
It takes a little searching, because it used to be free and now they are charging for it, but the C++ code is called "ABEL" and was written by Ron Gage, who has apparently moved on to being a wedding photographer. Even if you are not well versed in C++, it is written cleanly enough to be readable. Ron sold the code to someone else, but the original (free) versions are still floating around on the internet. This assumes that you write code on the PC side that uses the PLC's native CSP/PCCC Ethernet protocol. The DF1 protocol is nearly identical in many ways. AB gives away free documentation on the protocol in their tech library. The DF1 side is accurately documented and works well; the Ethernet version is basically undocumented, but between Ron Gage's code and Lynn Linse's web site you can piece together the differences. You may also want to look at Lynn Linse's IA protocol tips, as he gives away a lot of details too: www.iatip.blogspot. Lynn is pretty good at writing protocol translators; at least 2 of the Digi One gateway devices are his handiwork. The reason I suggest going this route is that all the programming on the PLC side of things is already "done" by Allen Bradley; you only have to implement one side (the PC side). DF1 and CSP/PCCC are about as simple as it gets. There are some features you won't use, but not enough to matter. An alternative is Modbus. The Modbus web site (www.modbus.org) has all the documentation for free (truly free, not just sort-of free like ODVA or OPC). The protocol is unbelievably simple, and again, someone else has worked out most of the details for you already. There is Modbus source code all over the place, and it's so simple that you can write your own in a few hours if you are experienced with this sort of thing. If you go this route and you have access to RSLogix 5000, Allen Bradley has free Modbus code for Logix-based processors on their web site in the sample code archives. Modbus is popular partly because it is just about the simplest protocol out there, even the Ethernet version; in contrast, Ethernet/IP looks like it came from a foreign planet, and only about 10% of its features ever get implemented. You can write your own protocols, too. I'd recommend taking hints from the above. You might be able to simplify in some cases, but not enough to matter. I've done this with industrial ink jet printers from Matthews because that was the only way to make them work. Still, it was ugly, nontrivial, and tended to screw up every few months. If I did it all over again (it was not my original design; I only fixed what was already there), I would have done it differently. In response to BASIC vs. ladder logic: both stink. The code is just as ugly either way. The "cleanest" form for writing protocol/communication code at a low level is a reactor-based style with layers, and each layer is easiest to write in a class/object-based system where you can achieve complete data encapsulation in each layer. That's how things like Wireshark and most protocol libraries are written these days. BASIC and ladder logic are quite foreign to those programming styles, so you are fighting an uphill battle. With some creative use of subroutines you can probably "fake it," but that's as close as you'll ever get. In my mind there is almost no difference between writing the code in either one; there is no inherent advantage either way. The major advantage of the BASIC modules (other than lining AB's pockets) was always familiarity.
PLC-5 and the newer Logix processors support Structured Text, which is essentially based on Pascal: familiar to PC programmers but totally alien to controls programmers. Supporting any of these languages was always done simply to provide a familiar environment for PC programmers. The one advantage structured text has over ladder logic is that certain styles of programs requiring extensive procedural approaches (running mathematical calculations with lots of loops, Fourier transforms for instance) were always cleaner, faster, and smaller in structured text.
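As promised above, here is a minimal sketch of that framing/escape layer in Python: DLE byte stuffing of the kind DF1-style protocols use to protect their start/stop bytes. It is illustrative only; a real DF1 frame also carries command/status bytes and a checksum, and the function names here are my own.

```python
# Simplified sketch of DLE byte stuffing -- the framing trick DF1-style
# protocols use to protect start/stop bytes. Illustrative only: a real
# DF1 frame also carries command/status bytes and a checksum.
DLE, STX, ETX = 0x10, 0x02, 0x03

def frame(payload: bytes) -> bytes:
    """Wrap payload in DLE STX ... DLE ETX, doubling any DLE in the data."""
    body = payload.replace(bytes([DLE]), bytes([DLE, DLE]))
    return bytes([DLE, STX]) + body + bytes([DLE, ETX])

def deframe(stream: bytes) -> bytes:
    """Inverse of frame(); assumes exactly one well-formed frame."""
    assert stream[:2] == bytes([DLE, STX]) and stream[-2:] == bytes([DLE, ETX])
    return stream[2:-2].replace(bytes([DLE, DLE]), bytes([DLE]))

msg = bytes([0x01, DLE, 0x7F])      # payload containing a raw DLE
assert deframe(frame(msg)) == msg   # round-trips cleanly
```

The point is the layering: the frame layer is the only place that knows about DLEs, so the command interpreter above it never has to.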
  12. ASi Bus

    No experience with AS-i; some with Profibus, Genius, and DH+. These days I strongly recommend not bothering: for all the hassle and problems, it's just as easy, and more reliable, to cross the slip ring with a wireless system. Modern protocols are getting more and more touchy about slip rings, so wireless makes that much more sense.
  13. Interface USB x Device Net

    No need to uninstall. Just deleting both Harmony files does the trick.
  14. Booster Pack System

    Seen one of those a while ago. If you can get it to talk Modbus, you can communicate with it directly from a MicroLogix-series PLC. And if you then want to use some other PLC (Logix 5000-based, say), a MicroLogix 1100 is cheaper than any other protocol gateway I've found, such as Prosoft. (A quick sketch of how little there is to a Modbus read follows below.)
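To back up the claim in post 11 that Modbus is about as simple as protocols get, here is a hedged sketch that builds a Modbus TCP read-holding-registers request by hand in Python. The IP address, unit ID, and register offsets are placeholders; take the real register map from the booster pack manual. (The serial RTU flavor a MicroLogix would speak differs only in framing and CRC.)

```python
# Hedged sketch: a raw Modbus TCP read-holding-registers request built by
# hand, to show how little there is to the protocol. IP, unit ID, and
# register addresses are placeholders -- use the device's real register map.
import socket
import struct

def read_holding_registers(ip, unit, start, count):
    # MBAP header: transaction id, protocol id (always 0), byte length, unit id,
    # then the PDU: function 3 (read holding registers), start address, count.
    request = struct.pack(">HHHBBHH", 1, 0, 6, unit, 3, start, count)
    with socket.create_connection((ip, 502), timeout=3) as sock:
        sock.sendall(request)
        reply = sock.recv(256)
    byte_count = reply[8]   # function echo sits at [7], data length at [8]
    return struct.unpack(">" + "H" * (byte_count // 2), reply[9:9 + byte_count])

print(read_holding_registers("192.168.1.50", 1, 0, 4))  # hypothetical device
```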
  15. For truly large displays, I've found that the various "kiosk" type displays such as InView or EZAutomation are OK but a pain in the rear to program overall. PanelViews by themselves, even the truly large ones, are vastly overpriced. So that leaves three options. First, you can use a PC with an HMI and buy a flat panel TV (cheaper than monitors in large sizes) to get, say, a 40" or 56" display; most of them these days will take a VGA input. As to the HMI software, a PanelView Plus is basically a Windows Mobile or Windows CE PC preloaded with Station ME, which is the software you "see" (unless you break out or bypass it somehow). There's nothing, however, stopping you from buying your own PC and buying Station ME; then you can break out of AB's size limitations (and some of the pricing). The second option is to use a completely different HMI. Ignition by Inductive Automation offers free "standalone" HMI licenses which do the trick, and of course with this option you can use any HMI you are familiar with. In a large operator control room I recently put together, we have the "wall of monitors": 18 screens, 12 of which are 24" wide-screen monitors and the other 6 are 42" flat-screen TVs, with 18 thin clients driving it. It's truly a "glass cockpit," with a LOT of glass. Third, you can look into the Red Lion G30x displays. By themselves they're no better size-wise than the AB PanelView Pluses (actually they don't even have the same sizes available), BUT an option you can buy uses one of them to drive a large LED display. So you get the advantages of an industrially hardened display and a "huge display" at the same time. In practice, I'd never use a marquee system again: the prices are not really competitive anymore, they are tedious to program, and they are very limited in what they can display. Very large monitors (unless you get up to baseball-stadium-size displays) simply aren't that expensive anymore.