rdrast

MrPLC Member
  • Content count

    83

Everything posted by rdrast

  1. Coding Efficiency?

    Mmmm... Atari Computers... I LOVED those things. I had a 400, then got an 800, then started buying them up at flea markets for dedicated controller projects. Step one was always to replace the 6502 with NEC's 65C02 (enhanced instruction set... it actually had a store-zero instruction (STZ), saving a couple of clocks, woot!). Step two was usually to cut a slot out of the back of the solid aluminum housing to gain access to the edge connector with the full CPU bus on it. They were actually pretty advanced; the ANTIC (Alpha-numeric television interface chip) was an amazing little processor in its own right. To this day, it is about the only 'video driver' that could allow mid-scan-line resolution/'bit depth' mode changes, and you could actually control the display scan line by scan line without bothering the main CPU. /sigh those were the days.
  2. Coding Efficiency?

    It may seem like a great idea to optimize PLC code for the best scan time. It's not. If you really have a scan time issue, going through the code and (cough) streamlining it to squeeze out every possible microsecond will only make it completely unmaintainable. Saving 7.7 ms by eliminating JSR's won't really do much except make the code unreadable to anyone except the programmer; and even the programmer won't understand it well in six months. (I guess it is good for job security, though.)

    Keep the subroutines, but if need be, scan them conditionally. Run a 'phase counter', say 0 through 9, and scan the less critical subroutines every tenth scan (a rough sketch is at the end of this post). And never all in the same scan: do one when the counter is 0, the next when it's 1, etc. This is particularly good for values that are scaled for HMI's, as they generally don't have to update much faster than once a second.

    DO trim fluff out of any interrupt routines. Upgrade the processor one notch. Don't use expensive instructions 'Just because You Can'. CMP and CPT in AB's are expensive, but useful. Factor down any constants in them. It may be easy to write a computation like "Roll_Circum_Feet = Roll_Radius_Inches * 2 * 3.1415 / 12", but it's cheaper to put the actual formula in the comment and just do "Roll_Circum_Feet = Roll_Radius_Inches * 0.52359".

    --------------------------------

    It is, of course, only my opinion, but I much prefer code that is understandable and maintainable rather than 'cool'. Note that this doesn't only apply to PLC logic, but also to any programming language. I laugh at people that use every trick they can to 'optimize' C/C++ without even looking at a profiler to see what actually requires optimization.
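    Here is that sketch of the phase-counter idea and the pre-factored constant, written as plain C rather than ladder logic just for illustration; the routine names (ScaleHmiValues, UpdateTotals) and the scan loop are made up, not anything vendor-specific.

        /* Sketch: spread low-priority subroutines across scans with a phase counter. */
        #include <stdio.h>

        static void ScaleHmiValues(void) { /* hypothetical: scale raw values for the HMI */ }
        static void UpdateTotals(void)   { /* hypothetical: totalizers, report values    */ }

        static int phase = 0;   /* advances once per scan, rolls 0..9 */

        static void low_priority_tasks(void)
        {
            switch (phase) {
            case 0: ScaleHmiValues(); break;   /* runs every tenth scan                    */
            case 1: UpdateTotals();   break;   /* runs the next scan, never the same one   */
            /* cases 2..9 are free for other slow work */
            }
            phase = (phase + 1) % 10;
        }

        int main(void)
        {
            /* Pre-factored constant: 2 * 3.1415 / 12 = 0.52359; keep the formula in the comment. */
            double Roll_Radius_Inches = 18.0;                          /* example value      */
            double Roll_Circum_Feet   = Roll_Radius_Inches * 0.52359;  /* = r * 2 * pi / 12  */

            for (int scan = 0; scan < 20; scan++)   /* stand-in for the PLC scan loop */
                low_priority_tasks();

            printf("Roll circumference: %.3f ft\n", Roll_Circum_Feet);
            return 0;
        }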
  3. Use Moxa switch with SLC5/05

    No problem. A switch is a switch, the only difference is quality, and Moxa is very good for industrial applications.
  4. Non Linear Scaling

    Since the formula is too much of a pain to type in here, here is a link to a site with a nice version: Volume of a Horizontal Tank It doesn't take into account spheroid end-bells, but I've used it many times over and it's good enough.
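    For reference, a minimal sketch of the standard circular-segment formula that this kind of calculation is based on (this is not code from the linked page, and like that page it ignores the end-bells): for radius r, shell length L, and liquid depth h, the wetted cross-section is r^2 * acos((r - h)/r) - (r - h) * sqrt(2rh - h^2), times L for volume.

        /* Partial volume of a horizontal cylindrical tank with flat heads
           (circular-segment formula; spheroid end-bells are NOT included). */
        #include <math.h>
        #include <stdio.h>

        double horiz_tank_volume(double r, double len, double h)
        {
            if (h <= 0.0)     return 0.0;
            if (h > 2.0 * r)  h = 2.0 * r;                      /* clamp to a full tank     */
            double seg = r * r * acos((r - h) / r)
                       - (r - h) * sqrt(2.0 * r * h - h * h);   /* circular-segment area    */
            return seg * len;                                   /* same length units, cubed */
        }

        int main(void)
        {
            /* Example: 4 ft diameter, 10 ft long shell, 1.5 ft of liquid. */
            printf("Volume: %.2f cubic feet\n", horiz_tank_volume(2.0, 10.0, 1.5));
            return 0;
        }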
  5. EVEN NUMBER ENTRY - NS10 HMI

    NUM_IN & 0xFFFE (ANDing with 0xFFFE clears bit 0, which forces any entered value down to the nearest even number)
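    A quick C illustration of the mask (NUM_IN is the name from the post; the sample values are arbitrary):

        #include <stdio.h>

        int main(void)
        {
            int samples[] = { 10, 11, 250, 251 };
            for (int i = 0; i < 4; i++) {
                int NUM_IN = samples[i];
                printf("%3d -> %3d\n", NUM_IN, NUM_IN & 0xFFFE);   /* 11 -> 10, 251 -> 250 */
            }
            return 0;
        }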
  6. Why not...

    No, not necessarily. In almost all cases currently, communications between a DDEServer <--> PLC and an OPCServer <--> PLC are the same, and depend on the manner the developer of the server uses to implement the communications. That can (and hopefully will) change in the future. In any event, most PLC's out there do have an upper limit on communications time-slices, as in the PLC CPU can be configured to limit how much CPU time is devoted to communications and how much is devoted to running the actual program.

    No matter what server type is used, the same basic 'tricks' apply for optimizing communications, and some DDE/OPC servers are better than others at smart optimizations. Example (with fake names): QADS is a 'Quick and Dirty Server' that someone threw together for talking to an SLC-500. FASS is a 'Fast and Streamlined Server' that was developed with a full understanding of SLC communications. Your application needs to read the following data cyclically: N7:0, N7:1, N7:8, N7:12, and N7:20.

    QADS will probably poll each item individually:
        "What is N7:0 now?" -- wait for reply
        "What is N7:1 now?" -- wait for reply
        ...
        "What is N7:20 now?" -- wait for reply, go back to top

    FASS, on the other hand, realizes that all the data comes from the N7 file and will fit in a single block read, so it is a bit smarter:
        "Give me everything from N7:0 to N7:20" -- wait for reply

    Notice that FASS is retrieving un-needed data, but overall it's much faster, as the share of each message that is 'Actual Data' rather than 'Protocol Handling Data' approaches unity (there is a rough sketch of this at the end of the post). Almost all commercial servers operate this way. In your example, N104:0 through N104:110 can be read as a single block (or maybe two) by FASS, but takes 111 separate transactions with QADS.

    So, for optimizing communications in any event, the best thing to do is to try to pack all of your HMI cyclical reads and writes together in one or more contiguous areas of processor memory (use files for SLC/PLC, and UDT's for CLX). Especially important is to pack all of your alarm bits/words together, and to a somewhat lesser extent, your historical trend words. Also, for big HMI's, don't poll everything all the time. Take advantage of things like WonderWare's "Poll only active items". I hope that helps some!
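    The promised sketch of the block-read idea, in C; read_n_file() is a hypothetical placeholder for whatever the server's driver actually calls, not a real vendor API:

        #include <stdio.h>

        #define N7_LO  0
        #define N7_HI  20

        /* Hypothetical transport call: fill buf with N7:start .. N7:start+count-1. */
        static int read_n_file(int file, int start, int count, short *buf)
        {
            (void)file;
            for (int i = 0; i < count; i++)
                buf[i] = (short)(start + i);        /* fake data for the demo */
            return 0;
        }

        int main(void)
        {
            short n7[N7_HI - N7_LO + 1];
            int wanted[] = { 0, 1, 8, 12, 20 };     /* the items the HMI actually needs */

            /* FASS style: ONE transaction covering the whole span...           */
            read_n_file(7, N7_LO, N7_HI - N7_LO + 1, n7);

            /* ...then the individual items are served out of the local buffer. */
            for (int i = 0; i < 5; i++)
                printf("N7:%d = %d\n", wanted[i], n7[wanted[i] - N7_LO]);

            return 0;
        }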
  7. Why not...

    b_carlton is close, but a little off the mark :)

    "DDE" (Dynamic Data Exchange) was introduced in Windows 3/3.1 as a method of inter-process communications, inter-process meaning between applications running on one computer. It is basically a plain-text formatted method of transferring information, and at its heart the protocol supported "CONNECT", "DISCONNECT", "READ", "WRITE (POKE)", and "ADVISE" methods. Like many protocols, the "CONNECT" and "DISCONNECT" commands were used to see if a server was running, and to see if a 'topic' was configured in the server. If you could connect to a server and topic, then you could ask it to retrieve things one at a time asynchronously ("READ"), or write things asynchronously ("POKE"). The "ADVISE" command gave the server one or more items that the asking application wanted to know about any time they changed.

    For one-off things (like an Excel macro, for example), "READS" created an entire message every time a read was requested. For polled things (HMI's), "ADVISE" commands were sent to the DDE server. It was up to the DDE server to continuously poll the target device, and any time an 'advised' value changed, it would send a message up to the client program (HMI) stating 'this item on my advise list changed, its new value is x'. Software publishers often extended DDE to handle things like "Block DDE" or "FastDDE" in order to squeeze more data into the blocks transferred from the DDE server to the client application (multiple items per 'advise' response).

    DDE was managed atomically by Windows, and was actually a part of the operating system (its original design goal was to provide for doing things like printing a word-processing document by dragging the document to the printer icon, for example). The first big disadvantage to DDE for HMI/PLC comms is that it did NOT support networking at all, and then, after extensions were made (WonderWare's NetDDE being one), it only supported networking in a rather clunky manner. That clunky manner even got pretty well broken post Windows 2000 SP4. It is possible to make a DDE request of a server on a remote computer, but that is really a hack, and inefficient. The second big disadvantage was that DDE didn't natively support timestamping or item quality analysis, and much like video/printer/etc. drivers for DOS applications, every publisher had a slightly different way of doing things.

    "OPC" (OLE for Process Control) is built firstly on a standard interface, and secondly, even though it is used much like DDE, it is a true client/server architecture. If you aren't using timestamps or item quality, and are using typical OPC communications drivers, it doesn't seem all that much different from DDE at first. The real difference is that you actually only NEED an OPC client in order to connect to devices that can handle OPC communications. The actual server side can be anywhere, including embedded in the target device (PLC's can have OPC servers built into them). An OPC client doesn't actually have to know anything about how to talk to a device on the physical level, unlike DDE servers, which do.

    "OLE" (Object Linking and Embedding) is another Windows technology, but is easily extended to cover non-Windows platforms. With OLE, the 'client' program merely serves as a wrapper, which can dynamically connect to the actual software that handles the communications. And since OPC was designed as a successor to DDE, many more features were included that are more directly related to process control and information gathering (such as directly browsing a target device's namespace). Hope that clears up some confusion!
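    A small sketch of the "ADVISE" / report-on-change pattern described above, in C with entirely hypothetical names (poll_device, notify_client); it is not any real DDE or OPC API, just the idea of the server polling on its own schedule and only pushing changes up to the client:

        #include <stdio.h>

        #define ITEMS 3

        static const char *advise_list[ITEMS] = { "N7:0", "N7:1", "N7:20" };
        static int last_value[ITEMS] = { -1, -1, -1 };

        /* hypothetical: the server reads one item from the PLC */
        static int poll_device(int item, int tick) { return (item == 1) ? tick / 2 : 100 + item; }

        /* hypothetical: the server pushes an update message to the client (the HMI) */
        static void notify_client(const char *item, int value)
        {
            printf("ADVISE: %s changed, new value = %d\n", item, value);
        }

        int main(void)
        {
            for (int tick = 0; tick < 6; tick++) {          /* the server's own poll loop */
                for (int i = 0; i < ITEMS; i++) {
                    int v = poll_device(i, tick);
                    if (v != last_value[i]) {               /* only report changes upward */
                        notify_client(advise_list[i], v);
                        last_value[i] = v;
                    }
                }
            }
            return 0;
        }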
  8. PLC input cards

    Well, yes. Current does flow. If it didn't, then the module wouldn't do anything. Typical currents into input modules are around 2 to 10 mA, which usually drive the opto-isolators on each input channel. The specifications for input current and impedance for all AB modules are listed in the manual for the I/O module. For the record, even when using a voltmeter, current flows, and the meter loads down the signal source. Typical digital voltmeters have approximately 1 to 10 megohms of input impedance, while analog meters are usually rated in kilo-ohms per volt.
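    A quick worked example of that loading difference, with assumed round numbers (a 24 VDC signal, roughly 3 kohm for an input point, 10 Mohm for a digital meter; check the module manual for the real figures):

        #include <stdio.h>

        int main(void)
        {
            double v_source     = 24.0;      /* VDC signal source                       */
            double z_input_card = 3.0e3;     /* assumed discrete-input impedance, ohms  */
            double z_dvm        = 10.0e6;    /* typical DVM input impedance, ohms       */

            printf("Input point draws about %.1f mA\n", v_source / z_input_card * 1e3);  /* ~8 mA   */
            printf("DVM draws about         %.1f uA\n", v_source / z_dvm * 1e6);         /* ~2.4 uA */
            return 0;
        }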