jstalman

Logix5000 Processors Memory and Scan Time

9 posts in this topic

I have a couple of questions; hopefully they're quick and easy. When creating a ladder logic program in Studio 5000 (I mainly use 1769-L36ERMs), which of these gives a quicker scan time and lower memory consumption: using branches on one rung, or individual rungs of code? I am using eight SQOs in an indexing application, all triggered by the same bit, so I had them on branches of a single rung, but for readability's sake I think it would be better as individual rungs. Then my curiosity kicked in, and I was wondering which would be the better option for performance.

My other question is about creating a UDT: is there a layout that works best? For example, listing all the BOOLs, then REALs, then DINTs, then STRINGs, etc.? Thanks for any feedback. jstalman

With today's hardware, readability is far more important than efficiency in most cases. But to answer your question, it all depends on the rungs you're talking about. If you double-click on a rung, you can see the actual instruction text that the processor executes, and that should give you a pretty good idea based on the number of instructions. Don't forget that each rung also has an SOR (start of rung) and an EOR (end of rung) that aren't shown. If you're trying to organize your UDTs for maximum space efficiency, the easiest way is to group everything by data type, especially the BOOLs. The controller stores everything in 32-bit chunks, so when it gets to the end of a group of BOOLs, it will just tack on extra space to fill up the chunk. Grouping them all together keeps the losses to a minimum. The same is true to a lesser extent of the other data types, but BOOLs are the worst offender.
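To make that concrete, here is roughly what the two layouts from the question would look like as rung text (the tag names and SQO operands below are made up for illustration, and only three of the eight sequencers are shown):

    One branched rung:
    XIC(Index_Trigger)[SQO(Seq1_Data[0],16#FFFF,Seq1_Out,Seq1_Ctl,8,0)
                      ,SQO(Seq2_Data[0],16#FFFF,Seq2_Out,Seq2_Ctl,8,0)
                      ,SQO(Seq3_Data[0],16#FFFF,Seq3_Out,Seq3_Ctl,8,0)];

    Individual rungs:
    XIC(Index_Trigger)SQO(Seq1_Data[0],16#FFFF,Seq1_Out,Seq1_Ctl,8,0);
    XIC(Index_Trigger)SQO(Seq2_Data[0],16#FFFF,Seq2_Out,Seq2_Ctl,8,0);
    XIC(Index_Trigger)SQO(Seq3_Data[0],16#FFFF,Seq3_Out,Seq3_Ctl,8,0);

The separate rungs repeat the XIC and each carry their own hidden SOR/EOR, while the branched rung uses BST/NXB/BND instead.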

Thanks, Jeremy. I usually go with readability as well; I just wondered if that was the right approach. I have a strong plant maintenance/service background and always want things to be easy for those guys now that I'm on the engineering side of things. I also assumed that about the UDTs. I try to set up DINTs as much as possible, again to keep it easy for whoever maintains the machine down the line. jstalman

The difference is minimal unless you have a very large program, and even then speed isn't usually an issue unless you have an extremely time-sensitive application. If that's the case, I would use a periodic task for the essential logic. It can be satisfying to write some really tight, efficient code, but when you get a call from maintenance at three in the morning because they can't understand the logic, all that satisfaction goes poof!

One other way to save space is to not create bools as base tags at all. Create a dint or dint array called BOOL_Storage and alias bools out of it.
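A quick sketch of what that looks like in the tag database (the names here are just placeholders):

    BOOL_Storage    DINT[4]    base tag, 128 usable bits
    Pump1_Run       BOOL       alias for BOOL_Storage[0].0
    Pump1_Fault     BOOL       alias for BOOL_Storage[0].1
    Conveyor_Run    BOOL       alias for BOOL_Storage[0].2

The logic references the alias names, so it stays readable, while the bits are packed into DINTs instead of each BOOL base tag taking its own 32-bit allocation.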

He's talking specifically about UDTs, which already group BOOLs, assuming you clump them together. Actually, the UDT definition page lists the size of the structure in bytes, so you can test my theory: create a UDT with a DINT, then a BOOL, then another DINT, then another BOOL, and look at the size. Then put the two BOOLs back to back, and the size of the UDT will change. But using a DINT with bit-level sub-elements instead of a bunch of BOOLs is a way to save memory on standard tags (is that what you'd call them?). Then again, it hardly makes a difference unless you've got a huge program, since memory isn't at a premium with today's hardware.
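For example (hypothetical member names; check the Data Type Size field in your own editor, since the exact numbers can vary by controller and firmware):

    Interleaved UDT          Grouped UDT
      Count   : DINT           Count   : DINT
      Flag_A  : BOOL           Count2  : DINT
      Count2  : DINT           Flag_A  : BOOL
      Flag_B  : BOOL           Flag_B  : BOOL

The interleaved version should report a larger size (likely 16 bytes versus 12), because each isolated BOOL gets its own hidden storage byte plus padding before the next DINT, while back-to-back BOOLs share the same hidden byte.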

Aside from grouping all BOOLs together in UDTs, worry about memory AFTER your code is running. When it comes to branches vs. rungs, a rung always starts with a hidden SOR (start of rung) and ends with an EOR (end of rung), while branches start with BST, end with BND, and have an NXB between legs. I guess you might technically save a few instructions by branching instead of writing separate rungs, but it is pointless to think about. Even if memory gets tight, there is usually a bigger cause (like selecting the wrong processor), and micro-optimizing is pointless. That, and as an end user, if you send me a processor that uses 99.7% of its available memory, I'm sending it back.
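A rough back-of-the-envelope count for the eight-SQO case makes the point (treating every element as one instruction, which ignores the fact that an SQO is far bigger than the bookkeeping instructions around it):

    One branched rung:    SOR + XIC + BST + 8 SQO + 7 NXB + BND + EOR  = 20 elements
    Eight separate rungs: 8 x (SOR + XIC + SQO + EOR)                  = 32 elements

Either way, the difference is a dozen trivial instructions wrapped around the same eight SQOs, which is noise next to everything else in a real program.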

With Logix PLCs, in my opinion, don't worry about keeping memory usage low unless you must. If you have an application that must be high-speed, though, you may find yourself in a pickle with most Logix PLCs. One application we had to figure out was a high-speed job duplicating an old G&L controller that had a 1 ms multitasking scan. We could not get it to run fast or repeatably until we put the code (12 separate, short routines that were identical except for their inputs and outputs) into an AOI, with those 12 AOIs on a single rung in a second processor. BAM, fast execution. My tip: figure out the parts of your code that MUST be high-speed and put them in a task with an acceptable scan Priority and Rate, and put all other code in a second task with a slower Priority and Rate. All of my testing has shown that you cannot force the Logix platform to scan any task on a repeatable schedule; all you can do is manage it to scan as fast as possible and live with it.

It depends on what you mean by repeatable. In my experience, a task set up to scan every 5 ms will usually scan every 4-6 ms, and a task set up to scan every 20 ms will scan every 18-22 ms. That is close, but you're right, not exactly repeatable. It's always been acceptable for my needs (and I've had some high-speed prox/encoder applications). There will always be some error, since the processor has to deal with overhead such as I/O and communications as well as all the other tasks. I don't argue that your special application required what you describe (I'm sure it did!), but saying that "one cannot force the Logix platform to scan any task at a repeatable schedule" is misleading. For most applications, the repeatability of scheduled tasks is perfectly acceptable.
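If you want to see what a periodic task is actually doing on your own hardware, one option is to read the TASK object with GSV instructions, something like this (Task_Fast and the destination DINT tags are placeholder names):

    GSV(TASK,Task_Fast,LastScanTime,Fast_LastScan_us);
    GSV(TASK,Task_Fast,MaxScanTime,Fast_MaxScan_us);
    GSV(TASK,Task_Fast,OverlapCount,Fast_Overlaps);

LastScanTime and MaxScanTime come back in microseconds, and a climbing OverlapCount means the task is being triggered again before the previous scan finished, i.e. it isn't keeping up with its Rate.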
