tcpdump

Operating systems: Linux Vs Windows

17 posts in this topic

Hello all. Just a question: which operating system is better for either the Master Station or the HMI, Windows or UNIX? Now, I don't want to open a can of worms, and I know this may prompt much heated debate between fundamentalists and those who have been brainwashed by Micro$oft... but does it really make a difference in terms of performance?

Since our major supplier of PLCs is AB/Rockwell, we're pretty much stuck using Windows, both from a supplier and a corporate-edict point of view. That said, I've seen a lot of good Unix/Linux-based SCADA as well. The biggest drawback I can see to Linux is the limited availability of knowledgeable support personnel versus Windows-qualified support staff.

Good Linux-based SCADA? What's out there that's worth the effort? (I'm truly interested!)

Performance is better, especially on older hardware, and stability is still worlds better than Windows. The differences are blurring, however. Windows keeps recompiling the standard Unix TCP/IP stack for its internet protocol support, and Java hides all the ugly details of the underlying machine, coming very close to natively compiled applications at this point once the monster executable gets booted. I would never have thought that you could treat a SQL server as essentially a generic communication system for an HMI, but inductiveautomation.com has proven that it actually works quite well, in Java no less.

But you've got to ask yourself what the goal is. For me, if I were going to "go native", I'd probably consider writing everything in either Javascript or Java (or one of Java's various auxiliary languages). That way I wouldn't have to care what the underlying OS is, and performance would be very, very good. Consider the Dojo toolkit or NetBeans if you want to go this route. Visual Studio looks pretty, but between the license fees and the bugs and instabilities, I'd rather go with a more solid and well-documented platform.

The downside is that most HMIs come with lots and lots of packages that have already done a lot of the development work for you. Logging, charting, animation, handling various security aspects, setting up and tracking all the bits and pieces of data from the PLCs... all done for you. So for that reason, I'd rather stick with somebody's HMI. Over the years, I've worked with RSView32, Wonderware, LabVIEW, and Cimplicity. To a large degree, most of them work more or less the same; each definitely has some niche features that are better than the others. At this point, I believe some of the less expensive competitors have equalled or even eclipsed these guys in many ways. They're trying to compensate by selling more "integrated" solutions with additional add-on packages, such as AB's RS-Enterprise stuff (Cimplicity has Plant Applications, and Wonderware has various things that they add on to FactorySQL). I've never really had the opportunity to make a decision to switch. If I had to start from scratch, I'd take a very hard look at inductiveautomation.com; they seem to be doing some things "right". I'd also have to strongly consider NetBeans outright as a platform: the pre-built features mentioned above are pretty much available under NetBeans, plus you get all the bells and whistles that come with Java for free.

Nowhere in there is a discussion of platforms. I'm partial to Linux/BSD platforms for the simple reason that the performance is well ahead of Windows, along with the reliability. I can trivially put a fully functional copy of Linux/BSD onto a USB stick or CD and expect it to boot as fast as or faster than Windows, with no hiccups out of it for months. However, there are also the inevitable compatibility issues, and the fact that support is just not as good as it is on the Windows platform.

Heck, one major problem is OPC. Outside of OPC UA, which is a closely guarded secret unless you want to pony up the bucks to be in the club, OPC is based on the COM/DCOM object model that was the Windows platform for years. It is extremely difficult to do any kind of cross-platform object implementation, especially with documentation as bad as COM/DCOM's; this is the primary reason that not too many people have pulled it off. There are native drivers for SOME PLCs, but by and large almost everyone is slowly migrating to the OPC concept of a more or less generic interface system. Since it is based on COM/DCOM, it is very rare to see it supported outside of Windows, and even if you could get it, it is most likely still going to require you to use Windows as a front end to your PLC.

So like it or not, we're kind of stuck with Windows right now in many circumstances.
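The "SQL server as a generic communication system" idea can be sketched in a few lines; the schema and tag names below are invented for illustration (this is not Inductive Automation's actual design), using an in-memory SQLite database as a stand-in for the shared SQL server:

```python
# Sketch of using a SQL database as the "communication bus" between the
# PLC side and the HMI side. Schema and tag names are hypothetical.
import sqlite3
import time

db = sqlite3.connect(":memory:")  # stand-in for the central SQL server
db.execute("""CREATE TABLE tags (
    name TEXT PRIMARY KEY, value REAL, updated REAL)""")

def publish(name, value):
    # The PLC gateway writes current values into the shared table.
    db.execute("INSERT OR REPLACE INTO tags VALUES (?, ?, ?)",
               (name, value, time.time()))

def poll(name):
    # The HMI client just polls SQL -- no OPC/DCOM in the path.
    row = db.execute("SELECT value FROM tags WHERE name = ?",
                     (name,)).fetchone()
    return row[0] if row else None

publish("Tank1.Level", 73.2)
assert poll("Tank1.Level") == 73.2
publish("Tank1.Level", 74.0)     # the next scan overwrites the value
assert poll("Tank1.Level") == 74.0
```

Because both sides speak plain SQL, either end can run on any OS with a database driver, which is the cross-platform appeal being described.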

I second Paul here, and have a few things to add:

1. An HMI terminal should work like an appliance in terms of security, stability/reliability, and performance. The geek in me screams Linux. My experience has been that Windows is what the people who configure, install, and maintain these systems can best support.

2. In terms of connectivity, OPC is the de facto standard for device connectivity, and they're leading the way with a (currently) DCOM-based (read: MS) solution. Even the vendors and non-OPC camps seem to lean toward MS. Unless you're a software programmer who plans on laboring on the aspects of your project where you shouldn't, Linux will be tough here. Note: being more loosely coupled between the individual HMI node and the PLC can buy you flexibility.

3. In terms of built-in tools and packages, there are abundant options, and they are all Windows. I would love to see a mature commercial or open-source Linux solution that is viable for projects. Kudos to http://pvbrowser.org/ for leading the way, to my knowledge.

4. Open/distributed access. HTML is the most universal, but clearly isn't capable of being a "rich" HMI: it lacks persistence, so you don't get realtime updates, and it's weak in the areas of graphs, printable reports, alarms, etc. Javascript could be used for really cool HMIs in theory (think Gmail), but in practice it's dangerous; I have written a Javascript-based HMI that worked, and I regret it to this day. Java is a good option due to its platform independence. Flash seems like it'd be really cool, but I haven't seen it. Aside from that, vendors seem to be going with client-installed software that can only run locally on MS, augmented with a limited web server that may also have ActiveX controls.

So unless you have a programming team with a big budget, all signs point to MS. I warn you: even with a big budget now, custom controls applications will be painful later. So we'll wait for OPC-UA and open-source projects to gain maturity. Or, heaven forbid, for vendors to add Linux support to their existing product lines, which is viable.

Nathan and Paul, I've read your posts with interest, and they generated a question in my mind that I'll pose to people: has anyone tried an MS solution like RSView, RSLogix, CXS and so forth and run it in Wine on Linux? I was having coffee with a couple of Linux geek friends of mine (database and server programmers) and they mentioned this approach.

Haven't attempted Wine-based MS HMI software. I don't know if there's a compatibility problem, but it seems like there are fewer and fewer of those every day. Here's another set of options I've been considering...

I already have a few HMIs on the floor implemented via KVM extender. This is the reverse of the KVM switch: instead of several PCs sharing a single keyboard/video/mouse, I have one PC with one or more monitors. I can put the PC in an office and put the monitor (and even the keyboard/mouse) out in a nasty place in the plant.

Option #2 is thin clients. Personally, I've already rejected the notion, but I made the unfortunate mistake of helping our IT people far too much in figuring out how to actually implement thin clients. Don't get me wrong, there are some advantages to thin clients, but I haven't been very impressed with the plant-floor version. With a thin client, you put a "graphics terminal" on the floor; it is essentially a more sophisticated version of the KVM extender, and the actual "PC" lives some place else on the network. It doesn't eliminate PC maintenance, but frequently you can just do all the maintenance on one big PC with multiple clients attached. It reminds me of the old mainframe days, though... the world went away from the centralized computing model years ago for good reason.

Instead, I've been toying with the idea of a "middle weight" client. Advantech sells an industrial PC called the "UNO". It's basically an entire diskless PC with all the usual ports on board, including multiple Ethernet ports, VGA, serial, etc. The "drive" is a CompactFlash card. It runs Windows XP Embedded: essentially Windows XP, except you can strip out anything you don't want. There are also several other competitors, but I haven't found any at Advantech's price point. Now here's where it gets interesting.

One of the problems with using CompactFlash as a bootable device for Windows XP is that XP wants to constantly write to the CF card, which quickly wears it out. This is one huge advantage of Windows CE, except that with CE you have various software compatibility problems. Not a problem with XP Embedded: it is real XP. Windows XP Embedded has the capability of installing "filters" into the file system, so you can allocate a chunk of RAM and tell XP that all writes will be redirected to the RAM, making the CF card write protected.

So you can install everything onto a CF card, then write protect the card both in software and hardware. There's no reason for virus checkers and such, since every time you reboot, you truly reboot: there is no true "storage" at all. The "identity" of the machine (such as IP addresses) is also on the card. To replace a machine, swap cards; same with upgrades. XP Embedded also has the capability of using a "boot server" on the network where all the XPE machines can download a fresh boot image, eliminating even the need to swap cards.

I started looking into it, but I smelled trouble. You can go with the UNO preloaded, or save the money and load your own stuff. The development environment for XPE is $1K, and each licensed XPE system is $99 (no choice; MS always wants money for a bootable system). And if down the road Linux becomes viable, just swap cards again. In other words, this turns the HMI into an appliance. Under the hood, yes, it's XP. The development environment I mentioned is definitely NOT for the neophyte either. Then again, neither is doing software installations and maintenance with Windows-based HMIs. But the XP Embedded solution on a CF card takes all the pain and suffering of maintaining XP installations out of the picture.
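The RAM-overlay write filter being described can be sketched conceptually; this is only an illustration of the overlay idea (the real XP Embedded filter works at the storage driver level, not in application code):

```python
# Conceptual sketch of a RAM-overlay write filter, in the spirit of
# XP Embedded's file-system filters: the base "CF card" stays read-only,
# and all writes land in a RAM overlay that vanishes on "reboot".
# (Illustration only; names and structure here are hypothetical.)

class OverlayStore:
    def __init__(self, base):
        self.base = dict(base)   # read-only "CF card" contents
        self.overlay = {}        # RAM-backed writes

    def read(self, path):
        # Reads prefer the RAM overlay, falling back to the base image.
        if path in self.overlay:
            return self.overlay[path]
        return self.base[path]

    def write(self, path, data):
        # All writes are redirected to RAM; the base is never touched.
        self.overlay[path] = data

    def reboot(self):
        # "Rebooting" discards the overlay: the appliance is pristine.
        self.overlay.clear()

store = OverlayStore({"config.ini": "ip=10.0.0.5"})
store.write("config.ini", "ip=192.168.1.9")   # change appears to stick...
assert store.read("config.ini") == "ip=192.168.1.9"
store.reboot()                                # ...until the next power cycle
assert store.read("config.ini") == "ip=10.0.0.5"
```

This is why the post can claim there is "no true storage": anything malware (or a careless user) writes lives only in the overlay and is gone after a power cycle.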

I would guess that emulation probably isn't there yet. Wine's good for Windows Solitaire, though. OK, bad joke... A quick Google search seems to indicate that DCOM is tough, as would be any graphics hardware acceleration. Then you get into networking and drivers, not to mention your sound card and anything else. It's definitely possible; MAME does an excellent job with what I would guess is a much more difficult problem. Thing is, more geeks want to put time into pirating arcade games than into making HMIs run on Linux.

You might consider virtualization, which often works magically well. For example, one of my professional software developer friends (of the MS .NET flavor) often uses his Mac laptop running Windows on Parallels for software development. Granted, it does run an Intel Core 2 Duo CPU, but ALL THE DRIVERS work, and he can "tab" between operating systems. The only exception that I know of is that only one OS at a time can use USB Plug and Play.

How does this fit into HMI/SCADA? Let me give you another example. I have another friend who's a sysadmin for a university department. One of his responsibilities is running a computer lab that necessarily has: clunky, specialized chemistry simulation software that's a pain to configure (like HMIs) and that each class uses; Internet access; and local administrative ability for the class. College students are even worse (better) at hosing computers. He uses Linux boxes that run a virtualized instance of Windows, and scripts can be set up to re-image the Windows installation as periodically as you want. The idea is to appliance-ize your HMI terminals. I can't think of additional benefits of Linux under the hood over doing a similar Ghost (imaging software) multicast on the computers periodically, though I'm sure there are other reasons. I've never heard of using this to run multiple Windows instances on one machine, although you probably could run multiple Linux shells.

You might be able to somehow use the Linux portion as a firewall, but this seems complex and cumbersome for most configurations. In any event, virtualization and emulation are cool, but be careful: running Windows on Linux may introduce more potential points of failure and problems. I wouldn't do it without a really good reason.

Also, I love Paul's use of KVM extenders. I've never heard of doing it with more than one extra terminal on Windows, and performance may still be questionable. Unix has been a multi-user platform, ugh, probably from the get-go.

Edited by Nathan
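The "re-image periodically" scheme can be sketched as follows. The file names are stand-ins for illustration; a real setup would restore a virtual machine's disk image (with the hypervisor's own tooling or an imaging product like Ghost), not a text file:

```python
# Sketch of the "re-image the Windows instance periodically" idea:
# keep a read-only golden image and restore the working copy from it
# on a schedule, discarding whatever state accumulated in between.
import shutil
import tempfile
from pathlib import Path

def reimage(golden: Path, working: Path) -> None:
    """Overwrite the working copy with the pristine golden image."""
    shutil.copyfile(golden, working)

# Demonstration with stand-in files in a scratch directory:
workdir = Path(tempfile.mkdtemp())
golden = workdir / "golden.img"
working = workdir / "working.img"
golden.write_text("pristine windows install")
working.write_text("hosed by students")

reimage(golden, working)               # what the nightly script would do
assert working.read_text() == "pristine windows install"
```

The appeal is the same as the write-protected CF card: the terminal behaves like an appliance because no damage survives the next re-image cycle.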

To follow up on a point Paul mentioned: we just got a new process (revamped with new PLCs and HMIs) in our plant. The vendor used ControlLogix with a Wonderware HMI. The Wonderware resides on a Windows 2003 server running Terminal Server, and all field HMIs are thin clients using Remote Desktop or ThinManager. Looks cool. More as I get used to it.

Very Cool! I'm really impressed with the version of Remote Desktop that ships with 2003 Server (compared to their older Terminal Services). What's your policy about remote access/pulling up the system from your desk within your organization? How many concurrent clients can the system feasibly support?

Right now there is no "formal policy"; the informal policy is that only engineers and electrical technicians are granted network rights to remote-access HMIs and the like. We all operate by a gentleman's agreement that we don't operate what we can't see; we just observe and assist electricians in getting processes running again.

Actually, I wasn't considering multiple "real" HMIs with the KVM extender trick. I've used it twice successfully.

In the first case, we wanted to put real-time displays of QC data on the plant floor next to the machine operator. This is a static display... no actual keyboard/mouse. I went about it in a decidedly non-KVM way. We've had good success with cheap $100 CCTV "security monitors" on the plant floor, so I replicated the idea. I bought a 4-monitor graphics card from Matrox that had composite video output, a 4-input video multiplexer (a cheap $100 piece of hardware, very common with CCTV equipment), and 5 monitors. The standard PC was set up with a 5-"screen" desktop. The 5th monitor sits next to the "normal" one via the multiplexer; it's just there for setup reasons. The rest are on the floor with coax wiring (hundreds of feet of range, unlike VGA).

The other case is a "standard" setup: one PC, one monitor, with the PC in an air-conditioned closet.

The issue I see with doing multi-monitor PC HMIs is the keyboard/mouse. How would you share this out? Would the HMI get confused by simultaneous mouse "clicks" from multiple touchscreens? I guess you could provide your own keyboards (www.piengineering.com), each mapped to different keys, so that F1 on HMI #1 is actually F2 on HMI #2 and the HMI screen is set up to accept either to mean "open valve"... but you can imagine the inevitable maintenance headaches.

Paul, I thought you were going geek on me with multiple X clients running on a central 'nix box; the bigger question there is how you get an X-based HMI. I have seen Windows solutions that allow 2 keyboards/mice/monitors for one Windows machine. I believe they're a combination of hardware and software, but I don't see a real good reason for them with cheap PCs. A quick search also yielded Windows-based terminals (WBTs), which are some type of Microsofty thin client that runs Remote Desktop/Terminal Services.

As a comment related to your CCTV/display distribution, I was just talking to an integrator who's getting into using 37" LCD TVs for customized realtime marquees. He has a guy who happens to be good at visual/graphic layout making the screens for him, and from the sounds of it, plant managers are going ape over it. He's using FactoryPMI with a local $300 computer to run the input (there's no reason why you couldn't use any application, HMI or otherwise). An interesting challenge is that at that size you really need to run DVI/HDMI to get the resolution/quality that makes this setup impressive (1920x1080 at 1080i, I think). That kind of equipment, including splitters, amplifiers, and even the cable, makes it more economical to just put a computer at each LCD (or between 2 close ones). Off-the-shelf HDMI equipment for high-end home theater enthusiasts is emerging rapidly.

Another interesting benefit of having a separate computer with every monitor is the ability to control the views of any screen (computer) centrally from your HMI package; these could even be timed rotating views or event-driven views. The HMI package simply needs to be able to poll some central coordinating location (a SQL database, in FactoryPMI's case). That would be considerably more expensive and difficult to do with video switching equipment. I'm not even sure how you'd set up one computer to drive 8 high-resolution displays with centralized, independent screen control.

I'll post more info after I get pictures and screenshots.

Edited by Nathan
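The central-coordination idea can be sketched with a small SQL table that each display computer polls on a timer. The table and view names below are made up for illustration (FactoryPMI's actual mechanism isn't shown), again using in-memory SQLite as a stand-in:

```python
# Sketch of centrally coordinated screens: each display PC polls a
# shared SQL table to learn which view it should currently show.
# Schema, view names, and screen IDs are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")  # stand-in for the central database
conn.execute(
    "CREATE TABLE screen_views (screen_id INTEGER PRIMARY KEY, view TEXT)")
conn.executemany("INSERT INTO screen_views VALUES (?, ?)",
                 [(1, "line1_overview"), (2, "downtime_summary")])

def current_view(db, screen_id):
    # What each display PC runs on its poll timer.
    row = db.execute("SELECT view FROM screen_views WHERE screen_id = ?",
                     (screen_id,)).fetchone()
    return row[0] if row else "default"

# The HMI package (or a rotation script) retargets screen 2 centrally:
conn.execute("UPDATE screen_views SET view = ? WHERE screen_id = ?",
             ("efficiency_chart", 2))
assert current_view(conn, 1) == "line1_overview"
assert current_view(conn, 2) == "efficiency_chart"
```

An event-driven or timed-rotation scheme is just a process that issues those UPDATEs on whatever schedule you want, which is why this beats video switching hardware on flexibility.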

I didn't need high resolution. The plant floor where this equipment sits is very dusty, and the dust is thermally insulating and electrically conductive! On top of that, it is very humid all the time, and the temperature in the summer gets up close to 110 F. The guys who work in that area routinely break things, and it is not unusual for everything to get splashed with liquid iron if we have a bad mold. Hence, everything needs to be rugged as all get-out and cheap enough to replace. If the little 9" B&W CCTV monitors last for months in that environment, then I'm going to keep using them over the wow factor of a 50" plasma screen.

If you want four monitors, I bought a Matrox Parhelia PCI card. It can do 4 monitors by itself with 128 MB of memory, at up to 1600x1200 resolution on each display, with just about any video format you want as an output. If you use "joined" mode, you can use 2 cards in parallel to get to your 8-monitor output: http://www.matrox.com/graphics/en/pid/news...ver.php#options The Extio series can do up to 1920x1200 on each monitor, and each card can do 4 monitors; you can hardware-"join" these as well for an 8-monitor output. It has hardware acceleration, so it really shouldn't tax the processor as long as you aren't using it for a giant gaming display system.

You can get HDMI- and DVI-based video extenders. HDMI bandwidth will still conveniently fit onto CAT5 cabling and works over hundreds of feet, as long as the extender does deskewing, since the cable impedance is somewhat frequency dependent and will skew the video signal. The card I bought actually has HDMI outputs; I would have just had to buy it with more memory. This happens to be exactly how many of the large displays you see in an airport or a casino work; when I was shopping, that's mostly what the vendors were advertising. The real reason for using separate PCs is simply so that if one fails, you don't take down the entire system.

Paul, thanks for the great insight on thin clients and virtualization. I know you don't need high res for CCTV capabilities in a harsh environment; the discussion about marquees strayed way OT. The particular manufacturer uses them like the large-screen displays at a NOC: although everyone has access to this info, they post things like weather, network/router status, and realtime call center statistics on large screens that everyone can see. The high res is important because at that size, text and images are functionally illegible without it; it's not just "eye candy". Anyway, I know you can get Cat5e/6 DVI/HDMI extenders that are good for up to about 200' (resolution dependent). For anyone else reading: that's a point-to-point cable, not part of your network. The point being, they claim productivity increases by allowing all the workers on the line to see realtime production/downtime/efficiency/etc. from anywhere they may be working (not necessarily by a computer). In my experience this is something that owners love, managers/supervisors use, and workers may secretly hate. And you bring up a good point: you certainly have to consider how nasty your environment is.

Isn't it amazing what they've done with MAME? I would think that emulating one microprocessor with another would be challenging enough, but when you consider that some of these games run as many as 4-5 microprocessors simultaneously (not always off-the-shelf parts, either) and most have undocumented custom ASICs as well, it makes MAME's success all the more incredible. And to top it off, it's FREE! BTW, as far as piracy goes, it's my understanding that the only part of the game that is copyrighted is the code in the ROMs. In theory, MAME is only for people who already own licensed sets of game ROMs, which is why the MAME program itself is never distributed together with ROM images (other than the few that have been released into the public domain).

Correct: writing, owning, and distributing emulators is perfectly legal. In fact, I would guess that many of the contributors enjoy creating MAME more than playing the games. According to Wikipedia, the majority of the games are still copyrighted but no longer commercially available; some have been released to the public domain, and you see more and more licensed collections of the "classics" being released. I've seen a list of supported microprocessors, and there are many. Basically, I second everything you said. You can purchase DVD sets of every ROM that MAME can emulate (by version) from a "ROM burner" for about $50, which is convenient for those of us who have an old arcade machine around but don't know how to transfer it to our computer. Most of the classic games are pretty small (they're stored zipped); the newer games coming to emulation (still very old) take up much more space with textures, and the newest are much harder to emulate. I remember playing fairly functional console games on an emulator in college while those consoles were still selling late in their life cycles. Simply amazing!

We've strayed pretty far from Linux vs. Windows. I think it's safe to summarize that the bottom line depends more on the implementation than on the platform. We've all seen crappy and good programs written for both, and it's hard to dispute that a knowledgeable administrator could successfully manage either.

