Nathan

SCADA Exploit

Industrial Security (13 votes)

1. Is Industrial Security taken seriously enough?

• Absolutely: 0 votes
• No, the government needs to regulate: 1 vote
• No, but we can do it on our own: 10 votes
• I don't particularly care: 2 votes



Core Security recently released an advisory for Wonderware systems running SuiteLink (multiple products fall under this category). It's a vulnerability that I consider a pretty big deal. While I wasn't exactly impressed with their timeline, the vendor did the right thing. Without pointing fingers at anyone specific, it strikes me that this has received little publicity and that very few people seem to care. We're talking, conservatively, about thousands of plants in just about every industry that are exposed to this exploit, and we're talking about the market leader, by a wide margin.

Have you received a notification to patch? Does this sit well with you? Perhaps I've missed the relevant notices from the security and standards organizations. It seems like the shift to PC- and Ethernet-based control systems occurred in a vacuum, without the hard lessons learned in the general computing field on which the technology is based. I'd like to hear your opinions. This isn't a "beat up Wonderware" thread - it's just what drew my attention to the obvious shortfalls of our industry.

http://www.coresecurity.com/?action=item&id=2187
http://isc.sans.org/diary.html?storyid=4390

I think there is a general problem with companies accepting that supplied equipment will have 'bugs' or similar problems, and then having a method of dealing with them. I can remember having a problem with a major PLC manufacturer some years back. They remained in 'denial' mode until I actually took the kit into the UK office to demonstrate the problem. There were then some hurried phone calls to head office, followed by a comment along the lines of 'they didn't think anyone would find the problem'. Needless to say, the company I was working for were very upset (insert suitable expletive here!) about it, as they had to pay for my time to sort it, and then they had to update about 50 systems in the field. I cannot name names and companies as it ended up getting legal, but they did get some compensation.

Agreed, and your case exemplifies my point. Covering up the vulnerability makes it that much worse for everyone once it's discovered. It would be marginally acceptable if the vendor chose not to make the details public but did contact their customers. My opinion is that they should publicly release details after giving their clients sufficient notice. Every other computing industry, particularly the security-oriented sectors, has come to accept that bugs and patches are a natural part of the development cycle. Publishing is the best way to facilitate mitigation and inform your client base. As an end user, I'm not seeking unnecessary patches - I need to be told when it really matters, to minimize the inconvenience. As an attacker, I will locate the exploits regardless - the harder they are to find, the better for me, because the defenders stay in the dark that much longer.

This is the fundamental problem with proprietary protocols and protection by obscurity. If they published the details of the SuiteLink protocol, the problem would have been found (and corrected) much earlier. If they would just stick with standard protocols, there wouldn't even be a problem.

As it stands, if you ever talk with any of the security people, such as someone at CERT, they will immediately tell you that firewalls are critical as a means of blocking even unknown exploits like this one. The conceptual term is that you are reducing your attack surface: by limiting the points of entry to only known protocols, you reduce the chance that an exploit through some unknown means is possible.

The most troublesome protocols are ones like SuiteLink or COM/DCOM (aka ActiveX). These protocols tend to be a gateway to all kinds of objects, running code, etc. It is darned near impossible to track down and eliminate every exploit, even if you have the power and money of Microsoft behind you, if you are using a general protocol that allows raw machine code to be downloaded. The current approach to controlling exploits in these very broad, general protocols is to run code in a "safe" virtual machine (aka a sandbox). A good example of this (though it still has its limits) is Java applets, or even Java RMI. In both cases, the code is carefully sandboxed so that it can't execute arbitrary code.
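To make the "attack surface" idea a bit more concrete, here is a minimal C sketch (my own illustration, nothing from the advisory) that simply probes a control-system host for a handful of common automation TCP ports to see which entry points are actually reachable. The host address and the port list are assumptions for the example; in practice you'd compare the results against whatever your firewall policy says should be open.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>

int main(void)
{
    /* Hypothetical HMI/SCADA node and a short list of common automation
     * service ports (S7, Modbus/TCP, SuiteLink, EtherNet/IP) -- adjust
     * both to your own plant; these values are illustrative assumptions. */
    const char *host = "192.168.1.50";
    const int ports[] = { 102, 502, 5413, 44818 };
    size_t i;

    for (i = 0; i < sizeof(ports) / sizeof(ports[0]); i++) {
        struct sockaddr_in addr;
        int fd = socket(AF_INET, SOCK_STREAM, 0);

        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.sin_family = AF_INET;
        addr.sin_port = htons((unsigned short)ports[i]);
        inet_pton(AF_INET, host, &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0)
            printf("port %d: open (exposed entry point)\n", ports[i]);
        else
            printf("port %d: closed or filtered\n", ports[i]);

        close(fd);
    }
    return 0;
}

Anything that answers on a port you can't account for is exactly the kind of unknown entry point the firewall argument is about.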

Perhaps... that was my initial response, but I really don't think it applies here. Specifically, their service could be taken down by a malformed packet sent by an attacking computer over an unauthenticated TCP session. More precisely, a malloc (memory allocation) call returns NULL, which induces a NULL pointer dereference that crashes the application (a simplified sketch of that pattern is below) - those kinds of things exist in every application. And standards don't dictate line-by-line implementation. This case was a matter of not sufficiently checking input parameters - though perhaps it could have been caught at a lower level. That problem really wouldn't likely have been solved by "sticking with standards" or even by releasing/documenting theirs. Even if this had been an open source project, I still imagine the process would have been similar - the only difference being that it's easier to determine the problem and fix it once it's reported. Projects of all types face challenges like this.

However, I agree that our industry is stuck on "security by obscurity" and relies on the false protection of undocumented, weak protocols. It's only a matter of time before software experts get in there and tear it up. I was just reading an article about how sophisticated attacks have become easier for non-technical users by "standing on the shoulders of giants" (check out Metasploit, for example). It only takes one well-written tool for all the little "script kiddies" to use. Our industry, particularly SCADA security, is coming under the spotlight. It won't be "safe" through confusing design or lack of documentation for much longer.

I agree with all your points and arguments. However, I believe this particular case is an interesting one in that it's not a direct result of any of those. It's a matter of a company fixing an excusable bug that happens to be a nasty vulnerability. Can you imagine the damage Disgruntled Joe Plant Worker could cause by dropping the application at the worst times? The interesting detail to me is not the bug itself, but how the vendor and the public deal with it. That's why my original post focused more on the "little publicity" aspect than on the technical details of the problem (now that there's a resolution). The vendor did appropriately request that Core Security delay their bug release until they finished the fix. That step was correct...

Edited by Nathan
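For anyone who hasn't run into this bug class, here is a minimal C sketch of the pattern described above - an unchecked allocation driven by an attacker-controlled length field - next to the obvious hardened version. This is my own illustration only; the struct, field names, size limit, and handle_packet functions are made up and are not the actual SuiteLink code.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <stdint.h>

/* Illustrative message layout: a length field taken straight off the wire,
 * plus a pointer to the payload bytes. Not the real SuiteLink format. */
struct packet {
    uint32_t       len;
    const uint8_t *data;
};

/* Vulnerable pattern: an absurdly large len makes malloc() return NULL, and
 * the unchecked memcpy() dereferences that NULL pointer and kills the service. */
int handle_packet_vulnerable(const struct packet *p)
{
    uint8_t *buf = malloc(p->len);
    memcpy(buf, p->data, p->len);   /* crashes here if buf == NULL */
    free(buf);
    return 0;
}

/* Hardened pattern: validate the untrusted length and check the allocation,
 * so a malformed packet is rejected instead of taking the service down. */
int handle_packet_checked(const struct packet *p)
{
    enum { MAX_MSG = 64 * 1024 };   /* illustrative protocol limit */

    if (p->len == 0 || p->len > MAX_MSG)
        return -1;                  /* reject malformed input */

    uint8_t *buf = malloc(p->len);
    if (buf == NULL)
        return -1;                  /* allocation failure is not fatal */

    memcpy(buf, p->data, p->len);
    /* ... parse/process the message here ... */
    free(buf);
    return 0;
}

int main(void)
{
    const uint8_t payload[4] = { 1, 2, 3, 4 };
    struct packet good = { 4, payload };
    struct packet bad  = { 0xFFFFFFFFu, payload };  /* attacker-chosen length */

    printf("good packet: %d\n", handle_packet_checked(&good));  /* prints 0  */
    printf("bad packet:  %d\n", handle_packet_checked(&bad));   /* prints -1 */
    return 0;
}

The point being that the fix amounts to a couple of input-validation lines, which is why I'd call the bug itself excusable - the disclosure handling is the more interesting part.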

Remember when a hacker did the same thing to Microsoft for MONTHS? And then actually started crashing and exploiting their own servers? The best thing for the public to do is to collect evidence of this sort of thing and use it during the selection process for HMI software, etc. In past years, Microsoft all but lost the server business because they handled the reliability and security issues so horrendously. It looked like the tide was turning and they were actually going to address those issues, until Vista came along. Granted, part of the theory behind Vista is to address some of those nasty issues (COM/DCOM), but the new platform is once again not very stable.

I wasn't claiming that my example implementation is any more or less secure than others. My example attempted to illustrate that you know what's going on with respect to vulnerabilities, provided you know what the system is running and which standards it follows. For example, I learned from a Red Team (ethical hackers / penetration testers) that MS SQL Servers often provide good attack points. There are many such attacks - the point is that you can say, "Hey IT department, help me secure my SQL Server." The same applies to Tomcat and the OpenBSD project, as you suggested.

