paulengr

Historian experiences?

We have a GE Cimplicity installation. As of the first of the year, I turned on logging (into a SQL Server). Then suddenly I found myself buying a portal. It seemed innocent enough at the time...now it is growing legs faster than anyone imagined and has become the de facto plant reporting/metrics system. Needless to say, that's the ideal situation. But now we're starting to hit a wall. First there were the communication buffer limits. Then it was user limits (time to buy more licenses). The latest is that the load is starting to grow beyond what the SQL Server can handle. I need to start planning on installing a data historian. The obvious choice would be to go with the GE (Intellution) product, Historian (iHistorian). However, the existing system appears equally adept at working with nearly any of the competitors, especially PI. Does anyone have any experiences, good or bad, with any of the data historians out there?

Paul, I doubt that you're running into the limits of what SQL Server can handle - and that's a great thing for you! There are many things you can do to help performance. For example, SQL Server has transaction logging set by default; it's an intensive feature meant for undoing operations and totally irrelevant for data logging. When querying the data, what is the search/filter/sort criterion (probably the date field)? Is it set as an index/primary key? If not, your queries are going to run MUCH slower. Other questions: How large is your database (in GB)? What are the specs of your server (RAM, CPU)? How frequently do you log? How much data do you pull up at once? It may make sense to archive your data periodically. A lot of system performance depends on your database configuration. I have no problem helping you with your database. Nathan
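To make the indexing point concrete, here is a minimal sketch. The table and column names (PointLog, LogTime, PointId, PointVal) are hypothetical stand-ins for whatever Cimplicity actually creates, not names taken from this thread:

```sql
-- Hypothetical log table; names are illustrative only.
CREATE TABLE PointLog (
    LogTime  DATETIME NOT NULL,  -- timestamp written by the logger
    PointId  INT      NOT NULL,  -- which tag/point this sample belongs to
    PointVal FLOAT    NOT NULL   -- sampled value
);

-- A clustered index led by the timestamp keeps date-range queries from
-- scanning the whole table; including PointId covers per-tag lookups too.
CREATE CLUSTERED INDEX IX_PointLog_Time
    ON PointLog (LogTime, PointId);

-- With the index in place, a typical trend query seeks instead of scanning:
SELECT LogTime, PointVal
FROM   PointLog
WHERE  PointId = 42   -- arbitrary example tag
  AND  LogTime BETWEEN '2005-01-01' AND '2005-01-31';
```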

Paul, I am a bit out of touch with iFIX, but the historian supplied with it used to be a version of PI. I cannot believe that you are even approaching the limits of SQL Server, so I suspect the question is really about which front end you prefer. Andybr

Transaction logging: off. And I'm not using atomic transactions, since I can't really think of a situation where they would matter.

Primary keys: the date field, almost always. Cimplicity marks it as the primary key automatically. Depending on what queries are being used, I've also started setting additional secondary keys.

Size: about 5 GB, user-driven.

Specs: a P4 at 3.4 GHz, 2 GB RAM, 10K RPM SCSI drives. Basically a "server grade" machine; I'm not trying to make some ancient 386 with 1 MB do the job.

Frequency: depends on the log. Since this is a foundry, it's a mix of discrete manufacturing and process-oriented systems, so I didn't just do something stupid like "log everything once per second." Most production machines log once per piece. The continuous ones log between once every 5-10 seconds and once per hour, depending on how fast the process actually changes, along with some rate-of-change checking where appropriate. For instance, we monitor the load on the drive of an annealing oven. Most of the time this changes very little, so we log once every 10 minutes; but when it does change, the peaks and valleys are important, so when the rate of change exceeds a threshold it logs additional points.

How much data: depends on what is being requested, but generally not over a month. I'm beginning to develop some scripts and triggers to consolidate "weekly" and "monthly" type queries.

Overall, it's not a matter of the queries. If you enter a stupid query, you get a stupid result. If a query doesn't come back in about 20 seconds, the front end most users are on right now will abort it anyway. The same front end defaults to limiting queries to 1000 rows unless somebody takes the time and effort to override it. This avoids the "newbie" mistakes like SELECT * FROM <huge table of production data that we keep a 6-month window on>. If they do override any of that, they've probably also got enough experience not to send a query requesting results since the database started up. More than once I've come up with queries that basically nuke the server if I don't abort them, usually correlated subqueries. The most annoying ones are when the best way to do the query would be a correlated subquery simulating a windowing function (SQL Server's support for windowing functions is minimal); it is unfortunately very easy to write a correlated subquery that throws the query optimizer for a loop and makes it execute in O(N^2) time or worse.

My issue is that I'm rapidly approaching the point where I'm hitting the server with high insert rates (100 data points per second). Very easy to do when the SCADA system is reaching 3000 points, and I expect that to almost double over the next couple of years. I've considered getting past this by programming the logging to be more "gentle": log only when data actually changes. This works, and it is exactly how the data collection/compression side of a data historian operates. But SQL itself doesn't really understand the concept of interpolating results from these kinds of tables back into the nice, neat periodic data that would be much easier to deal with at the front end. I'm still debating whether to simulate this capability via SPROCs or triggers (a rough sketch of the trigger route follows this post). This is the second really useful thing that a data historian does automatically.

As of today, the users are already using it for all the things the "business intelligence" people claim. I've got it logging downtimes, alarms, production rates, and such. The production folks are commenting the data to include things that the SCADA system can't automatically determine itself (WHY was it down). And this in turn is feeding production/exception reports. The reporting/query side of things is working very well most of the time.
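As a rough illustration of the change-only ("deadband") logging idea above, here is one way a trigger could simulate it on SQL Server. It reuses the hypothetical PointLog table from the earlier sketch; the 0.5 deadband is an arbitrary placeholder, and a real system would want a per-point threshold:

```sql
-- Hypothetical deadband filter: keep a new sample only if it differs from
-- the most recently stored value for that point by more than a threshold.
-- Note: each incoming row is compared against what is already stored, not
-- against other rows arriving in the same batch.
CREATE TRIGGER trg_PointLog_Deadband
ON PointLog
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO PointLog (LogTime, PointId, PointVal)
    SELECT i.LogTime, i.PointId, i.PointVal
    FROM   inserted i
    WHERE  ABS(i.PointVal -
               -- last value stored for the same point
               (SELECT TOP 1 p.PointVal
                FROM   PointLog p
                WHERE  p.PointId = i.PointId
                ORDER BY p.LogTime DESC)
              ) > 0.5            -- deadband; placeholder value
       OR NOT EXISTS             -- always keep the first sample for a point
              (SELECT 1 FROM PointLog p WHERE p.PointId = i.PointId);
END;
```

SQL Server does not re-fire an INSTEAD OF trigger for inserts the trigger itself performs on the same table, so the filtered rows pass through cleanly. Interpolating the resulting change-only table back into periodic data would still need a separate stored procedure, which is exactly the part a historian provides for free.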

When I am logging data from a large number of tags, I normally use stored procedures to sort the data and put it into the right tables. This minimises the size of the transactions and also lets you use the server date and time, so all data gets a common date/time stamp, but it does make it more difficult to cache transactions if connections fail. Andybr
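A minimal sketch of that pattern, again using the hypothetical PointLog table from above; the procedure name and parameters are illustrative, not from the thread:

```sql
-- The client sends only the point id and value; the server supplies the
-- timestamp, so every tag logged this way shares one common time base
-- regardless of clock drift on the SCADA nodes.
CREATE PROCEDURE usp_LogPoint
    @PointId  INT,
    @PointVal FLOAT
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO PointLog (LogTime, PointId, PointVal)
    VALUES (GETDATE(), @PointId, @PointVal);  -- server clock, not client clock
END;
```

In a fuller version, the procedure body would branch on the point id (or a type parameter) to route discrete and analog data into their respective tables, which is the sorting step described above.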

My mind has been made up, at least with respect to Proficy Historian.

Advantages: trending is much faster and better supported with GE's Historian. With "generic" SQL tables, the only real interface is a "generic" trend chart, and I'm still investigating what would be a good one; there is no way to simply chart "anything" from GE's chart module. :( Otherwise, I could appropriately code something on the SQL Server to handle the data bandwidth issues. "Compression" (throwing out duplicated values and/or doing a "logging with deadband" type thing) is essentially built into their product, and some sort of offline calculation/updating is built in as well.

Disadvantages (legion): no creating new tables, no inserts, no updates; basically no creating or altering data in any way except through the data logger. This prevents users from screwing with the log, but it also prevents me from, for instance, overlaying it with additional plant intelligence. Granted, I can link the server to a SQL one, but then I have to play the JOIN game, and cross-server joins cause a scan instead of an indexed seek (along with big bandwidth issues; see the sketch below). Calculations in the queries are severely limited.

In a "process" plant, the trending issue would be a huge advantage. In discrete manufacturing or batch, it tips the scales towards generic servers, since trends are nice but not the center of the universe.
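To illustrate the cross-server JOIN problem: with a four-part linked-server name, SQL Server tends to pull the remote table across the wire and scan it locally, while OPENQUERY pushes the filter to the remote side so its own indexes can be used. The linked server name (HISTSRV) and all table names here are hypothetical; Historian's actual OLE DB schema differs:

```sql
-- Four-part name: even with the date filter, the remote table is often
-- fetched in bulk and scanned on the local server.
SELECT h.LogTime, h.PointVal, n.Comment
FROM   HISTSRV.HistDB.dbo.PointLog h
JOIN   dbo.DowntimeNotes n
  ON   n.EventTime = h.LogTime
WHERE  h.LogTime >= '2005-01-01' AND h.LogTime < '2005-02-01';

-- OPENQUERY: the filter runs on the remote server, which ships back only
-- the rows of interest before the local join happens.
SELECT r.LogTime, r.PointVal, n.Comment
FROM   OPENQUERY(HISTSRV,
         'SELECT LogTime, PointVal FROM HistDB.dbo.PointLog
          WHERE LogTime >= ''2005-01-01'' AND LogTime < ''2005-02-01''') r
JOIN   dbo.DowntimeNotes n
  ON   n.EventTime = r.LogTime;
```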
