speakerman

MrPLC Member
  • Content count

    88
  • Joined

  • Last visited

Community Reputation

4 Neutral

About speakerman

  • Rank
    Sparky
  • Birthday 10/06/65

Contact Methods

  • Website URL http://www.innotechglobal.com

Profile Information

  • Location British Columbia
  • Country Canada
  • Interests High resolution audio playback systems for music listening or movies.

  1. PowerFlex 755 F12 (HW OverCurrent)

    Hey everyone; It has been a long time, so I wanted to post an update. We have not heard back from Allen Bradley on this issue. It was resolved for most of our motors by adding MSG instructions to change the control mode on two of the drives, changing one other to SVC, and just power cycling the other two after a down-day. This type of fault is resettable in the drive, so my hope is Rockwell is working on changing the reset action so that any value the drive is holding on to that causes this fault gets cleared when it is reset. That is the maddening part of this problem: it can cause both extended downtime and, in our case, a big mess. I'm sure there are many plants like ours where several motors need to ramp up in sequence for things to go well. I have explained to many people not to reset this fault on the drive, but it just takes one person to hit that button without looking and we can have a plug-up. Power cycling the drive should not be required if the fault allows a software reset, so I feel this problem needs to be resolved at the firmware level.
  2. PowerFlex 755 F12 (HW OverCurrent)

    Hey VFD Guy; Two drives are 3 HP PF753, two drives are 7.5 HP PF755, and one is a 40 HP PF755. All are firmware 13 or higher. All five run in Flux Vector mode, and all five have encoder feedback. The two 7.5 HP drives are running with torque prove enabled. The fault happens the instant the motor is told to run. It happens while the motor is being fluxed up, before any motion occurs. It happens even when the speed reference is set to 0 Hz. In other news, I tried the programming change to push SVC control into one of the 7.5 HP drives. This worked: no more F12 faults. It did sometimes generate an Output Phase Loss fault, so we also had to set the Output Phase Loss level to 0 or we got that alarm. The Output Phase Loss action was set to 0 - ignore, but it still generated a fault. This is related to the torque prove parameter, as the drive will ignore that setting if torque prove is enabled. It's still weird, because torque prove was disabled before the testfire happened. No matter; so far we've testfired the motor 10 times in a row with no faults either way, and it appears to be working. I'll post more when I hear back from Rockwell; they are looking into this non-resettable F12 fault.

    **EDIT: Forgot to mention a comment by Todd from Rockwell, that it could be related to a slip compensation value that is calculated to a ridiculous number during the testfire with an open local disconnect, since there is no motion from the encoder while "running", so the "slip" would be... LARGE? Then once the motor is connected it tries to use that level of slip comp, and it results in the F12. Power cycling the drive erases this value, and the drive starts with default slip comp values and functions normally again. Just a theory so far, but it seems reasonable; I believe he's attempting to prove that now.**
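A rough illustration of Todd's slip-compensation theory above. This is a hypothetical sketch of the idea only, not Rockwell's actual PF755 algorithm; the simple proportional model and the numbers are my own assumptions:

```python
def slip_comp_hz(commanded_hz, feedback_hz, gain=1.0):
    """Very simplified slip-compensation model (an assumption, not the
    real PF755 math): boost the output in proportion to the apparent
    slip, i.e. the gap between commanded speed and encoder feedback."""
    apparent_slip = commanded_hz - feedback_hz
    return gain * apparent_slip

# Normal running: the encoder tracks the command closely,
# so the correction stays small and sane.
print(slip_comp_hz(60.0, 58.6))

# Testfire with the local disconnect open: the encoder never moves,
# so the "slip" is the entire commanded frequency -- an absurd
# correction that could plausibly drive the current demand into F12.
print(slip_comp_hz(60.0, 0.0))
```

This matches the observed behavior: the bogus value persists until a power cycle wipes it, which is why a software reset alone never clears the fault.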
  3. PowerFlex 755 F12 (HW OverCurrent)

    Hey BobLfoot and ElectronGuru; Thanks for the quick reply. I should back up my original comment and say that Rockwell tech support has now gotten back to me, and there's a fellow trying to help. He had some interesting things to say. His concern is not the encoder feedback but rather the Flux Vector Control mode. He says that changes the way the current loop acts, but he is still baffled that we'd get an F12 hardware overcurrent fault with the local open. He is going to get back to me with thoughts on why the F12 comes up, and why it doesn't reset properly. I agree, BobLfoot, I have yet to see a reset work. We always have to power cycle the drive to clear it. The manual says it's resettable, and even auto-resettable, so it's a good challenge for Rockwell to make that happen. In the message above I mentioned using an SSV, but I misspoke. It's actually a MSG instruction using a Set Attribute Single command. We've been using this method to disable torque proving on two drives that we need to testfire for over a year now, and it works properly. Those are two of the drives that sometimes have an F12 after a testfire, so I'm going to try Todd's advice and build a MSG instruction to switch VFD Parameter 35 from 3-FVC to 0-V/Hz while local is down, and see if the faults go away. FVC will be disabled after torque prove is disabled, and re-enabled before torque prove is re-enabled, so it doesn't generate a config error. Will post the findings after this is tried. Should be able to do it sometime tomorrow.
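The ordering constraint described above (drop out of FVC only after torque prove is disabled, and restore FVC before torque prove is re-enabled) can be sketched as a sequence of parameter writes. Parameter 35 and its 3-FVC / 0-V/Hz values come from the post; the torque-prove parameter number (TP_PARAM) and the write_parameter() helper are placeholders for whatever MSG / Set Attribute Single mechanism the PLC actually uses:

```python
FVC, V_HZ = 3, 0          # Parameter 35 values per the post: 3 = FVC, 0 = V/Hz
TP_PARAM = 1100           # hypothetical torque-prove enable parameter number

drive = {35: FVC, TP_PARAM: 1}   # stand-in for the real drive's parameter table
log = []                         # record of writes, to verify the ordering

def write_parameter(param, value):
    """Placeholder for a MSG / Set Attribute Single write to the drive."""
    drive[param] = value
    log.append((param, value))

def enter_testfire():
    write_parameter(TP_PARAM, 0)   # 1) disable torque prove first...
    write_parameter(35, V_HZ)      # 2) ...then drop from FVC to V/Hz

def exit_testfire():
    write_parameter(35, FVC)       # 1) restore FVC first...
    write_parameter(TP_PARAM, 1)   # 2) ...then re-enable torque prove

enter_testfire()
exit_testfire()
```

The point of the ordering is that torque prove is never enabled while the drive sits in a control mode that conflicts with it, so no configuration error is raised at either transition.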
  4. PowerFlex 755 F12 (HW OverCurrent)

    Hey everyone; We are having these exact same faults with PF750 series drives that are testfired with their local disconnects open. This is the standard required by our local safety authority: when locked out on a local disconnect, full motor control voltage must be sent to the open blades to prove the motor is physically disconnected from the drive. A running status and no motion on the equipment constitutes a successful testfire. I can see how motors with encoders could have a hardware overcurrent fault after this is done, as of course the encoder does not move with the local open. I am assuming the drive must wind up the amps to try and move it. The problem is that HW OverCurrent Fault 12 is supposed to be resettable, but in this case it isn't. The drive will let you reset the fault on the faceplate or through the network over and over, but then it immediately faults again as soon as it tries to run. It will fault even if the motor is being told to run at zero speed, so it's the action of fluxing up the motor that causes this HW OverCurrent fault, not the load on the motor itself. We also have times where the fault isn't there until we try to run the drive, so we have no idea it's going to fail on the first try. The VFD shows ready to run until it's commanded to, then it faults. The only way we can get rid of this recurring fault is to cycle the power to the drive. After that it runs normally every time. Allen Bradley has not been helpful with this issue, so I'm wondering if anyone else has faced it and solved it. We have many other VFDs that we testfire with the local disconnect open without this fault; if they don't have an encoder, it works just fine. I am considering trying an SSV to change the motor feedback to open loop during testfire, and then change it back to encoder afterwards. Seems a dirty way around what appears to be a firmware problem of the fault not properly resetting. If the fault would actually reset, this wouldn't be an issue. Any leads that someone has on this issue would be welcome; it's a pain.
  5. Thanks for the trial download. Will give it a look. Price? Happy programming, speakerman.
  6. Hey Leo; I have a manual for that series of PLC, and I know it can be programmed with ProWorx NXT or 32. You do need to know which it was programmed in, and the program printout should identify that somewhere in the header or footer. Otherwise, post some pics of the soft copy and it may be apparent. We have a 984-245 here at our plant and will be phasing it out at some point, but we have been able to find refurbished spares, and they can also be repaired. It's expensive, though. Depending on the application, you may have better luck changing the PLC entirely should something fail. You have the program, so that makes it easier. I've uploaded the manual for the PLC family (Compact.pdf) in case you need it. Happy programming, Speakerman.
  7. Hey Modiconbob; Thanks for the reply. It's RG-11 trunk, quad shield, but we have since discovered the route has been done improperly to the one drop we're having issues with. The original cable is in conduit on the side of the cable trays, passing above drop 9 on the way to drop 7. There used to be a terminal resistor in drop 7, but when 9 was added they just pulled a tech cable of RG-11 back to 9, laying it in the cable tray with everything from 240VAC to 600V three-phase. It has been this way for a long time without the issue cropping up, as far as we know, but over time more and more stuff has been added to that cable tray, so the chance for noise has been slowly increasing over the years. The whole network is about 1200', but most drops are within the first 500'. There's a 650' run from drop 5 to drop 7. (They are not in numerical order; it's 1, 8, 2, 3, 10, 4, 6, 5, 7, and 9 in that order.) From 8 to 5 is about 400' total. The document with that information isn't handy right now, so I can't be exact. We have a plan to add more conduit down to the PLC cabinet, then pull the original trunk cable back to drop 9 and pull a new RG-11 in the remaining conduit to drop 7, making it the end of the line again. This will eliminate the tech cable in the cable tray, and all the coax will be in proper conduit, isolated from the rest of the wires. Putting drop 9 in line to drop 7 instead of daisy-chaining back will reduce the length by about 100' overall. Drop 9 has about 4 to 6 retries per second on average. The other drops have between 1 and 2, sometimes none. Very occasionally we'll see three retries in a second on the other drops. The drops closest to the head are the best, obviously, and have the fewest of all. The duration of the dropouts on drop 9 is so far only one scan of the PLC, and maybe once a day on average, so it is very marginal at this point.
We do not have testing equipment for the cable system onsite and were looking for a contractor from Schneider who could come and assess it. Now that we've discovered this routing issue, we'll address it for sure and see how many retries we have after that is done. Thanks for anything you have to add. This may be a "we post our resolution" kind of path, and that's fine with me. Originally we didn't think we had a cable problem, but now it looks like we do. Will let you know what happens when the cable is re-done. Happy programming, speakerman.
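For triaging which drops deserve attention, the retries-per-second figures in the post can be compared against a cutoff. The numbers below are approximations taken from the post, and the threshold is my own rule of thumb, not a Schneider specification:

```python
# Rough triage of RIO retry counters (rates approximated from the post).
retries_per_sec = {
    "drop 9": 5.0,    # 4-6 retries/s on the suspect tech-cable run
    "drop 10": 2.0,   # added later, more retries than its neighbors
    "drop 1": 0.5,    # close to the head: fewest retries
    "drop 5": 1.5,    # typical of the other drops: 1-2/s
}

THRESHOLD = 3.0  # arbitrary cutoff for "investigate this drop"

suspect = sorted(d for d, r in retries_per_sec.items() if r > THRESHOLD)
print(suspect)  # only drop 9 stands out
```

A baseline like this, captured before and after the cable re-route, would also make the "did the fix work" question quantitative rather than anecdotal.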
  8. Hello everyone; Long time no chat, been way too busy. Got to cruise the forums for some payback after this; it's been a while since I gave back. We've run into a problem beyond my experience, so I have a question for people well versed in the Modicon PLC remote I/O world. We're having some drop-outs with a remote rack of a Quantum PLC system, and can find no problems with the cables, taps, or the RIO drop card. We're seeing what seems like a lot of retries, but then we don't know what constitutes a lot. There are 10 drops on this PLC. Drops 1, 9, and 10 are Quantum, and drops 2-8 are old 800 series. It's a CRP931-00 remote I/O head with a bunch of J890-001 cards, and two CRA-931-00 for the new racks. Funny, but the two Quantum racks have the most retries of all the cards. A couple of the 800 series have almost no retries by comparison. The Quantum drop 9 has had dropouts at random, sometimes twice or three times a day, sometimes none for a couple of days. Causes major problems, obviously. Drop 9 is actually at the end of the trunk cable, so it's got the terminal resistor on one tap. Drop 10 was added later between drops 3 and 4, and has more retries than either of them, but no dropouts. We've asked Schneider for a contractor who can perform a RIO cable network integrity test, but it seems to mostly be these two Quantum heads with the most retries, and only the one has been dropping out. Is there anyone who has seen something like this and can comment? We've changed almost everything around it, and there's been no change to the retries; the occasional drop-out still happens. Hope things are going great for everyone out there. Happy programming, speakerman.
  9. Fatal Error when going online

    Hey everyone; Quick note, we completed the installation of the ENBT card in rack one, and the connections from the old card in rack 3 have been moved to it and are working perfectly. Happy programming, speakerman.
  10. Hey everyone; Thanks KidPLC, this is very helpful. It's been a long time since your post, but I am just now looking at a Micrologix 810 and LCD combination, and this is the only post I could find pertaining to that model. I'm trying to find out more about the LCD and what it can do. The on-screen menu lists the ability to modify variables, but the manual states "This feature is not yet implemented…" So does anyone know if and when it will be, or if perhaps it has been by now? The LCD is pretty useless without this functionality, and it is in the menu, so one would think it would eventually "be implemented"… Any Allen Bradley moderators or power users here know the skinny on this one? Thanks, Speakerman.
  11. Fatal Error when going online

    Yes, dmargineau, I forgot to restate that as of now the EtherNet/IP is done through an ENBT card in a remote rack. The configuration dialogue does ask if we want to schedule the connection over ControlNet, and of course it is deselected. It has worked flawlessly so far, as the amount of data is relatively small. Our ControlNet itself has always been scheduled and will continue to be. Once we have the dedicated EIP card hooked up in rack 1, we can add more unscheduled EIP modules to it there. We have four more items to integrate on EIP, and they're on hold until the network to the rack 1 card is complete.
  12. Fatal Error when going online

    Interesting, this was my thought exactly. We do have a scheduled ControlNet; just the EtherNet/IP portion is not scheduled. Most of the analog cards are, but we had to add and change a couple on the run, so we weren't able to schedule them at that time. Everything worked just fine until we could get them into the schedule a few months later. We do plan to move these Ethernet connections into rack 1, and then the ControlNet issue goes away. I think that may happen faster now that it's caused downtime.
  13. Fatal Error when going online

    Hey everyone; So we are up and running after a short delay. Seems the solution provided above did clean the file up, and it did upload properly. Unfortunately, it also set all ControlNet connections as scheduled by default, so it turned off our EtherNet/IP network for 9 critical pumps. It selected them as scheduled even though they weren't, and greyed out the selection to turn it back off. Tech support indicated afterwards that I should have told them we had an unscheduled ControlNet before we started, so a word of caution to everyone. We ended up downloading my last best save before the crash, which was the file I had open when the fatal error occurred, and it worked fine; no problems so far. All the EtherNet/IP modules are back to being unscheduled, and the plant is able to run. In a previous thread, Ken Roach helped me to understand why this system was designed incorrectly from day one, with Ethernet cards in every remote rack - and one of them was used for this drive network before we understood the trouble that may cause. I am working on getting it changed, but in the interim it does work fine and until now has caused no issues. Tech Support said EVERYBODY uses ControlNet scheduling, 99% of the time, and our facility is an exception. So, a quick poll - are they right? Is it highly unusual to have a ControlLogix PLC system with the ControlNet unscheduled? Ours is pretty small: 14% load, 5ms NUT. My face is ready for some egg... At least we're running. Lessons learned all round. Happy programming, speakerman.
  14. Fatal Error when going online

    Okay, I've talked to Tech Support and they recommended I export the most recent saved file as an L5K version, then re-import it as an ACD file with a new name. That has been done, and I'm waiting for the chance to download this new file to the controller. We will be shutting down the plant for an hour later today, so I'll have my opportunity. Still no word on what caused the fatal errors in the first place, I'll pass on the logs to the Tech Support guys after we get through this problem. Keeping you posted, speakerman.
  15. Fatal Error when going online

    So I have a different question related to this problem: it has been suggested to try a repair of the RSLogix 5000 application, since it is the application that is crashing. I am not sure about this; the application works fine with the other two controllers. If this is done, will it screw up my version? The install is up to V17.1. Will the disk overwrite any updated files during the repair process?