mikakaraila / node-red-contrib-opcua

A Node-RED node to communicate OPC UA. Uses node-opcua library.

cpu usage of opc ua server #670

Closed TheeeEnding closed 2 months ago

TheeeEnding commented 4 months ago

Hi there,

I have the following problem: there is an older OPC UA server that I can connect to with this package, and everything works fine. However, after about 1-2 h the CPU usage of the server gets too high (around 90-95 %), which leads to a shutdown of the connection. It seems that the server can't handle the connection (only one client, subscribed to two OPC UA tags).

But after some analysis and testing I made this observation: when using other tools like UAExpert or Softing with the same subscription configuration, the server and the connection stay active and the CPU usage stays at around 70-75 %.

Do you know why this package could cause so much more CPU load on the server than other kinds of OPC UA clients do?

Thanks a lot in advance!

schaffner commented 4 months ago

Maybe the publishing interval requested by Node-RED is too low; this would increase the CPU usage. AFAIK the default interval of the Node-RED OPC client is 100 ms, while UAExpert uses 500 ms, at least in my setup. Try increasing the interval of the Node-RED OPC client node.

If you really wanna compare the actual subscription parameters, you can have a look at all active sessions and their subscriptions by visiting the OPC Nodes in "Server->ServerDiagnostics->SessionsDiagnosticsSummary->YOUR_CLIENT_NAME->SubscriptionDiagnosticsArray->SESSION_ID->....".
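If it helps, this is roughly what those parameters look like when a subscription is created directly with node-opcua (the library this node is built on). This is only a minimal sketch with an assumed endpoint URL and tag node id, not the actual code of node-red-contrib-opcua:

```javascript
// Minimal node-opcua client sketch (assumed endpoint URL and tag node id,
// not the internal code of node-red-contrib-opcua; error handling and disconnect omitted).
const { OPCUAClient, AttributeIds, TimestampsToReturn } = require("node-opcua");

(async () => {
    const client = OPCUAClient.create({ endpointMustExist: false });
    await client.connect("opc.tcp://localhost:4840");   // assumed endpoint
    const session = await client.createSession();

    // These requested values are what the server later reports (possibly revised)
    // in its SubscriptionDiagnosticsArray.
    const subscription = await session.createSubscription2({
        requestedPublishingInterval: 500,   // ms; a lower value means more publish cycles and more server CPU
        requestedMaxKeepAliveCount: 10,
        requestedLifetimeCount: 2400,
        maxNotificationsPerPublish: 1000,
        publishingEnabled: true,
        priority: 10
    });

    const item = await subscription.monitor(
        { nodeId: "ns=2;s=MyTag", attributeId: AttributeIds.Value },   // assumed tag
        { samplingInterval: 500, discardOldest: true, queueSize: 10 },
        TimestampsToReturn.Both
    );
    item.on("changed", (dataValue) => console.log(dataValue.value.value));
})();
```

Note that whatever the client requests, the server may revise the values; the revised ones are what show up under SubscriptionDiagnosticsArray.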

TheeeEnding commented 4 months ago

Thanks for the fast answer!

That is a good point; unfortunately it doesn't seem to matter what I set the interval to... I even set it to 1000 ms and increased chunk size + message size, but it did not change the server behavior.

What I can see in the subscription parameters: there are quite different values for MaxKeepAliveCount / MaxLifetimeCount / MaxNotificationsPerPublish --> the UAExpert values are very high (10 / 2400 / 65636), whereas the Node-RED values are very low (3 / 30 / 10). Is there a way to change these parameters in the Node-RED OPC client?
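(If I understand the spec correctly, the server derives its timing from these counts by multiplying them with the publishing interval: keep-alive time = interval × MaxKeepAliveCount, subscription lifetime = interval × MaxLifetimeCount. With the Node-RED counts, even a 1000 ms interval gives only a 3 s keep-alive and a 30 s lifetime, while UAExpert's 500 ms with 10 / 2400 gives 5 s and 20 min, so the Node-RED subscription would cause much more frequent keep-alive/publish traffic.)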

schaffner commented 4 months ago

It seems that the parameters you mention are hard-coded in node-red-contrib-opcua. For a test, it would be easier to change the subscription settings in UaExpert to match those of Node-RED.


mikakaraila commented 4 months ago

Current status of testing?

TheeeEnding commented 4 months ago

Hi Mika, I tested the connection with UAExpert using the Node-RED settings (MaxKeepAliveCount / MaxLifetimeCount / MaxNotificationsPerPublish / Priority), and it seems the problem cannot be traced back to these parameters: the CPU load stayed the same and the server didn't crash after some time with the UAExpert subscription active...

mikakaraila commented 4 months ago

Disable all debug nodes and change logLevel to error (in settings.js), because they synchronize execution and that could add extra load. Also check whether there is still too much console output; if there is, I will have to reduce it.
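The relevant section in settings.js looks roughly like this (standard Node-RED logging configuration, only the logging block shown):

```javascript
// settings.js (Node-RED): limit console logging to errors only
module.exports = {
    // ...other settings unchanged...
    logging: {
        console: {
            level: "error",   // default is "info"; reduces console output
            metrics: false,
            audit: false
        }
    }
};
```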

TheeeEnding commented 4 months ago

I can try that, but I'm unsure how this could reduce the CPU load on the OPC UA server of the machine. It would be great if you could explain that a bit, because I don't see how the debug nodes and log level would affect the CPU load of the server I'm connecting to...

Just to be clear: it is an external machine with an OPC UA server, and I'm trying to connect to it with your OPC UA client.

Thanks!

mikakaraila commented 3 months ago

Debug / console output blocks normal asynchronous execution and could therefore cause problems.

Please also check all server diagnostics counters to see whether any error counters are increasing. And of course monitor memory usage / handles to make sure there is no resource leak.
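Something like this could be used to poll the counters with node-opcua (just a sketch that assumes an open session; ns=0;i=2275 should be the standard Server_ServerDiagnostics_ServerDiagnosticsSummary node, verify it against your server's address space):

```javascript
// Sketch: read the server's diagnostics summary counters with node-opcua.
// Assumes "session" is an already-created ClientSession.
const { AttributeIds } = require("node-opcua");

async function readDiagnosticsSummary(session) {
    const dataValue = await session.read({
        nodeId: "ns=0;i=2275",            // Server_ServerDiagnostics_ServerDiagnosticsSummary (verify in your address space)
        attributeId: AttributeIds.Value
    });
    // Contains counters such as rejectedRequestsCount, sessionAbortCount,
    // sessionTimeoutCount and currentSubscriptionCount; watch whether any of
    // the error counters keep increasing over time.
    console.log(dataValue.value.value);
}
```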

TheeeEnding commented 3 months ago

Alright thanks, I will test it asap and give feedback

TheeeEnding commented 2 months ago

Unfortunately, because of this problem, I was not able to test it: I could not keep the client connected long enough to investigate the circumstances. And because it is an old productive industrial machine, we decided not to connect it for now. Thanks anyway for the input; if I get the chance, I will look into this again at some later point.