locka99 / opcua

A client and server implementation of the OPC UA specification written in Rust
Mozilla Public License 2.0

memory leak in server #202

Closed · lovasoa closed this 2 years ago

lovasoa commented 2 years ago

When running an opcua server, every time a new client connects, new memory is allocated and never released, causing the server to run out of memory and crash after a certain number of connections.

Here is a heaptrack screenshot of an experiment where the simple-client from this repo connects to and disconnects from the simple-server every 2 seconds.

[heaptrack screenshot]

The unreleased memory allocations in the networking code seem to come mainly from creating auditing events, which grow the address space until the server crashes.

[heaptrack screenshot]

schroeder- commented 2 years ago

Events are added to the AddressSpace and I think they are never deleted. As a quick fix, you could delete them yourself?

lovasoa commented 2 years ago

Would it be possible to let the user configure the logging on their own? Just growing the address space indefinitely is not a good default...

schroeder- commented 2 years ago

I think being able to deactivate it would be nice, but auditing should also keep working, so maybe just add a task that calls purge_events, like in the example, every minute or so?
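
For reference, a rough sketch of that idea is below. The import path, the locking style, and the exact purge_events signature are assumptions loosely based on the demo-server example and may differ between versions of the crate:

```rust
// Sketch only: API details (paths, signatures, lock type) are assumed and
// may need adjusting to the actual version of the crate.
use opcua::server::prelude::*;

fn schedule_event_purge(server: &mut Server) {
    let address_space = server.address_space();
    // Assumed: add_polling_action runs the closure roughly every 60 seconds.
    server.add_polling_action(60_000, move || {
        let mut address_space = address_space.write().unwrap();
        // Purge events raised on the Server object that are older than 5 minutes,
        // so the address space stays bounded regardless of client churn.
        let happened_before = chrono::Utc::now() - chrono::Duration::minutes(5);
        let purged = purge_events(
            &ObjectId::Server.into(),
            ObjectTypeId::BaseEventType,
            &mut address_space,
            &happened_before,
        );
        if purged > 0 {
            println!("purged {} stale audit events", purged);
        }
    });
}
```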

lovasoa commented 2 years ago

One doesn't even need a task running in the background and consuming CPU time. If there is a limit on event storage, then it should be enforced when an event is added. I haven't been able to access these events from an OPC UA client. Where are they in the opcua tree?
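
As an illustration of that suggestion only (this type is hypothetical and not part of the crate), enforcing the cap at insertion time could look something like this:

```rust
use std::collections::VecDeque;

// Hypothetical bounded event store: the oldest event is evicted when a new
// one is added past the cap, so memory use stays constant without any
// background purge task.
struct BoundedEventLog<E> {
    max_events: usize,
    events: VecDeque<E>,
}

impl<E> BoundedEventLog<E> {
    fn new(max_events: usize) -> Self {
        Self { max_events, events: VecDeque::with_capacity(max_events) }
    }

    // Insert a new event, dropping the oldest one once the cap is reached.
    fn push(&mut self, event: E) {
        if self.events.len() >= self.max_events {
            self.events.pop_front();
        }
        self.events.push_back(event);
    }
}
```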

lovasoa commented 2 years ago

Scrolling through the many clippy warnings, I see this:

warning: associated function is never used: `deregister_session`
   --> lib/src/server/session_diagnostics.rs:123:19
    |
123 |     pub(crate) fn deregister_session(&self, session: &Session, address_space: &mut AddressSpace) {
    |                   ^^^^^^^^^^^^^^^^^^
    |
    = note: `#[warn(dead_code)]` on by default

Could this be the source of the bug?

locka99 commented 2 years ago

I just merged the associated pull request #203, so I'll assume this fixes the issue. You can reopen it if it doesn't, and thanks for the patch.