@GeekZJJ Are you using the default in-memory database configuration? If yes, this may be expected behavior. Presently, due to technical limitations, the database cleaner function is disabled for this mode. See #857 & 9a75102.
Of course memory usage is also dependent upon your usage patterns and the type of network traffic events logged. The best solution may be to use a database file configuration.
@lainedfles Got that. I tried putting the database file under a tmpfs mount point, and that seems to work fine. Thanks for developing such an amazing application.
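For anyone who wants to try the same workaround without setting up a custom mount: `/dev/shm` is a ready-made tmpfs on most Linux distributions, so a SQLite file placed there lives in RAM. A minimal sketch with plain sqlite3 (the file name is hypothetical; in opensnitch you would set the path in the GUI's database preferences):

```python
# Minimal sketch: a SQLite "file" that actually lives in RAM, because
# /dev/shm is a tmpfs mount on most Linux systems.
# The file name is hypothetical, not opensnitch's actual default.
import sqlite3

con = sqlite3.connect("/dev/shm/opensnitch-ui.db")
con.execute("CREATE TABLE IF NOT EXISTS demo (t TEXT)")
con.commit()
con.close()
```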
@GeekZJJ That's a creative compromise. Nice!
Here is my shameless plug: I recently contributed changes that permit activation of SQLite Write-Ahead Logging (WAL), which aims to provide the performance of the in-memory mode while also providing a high level of on-disk persistence. This option is available in 1.6.3+. My motivation was similar to yours: high memory usage over long uptimes, and constant disk writes in file mode.
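For context, WAL is a standard SQLite journal mode enabled through a PRAGMA; the snippet below is a generic illustration of the mechanism using stdlib sqlite3, not opensnitch's actual code:

```python
# Generic illustration of SQLite WAL mode (not opensnitch's actual code).
# In WAL mode, writes go to a sequential -wal file and are checkpointed
# into the main database later, reducing fsync pressure on the main file.
import sqlite3

con = sqlite3.connect("events.db")  # hypothetical file name
mode = con.execute("PRAGMA journal_mode=WAL;").fetchone()[0]
print(mode)  # -> "wal"
# synchronous=NORMAL is a common pairing with WAL: still durable across
# application crashes, with fewer syncs than FULL.
con.execute("PRAGMA synchronous=NORMAL;")
con.close()
```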
I too love this project and feel gratitude to all contributors, especially the original author evilsocket and present maintainer/collaborator gustavo-iniguez-goya (although I don't want to \@ them simply for gratitude LoL) :wine_glass:
@lainedfles Nice! I will update it to 1.6.3.👍
An easier way to check memory usage is:

```
systemctl --user status app-opensnitch_ui@autostart.service
systemctl --user status | grep -B1 -m1 opensnitch-ui
```
Example run on my machine at the moment (opensnitch-ui 1.6.3-1):
```
● app-opensnitch_ui@autostart.service - OpenSnitch
     Loaded: loaded (<$HOME>/.config/autostart/opensnitch_ui.desktop; generated)
     Active: active (running) since Mon 2023-09-04 09:32:52 +03; 5h 10min ago
       Docs: man:systemd-xdg-autostart-generator(8)
   Main PID: 2067 (opensnitch-ui)
      Tasks: 26 (limit: 38145)
     Memory: 223.3M
        CPU: 52.204s
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/app-opensnitch_ui@autostart.service
             └─2067 /usr/bin/python3 /usr/bin/opensnitch-ui

Sep 04 14:30:14 kubuntu opensnitch-ui[2067]: ~~~~~~~~~~~~~~~~~~~~~~^^^^^
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]: KeyError: 1
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]: ERROR:dbus.connection:Exception in handler for D-Bus signal:
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]: Traceback (most recent call last):
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]:   File "/usr/lib/python3/dist-packages/dbus/connection.py", line 218, in maybe_handle_message
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]:     self._handler(*args, **kwargs)
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]:   File "/usr/lib/python3/dist-packages/notify2.py", line 154, in _closed_callback
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]:     n = notifications_registry[nid]
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]: ~~~~~~~~~~~~~~~~~~~~~~^^^^^
Sep 04 14:30:14 kubuntu opensnitch-ui[2067]: KeyError: 1
```
Hi @GeekZJJ ,
I'd say that this is the expected behaviour for in-memory databases. I can't find the link, but if I remember correctly, once you hit the DB size limit, old records are discarded.
I've been running the GUI for months (years even, with v1.4.x), and it has never exceeded ~300MB. Maybe, besides a mem leak of course, there's something that makes it allocate more RAM than expected.
How many rules do you have configured? Is there a lot of net traffic being logged?
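For illustration, discarding the oldest rows once a size cap is hit could look like the sketch below; the table and column names here are hypothetical, not opensnitch's actual schema:

```python
# Hypothetical purge-oldest sketch (table/column names are made up,
# not opensnitch's actual schema): keep only the newest MAX_ROWS events.
import sqlite3

MAX_ROWS = 100_000  # arbitrary cap for the sketch

con = sqlite3.connect("events.db")  # hypothetical file
con.execute("CREATE TABLE IF NOT EXISTS events (time TEXT, data TEXT)")
# Delete everything except the newest MAX_ROWS rows.
con.execute(
    "DELETE FROM events WHERE rowid NOT IN "
    "(SELECT rowid FROM events ORDER BY time DESC LIMIT ?)",
    (MAX_ROWS,),
)
con.commit()
con.close()
```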
Hi @gustavo-iniguez-goya, I have updated to 1.6.3. And as I mentioned above, I changed the database setting from in-memory to a file under a tmpfs mount point. After that, the memory leak issue seems to be gone. I checked the statistics at the bottom of opensnitch-ui, which show: "Connections 638885 Dropped 137793 Uptime 6 days, 9:02:33 Rules 106". The memory usage is about 78MB, and the database file size is about 74MB.
@TriMoon Hi, I think you may have the same issue as me. From what I can see, the process has been running for only 5 hours but already consumes about 223MB of memory. You could try my solution: change the database setting from memory to file.
@GeekZJJ, no, it was just a comment to show how to see the memory usage more easily...
FYI, nothing much has changed; here is a fresh one:
```
● app-opensnitch_ui@autostart.service - OpenSnitch
     Loaded: loaded (~/.config/autostart/opensnitch_ui.desktop; generated)
     Active: active (running) since Wed 2023-09-20 11:16:16 +03; 2 days ago
       Docs: man:systemd-xdg-autostart-generator(8)
   Main PID: 2289 (opensnitch-ui)
      Tasks: 27 (limit: 38144)
     Memory: 222.2M
        CPU: 12min 23.654s
     CGroup: /user.slice/user-1000.slice/user@1000.service/app.slice/app-opensnitch_ui@autostart.service
             └─2289 /usr/bin/python3 /usr/bin/opensnitch-ui

Sep 23 01:04:05 kubuntu opensnitch-ui[2289]: ~~~~~~~~~~~~~~~~~~~~~~^^^^^
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]: KeyError: 2
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]: ERROR:dbus.connection:Exception in handler for D-Bus signal:
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]: Traceback (most recent call last):
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]:   File "/usr/lib/python3/dist-packages/dbus/connection.py", line 218, in maybe_handle_message
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]:     self._handler(*args, **kwargs)
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]:   File "/usr/lib/python3/dist-packages/notify2.py", line 154, in _closed_callback
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]:     n = notifications_registry[nid]
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]: ~~~~~~~~~~~~~~~~~~~~~~^^^^^
Sep 23 01:04:05 kubuntu opensnitch-ui[2289]: KeyError: 2
```
@TriMoon Got it. 👍
Oops, sorry @GeekZJJ, lainedfles was absolutely right. The OP of issue #844 didn't report high mem consumption, but it seems to be the same case. Related change: 5b5e2714aef9eb6752d15f41c23a69b013764f1e
So this is a known problem, and the workaround for now is what you did, save events to disk.
I'm exploring a new way of deleting events.
Just an update on this: I haven't managed to delete old events from the in-memory DB. We'll have to keep investigating how to delete them from a different thread.
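The underlying difficulty can be reproduced with plain sqlite3 (a stdlib sketch, not opensnitch's code): every connection to `:memory:` gets its own private database, so a cleaner running on a second connection, e.g. from another thread, simply cannot see the events:

```python
# Why a separate cleaner connection can't touch a ":memory:" database:
# each connection to ":memory:" opens its own private, empty database.
import sqlite3

writer = sqlite3.connect(":memory:")
cleaner = sqlite3.connect(":memory:")  # NOT the same database

writer.execute("CREATE TABLE events (t TEXT)")
writer.execute("INSERT INTO events VALUES ('conn')")
writer.commit()

try:
    cleaner.execute("DELETE FROM events")
except sqlite3.OperationalError as e:
    print(e)  # "no such table: events" -- the cleaner sees a different DB
```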
@gustavo-iniguez-goya For the moment it may be easier to just save the DB to /dev/shm/, as I believe this directory is stored in memory.
This will hopefully reduce or fix the problem and give the devs time to work on a better solution at their own pace.
Can anyone test this change?

From:

```python
DB_IN_MEMORY = ":memory:"
```

```python
self.db = QSqlDatabase.addDatabase("QSQLITE", self.db_name)
self.db.setDatabaseName(self.db_file)
if not self.db.open():
```

to:

```python
DB_IN_MEMORY = "file::memory:"
```

```python
self.db = QSqlDatabase.addDatabase("QSQLITE", self.db_name)
self.db.setDatabaseName(self.db_file)
if dbtype == Database.DB_TYPE_MEMORY:
    self.db.setConnectOptions("QSQLITE_OPEN_URI;QSQLITE_ENABLE_SHARED_CACHE")
if not self.db.open():
```

https://github.com/evilsocket/opensnitch/blob/master/ui/opensnitch/database/__init__.py#L7-L10
https://github.com/evilsocket/opensnitch/blob/master/ui/opensnitch/database/__init__.py#L51-L53
And this change, so the DB cleaner always starts (the purge-oldest config check is commented out):

```python
#if self._cfg.getBool(Config.DEFAULT_DB_PURGE_OLDEST):
self._start_db_cleaner()
```
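The proposed `file::memory:` URI plus shared cache addresses exactly the problem above: connections opened within the same process attach to one shared in-memory database, so a cleaner connection can delete rows the writer inserted. Below is a minimal stdlib sqlite3 sketch of the same mechanism (Qt's `QSQLITE_OPEN_URI` and `QSQLITE_ENABLE_SHARED_CACHE` flags map to SQLite's URI-filename and shared-cache options):

```python
# Same mechanism as the Qt change above, shown with plain sqlite3:
# a "file::memory:?cache=shared" URI makes connections within one process
# share a single in-memory database instead of each getting a private one.
import sqlite3

URI = "file::memory:?cache=shared"
writer = sqlite3.connect(URI, uri=True)
cleaner = sqlite3.connect(URI, uri=True)  # e.g. a cleaner thread's connection

writer.execute("CREATE TABLE events (t TEXT)")
writer.execute("INSERT INTO events VALUES ('conn')")
writer.commit()

print(cleaner.execute("SELECT COUNT(*) FROM events").fetchone())  # (1,)
cleaner.execute("DELETE FROM events")  # the cleaner CAN purge rows now
cleaner.commit()
```

Note that the shared cache only spans connections in the same process, which matches the GUI's situation: one process, multiple threads, each with its own connection.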
Some time ago we changed how we used in-memory databases to allow deleting old events: 5b5e2714aef9eb6752d15f41c23a69b013764f1e
but we realized that it created a file (a regular sqlite db) on disk, so in the end it was the same as saving events to disk: 9a751026eb6ecfeeb8840ae7fd78e3124bf8841d
As far as I can tell, with the changes posted above, it doesn't save events to disk and it deletes events when using the in-memory DB. Tested on Mint 20.3 and Debian Sid; I'll monitor it a bit more.
I've finally added this change. Hopefully it'll solve this issue. Otherwise, the solution will be to save events to a file on disk or to /dev/shm.
**Describe the bug**
Possible memory leak in opensnitch-ui. Memory usage is abnormal, rising to 1.2GB after 8 days of uptime.

- GUI version: 1.6.0.1-1
- Daemon: opensnitch 1.6.0.1-1
- OS: Ubuntu 22.04.2 LTS
- Window manager: GNOME Shell
- Kernel: Linux Ubuntu 5.19.0-45-generic #46~22.04.1-Ubuntu SMP PREEMPT_DYNAMIC Wed Jun 7 15:06:04 UTC 20 x86_64 x86_64 x86_64 GNU/Linux

**To Reproduce**
Install opensnitch and set up some custom rules.

Steps to reproduce the behavior:

**Expected behavior (optional)**
A reasonable memory usage after a long uptime.