Open noyannus opened 8 months ago
Thanks for reporting. This might be related to #1580. Aryoda will reply soon.
The problem with --diagnostics is known and fixed upstream, but the fix is not released yet. You can ignore it.
@noyannus You are running BiT as root (at least the process owner is root)? Then it is definitely related to #1580.
I still did not manage to set up a Tumbleweed virtual machine with a working Gnome or KDE desktop environment to debug this (working on it; it requires manual installation).
@noyannus You are running BiT as root (at least the process owner is root)?
Yes.
I still did not manage to set up a Tumbleweed virtual machine with a working Gnome or KDE desktop environment to debug this (working on it; it requires manual installation).
The installer comes as a full DVD and as a netinstall that downloads as needed. If you only used one of these options, maybe the other works better? (No idea, just speculating.)
I have now set up a VirtualBox VM using these instructions and it went surprisingly smooth:
https://techviewleo.com/install-opensuse-tumbleweed-on-virtualbox-vmware-workstation/
@noyannus
I have installed a fresh Tumbleweed 20231219 with
and installed BiT v1.4.1 via YaST, but could not reproduce the problem (with BiT as root): when starting a backup via the BiT Qt GUI, qt5_probing.py is used to check if a systray icon can be shown.
Could you please provide more details on how to reproduce the problem, e.g.
The "problem" does not occur when starting from the GUI; that works fine. It occurs when starting from a cron job generated by BiT.
As ROOT
I am running a cron job as root (created by BiT) without errors now for 30 minutes (executed every 5 minutes).
Any more ideas what could be different in your setup?
BTW: A temporary workaround would be to deactivate the BiT systray icon by "deleting" the systrayiconplugin.py from the plugins folder:
bit@localhost:/usr/share/backintime/plugins> sudo mv systrayiconplugin.py ..
No idea, but I will test your sidestep and report. Thanks.
That worked "this time" for me; I removed systrayiconplugin.py. It seems there is no option to control this "plugin"? Perhaps a permissions problem? Thanks, I will monitor and report any further failures.
It seems there is no option to control this "plugin"?
Unfortunately not. This is legacy work; the idea was to package plugins separately so they could be installed/uninstalled, but every package I know delivers the systray icon plugin together with the BiT (Qt5) GUI.
Perhaps a permissions problem?
Most probably in Qt5 or the Qt5 wrapper in combination with cron, which provides only a reduced environment; perhaps something is missing (but why only on your computer and not in my VM?). We have an open issue with a segfault which may be related (but is almost impossible to diagnose: #1095), and this could happen when qt5_probing.py segfaults too and tries to write a core dump which the parent process tries to prevent...
Could you please post the output of
pkexec env DISPLAY=$DISPLAY XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR XAUTHORITY=$XAUTHORITY python3 -c "from PyQt5.QtGui import QIcon; from PyQt5.QtWidgets import QSystemTrayIcon,QApplication; app = QApplication(['']); print('isSystemTrayAvailable: ' + str(QSystemTrayIcon.isSystemTrayAvailable())); print('Theme name: ' + QIcon.themeName()); print('has theme icon <document-save>: ' + str(QIcon.hasThemeIcon('document-save'))); print('themeSearchPaths: ' + str(QIcon.themeSearchPaths())); print('fallbackSearchPaths: ' + str(QIcon.fallbackSearchPaths())); print('fallbackThemeName: ' + str(QIcon.fallbackThemeName()))"
here. I want to check whether a non-standard Qt5/KDE theme could cause the trouble for root with cron...
On my VM I get this output:
QStandardPaths: runtime directory '/run/user/1000' is not owned by UID 0, but a directory permissions 0700 owned by UID 1000 GID 1000
isSystemTrayAvailable: True
Theme name: hicolor
has theme icon <document-save>: False
themeSearchPaths: ['/usr/share/icons', ':/icons']
fallbackSearchPaths: ['/usr/share/pixmaps']
fallbackThemeName: hicolor
I only have this problem with root from cron, never from the Qt app as root or on any occasion as
# pkexec env DISPLAY=$DISPLAY XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR XAUTHORITY=$XAUTHORITY python3 -c "from PyQt5.QtGui import QIcon; from PyQt5.QtWidgets import QSystemTrayIcon,QApplication; app = QApplication(['']); print('isSystemTrayAvailable: ' + str(QSystemTrayIcon.isSystemTrayAvailable())); print('Theme name: ' + QIcon.themeName()); print('has theme icon <document-save>: ' + str(QIcon.hasThemeIcon('document-save'))); print('themeSearchPaths: ' + str(QIcon.themeSearchPaths())); print('fallbackSearchPaths: ' + str(QIcon.fallbackSearchPaths())); print('fallbackThemeName: ' + str(QIcon.fallbackThemeName()))"
qt.qpa.plugin: Could not load the Qt platform plugin "xcb" in "" even though it was found.
This application failed to start because no Qt platform plugin could be initialized. Reinstalling the application may fix this problem.
Available platform plugins are: eglfs, linuxfb, minimal, minimalegl, offscreen, vnc, wayland-egl, wayland, wayland-xcomposite-egl, wayland-xcomposite-glx, xcb.
@aryoda
Could you please provide more details on how to reproduce the problem, e.g.
which GPU is installed (may be a Qt5 issue)
The GPU and other graphics info:
# lshw -c display -sanitize
*-display
description: VGA compatible controller
product: TigerLake-LP GT2 [Iris Xe Graphics]
vendor: Intel Corporation
physical id: 2
bus info: pci@0000:00:02.0
logical name: /dev/fb0
version: 01
width: 64 bits
clock: 33MHz
capabilities: pciexpress msi pm vga_controller bus_master cap_list rom fb
configuration: depth=32 driver=i915 latency=0 mode=1920x1080 resolution=2256,1504 visual=truecolor xres=1920 yres=1080
resources: iomemory:600-5ff iomemory:400-3ff irq:145 memory:605c000000-605cffffff memory:4000000000-400fffffff ioport:3000(size=64) memory:c0000-dffff memory:4010000000-4016ffffff memory:4020000000-40ffffffff
how do you start the backup (GUI, CLI or scheduled via cron)
BiT is started for two backups, via anacron every 1 and 4 hours, set in the GUI.
does it always happen
Returns after every boot.
could provide me here an anonymized minimal profile config file that reproduces the problem on your machine
I have appended it below for better readability.
which user rights and owner are set on the backup target folder and the included folders/files?
# find /media/backups/ -maxdepth 5 -ls
220608 0 drwxrwxrwx 1 root root 32 Nov 30 16:05 /media/backups/
256 16 drwxrwxrwx 1 me me 20 Dez 12 11:30 /media/backups/backup-1
257 0 drwxrwxrwx 1 root root 10 Dez 12 11:30 /media/backups/backup-1/backintime
258 0 drwxrwxrwx 1 root root 8 Dez 12 11:30 /media/backups/backup-1/backintime/[redacted]
259 0 drwxr-xr-x 1 root root 2 Dez 12 11:30 /media/backups/backup-1/backintime/[redacted]/root
260 0 drwxr-xr-x 1 root root 254 Dez 22 09:42 /media/backups/backup-1/backintime/[redacted]/root/1
256 16 drwxrwx--- 1 me me 20 Dez 12 02:14 /media/backups/backup-2
53607347 0 drwxrwxrwx 1 root root 10 Dez 12 02:14 /media/backups/backup-2/backintime
53607348 0 drwxrwxrwx 1 root root 8 Dez 12 02:14 /media/backups/backup-2/backintime/[redacted]
53607349 0 drwxr-xr-x 1 root root 2 Dez 13 02:05 /media/backups/backup-2/backintime/[redacted]/root
53607350 0 drwxr-xr-x 1 root root 292 Dez 22 09:35 /media/backups/backup-2/backintime/[redacted]/root/2
May also be of interest:
# pkexec env DISPLAY=$DISPLAY XDG_RUNTIME_DIR=$XDG_RUNTIME_DIR XAUTHORITY=$XAUTHORITY ...
returns
QStandardPaths: runtime directory '/run/user/1000' is not owned by UID 0, but a directory permissions 0700 owned by UID 1000 GID 1000
isSystemTrayAvailable: True
Theme name: hicolor
has theme icon <document-save>: False
themeSearchPaths: ['/usr/local/share/icons', '/usr/share/icons', ':/icons']
fallbackSearchPaths: ['/usr/share/pixmaps']
fallbackThemeName: hicolor
The only config is /root/.config/backintime/config (no configs in ~/.config/backintime/ or /etc/backintime/):
config.version=6
global.use_flock=false
internal.manual_starts_countdown=-1
profile1.qt.last_path=/
profile1.qt.places.SortColumn=1
profile1.qt.places.SortOrder=0
profile1.qt.settingsdialog.exclude.SortColumn=1
profile1.qt.settingsdialog.exclude.SortOrder=0
profile1.qt.settingsdialog.include.SortColumn=1
profile1.qt.settingsdialog.include.SortOrder=0
profile1.schedule.custom_time=8,12,18,23
profile1.schedule.day=1
profile1.schedule.mode=25
profile1.schedule.repeatedly.period=1
profile1.schedule.repeatedly.unit=10
profile1.schedule.time=0
profile1.schedule.weekday=7
profile1.snapshots.backup_on_restore.enabled=true
profile1.snapshots.bwlimit.enabled=false
profile1.snapshots.bwlimit.value=3000
profile1.snapshots.continue_on_errors=true
profile1.snapshots.copy_links=false
profile1.snapshots.copy_unsafe_links=false
profile1.snapshots.cron.ionice=true
profile1.snapshots.cron.nice=true
profile1.snapshots.cron.redirect_stderr=false
profile1.snapshots.cron.redirect_stdout=true
profile1.snapshots.dont_remove_named_snapshots=true
profile1.snapshots.exclude.1.value=.gvfs
profile1.snapshots.exclude.2.value=.cache/*
profile1.snapshots.exclude.3.value=.thumbnails*
profile1.snapshots.exclude.4.value=.local/share/[Tt]rash*
profile1.snapshots.exclude.5.value=*.backup*
profile1.snapshots.exclude.6.value=*~
profile1.snapshots.exclude.7.value=.dropbox*
profile1.snapshots.exclude.8.value=/proc/*
profile1.snapshots.exclude.9.value=/sys/*
profile1.snapshots.exclude.10.value=/dev/*
profile1.snapshots.exclude.11.value=/run/*
profile1.snapshots.exclude.12.value=/etc/mtab
profile1.snapshots.exclude.13.value=/var/cache/apt/archives/*.deb
profile1.snapshots.exclude.14.value=lost+found/*
profile1.snapshots.exclude.15.value=/tmp/*
profile1.snapshots.exclude.16.value=/var/tmp/*
profile1.snapshots.exclude.17.value=/var/backups/*
profile1.snapshots.exclude.18.value=.Private
profile1.snapshots.exclude.19.value=/home/[redacted]
profile1.snapshots.exclude.20.value=/home/[redacted]
profile1.snapshots.exclude.21.value=* ~
profile1.snapshots.exclude.22.value=* BUP
profile1.snapshots.exclude.23.value=/home/[redacted]
profile1.snapshots.exclude.24.value=/.snapshots
profile1.snapshots.exclude.25.value=/timeshift
profile1.snapshots.exclude.26.value=/*.Trash-0/*
profile1.snapshots.exclude.27.value=/media/*/*
profile1.snapshots.exclude.bysize.enabled=false
profile1.snapshots.exclude.bysize.value=500
profile1.snapshots.exclude.size=27
profile1.snapshots.include.1.type=0
profile1.snapshots.include.1.value=/
profile1.snapshots.include.2.type=0
profile1.snapshots.include.2.value=/home/[redacted]
profile1.snapshots.include.3.type=0
profile1.snapshots.include.3.value=/home/[redacted]
profile1.snapshots.include.4.type=0
profile1.snapshots.include.4.value=/home/[redacted]
profile1.snapshots.include.5.type=0
profile1.snapshots.include.5.value=/home/[redacted]
profile1.snapshots.include.6.type=0
profile1.snapshots.include.6.value=/etc
profile1.snapshots.include.size=6
profile1.snapshots.local.nocache=false
profile1.snapshots.local.password.save=false
profile1.snapshots.local.password.use_cache=true
profile1.snapshots.local_encfs.path=/media/backups/backup-1
profile1.snapshots.log_level=3
profile1.snapshots.min_free_inodes.enabled=true
profile1.snapshots.min_free_inodes.value=5
profile1.snapshots.min_free_space.enabled=true
profile1.snapshots.min_free_space.unit=20
profile1.snapshots.min_free_space.value=100
profile1.snapshots.mode=local
profile1.snapshots.no_on_battery=true
profile1.snapshots.notify.enabled=false
profile1.snapshots.path=/media/backups/backup-1
profile1.snapshots.path.host=frawo
profile1.snapshots.path.profile=1
profile1.snapshots.path.user=root
profile1.snapshots.preserve_acl=true
profile1.snapshots.preserve_xattr=true
profile1.snapshots.remove_old_snapshots.enabled=true
profile1.snapshots.remove_old_snapshots.unit=80
profile1.snapshots.remove_old_snapshots.value=1
profile1.snapshots.rsync_options.enabled=false
profile1.snapshots.rsync_options.value=
profile1.snapshots.smart_remove=true
profile1.snapshots.smart_remove.keep_all=1
profile1.snapshots.smart_remove.keep_one_per_day=7
profile1.snapshots.smart_remove.keep_one_per_month=24
profile1.snapshots.smart_remove.keep_one_per_week=8
profile1.snapshots.smart_remove.run_remote_in_background=false
profile1.snapshots.ssh.check_commands=true
profile1.snapshots.ssh.check_ping=true
profile1.snapshots.ssh.cipher=default
profile1.snapshots.ssh.host=
profile1.snapshots.ssh.ionice=false
profile1.snapshots.ssh.nice=false
profile1.snapshots.ssh.nocache=false
profile1.snapshots.ssh.path=
profile1.snapshots.ssh.port=22
profile1.snapshots.ssh.prefix.enabled=false
profile1.snapshots.ssh.prefix.value=PATH=/opt/bin:/opt/sbin:\$PATH
profile1.snapshots.ssh.private_key_file=
profile1.snapshots.ssh.user=root
profile1.snapshots.take_snapshot_regardless_of_changes=true
profile1.snapshots.use_checksum=false
profile1.snapshots.user_backup.ionice=true
profile2.name=backup2 (4 h)
profile2.qt.last_path=/home/[redacted]
profile2.qt.places.SortColumn=1
profile2.qt.places.SortOrder=0
profile2.qt.settingsdialog.exclude.SortColumn=1
profile2.qt.settingsdialog.exclude.SortOrder=0
profile2.qt.settingsdialog.include.SortColumn=1
profile2.qt.settingsdialog.include.SortOrder=0
profile2.schedule.custom_time=8,12,18,23
profile2.schedule.day=1
profile2.schedule.mode=25
profile2.schedule.repeatedly.period=4
profile2.schedule.repeatedly.unit=10
profile2.schedule.time=0
profile2.schedule.weekday=7
profile2.snapshots.backup_on_restore.enabled=true
profile2.snapshots.bwlimit.enabled=false
profile2.snapshots.bwlimit.value=3000
profile2.snapshots.continue_on_errors=true
profile2.snapshots.copy_links=false
profile2.snapshots.copy_unsafe_links=false
profile2.snapshots.cron.ionice=true
profile2.snapshots.cron.nice=true
profile2.snapshots.cron.redirect_stderr=false
profile2.snapshots.cron.redirect_stdout=true
profile2.snapshots.dont_remove_named_snapshots=true
profile2.snapshots.exclude.1.value=.gvfs
profile2.snapshots.exclude.2.value=.cache/*
profile2.snapshots.exclude.3.value=.thumbnails*
profile2.snapshots.exclude.4.value=.local/share/[Tt]rash*
profile2.snapshots.exclude.5.value=*.backup*
profile2.snapshots.exclude.6.value=*~
profile2.snapshots.exclude.7.value=.dropbox*
profile2.snapshots.exclude.8.value=/proc/*
profile2.snapshots.exclude.9.value=/sys/*
profile2.snapshots.exclude.10.value=/dev/*
profile2.snapshots.exclude.11.value=/run/*
profile2.snapshots.exclude.12.value=/etc/mtab
profile2.snapshots.exclude.13.value=/var/cache/apt/archives/*.deb
profile2.snapshots.exclude.14.value=lost+found/*
profile2.snapshots.exclude.15.value=/tmp/*
profile2.snapshots.exclude.16.value=/var/tmp/*
profile2.snapshots.exclude.17.value=/var/backups/*
profile2.snapshots.exclude.18.value=.Private
profile2.snapshots.exclude.19.value=/home/[redacted]
profile2.snapshots.exclude.20.value=/home/[redacted]
profile2.snapshots.exclude.21.value=* ~
profile2.snapshots.exclude.22.value=* BUP
profile2.snapshots.exclude.23.value=/home/[redacted]
profile2.snapshots.exclude.24.value=/.snapshots
profile2.snapshots.exclude.25.value=/timeshift
profile2.snapshots.exclude.26.value=/*.Trash-0/*
profile2.snapshots.exclude.27.value=/media/*/*
profile2.snapshots.exclude.bysize.enabled=false
profile2.snapshots.exclude.bysize.value=500
profile2.snapshots.exclude.size=27
profile2.snapshots.include.1.type=0
profile2.snapshots.include.1.value=/
profile2.snapshots.include.2.type=0
profile2.snapshots.include.2.value=/home/[redacted]
profile2.snapshots.include.3.type=0
profile2.snapshots.include.3.value=/home/[redacted]
profile2.snapshots.include.4.type=0
profile2.snapshots.include.4.value=/home/[redacted]
profile2.snapshots.include.5.type=0
profile2.snapshots.include.5.value=/home/[redacted]
profile2.snapshots.include.6.type=0
profile2.snapshots.include.6.value=/etc
profile2.snapshots.include.size=6
profile2.snapshots.local.nocache=false
profile2.snapshots.local.password.save=false
profile2.snapshots.local.password.use_cache=true
profile2.snapshots.local_encfs.path=/media/backups/backup-2
profile2.snapshots.log_level=3
profile2.snapshots.min_free_inodes.enabled=true
profile2.snapshots.min_free_inodes.value=5
profile2.snapshots.min_free_space.enabled=true
profile2.snapshots.min_free_space.unit=20
profile2.snapshots.min_free_space.value=100
profile2.snapshots.mode=local
profile2.snapshots.no_on_battery=true
profile2.snapshots.notify.enabled=false
profile2.snapshots.path=/media/backups/backup-2
profile2.snapshots.path.host=frawo
profile2.snapshots.path.profile=2
profile2.snapshots.path.user=root
profile2.snapshots.preserve_acl=true
profile2.snapshots.preserve_xattr=true
profile2.snapshots.remove_old_snapshots.enabled=true
profile2.snapshots.remove_old_snapshots.unit=80
profile2.snapshots.remove_old_snapshots.value=1
profile2.snapshots.rsync_options.enabled=false
profile2.snapshots.rsync_options.value=
profile2.snapshots.smart_remove=true
profile2.snapshots.smart_remove.keep_all=1
profile2.snapshots.smart_remove.keep_one_per_day=7
profile2.snapshots.smart_remove.keep_one_per_month=24
profile2.snapshots.smart_remove.keep_one_per_week=8
profile2.snapshots.smart_remove.run_remote_in_background=false
profile2.snapshots.ssh.check_commands=true
profile2.snapshots.ssh.check_ping=true
profile2.snapshots.ssh.cipher=default
profile2.snapshots.ssh.host=
profile2.snapshots.ssh.ionice=false
profile2.snapshots.ssh.nice=false
profile2.snapshots.ssh.nocache=false
profile2.snapshots.ssh.path=
profile2.snapshots.ssh.port=22
profile2.snapshots.ssh.prefix.enabled=false
profile2.snapshots.ssh.prefix.value=PATH=/opt/bin:/opt/sbin:\$PATH
profile2.snapshots.ssh.private_key_file=
profile2.snapshots.ssh.user=root
profile2.snapshots.take_snapshot_regardless_of_changes=true
profile2.snapshots.use_checksum=false
profile2.snapshots.user_backup.ionice=true
profiles=1:2
profiles.version=1
qt.last_path=/
qt.logview.height=690
qt.logview.width=1444
qt.main_window.files_view.date_width=100
qt.main_window.files_view.name_width=100
qt.main_window.files_view.size_width=100
qt.main_window.files_view.sort.ascending=true
qt.main_window.files_view.sort.column=0
qt.main_window.height=1012
qt.main_window.main_splitter_left_w=211
qt.main_window.main_splitter_right_w=632
qt.main_window.second_splitter_left_w=230
qt.main_window.second_splitter_right_w=383
qt.main_window.width=847
qt.main_window.x=1698
qt.main_window.y=0
qt.show_hidden_files=false
does it always happen
Returns after every boot.
Ah! Does the anacron BiT job hang if it is started after a reboot but before the first user has logged in with a desktop environment like KDE Plasma (establishing an Xorg session)? Anacron does directly start overdue jobs...
This could also explain why the old implementation of checking for systray icon support using xdpyinfo failed (hung), as reported in #1580...
I am trying to test this scenario in my VM...
Notes:
Note that xdpyinfo does not appear to be present when my instance hangs.
I do not know what has changed, but my root instance started by cron is hanging again (lines will wrap):

USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
root 30470 11.5 35.6 13240528 13184432 ? RN 07:00 19:48 python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job
root 30471 35.2 0.1 190992 45364 ? SNl 07:00 60:22 _ /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
Note: maybe because I renamed rather than deleted the systray plugin. I have now deleted it.
I do not know what has changed, but my root instance started by cron is hanging again. Note: maybe because I renamed rather than deleted the systray plugin. I have now deleted it.
Renaming does not work; only moving it into another folder or deleting it does (since all *.py files in the plugins folder are loaded by BiT).
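For illustration, a minimal sketch (not BiT's actual loader, whose details differ) of why a rename that keeps the .py suffix does not disable a plugin, while moving or deleting the file does:

```python
import os

def discover_plugins(plugin_dir):
    """Return module names for every *.py file in plugin_dir.

    Illustrative only: BiT's real plugin manager works similarly in
    spirit, loading all *.py files it finds in its plugins folder.
    """
    return sorted(
        name[:-3]                       # strip the ".py" suffix
        for name in os.listdir(plugin_dir)
        if name.endswith('.py') and not name.startswith('__')
    )
```

So a renamed systrayiconplugin_old.py would still be discovered, while a file moved to the parent folder would not.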
I could reproduce it by booting without logging in while a BiT root cron job starts every 5 minutes (no need to use anacron).
100% CPU and a few dozen hanging processes:
bit@localhost:~> ps ax | grep -i backintime
1345 ? SN 2:31 python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job
1347 ? S 0:00 /usr/bin/python3 -Es /usr/share/backintime/qt/serviceHelper.py
1352 ? RNl 6:13 /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
1401 ? RN 1:59 python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job
1402 ? SNl 4:42 /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
...
THX a lot for your very helpful kill script in your first post ;-)
Now I have to find a way to debug this without having a debugging GUI in my VM, while simulating a started cron job...
@aryoda
I could reproduce it by booting without logging in while a BiT root cron job starts every 5 minutes (no need to use anacron).
100 % cpu and a few dozens of hanging processes:
Strange. I rebooted to CLI, logged in as root and waited for ~15min. The only python processes were a backup-job after ~2min and later another. Seems legit; I have two backup profiles active.
If you want me to provide additional info or do some tests, tell me soon. I plan to change to a LVM setup during the holidays and that may entail a fresh system setup (have not researched the procedure yet).
THX a lot for your very helpful kill script in your first post ;-)
Save it somewhere; the structure is great for anything that needs checking at regular intervals. With a sound alarm (alsabat) I use it to get notified when my Internet is alive again (with ping instead of ps, of course).
I've made it into a login script. (Edit: corrected an error.)
#!/usr/bin/bash
# Kills hanging Back In Time probing processes
# (/usr/share/backintime/common/qt5_probing.py).
wait=5  # minutes to wait between sweeps
while true; do  # kill matching processes, wait, repeat
    for pid in $(ps -ef | awk '/\/backintime\/common\/qt5_probing\.py/ {print $2}'); do
        kill -9 "$pid"
    done
    sleep $((wait * 60))
done
# One-liner variant:
# for pid in $(ps -ef | awk '/\/backintime\/common\/qt5_probing\.py/ {print $2}'); do kill -9 "$pid"; done
I rebooted to CLI, logged in as root and waited for ~15min. The only python processes were a backup-job after ~2min and later another. Seems legit; I have two backup profiles active.
I can even provoke the problem by switching to a terminal at the login screen after booting (via Ctrl+Alt+F1, or Host-Key+F1 in my VM) and manually starting a backup with backintime --profile-id 1 backup-job. Then I can see the hanging qt5_probing.py process with ps ax | grep -i backintime ...
If you want me to provide additional info or do some tests, tell me soon. I plan to change to a LVM setup during the holidays and that may entail a fresh system setup
I can reproduce the problem now, and once I fix it it would be great if you could test it again (it just depends on who is faster: me with a fix or you with a new system installation. But no pressure from my side; holidays are the perfect time to do such maintenance work).
I can reproduce the problem now, and once I fix it it would be great if you could test it again (it just depends on who is faster: me with a fix or you with a new system installation. But no pressure from my side; holidays are the perfect time to do such maintenance work).
OK. See on the other side. :-)
Oh-- and have great holidays!
Short update: Via journalctl --since=today | grep -i backintime I can clearly see that the dbus-daemon tries to establish a remote call to the dbus name org.freedesktop.portal.Desktop, triggered by qt5_probing.py:
I have step-wise removed code from qt5_probing.py and discovered that this line of code is causing the process to hang (and produces the above dbus output):
So I have to find a way now to
Edit: Compared to a direct call in a root shell via python3 qt5_probing.py, the call via BiT injects different environment variables. The most significant difference is that DISPLAY=:0 is injected, and this may be the reason why Qt5 assumes there is a running desktop environment (even though there is none).
I can see in the BiT code that the X11 env var DISPLAY=:0 is set here (why? it is normally provided by logging in to a DE):
This code was added 14 years ago:
Fresh install from openSUSE-Tumbleweed-DVD-x86_64-Snapshot20231222-Media.iso (it was openSUSE-Tumbleweed-DVD-x86_64-Snapshot20231106-Media.iso before). The other main difference is a change from a plain partition layout to LVM, but that's unlikely to play a role here.
Re display:
localhost% echo $DISPLAY
:0
There are still several BiT python processes, but all consume very little CPU (if any), and Xorg is well behaved too. Filtered for xorg|python:
Looks like they start after/with the first backup jobs, then stop multiplying.
I could dig out relevant files from the backups and do a diff with the current version, if that helps.
Looks like they start after/with the first backup jobs, then stop multiplying.
What surprises me (beyond the known problem with the hanging qt5_probing.py) is that there are ten or more BiT GUIs started (app.py). Did you start the BiT GUI manually or somehow automated?
I could dig out relevant files from the backups and do a diff with the current version, if that helps.
THX, currently I don't need that.
I am currently trying to find a solution since I can reproduce the hanging process...
Did you start the BiT GUI manually or somehow automated?
GUI only manually.
There's something that may be related re Tumbleweed's dbus. I spent the whole of yesterday with it, cursing and screaming (er, happily learning). I balked at the idea of introducing non-standard links and looked for better solutions (unsuccessfully).
They have an unusual setup: qdbus comes from libqt5-qdbus.
You can see in YaST Software (search for dbus and qdbus) what is where and under what name.
Some say a symbolic link would be the solution (like, e.g., ln -s /usr/lib64/qt5/bin/qdbus /usr/bin/qdbus), others suggest changing the $PATH. An applet dev in the KDE store writes:
NOTE: openSUSE Tumbleweed users for right work need create symlink to qdbus:
sudo ln -s ../lib64/qt5/bin/qdbus /usr/bin/qdbus
A guy on reddit had a global idea (but 4 years ago!): "I've actually created symlinks of many of the Qt5 programs without the -qt5 part to /usr/local/bin so that I don't have to remember this, like /usr/local/bin/lrelease → /usr/bin/lrelease-qt5."
One on the SUSE Forum (2016): "Search for 'qtpaths --binaries-dir' and replace 'qtpaths' with '/usr/bin/qtpaths' there. I have the latest version 5.8.3 installed, where the line numbers likely changed. But again, this and the previous change are only necessary if you want to keep the PATH= in ~/.bashrc. And only one of them would be necessary either."
Likewise in the Arch Forum. And here a dev considers an additional command to mitigate it. That may be your solution too, I hope -- that would be easy. :-)
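To check quickly which qdbus (if any) a given environment would actually resolve, a small sketch using Python's shutil.which (the qdbus-qt5 name is just another candidate seen on some distros, not something from this thread):

```python
import shutil

# Print where each candidate binary resolves on the current PATH.
# On Tumbleweed, plain "qdbus" may resolve to nothing until a symlink
# (e.g. into /usr/lib64/qt5/bin) or a PATH entry is added.
for candidate in ('qdbus', 'qdbus-qt5'):
    print(candidate, '->', shutil.which(candidate))
```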
HTH
One more idea. qdbus called as root:
localhost:~ # qdbus
Could not connect to D-Bus server: org.freedesktop.DBus.Error.NoReply: Did not receive a reply. Possible causes include: the remote application did not send a reply, the message bus security policy blocked the reply, the reply timeout expired, or the network connection was broken.
qdbus called as user:
localhost% qdbus
:1.1
org.freedesktop.systemd1
:1.10
org.kde.KWin
org.kde.KWin.Effect.WindowView1
org.kde.KWin.ScreenShot2
org.kde.NightColor
org.kde.kappmenuview
:1.12
org.kde.kglobalaccel
:1.122
org.freedesktop.secrets
org.kde.kwalletd5
:1.13
:1.14
ca.desrt.dconf
:1.15
com.canonical.Unity
org.freedesktop.Notifications
... long list of connections ...
IF this is a qdbus-caused problem, this would match the observation that only root is affected, as @ptilopteri emphasized:
I have noticed absolutely no problem scheduling and performing using the gui as
only problem noticed when scheduled with cron by
I spent much time finding the reason for this problem and stop now (because I would have to debug the X11 client/server communication), but I now know how it happens and how to fix it.
The Xorg.bin X11 server process has high CPU usage.
Reboot and log in to a terminal (at the login window press Ctrl + Alt + F1) as a user, without starting a desktop environment (Gnome, KDE, ...)!
You can reproduce the problem with one of the following commands (simulating a cron job; the DISPLAY environment variable is not set, just like in cron!):
su - # make me root (like cron - the DISPLAY env var is not set)
backintime --profile-id 1 backup-job # try to start a backup for the main profile
BiT will exit but leaves a hanging process:
ps ax | grep -i backintime
24649 ? Sl 0:01 /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
Start the BiT code that causes the problem (still in the above root shell, without a running desktop environment):

cd /usr/share/backintime/common
env DISPLAY=:0.0 QT_LOGGING_RULES="qt.*=true" QT_FORCE_STDERR_LOGGING=1 python3 qt5_probing.py
What happens then:
Nothing happens ;-) The process does not end and blocks the terminal.
Press Ctrl + C (to send a SIGINT to kill the process; in fact it is not killed completely, but now you can see the output).
Then you will see repeated output in the terminal saying:
Authorization required, but no authorization protocol specified
Stop with Ctrl+Z (puts the process into the background).
Enter ps ax | grep -i qt5_probing
Kill the hanging process with kill -9 <PID> to enforce the kill, since the process is otherwise unstoppable.
fg is not required now, I guess.
Note: The same "Authorization required..." error is shown when you try to run other X11 commands in the root shell, e.g.

env DISPLAY=:0.0 xdpyinfo

which explains why older BiT versions that used xdpyinfo to check for an existing X11 server also hung ;-)
Even

env DISPLAY=:0.0 xhost +local

shows the same error!
Why is there a hanging qt5_probing.py process?
I have step-wise removed code from qt5_probing.py and discovered that this line of code is causing the process to hang (and produces the above dbus output):
The QApplication constructor hangs, most probably when it tries to establish a connection to the X11 server (XOpenDisplay() of xlib.h?).
This endlessly prints the above "Authorization required..." error message and blocks the main thread (Qt5 GUI code must be executed in the main thread).
It is unclear whether the error message is printed by the X11 server or the Qt5 client code (that would need more time to investigate), but I could not find the error message text in the source code of Qt5 or the xcb plugin for X11, so I assume it is a problem of the X11 server (also because xdpyinfo shows the same error message without containing it in its source code, and the first X11 lib call in that source code is XOpenDisplay()).
Note: In cron jobs the DISPLAY variable is not set at all by default, but BiT sets the X11 display env var to the default :0.0 if none is set ("hard-coded", trying to use the first display as a "best guess"!):
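The described fallback behaviour can be sketched like this (illustrative; the real code lives in BiT's common code and differs in detail):

```python
import os

def ensure_display(env=None):
    """Return the DISPLAY to use, setting the hard-coded ':0.0'
    best guess when none is present (the normal case under cron)."""
    if env is None:
        env = os.environ
    if not env.get('DISPLAY'):
        env['DISPLAY'] = ':0.0'   # "best guess": first display
    return env['DISPLAY']
```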
I have looked into the Git history to find out why the display variable is set in the above code (if not already set) and found that all (non-cosmetic) changes are quite old (dating back to 2009) and were made to fix this reported bug: "BiT systray icon causes crash when screen is locked".
The code causing this bug was based on PyKDE4 for Python 2.5!
The applied solution came from "PyGTK: How to display a systray icon from a cronjob", where setting DISPLAY to exactly :0.0 (if not set) was recommended.
It does NOT work to simply remove the above code that sets the DISPLAY env var when none is set: the systray icon would then no longer be shown when BiT is started via a root cron job.
Use a client-side "timeout" to kill the qt5_probing.py sub-process (if it does not end "in time", it is presumably hanging).
The easiest fix is here: just modify the sub-process command line with the timeout command (which is installed by default on almost every distro via the GNU core utilities package coreutils):
cmd = ["timeout", "30s", sys.executable, path]
timeout exits with status 124 if the time limit was reached and the process was killed (see man timeout).
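Assuming GNU coreutils' timeout is on the PATH, the 124 exit status can be verified with a short Python sketch (the sleeping child here only stands in for a hanging qt5_probing.py):

```python
import subprocess
import sys

# Run a child that would "hang" for 30 s, but cap it at 1 s.
# GNU timeout kills the child and itself exits with status 124,
# which a caller can use to detect a hung probe.
result = subprocess.run(
    ['timeout', '1s', sys.executable, '-c', 'import time; time.sleep(30)']
)
print(result.returncode)  # 124 when the time limit was hit
```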
Alternative fix: apply the sub-process execution timeout via Python code. Code pattern (in the same location as in the previously described fix):
proc = subprocess.Popen(...)  # launch qt5_probing.py as before
try:
    outs, errs = proc.communicate(timeout=15)
except subprocess.TimeoutExpired:  # probe did not end in time
    proc.kill()
    outs, errs = proc.communicate()
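A self-contained, runnable version of this pattern (a sleeping Python child simulates the hanging qt5_probing.py):

```python
import subprocess
import sys

# Child that would block far longer than we are willing to wait.
proc = subprocess.Popen(
    [sys.executable, '-c', 'import time; time.sleep(30)'],
    stdout=subprocess.PIPE,
    stderr=subprocess.PIPE,
)
try:
    outs, errs = proc.communicate(timeout=1)  # give it 1 second
    timed_out = False
except subprocess.TimeoutExpired:
    proc.kill()                               # reap the hung child
    outs, errs = proc.communicate()
    timed_out = True

print(timed_out)  # True: the child was killed after the timeout
```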
This would be a more robust solution since it does not require the external timeout command, which is another dependency.
Update: I have applied the above fix, and now backintime (as root) hangs when trying to establish a connection to the user (session) dbus, because root has none (or no rights?):
The best explanation for this problem may be this: https://stackoverflow.com/questions/71425861/connecting-to-user-dbus-as-root
This can be reproduced with this MRE:
# Precond: Desktop environment with X11/xorg installed
# Reboot into terminal and log in as user
echo $DBUS_SESSION_BUS_ADDRESS # e.g. unix:path=/run/user/1000/bus
su -
echo $DBUS_SESSION_BUS_ADDRESS # is not set in my case
# https://dbus.freedesktop.org/doc/dbus-launch.1.html
# If DBUS_SESSION_BUS_ADDRESS is not set for a process that tries to use D-Bus,
# by default the process will attempt to invoke dbus-launch with the --autolaunch option to start up a new session bus
# or find the existing bus address on the X display or in a file in ~/.dbus/session-bus/
env DISPLAY=:0.0 python3
>>> import dbus
>>> bus = dbus.SessionBus() # hangs forever !!!
# Note: If the DISPLAY env var is not set, the above call throws an exception:
# DBusException: org.freedesktop.DBus.Error.NotSupported: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
To fix this, prevent D-Bus session bus connections when BiT runs as root. This is already mentioned in the code, but the implementation first tries to connect to D-Bus and only then (sic!) checks if BiT is running as root, which may theoretically have worked sometimes:
The fix: check for root first and return (without trying to connect to the user session D-Bus).
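The corrected order of checks could look roughly like this (a sketch with a hypothetical function name, not BiT's actual implementation; the real code would return a D-Bus connection instead of a boolean):

```python
import os

def may_use_session_bus():
    # Check for root FIRST: root usually has no user session bus, and
    # attempting to autolaunch one can hang forever (see the MRE above).
    if os.geteuid() == 0:
        return False
    # Only a non-root user should even attempt dbus.SessionBus() here.
    return True
```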
TODO:
- Does the bug also occur with Wayland instead of X11? Most probably yes (at least reported by a user), and since the backintime CLI command uses XWayland, most probably the authorization problem is the same...
- Test with Wayland and other distros (does the bug occur on different setups?)
- xdpyinfo, "no systray icon for root when started via cron"...

@noyannus @ptilopteri
It wasn't dbus; the problem occurred even earlier (when trying to open the X11 display).
Could you please test an easy "hotfix" (just one line of code)?
Open the file /usr/share/backintime/common/tools.py as sudo.
Comment out this line at about line 700 (in the function is_Qt5_working()) using a leading "# ":
cmd = [sys.executable, path]
and insert this line after the above code (with the same indentation = number of leading spaces as before):
cmd = ["timeout", "30s", sys.executable, path]
The code must look then like this:
try:
path = os.path.join(backintimePath("common"), "qt5_probing.py")
# cmd = [sys.executable, path]
cmd = ["timeout", "30s", sys.executable, path]
with subprocess.Popen(cmd,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
universal_newlines=True) as proc:
Check if the systray icon is still working (= code fixed correctly) by running a backup via the GUI.
reinstalled backintime/backintime-qt
made requested changes to /usr/share/backintime/common/tools.py
tried to start the GUI; failed with something to the effect of: could not start systrayicon
currently waiting for the added cron job to start
note: no icon appears for me in the systray anyway now or before.
tried to start gui, failed something to the effect: could not start systrayicon
Looks like a syntax error in the code.
I think I will prepare a fixed version and also implement a Python-side timeout (without the timeout command, since I have discovered hanging backintime processes now - still debugging this).
note: no icon appears for me in the systray anyway now or before.
Never in the unmodified BiT version (neither GUI nor CLI, neither as user nor as root)?
Looks like an extra problem...
Holy Moly, that was a textbook example of a rabbit hole! Kudos.
Could you please test an easy "hotfix" (just one line of code)? ... Check if the systray icon is still working (= code fixed correctly) by running backup via GUI
Yes, it works. BiT launched as root, manually started two backups => two BiT icons appear in the systray.
note: no icon appears for me in the systray anyway now or before.
Never in the unmodified BiT version (neither GUI nor CLI, neither as user nor as root)?
Looks like an extra problem...
no, I don't see it currently, nor previously for quite some time.
note: no icon appears for me in the systray anyway now or before. no, I don't see it currently, nor previously for quite some time.
If your backintime --version is 1.4.x, could you please provide me the output of starting a backup in the console with --debug added, so I can see if and how the systray icon support is recognized by Qt5? THX!
backintime --version
backintime 1.4.1
python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job --debug
DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7fb3d9a320c0>} | unknownArgs: []
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 20853
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 20854
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors
Back In Time Version: 1.4.1
Back In Time comes with ABSOLUTELY NO WARRANTY. This is free software, and you are welcome to redistribute it under certain conditions; type `backintime --license' for details.
note: /usr/share/backintime/plugins/systrayiconplugin.py not present
with "systrayiconplugin.py" present:
python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job --debug
DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7fa07cb9a0c0>} | unknownArgs: []
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 21198
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 21199
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors
Back In Time Version: 1.4.1
USER  PID   %CPU %MEM VSZ    RSS   TTY STAT START TIME COMMAND
root  21199 0.0  0.0  31880  17060 ?   S    11:48 0:00 _ python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job --debug
root  21200 0.2  0.1  188388 44900 ?   Sl   11:48 0:00 _ /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py
note: /usr/share/backintime/plugins/systrayiconplugin.py not present
sorry, I forgot that backup-job spawns a new process, so the debug output is not contained in the same terminal.
To get the full debug logging output could you please start your job with
python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job --debug
and then, after about a minute, query the syslog with:
journalctl --since "2 minutes ago" | grep -i backintime
and send me the relevant part of the log...
PS: If you kill the backintime process you should also remove the left-over lock file:
rm /root/.local/share/backintime/worker*.lock
Otherwise the backup will probably not start (but you will see a message then)
after 65 seconds:
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24091]: DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7f24cbd9a0c0>} | unknownArgs: []
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24091]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 24092
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24092]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24092]: DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24092]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 24093
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: Main profile(1) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 1=Main profile
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: Main profile(1) :: DEBUG: [common/tools.py:174 initiate_translation] No language code. Use systems current locale.
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: Main profile(1) :: DEBUG: [common/backintime.py:677 getConfig] config file: /root/.config/backintime/config
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: Main profile(1) :: DEBUG: [common/backintime.py:678 getConfig] share path: /root/.local/share/backintime
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: Main profile(1) :: DEBUG: [common/backintime.py:679 getConfig] profiles: 1=Main profile, 5=dent, 2=tumbleweed
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: tumbleweed(2) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 2=tumbleweed
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:245 PluginManager.load] Register plugin path /usr/share/backintime/plugins
Dec 29 12:28:30 crash.wahoo.no-ip.org backintime[24093]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin usercallbackplugin.py
@ptilopteri Could you please query the journalctl log again? I think it may have been too early (the plugin manager should load more plugins, including the systray plugin: Add plugin systrayiconplugin.py). In my case it looks like this:
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/backintime.py:679 getConfig] profiles: 1=Main profile, 2=profile 2, 3=test2config, 4=blue_usb (ext) test profile, 5=ssh-hello-world, 6=issue 851 (cross-device symlinks), 7=encFS test
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 1=Main profile
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/pluginmanager.py:245 PluginManager.load] Register plugin path /usr/share/backintime/plugins
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/tools.py:719 is_Qt5_working] Qt5 probing result: exit code 2
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [plugins/systrayiconplugin.py:76 init] System tray is available to show the BiT system tray icon
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin systrayiconplugin.py
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin usercallbackplugin.py
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin notifyplugin.py
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: INFO: [common/snapshots.py:729 Snapshots.backup] Lock
Dez 29 18:39:22 AMD5600x backintime[41086]: Main profile(1) :: DEBUG: [common/tools.py:1623 inhibitSuspend] Inhibit S
Or is it not installed? ls /usr/share/backintime/plugins/ should show three files (one of them being systrayiconplugin.py).
l /usr/share/backintime/plugins/
total 32
drwxr-xr-x 3 root root 4096 Dec 29 11:47 ./
drwxr-xr-x 6 root root 4096 Nov 13 16:42 ../
drwxr-xr-x 2 root root 4096 Dec 22 10:04 __pycache__/
-rw-r--r-- 1 root root 1925 Nov 13 16:42 notifyplugin.py
-rw-r--r-- 1 root root 4263 Nov 13 16:42 systrayiconplugin.py
-rw-r--r-- 1 root root 4647 Nov 13 16:42 usercallbackplugin.py
after 1min 55 sec:
journalctl --since "2 minutes ago" | grep -i backintime
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26040]: DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7fae701de0c0>} | unknownArgs: []
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26040]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 26041
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26041]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26041]: DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26041]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 26042
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: Main profile(1) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 1=Main profile
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: Main profile(1) :: DEBUG: [common/tools.py:174 initiate_translation] No language code. Use systems current locale.
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: Main profile(1) :: DEBUG: [common/backintime.py:677 getConfig] config file: /root/.config/backintime/config
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: Main profile(1) :: DEBUG: [common/backintime.py:678 getConfig] share path: /root/.local/share/backintime
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: Main profile(1) :: DEBUG: [common/backintime.py:679 getConfig] profiles: 1=Main profile, 5=dent, 2=tumbleweed
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: tumbleweed(2) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 2=tumbleweed
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:245 PluginManager.load] Register plugin path /usr/share/backintime/plugins
Dec 29 12:52:57 crash.wahoo.no-ip.org backintime[26042]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin usercallbackplugin.py
@ptilopteri Thanks for your patience :-)
Strange. Perhaps the grep -i backintime is too restrictive (does journalctl | grep -i systrayiconplugin find anything?).
I will look into the source code to find out under which conditions the systray plugin does not appear in the log...
Edit: The code does not help to find the reason why two plugins are not loaded (too many nested ifs without logging output)... I have added more logging to the code for the next release...
python3 -Es /usr/share/backintime/common/backintime.py --profile-id 2 backup-job --debug
DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7f8ee60c60c0>} | unknownArgs: []
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 31338
DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 31339
DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors
Back In Time Version: 1.4.1
journalctl --since "5 minutes ago" | grep -i backintime
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31337]: DEBUG: [common/backintime.py:589 argParse] Arguments: {'debug': True, 'profile_id': 2, 'command': 'backup-job', 'func': <function backupJob at 0x7f8ee60c60c0>} | unknownArgs: []
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31337]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 31338
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31338]: DEBUG: [common/tools.py:2583 BackupJobDaemon.daemonize] first fork pid: 0
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31338]: DEBUG: [common/tools.py:2592 BackupJobDaemon.daemonize] decouple from parent environment
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31338]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 31339
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: DEBUG: [common/tools.py:2600 BackupJobDaemon.daemonize] second fork pid: 0
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: DEBUG: [common/tools.py:2609 BackupJobDaemon.daemonize] redirect standard file descriptors
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: Main profile(1) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 1=Main profile
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: Main profile(1) :: DEBUG: [common/tools.py:174 initiate_translation] No language code. Use systems current locale.
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: Main profile(1) :: DEBUG: [common/backintime.py:677 getConfig] config file: /root/.config/backintime/config
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: Main profile(1) :: DEBUG: [common/backintime.py:678 getConfig] share path: /root/.local/share/backintime
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: Main profile(1) :: DEBUG: [common/backintime.py:679 getConfig] profiles: 1=Main profile, 5=dent, 2=tumbleweed
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: tumbleweed(2) :: DEBUG: [common/configfile.py:591 Config.setCurrentProfile] Change current profile: 2=tumbleweed
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:245 PluginManager.load] Register plugin path /usr/share/backintime/plugins
Dec 29 14:46:08 crash.wahoo.no-ip.org backintime[31339]: tumbleweed(2) :: DEBUG: [common/pluginmanager.py:262 PluginManager.load] Add plugin usercallbackplugin.py
from a previous (last month) invocation:
Nov 17 08:09:02 crash.wahoo.no-ip.org backintime[19194]: tumbleweed(2) :: ERROR: Failed to load plugin systrayiconplugin.py: 0
When backintime runs its qt5_probing.py, xorg.bin consumes a full CPU (~97..~102%). This happens only with the /usr/bin/python3 /usr/share/backintime/common/qt5_probing.py processes. Quickly after they are killed, xorg.bin is back to normal. Also RAM and swap fill up. If I don't kill the qt5_probing.py processes in time, the machine becomes unresponsive (and hot, duh). Maybe related: their CPU loads are high themselves.

Before killing:
The processes were killed with:
About a minute after killing them, xorg.bin is at <1% CPU load.
The backup jobs have finished (one rsync is still active), but earlier tests have shown that they are not the culprits.
This happened with BiT version 1.4.1, both from YaST (SUSE package manager) and directly from GitHub. Python version is 3.11.6.
Operating System: openSUSE Tumbleweed 20231215
KDE Plasma Version: 5.27.10
KDE Frameworks Version: 5.112.0
Qt Version: 5.15.11
Kernel Version: 6.6.3-1-default (64-bit)
Graphics Platform: X11
Processors: 8 × 11th Gen Intel® Core™ i7-1165G7 @ 2.80GHz
Wellllll..... Could that have a common cause?