realies / soulseek-docker

🐳 Soulseek Docker Container
https://hub.docker.com/r/realies/soulseek/
MIT License

Soulseek lost configuration after VM crash. #60

Closed. AverageHoarder closed this issue 1 year ago.

AverageHoarder commented 1 year ago

The Ubuntu 22.04 VM on my TrueNAS Core server recently crashed, and now Soulseek wants me to re-enter my credentials and has lost my user list, chats, shares, etc.

I've looked in the /data/.SoulseekQt folder and there are 3 files named soulseek-client.dat with a long string of numbers as the extension. They range from 48 MB to 89 MB in size. Opening one of them in a text editor confirmed my suspicion that they contain the user list and everything else I've "lost". Is there a way to restore one of these files? I had hundreds of users in my list, and even rescanning my shares alone would take literal days.

Also, the performance was always bad (it often freezes for literally 20+ seconds before registering a keypress or click), but now it's even worse. I've recreated the container with the latest image and restarted both the server and the VM, but nothing seems to resolve the performance issues.

Example image of a freeze attached. Is that expected behavior? I've given the VM 4C8T of my Xeon, 16 GB of DDR4 RAM, and 150 GB of M.2 SSD storage.

realies commented 1 year ago

@HairyOtter, soulseek-client.dat.timestamp files appear to be the application settings that are backed up every so often on the filesystem. Not sure why the client is not recognising them unless they are improperly mounted or there is filesystem data corruption.

I've not seen screen issues or unresponsiveness in the way you describe them. Is the performance bad when you start a clean instance?

AverageHoarder commented 1 year ago

These are my mounts (screenshot attached). The user config is mounted to storage within the VM, so I doubt it would cause problems, as the VM storage resides on a raidz2 ZFS pool, which does error correction and scrubs once per month. Unless the VM itself corrupted the data, which would be unfortunate.

The performance has been really bad for me ever since I started using Soulseek in Docker months ago. It often freezes and produces these weird window effects until it reacts again.

I set up a new container that pointed to empty directories within the VM. This one was responsive.

I then pointed it to the existing folders of the old container. It didn't start, but I got a "CIFS VFS: Close unmatched open" error on my VM, which makes me suspect that something with my share mounts isn't working properly.

As a last step I used the existing appdata and logs folders of my old container but pointed "shared" and "downloads" to empty directories within the VM to avoid the SMB mounts. That started up and was responsive, but sadly neither my chats nor anything else of my config showed up, which is odd since I can browse to the logs and see the individual chat files from within Soulseek.

I don't know why it refuses to recognise the existing logs + config files. I'll do further testing with the SMB mounts. I'll report back if I find something reproducible.

malventano commented 1 year ago

soulseek-client.dat.timestamp files appear to be the application settings that are backed up every so often on the filesystem.

Any idea why these contents are only updated 'every so often'? I've noticed that restarting the container loses any recently changed config, queued downloads, etc. Perhaps SoulseekQt is not shut down cleanly when the container is stopped or restarted?

Edit: I didn't realize the update frequency was configurable in the UI. Confirmed that updates are more frequent once it's changed.

Is there a way to restore one of these files?

@HairyOtter It appears the full config is present in all three files. Perhaps try without the most recent (possibly corrupted) file present.
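
Something along these lines should do it; paths are placeholders here, not your actual locations:

    # host folder that is mapped to /data/.SoulseekQt (placeholder path)
    cd /path/to/appdata
    ls -lt soulseek-client.dat*                            # list the snapshots, newest first
    mkdir -p dat-backup
    newest="$(ls -t soulseek-client.dat.* | head -n 1)"    # most recent (possibly corrupted) snapshot
    mv "$newest" dat-backup/                               # set it aside rather than deleting it
    docker restart soulseek                                # see whether the client picks up an older snapshot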

AverageHoarder commented 1 year ago

It appears the full config is present in all three files. Perhaps try without the most recent (possibly corrupted) file present.

I did that with a copy of the second most recent config file and got it working again, but only to a degree. Now it refuses to start when I add back the volume for my shared music folder. I have mounted my music folder in the Ubuntu VM as read-only (there's no need for Soulseek to write to the files that I share) and it used to work like this. Although I haven't changed this share in any way, the container plays dead every time I add it and redeploy.

To test whether the SMB share was at fault, I also created an NFS share of the same folder (also read-only) and mounted that. Now all Soulseek does on start is log chown: changing ownership of '/data/Soulseek Shared Folder/': Read-only file system for every single file in my shares. I think it's needlessly attempting to take ownership of the files and failing because they are read-only. As I have close to a million files in my share folder, I wasn't patient enough to see whether it only attempts that once per file or starts over when it reaches the end, as that could take literally half a day.

My old Samba share is accessible from within the VM, as is the new NFS share. I've also just browsed to the mounted volume from within the container's CLI and confirmed that I can freely browse all the folders and files within. I've also given the new Soulseek container the same PUID and PGID as the old one and used the exact same mount points. I don't know why it refuses to start with my old share mounted, and why it insists on taking ownership of every single file with my new share, although both the NFS permissions and the volume permissions tell it that the volume is read-only.
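
For reference, the checks from inside the container looked roughly like this (commands approximate; the container path is the one from the chown errors above):

    docker exec soulseek sh -c 'mount | grep "Soulseek Shared Folder"'      # shows the ro flag on the mount
    docker exec soulseek ls "/data/Soulseek Shared Folder"                  # browsing the share works fine
    docker exec soulseek touch "/data/Soulseek Shared Folder/.write-test"   # fails, as expected on a read-only mount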

Is there a way to prevent it from trying to take ownership of my shared files?

Update: Removing the PUID and PGID completely (I noticed they were optional) resolved the issue with the NFS share. Once it's done rescanning my shares, I'll switch back to the SMB share and see if that also works now.

Also, the very laggy experience seems to be due to the number of files that I share. When the shares weren't working, the program was responsive. Now it's back to waiting 10+ seconds for it to register a click. That's a shame, as the VM storage is on an M.2 SSD and I've given the VM 16 GB of RAM and 4 cores + 8 threads of my Xeon. Surprising that it still lags this badly. But I'm happy that it works again at all.

malventano commented 1 year ago

Removing the PUID and PGID completely (I noticed they were optional) resolved

Yeah, I made that same mistake initially. If set, the container will do a chown -R across its mounts on every startup, which takes a while.

Also the very laggy experience seems to be due to the amount of files that I share.

That's likely less the fault of the container and more just how SoulseekQt works. It isn't really multithreaded and gets bogged down indexing large shares. If you ever switch over to TrueNAS SCALE, maybe try running the container on bare metal instead of in a VM. It might help a bit.
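
For the chown part: if you recreate the container, just leave PUID/PGID out entirely, e.g. roughly like this (host paths are placeholders; port mappings and the other mounts are omitted):

    # no PUID/PGID at all, shared music mounted read-only
    docker run -d --name soulseek \
      -v /path/to/appdata:/data/.SoulseekQt \
      -v "/path/to/music:/data/Soulseek Shared Folder:ro" \
      realies/soulseek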

17huiwei commented 1 year ago

after "docker restart soulseek",i lost all my new config. why.......

AverageHoarder commented 1 year ago

if set, the container will do a chown -R across its mounts on every startup, which takes a while

The weird part is that my SMB share was also always read-only, and it used to work like this with a PUID and PGID defined, without spamming chown: changing ownership of '/data/Soulseek Shared Folder/': Read-only file system on startup (I still have my old container, untouched).

That's likely less the fault of the container and more just how SoulseekQt works.

That's why I'm currently checking out slskd to see if it performs better with my number of files. I'll report back with any additional findings.

realies commented 1 year ago

Soulseek saves its config file periodically, and the interval is outlined in its settings. Unfortunately, I can't replicate the UI bug in the screenshot. If PUID and PGID are not set, it will not chown the mounted shares. Please feel free to comment on this issue if you have any related updates.

AverageHoarder commented 3 months ago

Soulseek saves its config file periodically, and the interval is outlined in its settings. Unfortunately, I can't replicate the UI bug in the screenshot. If PUID and PGID are not set, it will not chown the mounted shares. Please feel free to comment on this issue if you have any related updates.

Is this still true? After updating the container today, VNC only showed a black window. Taking a look at the logs revealed chown: changing ownership of '/data/Soulseek Shared Folder/....flac': Read-only file system for every file in my music folder.

I changed nothing in my compose and neither PUID nor PGID are set.

  soulseek:
    image: realies/soulseek
    container_name: soulseek
    volumes:
      - '/opt/soulseek/appdata:/data/.SoulseekQt'
      - '/mnt/downloads/soulseek:/data/Soulseek Downloads'
      - '/opt/soulseek/logs:/data/Soulseek Chat Logs'
      - 'music_ro:/data/Soulseek Shared Folder:ro'
    restart: unless-stopped
    network_mode: 'service:gluetun'

As my music folder is mounted read-only at the VM level, trying to chown every music file is pointless and time-consuming. Is there a way to prevent it from doing this?
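
For reference, this is roughly how I confirmed it's the ownership pass and that the IDs really aren't set (commands approximate):

    docker logs soulseek 2>&1 | grep -c "Read-only file system"         # one failed chown per shared file
    docker exec soulseek env | grep -iE "PUID|PGID" || echo "not set"   # prints the IDs if set, otherwise "not set"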

realies commented 3 months ago

@AverageHoarder, this has been changed. People have had issues with downloading files when the container process that runs Soulseek tries to write to mounted folders. This is why, at the moment, when starting the container, the container user is assigned the PUID/PGID IDs, and folders and files get their ownership and permissions updated to match the UMASK and IDs: https://github.com/realies/soulseek-docker/blob/ecc866e99744b93e728ed97a153b56696045c1a2/rootfs/etc/s6-overlay/s6-rc.d/init-setup/run

Happy to take any suggestions on improving this.
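
In rough terms, the startup pass follows this pattern (a simplified sketch, not the contents of the linked run file; the user name and defaults here are illustrative):

    PUID="${PUID:-1000}"      # illustrative defaults
    PGID="${PGID:-1000}"
    UMASK="${UMASK:-022}"

    umask "$UMASK"

    # re-map the container user/group to the requested IDs ("appuser" is a placeholder)
    usermod  -o -u "$PUID" appuser
    groupmod -o -g "$PGID" appuser

    # recursively align ownership under /data so the Soulseek process can write
    # to its mounted folders; on a read-only share every chown fails and is
    # logged, which is the slow, noisy startup described above
    chown -R "${PUID}:${PGID}" /data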

AverageHoarder commented 3 months ago

A solution would be to only try to claim ownership of folders that will actually be written to, which is not the case for the shared folder; it works perfectly fine read-only. Even better would be to only test whether the folders that have to be written to are in fact writable, without changing the permissions of any existing files. I'd honestly be very annoyed if my terabytes of music suddenly had their permissions updated, which is why I mounted my music read-only in the first place. With this (for the shared folder) pointless pass in place, starting the container takes a long time and wastes resources by trying to chmod hundreds of thousands of files on a read-only mount. Another solution would be to abort the ownership change as soon as read-only files are encountered.

However, I'd already be happy if there were an environment variable to globally skip changing ownership of any folder, as it's not needed in a correctly configured setup. That would restore the previous behavior of not specifying a PUID and PGID.
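
Something along these lines in the startup script would cover both the narrower chown and the opt-out; SKIP_CHOWN is a name I'm making up for illustration, not an existing option of this image:

    # hypothetical opt-out plus a narrower default: only touch the folders the
    # client actually writes to, and leave the read-only share alone
    if [ "${SKIP_CHOWN:-false}" != "true" ]; then
        for dir in "/data/.SoulseekQt" "/data/Soulseek Downloads" "/data/Soulseek Chat Logs"; do
            [ -d "$dir" ] && chown -R "${PUID}:${PGID}" "$dir"
        done
    fi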