kelinger / OmniStream

Deployment and management tools for an entire streaming platform that can reside on a server (local, remote, hosted, VPS) with media files stored on cloud services like Google Drive or Dropbox.
MIT License

Streaming Dockers aren’t playing nice (Plex, Emby, Jellyfin). #61

Closed shadowsbane0 closed 1 year ago

shadowsbane0 commented 1 year ago

Ken,

I'm sure this is a one-off issue. I've rebuilt and redesigned my setup since I first started this journey, taking some of your influence (thank you) and some of my own direction. I'm currently running my OmniStream setup on a Debian VM on a Proxmox host cluster; you likely have much more experience with Proxmox than I have at this point.

The issue is with the three media streamers mentioned above, which is odd. I have CephFS configured on my hosts and kernel-mounted to the Debian VM as my media volume, and that volume is mounted the same way for every container that uses it. All of the 'ARR containers see it with no problem, along with Calibre and another container I use. The three culprits go into a shutdown/reboot loop after the scripts finish running. If I comment out the volume mount in the YAML and use the variable /Media volume instead, Emby and Jellyfin boot normally (a rough sketch of the two approaches is below). Plex I haven't yet nailed down to a problem with the mount or with the restore from backup; I can focus on it later since I use Emby primarily (less space overhead). Nothing obvious turns up in the logs.

I'm trying to keep the kernel mount for better performance rather than switch to a ceph-fuse or SMB mount. Any idea why those three are having issues while the other containers are not? I have one other thing I could troubleshoot and hate to go that route, but I can if need be, even though it's time-consuming. Like I said, this is likely a single-use-case issue that others will never encounter; I'm just looking for direction. Thanks! @kelinger @TechPerplexed
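For illustration, roughly what I mean by the two mount approaches (the paths here are placeholders, not my exact config; /mnt/cephfs/media stands in for the CephFS kernel-mount path on the VM, and ${MEDIA} is the OmniStream variable I normally leave unused since my files are local):

```yaml
# Illustrative only: how the two approaches differ in the compose file.
services:
  emby:
    volumes:
      # Direct bind of the kernel-mounted CephFS path (the one that loops):
      - /mnt/cephfs/media:/Media
      # Commenting the line above and using the variable instead boots normally:
      # - ${MEDIA}:/Media
```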

shadowsbane0 commented 1 year ago

So the troubleshooting I wanted to do was to ensure Debian 12 on my guest VM wasn't the issue. I stood up a Debian 11 box, threw a fresh install on it, and restored my backup. Same issue. On a positive note, I was able to verify that the issue is not with the CephFS mount: in the brief moments between restarts I was able to console into the container, and the volume mount was there and accessible. I'm now troubleshooting my backup of the servers. I'm on different hardware and was previously using HW encoding, whereas now I am not. Additionally, the backup was from April, prior to my rebuild efforts. I'm looking at vanilla installs and migrating my databases.
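For reference, these are the kinds of checks I ran during the brief windows while the container was up (the container name `emby` is just an example):

```sh
# Watch the restart loop and see how many times the container has cycled.
docker inspect --format '{{.State.Status}} (restarts: {{.RestartCount}})' emby

# Grab the last log lines before the next restart.
docker logs --tail 50 emby

# While the container is up, confirm the media mount is visible inside it.
docker exec -it emby ls /Media
```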

kelinger commented 1 year ago

Unfortunately, I haven't really used Proxmox to do much more than host my systems and a few VMs. Which is to say, it's a fairly simplistic setup. I've found that, for replication, simply backing the LXC or VM up daily (to Google) and pulling it down to another Proxmox install if/when needed has suited my purposes. Yeah, it's not a hot-standby or instant-failover configuration but it's also simple to manage.

I agree, though, that the problem is very unlikely to be the guest systems, whatever they are. From the standpoint of Proxmox, it's still the same CPU/memory/etc. configuration, and it shouldn't really care what's running.

Is it possible that mounting the media from the virtualized guest is being attempted before the guest is ready?
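If that turns out to be the case, one thing that might help (just a sketch, assuming the CephFS kernel mount point on the guest is /mnt/cephfs/media; adjust to your actual path) is a systemd drop-in so the Docker daemon waits for the mount before starting containers:

```ini
# /etc/systemd/system/docker.service.d/wait-for-media.conf (illustrative path)
# Makes docker.service wait until the CephFS mount is available.
[Unit]
RequiresMountsFor=/mnt/cephfs/media
```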

shadowsbane0 commented 1 year ago

Solved this one. It was the health checks: they're scripted to use the MEDIA variable, and I don't use that variable because my files are local. I've REM'd them out for now. Any idea what I would need to use the ss command instead of netstat, since netstat is deprecated and not present in some containers? What I've tried for ss has not worked. It should be as simple as ss -t -l -a -p, but that gets me nowhere.
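For reference, this is roughly the kind of swap I was attempting (illustrative only; port 8096 is just the Emby/Jellyfin default HTTP port, and the actual OmniStream healthcheck scripts may look different):

```yaml
# Illustrative compose healthcheck using ss instead of netstat.
# Assumes the image ships ss (part of iproute2) and the service listens on 8096;
# Plex would use 32400 instead.
healthcheck:
  test: ["CMD-SHELL", "ss -tln | grep -q ':8096 ' || exit 1"]
  interval: 30s
  timeout: 10s
  retries: 3
```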

shadowsbane0 commented 1 year ago

Problem found and worked around.