Zibbp / ganymede

Twitch VOD and Live Stream archiving platform. Includes a rendered and real-time chat for each archive.
https://github.com/Zibbp/ganymede
GNU General Public License v3.0

v3 Beta - Changelog and Upgrade Instructions #474

Closed Zibbp closed 3 months ago

Zibbp commented 3 months ago

V3 Beta

Changes

Known Issues

Breaking Changes

Notable Changes

Changes that aren't breaking but are notable.

Upgrading to the Beta

[!IMPORTANT]
BACK UP YOUR CURRENT INSTANCE, INCLUDING THE DATABASE!!!

The upgrade is non-reversible, so create a backup of the database before you start in case something goes wrong. Rely on your standard backup procedure, or create a database dump by running the following command.

docker exec ganymede-db pg_dump -U ganymede ganymede-prd | gzip > /tmp/dump.sql.gz
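If you ever need to roll back, the dump restores by piping it into psql inside the db container. This is a sketch, assuming the same container and database names used in the compose file in this guide; the restore line is commented out, and the runnable part only demonstrates verifying a gzip archive with a stand-in dump:

```shell
# Restore sketch (run against a fresh/empty database), assuming the container
# and database names from the compose file in this guide:
# gunzip -c /tmp/dump.sql.gz | docker exec -i ganymede-db psql -U ganymede ganymede-prd

# Verify an archive is intact before trusting it; demonstrated with a
# stand-in dump so the commands are runnable without docker:
printf 'SELECT 1;\n' | gzip > /tmp/example-dump.sql.gz
gzip -t /tmp/example-dump.sql.gz && echo "dump archive OK"
```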

Upgrade Steps

Wait until you have no active archives. There is no archive compatibility between versions!

The v3 beta image is tagged as :dev for both the API and frontend. These are built frequently as I push changes. Be sure to periodically update!

  1. Bring down all containers docker compose down
  2. Make modifications to the docker-compose.yml file
    1. Perform the changes to your docker-compose.yml outlined in the breaking changes section above. This includes:
    2. Removing the temporal containers and any references to them.
    3. Optionally updating the paths in the VIDEOS_DIR and TEMP_DIR environment variables. I recommend mounting TEMP_DIR to a volume on your host to prevent losing data if the container crashes or restarts. Any modification to these variables requires changing the volume mounts as well.
    4. Adding the river-ui container.
    5. Updating the API and frontend image tags to use :dev.
version: "3.3"
services:
  ganymede-api:
    container_name: ganymede-api
-   image: ghcr.io/zibbp/ganymede:latest
+   image: ghcr.io/zibbp/ganymede:dev
    restart: unless-stopped
    depends_on:
-      - ganymede-temporal
+      - ganymede-db
    environment:
      - TZ=America/Chicago # Set to your timezone
      - DB_HOST=ganymede-db
      - DB_PORT=5432
      - DB_USER=ganymede
      - DB_PASS=PASSWORD
      - DB_NAME=ganymede-prd
      - DB_SSL=disable
      - JWT_SECRET=SECRET
      - JWT_REFRESH_SECRET=SECRET
      - TWITCH_CLIENT_ID=
      - TWITCH_CLIENT_SECRET=
      - FRONTEND_HOST=http://IP:PORT
      # OPTIONAL
+     # - OAUTH_ENABLED=false
      # - OAUTH_PROVIDER_URL=
      # - OAUTH_CLIENT_ID=
      # - OAUTH_CLIENT_SECRET=
      # - OAUTH_REDIRECT_URL=http://IP:PORT/api/v1/auth/oauth/callback # Points to the API service
-     - TEMPORAL_URL=ganymede-temporal:7233
      # WORKER
      - MAX_CHAT_DOWNLOAD_EXECUTIONS=5
      - MAX_CHAT_RENDER_EXECUTIONS=3
      - MAX_VIDEO_DOWNLOAD_EXECUTIONS=5
      - MAX_VIDEO_CONVERT_EXECUTIONS=3
    volumes:
-   - /mnt/nas/vods:/vods
+   - /mnt/nas/vods:/data/videos
-   - ./logs:/logs
+   - ./logs:/data/logs
+   - ./temp:/data/temp
+   - ./config:/data/config # be sure to move `config.json` if needed
    ports:
      - 4800:4000
  ganymede-frontend:
    container_name: ganymede-frontend
-   image: ghcr.io/zibbp/ganymede-frontend:latest
+   image: ghcr.io/zibbp/ganymede-frontend:dev
    restart: unless-stopped
    environment:
      - API_URL=http://IP:PORT # Points to the API service
      - CDN_URL=http://IP:PORT # Points to the CDN service
      - SHOW_SSO_LOGIN_BUTTON=true # show/hide SSO login button on login page
      - FORCE_SSO_AUTH=false # force SSO auth for all users (bypasses login page and redirects to SSO)
      - REQUIRE_LOGIN=false # require login to view videos
    ports:
      - 4801:3000
-  ganymede-temporal:
-    image: temporalio/auto-setup:1.23
-    container_name: ganymede-temporal
-    depends_on:
-      - ganymede-db
-    environment:
-      - DB=postgres12 # this tells temporal to use postgres (not the db name)
-      - DB_PORT=5432
-      - POSTGRES_USER=ganymede
-      - POSTGRES_PWD=PASSWORD
-      - POSTGRES_SEEDS=ganymede-db # name of the db service
-    ports:
-      - 7233:7233
-  # -- Uncomment below to enable temporal web ui --
-  # ganymede-temporal-ui:
-  #   image: temporalio/ui:latest
-  #   container_name: ganymede-temporal-ui
-  #   depends_on:
-  #     - ganymede-temporal
-  #   environment:
-  #     - TEMPORAL_ADDRESS=ganymede-temporal:7233
-  #   ports:
-  #     - 8233:8080
  ganymede-db:
    container_name: ganymede-db
    image: postgres:14
    volumes:
      - ./ganymede-db:/var/lib/postgresql/data
    environment:
      - POSTGRES_PASSWORD=PASSWORD
      - POSTGRES_USER=ganymede
      - POSTGRES_DB=ganymede-prd
    ports:
      - 4803:5432
  ganymede-nginx: # this isn't necessary if you do not want to serve static files via Nginx. The API container can serve the `VIDEOS_DIR` directory itself.
    container_name: ganymede-nginx
    image: nginx
    volumes:
      - /path/to/nginx.conf:/etc/nginx/nginx.conf:ro
      - /path/to/vod/storage:/mnt/vods
    ports:
      - 4802:8080
+ ganymede-river-ui:
+   image: ghcr.io/riverqueue/riverui:0.3
+   environment:
+     - DATABASE_URL=postgres://ganymede:DB_PASSWORD@ganymede-db:5432/ganymede-prd # update with env settings from the ganymede-db container
+   ports:
+     - 4804:8080
  3. Delete the unused temporal directory rm -rf temporal (if you still have it)
  4. Bring the containers back up docker compose up -d
  5. Watch the API logs for possible issues docker logs ganymede-api -f
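One detail worth double-checking before bringing things back up: the river-ui DATABASE_URL must agree exactly with the DB_* values used by the API and db containers. As a sanity check you can assemble it in a shell from the same values; this sketch uses the placeholder values from the compose example above (substitute your real password):

```shell
# Assemble the river-ui connection string from the same values used by the
# API and db containers (placeholders from the compose example above):
DB_USER=ganymede
DB_PASS=PASSWORD          # substitute your real password
DB_HOST=ganymede-db
DB_PORT=5432
DB_NAME=ganymede-prd
DATABASE_URL="postgres://${DB_USER}:${DB_PASS}@${DB_HOST}:${DB_PORT}/${DB_NAME}"
echo "$DATABASE_URL"
```

If river-ui fails to start, a mismatch in any one of these five values is the usual cause.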

Questions / Issues

Please post any questions or issues you run into!

russelg commented 3 months ago

I'm having an issue with HLS videos, ffmpeg is failing to write the output files. /data/tmp does indeed exist, but the 42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0 directory does not when it runs the command. I don't think it was necessary to create the directory in v2, not sure what's changed here for this to fail.

Note I'm running it manually here, but that was just to test it outside the API. It's the same command that the API attempted to run.

ganymede-api  | {"level":"debug","task":"watchdog","jobs":"2","time":"2024-07-31T20:32:25+08:00","message":"jobs found"}
ganymede-api  | {"level":"debug","video_id":"7bedb12a-4ee1-11ef-b8e6-0242ac120009","time":"2024-07-31T20:32:27+08:00","message":"logging ffmpeg output to /data/logs/7bedb12a-4ee1-11ef-b8e6-0242ac120009-video-convert.log"}
ganymede-api  | {"level":"debug","video_id":"7bedb12a-4ee1-11ef-b8e6-0242ac120009","cmd":"-y -hide_banner -i /data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video-convert.mp4 -c copy -start_number 0 -hls_time 10 -hls_list_size 0 -hls_segment_filename /data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824_segment%d.ts -f hls /data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824-video.m3u8","time":"2024-07-31T20:32:27+08:00","message":"running ffmpeg"}
ganymede-api  | {"level":"error","error":"exit status 1","time":"2024-07-31T20:32:27+08:00","message":"error running ffmpeg"}
ganymede-api  | time=2024-07-31T20:32:27.507+08:00 level=ERROR msg="jobExecutor: Job errored" error="error running ffmpeg: exit status 1" job_id=326 job_kind=task_video_convert
ganymede-api  | {"level":"error","job_id":"326","attempt":"5","attempted_by":"01ceba9dee4c_2024_07_31T11_37_22_184180","args":"{\"input\": {\"QueueId\": \"b4073de2-cd78-466a-a829-0b86a94aafa5\", \"HeartBeatTime\": \"2024-07-31T20:27:44.032594474+08:00\"}, \"continue\": true}","error":"error running ffmpeg: exit status 1","time":"2024-07-31T20:32:27+08:00","message":"task error"}
ganymede-api  | {"level":"debug","time":"2024-07-31T20:32:27+08:00","message":"Error notification is disabled"}
ganymede-api  | {"level":"debug","task_id":"326","time":"2024-07-31T20:32:27+08:00","message":"heartbeat stopped due to context cancellation"}
root@01ceba9dee4c:/opt/app# ffmpeg -y -hide_banner -i /data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video-convert.mp4 -c copy -start_number 0 -hls_time 10 -hls_list_size 0 -hls_segment_filename /data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824_segment%d.ts -f hls /data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824-video.m3u8
Input #0, mov,mp4,m4a,3gp,3g2,mj2, from '/data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video-convert.mp4':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf59.27.100
  Duration: 04:52:48.16, start: 0.000000, bitrate: 6183 kb/s
  Stream #0:0[0x1](und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], 6003 kb/s, 60 fps, 60 tbr, 90k tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
  Stream #0:1[0x2](und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 162 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
Output #0, hls, to '/data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824-video.m3u8':
  Metadata:
    major_brand     : isom
    minor_version   : 512
    compatible_brands: isomiso2avc1mp41
    encoder         : Lavf59.27.100
  Stream #0:0(und): Video: h264 (High) (avc1 / 0x31637661), yuv420p(tv, bt709, progressive), 1920x1080 [SAR 1:1 DAR 16:9], q=2-31, 6003 kb/s, 60 fps, 60 tbr, 90k tbn (default)
    Metadata:
      handler_name    : VideoHandler
      vendor_id       : [0][0][0][0]
  Stream #0:1(und): Audio: aac (LC) (mp4a / 0x6134706D), 44100 Hz, stereo, fltp, 162 kb/s (default)
    Metadata:
      handler_name    : SoundHandler
      vendor_id       : [0][0][0][0]
Stream mapping:
  Stream #0:0 -> #0:0 (copy)
  Stream #0:1 -> #0:1 (copy)
Press [q] to stop, [?] for help
[hls @ 0x55a028d71740] Opening '/data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824_segment0.ts' for writing
[hls @ 0x55a028d71740] Failed to open file '/data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824_segment0.ts'
av_interleaved_write_frame(): No such file or directory
[hls @ 0x55a028d71740] Opening '/data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824_segment0.ts' for writing
[hls @ 0x55a028d71740] Failed to open file '/data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824_segment0.ts'
[hls @ 0x55a028d71740] Opening '/data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824-video.m3u8.tmp' for writing
[hls @ 0x55a028d71740] failed to rename file /data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824-video.m3u8.tmp to /data/tmp/42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0/42679252824-video.m3u8: No such file or directory
frame=  602 fps=0.0 q=-1.0 Lsize=N/A time=00:00:10.01 bitrate=N/A speed=1.23e+03x    
video:7626kB audio:200kB subtitle:0kB other streams:0kB global headers:0kB muxing overhead: unknown
Conversion failed!
Zibbp commented 3 months ago

I'm having an issue with HLS videos, ffmpeg is failing to write the output files. /data/tmp does indeed exist, but the 42679252824_7bedb12a-4ee1-11ef-b8e6-0242ac120009-video_hls0 directory does not when it runs the command. I don't think it was necessary to create the directory in v2, not sure what's changed here for this to fail.

Should be resolved in the latest build of the :dev image. The HLS directory was never being created. The new code is stricter about directory and file names, which is why this worked in v2 but fails now.
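For anyone pinned to an older build, a possible workaround (not an official fix) is to create the missing segment directory by hand before retrying the convert task. The real path comes from the ffmpeg command in your own log; the one below is a stand-in:

```shell
# Create the missing HLS segment directory before retrying the convert task.
# The real directory name comes from the -hls_segment_filename path in the
# ffmpeg command logged by the API; this path is a stand-in for illustration:
HLS_DIR=/tmp/example_42679252824-video_hls0
mkdir -p "$HLS_DIR"
test -d "$HLS_DIR" && echo "segment directory ready"
```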

Steltek commented 3 months ago

After the latest changes, the frontend links to the /vods path for profile pictures, which no longer resolves in nginx:

http://hostname:4802/vods/channelname/profile.png

needs to be changed to

http://hostname:4802/data/videos/channelname/profile.png

russelg commented 3 months ago

@Steltek this was fixed recently, https://github.com/Zibbp/ganymede/pull/468/commits/e80179b80ef637378ea4394066b8ede24236b6b9

I have a feeling that this won't run if you've already migrated, so I think you'll need to change the path to something else, restart, let it migrate that, then change it back to your desired path.

Steltek commented 3 months ago

@Steltek this was fixed recently, e80179b

I have a feeling that this won't run if you've already migrated, so I think you'll need to change the path to something else, restart, let it migrate that, then change it back to your desired path.

Thanks for the information. I didn't know where this path was coming from (wasn't aware it was stored in the DB), so I ended up just manually updating the paths in the channel info. (This is a new instance with no data other than channel info in it yet.)

c-hri-s commented 3 months ago

I worked around the error Steltek mentioned above by changing the nginx volume line to: - /volume1/video/Twitch:/mnt/vods/data/videos/

Zibbp commented 3 months ago

I forgot to mention the nginx changes; I'll update the issue. If you want to use the new default of /data/videos, you will need to update the nginx container volumes to mount your videos directory to /data/videos.

  ganymede-nginx:
    container_name: ganymede-nginx
    image: nginx
    volumes:
      - /path/to/nginx.conf:/etc/nginx/nginx.conf:ro
      - /mnt/nas/vods:/data/videos

The nginx.conf will then also need to be updated so the paths point to /data/videos.

worker_processes auto;
worker_rlimit_nofile 65535;
error_log  /var/log/nginx/error.log info;
pid        /var/run/nginx.pid;

events {
   multi_accept       on;
   worker_connections 65535;
}

http {

  sendfile on;
  sendfile_max_chunk 1m;
  tcp_nopush on;
  tcp_nodelay on;

  keepalive_timeout 65;
  gzip on;

  server {
    listen 8080;
    root /data/videos;

    add_header 'Access-Control-Allow-Origin' '*' always;
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
    add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always;
    add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;

    location ^~ /data/videos {
      autoindex on;
      alias /data/videos;

      location ~* \.(ico|css|js|gif|jpeg|jpg|png|svg|webp)$ {
          expires 30d;
          add_header Pragma "public";
          add_header Cache-Control "public";
     }
      location ~* \.(mp4)$ {
          add_header Content-Type "video/mp4";
          add_header 'Access-Control-Allow-Origin' '*' always;
          add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS' always;
          add_header 'Access-Control-Allow-Headers' 'DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range' always;
          add_header 'Access-Control-Expose-Headers' 'Content-Length,Content-Range' always;
      }
    }
  }
}
Zibbp commented 3 months ago

New changes:

Steltek commented 3 months ago

I forgot to mention the nginx changes, I'll update the issue. If you're wanting to use the new default of /data/videos you will need to update the nginx container volumes to mount the path to your videos directory to /data/videos. ...

I had seen those in the commit log and already applied them (but the /vods link from the frontend of course couldn't resolve with that config).

sc-idevops commented 3 months ago

~So, I have a few things in my queue that are stuck on ERROR from the temporal ui version. How can I clear those out after migrating to the new river queue system?~

Never mind, I'm tired and stupid; I just had to go into the queue and set the items to not processing. (I'll keep this here for anyone else who runs into the problem.)

Oh, I do have another question, is there a way to manually trigger a chat render assuming you have the json file downloaded already?

Zibbp commented 3 months ago

Oh, I do have another question, is there a way to manually trigger a chat render assuming you have the json file downloaded already?

Yes you can, assuming you still have a queue item for the archive. You will need to copy the chat JSON file to the path the queue expects, which is $TEMP_DIR/${EXT_ID}_${ID}-chat.json (e.g. /data/temp/2205549924_245542d3-4a1e-11ef-b465-0242ac1c0007-chat.json). You can open the video menu and click "Info" to get this path, or find it in the database. Once the chat JSON file is in the correct location, you can restart the chat render task.
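The expected filename can be assembled mechanically from TEMP_DIR and the two IDs. A small sketch using the example values above (the cp line is commented out; substitute your own source path):

```shell
# Build the chat file path the queue expects, using the example IDs above:
TEMP_DIR=/data/temp
EXT_ID=2205549924
ID=245542d3-4a1e-11ef-b465-0242ac1c0007
CHAT_PATH="${TEMP_DIR}/${EXT_ID}_${ID}-chat.json"
echo "$CHAT_PATH"
# cp /path/to/chat.json "$CHAT_PATH"   # then restart the chat render task
```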

If you don't have a queue item for the archive, you can manually create a render by running this command, updating the input and output paths. Optionally add any extra settings you have configured for chat renders.

docker exec ganymede-api TwitchDownloaderCLI chatrender -i /path/to/chat.json --collision overwrite -h 1440 -w 340 --framerate 30 --font Inter --font-size 13 --log-level Verbose -o /path/to/output/chat.mp4
Steltek commented 3 months ago

Not sure if anyone else can repro this. I have a watched channel with "Archive chat" turned on, and "Render chat" turned off. After a recording finishes, I still see a queue item processing for "Chat Render" (and an ffmpeg task doing the rendering to a video file in the temp directory). After the rendering completes, the mp4 file is left in the temp directory.

Zibbp commented 3 months ago

Not sure if anyone else can repro this. I have a watched channel with "Archive chat" turned on, and "Render chat" turned off. After a recording finishes, I still see a queue item processing for "Chat Render" (and an ffmpeg task doing the rendering to a video file in the temp directory). After the rendering completes, the mp4 file is left in the temp directory.

I was able to reproduce this. The logic wasn't detecting if the chat should be rendered for live streams or not. This is fixed in the latest :dev image.

AkatsukiiDesu commented 3 months ago

One thing I have noticed after the update is that when I bulk add watched channels and de-select both Archive and Render chat, it still downloads chat. This in turn causes the video to fail to process. I will attempt to grab some logs when a stream finishes recording.

Zibbp commented 3 months ago

One thing I have noticed after the update is that when I bulk add watched channels and de-select both Archive and Render chat, it still downloads chat. This in turn causes the video to fail to process. I will attempt to grab some logs when a stream finishes recording.

Latest :dev image for the API container should resolve this.

jayjay181818 commented 3 months ago

Wow, after a little bit of playing around to get this beta up and running on one of my instances... all I can say is wow, this is the best update yet! Touch wood, everything is working perfectly. I hope this lands in the main release soon because it's awesome. Great work! 👌

CappiSteijns commented 3 months ago

I completely agree with the comment above—I'm really enjoying the beta as well.

However, I do have one issue. The progress bar on all my videos is static. When I click on a VOD, it does take me to the latest point I watched, so it seems to be just a visual bug. Additionally, when I've fully watched a VOD or manually set it to "mark as watched," it doesn't get marked as watched.


Zibbp commented 3 months ago

However, I do have one issue. The progress bar on all my videos is static. When I click on a VOD, it does take me to the latest point I watched, so it seems to be just a visual bug. Additionally, when I've fully watched a VOD or manually set it to "mark as watched," it doesn't get marked as watched.

I fixed this a couple days ago, are you running the latest build of the :dev images (both API and frontend)?

CappiSteijns commented 3 months ago

I fixed this a couple days ago, are you running the latest build of the :dev images (both API and frontend)?

Ah, good to hear! I'm not running the latest build as I'm in the middle of recording a subathon, but good to hear it's fixed. Thank you

Zibbp commented 3 months ago

v3.0.0 has been released. If you run into any other issues please open a new issue.

jayjay181818 commented 3 months ago

Awesome! If we were on the beta, are there any specific steps to swap back to the main build now that v3.0 is out? Or just swap back from :dev to :latest? Thanks

Steltek commented 3 months ago

Awesome! If we were on the beta, are there any specific steps to swap back to the main build now that v3.0 is out? Or just swap back from :dev to :latest? Thanks

In general, just switching to :latest should work. However, it depends on when you joined the beta; the beta underwent a few changes to the docker-compose and nginx.conf, for example, that you'd need to follow.

jayjay181818 commented 3 months ago

Ah, so I have the beta working fine on my AMD-based server, but getting v3 working on one of my arm64 devices is proving a no-go. I tried using :latest and got "exec /usr/local/bin/entrypoint.sh: exec format" in the API logs, which I guess means it can't find an arm64 version. Previously I used ghcr.io/zibbp/ganymede:main-arm64, but I presume that won't contain the update. I wonder if the update hasn't been compiled for arm64: when I tried the ghcr.io/zibbp/ganymede:latest-arm64 image, the API container kept complaining about temporal even though I've completely removed every mention of temporal in the yaml/config and deleted the old folders.

Am I missing something, or has v3 just not been compiled for arm64 yet? Thanks

Zibbp commented 3 months ago

Images are being built and published for amd64 and arm64. Can you try deleting the docker image and pulling it again?

  1. docker image rm ghcr.io/zibbp/ganymede:latest
  2. docker image pull ghcr.io/zibbp/ganymede:latest

Then can you check what architecture the image is? docker inspect ghcr.io/zibbp/ganymede:latest | grep Arc

Last I checked, it's a "multi-architecture" image, meaning you can simply pull :latest and your host will select the correct one. Unfortunately, GitHub just updated their packages UI and I can't see that anymore...
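A quick extra check on the host itself is to compare the machine architecture against what the image reports. A sketch (the docker line is the same inspect as above, narrowed with a Go template; it is commented out so only the host-side check runs):

```shell
# Compare the host CPU architecture to the pulled image's architecture.
HOST_ARCH=$(uname -m)   # reports aarch64 on arm64 hosts, x86_64 on amd64 hosts
echo "host: $HOST_ARCH"
# docker inspect ghcr.io/zibbp/ganymede:latest --format '{{.Architecture}}'
# (note: docker reports "arm64"/"amd64" while uname reports "aarch64"/"x86_64")
```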

jayjay181818 commented 3 months ago

Images are being built and published for amd64 and arm64. Can you try deleting the docker image and pulling it again?

  1. docker image rm ghcr.io/zibbp/ganymede:latest
  2. docker image pull ghcr.io/zibbp/ganymede:latest

Then can you check what architecture the image is? docker inspect ghcr.io/zibbp/ganymede:latest | grep Arc

Last I checked, it's a "multi-architecture" image, meaning you can simply pull :latest and your host will select the correct one. Unfortunately, GitHub just updated their packages UI and I can't see that anymore...

I did remove and re-pull the images, and they do show "Architecture": "arm64", but frustratingly I still get the same "exec /usr/local/bin/entrypoint.sh: exec format" in the API logs. I also removed ganymede:latest-arm64 and pulled it again, but I still get the temporal errors in the API logs. Very odd indeed.

Zibbp commented 3 months ago

I noticed an issue with the Dockerfile causing arm image builds to build as amd64 for the last stage. Can you try pulling the latest :dev image?

The *-arm64 images are no longer used, that's why you are still seeing temporal. I consolidated the dockerfiles so it can be built as a 'single multi-arch' image now.

jayjay181818 commented 3 months ago

Tried with the :dev tag and I get something similar, even after stopping the stack, removing the images and re-pulling:

error: exec failed: exec format error
error: exec failed: exec format error
usermod: no changes
User uid: 911
User gid: 911

However, I did try on my M3 Pro MacBook Pro with Docker Desktop/Portainer and managed to get it working with ganymede:latest. I presume macOS would use the same arm64 image we are aiming for? That said, I haven't used docker on a Mac much at all, so I don't know if it has differences or some kind of emulation involved. I might be tempted to back up the folders on my Orange Pi 5 Plus, pull a fresh instance down, and see if it works like it did on my Mac using ganymede:latest