ShaneIsrael / fireshare

Self host your media and share with unique links
GNU General Public License v3.0
688 stars · 41 forks

No video with supported format and MIME type found #108

Closed: truthsword closed this issue 2 years ago

truthsword commented 2 years ago

When I click on a video a window opens with:

"No video with supported format and MIME type found."

Yet if I drag that video file into a blank browser (Firefox) tab, it plays. Is this expected?

Thank you!

ShaneIsrael commented 2 years ago

Are you running Fireshare via Docker, and are you properly mounting all the required volumes? It sounds like it's unable to create the symlinks to your videos. Are there any errors in the Fireshare docker logs that you can post here?

truthsword commented 2 years ago

This is typical:

2022/08/03 00:01:50 [error] 11#11: *1 open() "/processed/video_links/3345c04e2b2eaaaf9ec0ee7ea15a23c6.mp4" failed (13: Permission denied), client: 172.28.0.1, server: default, request: "GET /_content/video/3345c04e2b2eaaaf9ec0ee7ea15a23c6.mp4 HTTP/1.1", host: "192.168.96.11:8884", referrer: "http://192.168.96.11:8884/"

This is running on Docker on a Synology NAS... so maybe there are permissions issues somewhere.

Docker compose:

version: "3.5"
services:
  fireshare:
    image: shaneisrael/fireshare:latest
    container_name: fireshare
    ports:
      - 8884:80
    volumes:
      - /volume1/docker/fireshare/data:/data
      - /volume1/docker/fireshare/processed:/processed
      - /volume1/testing/clips:/videos
    environment:
      - ADMIN_PASSWORD=admin
    restart: always

ShaneIsrael commented 2 years ago

Everything looks set up correctly. I would check your folder permissions on /volume1/docker/fireshare/data and /volume1/docker/fireshare/processed.

Fireshare is getting permission denied, which means that your system isn't allowing Docker to read/write to those folders.

truthsword commented 2 years ago

I'm stumped. Those folders were owned by root. I changed ownership to my primary user and it made no difference. Then I uploaded another file, and the new folder containing the poster was again owned by root, as was the uploaded video file.

I suspect this is somehow Synology-related. In other containers, I must specify the PUID/PGID of my preferred user as environment variables, so that those containers (e.g. sonarr) run as that user and not as root. I'm unsure if this applies to your container.
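
(To illustrate that pattern: a linuxserver-style sonarr service is usually configured something like the sketch below. The image reference, paths, and id values here are only examples, not anything specific to Fireshare or this setup.)

services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=1026   # example uid of the preferred NAS user
      - PGID=100    # example gid of that user's group
    volumes:
      - /volume1/docker/sonarr/config:/config   # example path
    restart: unless-stopped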

Thanks for your help. I'd love to get this running. My only other option is a Raspberry Pi, but I don't see that image supported.

ShaneIsrael commented 2 years ago

In docker compose you can try setting the user and group ids to whatever you normally would set them to.

user: $UID:$GID

So for example

version: "3.5"
services:
  fireshare:
    image: shaneisrael/fireshare:latest
    container_name: fireshare
    user: 99:100
    ports:
      - 8884:80
    volumes:
      - /volume1/docker/fireshare/data:/data
      - /volume1/docker/fireshare/processed:/processed
      - /volume1/testing/clips:/videos
    environment:
      - ADMIN_PASSWORD=admin
    restart: always

if 99 and 100 were your respective user and group ids.

truthsword commented 2 years ago

Unfortunately, things only got worse with that addition. I tried both an administrator account and a user account; errors abounded. Log attached: _fireshare_logs.txt

ShaneIsrael commented 2 years ago

@truthsword Yeah that was definitely a shot-in-the-dark idea. Unfortunately I don't have a NAS, let alone a Synology one, that I would be able to test with. Did you create the folders yourself or did you let Docker create them? You might try creating them yourself first and see if that works, or, vice versa, try letting Docker create them.

The only thing I can say for sure is that it definitely seems to be a permissions issue with your NAS, so I don't know how much help I'll be.

truthsword commented 2 years ago

Generally I create the folders myself. In this case, I initially ran the docker-compose file without doing so, and it errored out. At that point I created the persistent folder paths. I hope to get some interest from other, more experienced Synology users... hopefully they can point to the issue that evades me.

ShaneIsrael commented 2 years ago

If you do figure out a solution I would love to know what it is so that I can include it in our troubleshooting section, in case other people have a similar issue on their NAS.

truthsword commented 2 years ago

Here's an initial reaction ...

  1. The Dockerfile has no USER instruction (the user that user: would override), so this image doesn't support it.
  2. It is undesirable to run a container where the main application runs with root permissions. Instead, the suggestion is to implement what is needed to run the application as an unprivileged user.

Thanks for your consideration.

ShaneIsrael commented 2 years ago

Yeah, so I'll need to update the Docker image to use a non-root user and probably also allow people to specify that user's UID/GID if they want/need to.

I don't know if I'll be able to get to that tonight; I'll have to do some testing with this change.

ShaneIsrael commented 2 years ago

@truthsword I added PUID and PGID as environment variables that you can set in your docker compose file to whatever user id and group id you want Docker to use. Try setting those to one of your NAS users and groups and let me know if that solves your problem.
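
As a rough sketch, the earlier compose file with the new variables added might look like this (the 1026/100 values below are placeholders, not defaults; substitute the uid/gid of the NAS user you want Fireshare to run as):

version: "3.5"
services:
  fireshare:
    image: shaneisrael/fireshare:latest
    container_name: fireshare
    ports:
      - 8884:80
    volumes:
      - /volume1/docker/fireshare/data:/data
      - /volume1/docker/fireshare/processed:/processed
      - /volume1/testing/clips:/videos
    environment:
      - ADMIN_PASSWORD=admin
      - PUID=1026   # placeholder: uid of your NAS user
      - PGID=100    # placeholder: gid of that user's group
    restart: always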

ShaneIsrael commented 2 years ago

You'll need to pull the latest image down also. Should be version 1.2.1

ShaneIsrael commented 2 years ago

The new image is currently building; it should be up in about 5-10 minutes.

truthsword commented 2 years ago

Added PUID=xxxx PGID=xxx

No change. But here are some logs:

-------------------------------------

User uid:    xxxx
User gid:    xxx
-------------------------------------

rm: cannot remove '/jobs.sqlite': No such file or directory
2022-08-04 13:24:21,990 INFO    __init__.update_config:26 | Validating configuration file...
INFO  [alembic.runtime.migration] Context impl SQLiteImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
2022-08-04 13:24:24,359 INFO    __init__.update_config:26 | Validating configuration file...
2022-08-04 13:24:25,207 INFO    schedule.init_schedule:17 | Initializing scheduled video scan. minutes=5
/usr/local/lib/python3.9/site-packages/apscheduler/util.py:94: PytzUsageWarning: The zone attribute is specific to pytz's interface; please migrate to a new time zone provider. For more details on how to do so, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
  if obj.zone == 'local':
/usr/local/lib/python3.9/site-packages/apscheduler/triggers/interval.py:66: PytzUsageWarning: The normalize method is no longer necessary, as this time zone supports the fold attribute (PEP 495). For more details on migrating to a PEP 495-compliant implementation, see https://pytz-deprecation-shim.readthedocs.io/en/latest/migration.html
  return self.timezone.normalize(next_fire_time)
[2022-08-04 13:24:29 +0000] [102] [INFO] Starting gunicorn 20.1.0
[2022-08-04 13:24:29 +0000] [102] [INFO] Listening at: http://127.0.0.1:5000 (102)
[2022-08-04 13:24:29 +0000] [102] [INFO] Using worker: gthread
[2022-08-04 13:24:29 +0000] [104] [INFO] Booting worker with pid: 104
[2022-08-04 13:24:29 +0000] [105] [INFO] Booting worker with pid: 105
[2022-08-04 13:24:29 +0000] [106] [INFO] Booting worker with pid: 106

  • rm: cannot remove '/jobs.sqlite': No such file or directory: not at all sure what this is
  • [2022-08-04 13:24:29 +0000] [102] [INFO] Listening at: http://127.0.0.1:5000 (102): unsure why it's listening on port 5000, as my external port is 8887:80.

Also saw time zone warnings... Out of curiosity I added the time zone environment variable TZ=America_Chicago and got many log errors, so that didn't lead anywhere.

Hope this is helpful

ShaneIsrael commented 2 years ago

Neither of those are bugs. Removing jobs.sqlite, if it exists, is just an on-start cleanup. It's fine if it can't be found.

The listening on port 5000 is also normal. That's the Fireshare Python server that runs inside the container.

So really I'm not seeing any issues in those logs. If you still get the permission-denied error then I'm not sure, since you should now be able to specify a user that does have access.

You might try a clean install first. Recreate the volume-mapped folders, etc.

truthsword commented 2 years ago

No success.

  • Deleted all persistent folders and recreated them
  • Deleted the image and container
  • Ran docker-compose with the Synology PUID/PGID values

    environment:
      - ADMIN_PASSWORD=admin
      - PUID=xxxx (admin user)
      - PGID=xxx (admin group)

Observations...

It took quite some time before the "upload" window appeared. The "uploads" folder the container created had root ownership, and a video I uploaded through the upload window (under the "uploads" folder) was also owned by root.

I noticed similar things under the persistent folder "processed" that I created: new sub-folders, named "derived" and "video_links", were created with root ownership. Under the persistent folder "data", two files were created, "config.json" and "db.sqlite", both with root ownership.

I also tested the shareable link domain. It didn't seem active. I'm unsure how that is used.

ShaneIsrael commented 2 years ago

Ah yeah, I see now in your reply you are setting those to root. I believe that was the issue the other people pointed out: you need to be using a non-root user.

truthsword commented 2 years ago

I see now in your reply you are setting those to root. I believe

How so? I am not running as root. It is the Docker container creating those folders/files and setting root ownership. My PUID/PGID are non-root.

That said, docker compose is run with sudo.

What is your advice? Thanks.

ShaneIsrael commented 2 years ago

Can you manually create those folders as non-root first and then run the application via docker compose?

truthsword commented 2 years ago

No change. Uploaded videos are owned by root, and videos copied into the videos folder won't play.

ShaneIsrael commented 2 years ago

I'll take another look at this tonight. It might be that the Python app is still being run as root, which would explain why your uploads are still being created as root on your file system.

ShaneIsrael commented 2 years ago

I was able to confirm that there was an issue with the commands being run. The application process itself, which handles the uploading of videos, was still running as root.

I made some changes and fixed that. It should now run as your specified user. Just to make sure, I tested this on my own system and verified that uploaded videos are now being created as the user I specified.

Though I am not sure why videos you copy over won't play. Maybe this will fix that as well, but if not, I'm at a loss as to why copied videos would still be failing.

A new image v1.2.2 should be up shortly.

truthsword commented 2 years ago

The admin ownership issue was resolved with v1.2.2, and all new files/folders follow the PUID. Thanks!!

But I still have some playback issues that may be Synology-related. However, I have devised a workaround. In my docker compose file, the volume paths were originally:

volumes:
      - /volume1/docker/fireshare/data:/data
      - /volume1/docker/fireshare/processed:/processed
      - /volume1/testing/clips:/videos

I changed the "videos" path to fall in the same shared folder as the program files... so this:

volumes:
      - /volume1/docker/fireshare/data:/data
      - /volume1/docker/fireshare/processed:/processed
      - /volume1/docker/fireshare/clips:/videos

When I did this, all videos played, including share links. There may be some odd Synology permissions thing happening here, or a path issue with Synology; I'm unsure. But as things work now, I'll sort this out at a later time. Thanks for your help!

ShaneIsrael commented 2 years ago

Yeah, that definitely sounds like something related to Synology. I'm glad we were able to get this sorted out; sorry it took a bunch of trial and error.

truthsword commented 2 years ago

To add a final note... I compared the ACLs between the "clips" folders I used. The folder that ended up working has full READ rights granted to "Everyone", whereas the failing folder had no particular rights assigned to "Everyone".

After granting similar rights to "Everyone" on the failing folder and recreating the container, the videos played as you would expect.

This struck me as odd initially, since the PUID/PGID user was the owner of all the folders/files involved. So the permissions issue was buried elsewhere.

Perhaps this will help a Synology user in the future, or point to a potential rights issue with the docker file. Either way, all is good now.

ShaneIsrael commented 2 years ago

Are you sure you are providing a non-root id and group id for the PUID and PGID environment variables? And if you are, are you sure the ids you are providing actually correspond to a user on your system?

truthsword commented 2 years ago

For some odd reason, the previous post replies to a post from far upstream that was resolved 2 months ago. Thanks!