shaka-project / shaka-streamer

A simple config-file based approach to preparing streaming media, based on FFmpeg and Shaka Packager.
https://shaka-project.github.io/shaka-streamer/
Apache License 2.0

Final mpd, m3u8 files are not getting uploaded to s3 #67

Closed: Karthik-0 closed this issue 3 years ago

Karthik-0 commented 3 years ago

Once transcoding completes, the mpd and m3u8 files are not uploaded to S3. Instead, I get the following error:

XML API does not support gzip-encoded uploads.
XML API does not support gzip-encoded uploads.
XML API does not support gzip-encoded uploads.
XML API does not support gzip-encoded uploads.
CommandException: 4 files/objects could not be copied/removed.
Exception ignored in: <function NodeBase.__del__ at 0x7ff5153b9440>
Traceback (most recent call last):
  File "/Users/karthik/.virtualenvs/shaka-streamer/lib/python3.7/site-packages/streamer/node_base.py", line 51, in __del__
  File "/Users/karthik/.virtualenvs/shaka-streamer/lib/python3.7/site-packages/streamer/cloud_node.py", line 166, in stop
  File "/Users/karthik/.virtualenvs/shaka-streamer/lib/python3.7/site-packages/streamer/cloud_node.py", line 155, in _thread_single_pass
  File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/subprocess.py", line 347, in check_call
subprocess.CalledProcessError: Command '['gsutil', '-q', '-h', 'Cache-Control: no-store, no-transform', '-m', 'rsync', '-C', '-r', '-J', '/var/folders/_n/yh4n50b56mbfcylxthnvhqxc0000gq/T/shaka-live-_38aidcf/cloud', 's3://bucket_url/shaka_streamer/streamer/test_upload']' returned non-zero exit status 1.

When I checked further, I found that gsutil raises this error whenever the -J argument is provided.
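
To double-check, I reran the same command from the traceback via subprocess and toggled -J (the bucket and local paths below are placeholders, not my real ones). With -J the rsync fails with the gzip error above; without it, the upload completes.

import subprocess

args = [
    'gsutil', '-q',
    '-h', 'Cache-Control: no-store, no-transform',
    '-m', 'rsync', '-C', '-r',
    '-J',  # removing this flag makes the same rsync succeed against s3://
    '/tmp/shaka-test/cloud',
    's3://my-test-bucket/shaka_streamer/streamer/test_upload',
]
subprocess.check_call(args)  # raises CalledProcessError when -J is present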

joeyparrish commented 3 years ago

We have a plan to get rid of the gsutil-based cloud upload in favor of Shaka Packager's HTTP output support. To make this work well, we will need to set up local authentication proxies for GCS and S3 to refresh and add auth tokens to the requests. (I just realized I need to file an issue for that.)
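
For anyone curious what that might look like, here is a rough sketch, not the planned implementation: a tiny local proxy that injects a bearer token into PUT requests before forwarding them to GCS. get_access_token() is a hypothetical stand-in for a real token-refresh routine, and error handling is omitted.

import http.server
import urllib.request

UPSTREAM = 'https://storage.googleapis.com'  # this sketch forwards to GCS only

def get_access_token() -> str:
    # Hypothetical placeholder: a real proxy would refresh an OAuth token
    # (e.g. via google-auth) for GCS, or compute an AWS SigV4 signature
    # for the S3 case.
    return 'example-token'

class AuthProxyHandler(http.server.BaseHTTPRequestHandler):
    def do_PUT(self):
        # Read the uploaded segment and replay it upstream with auth added.
        length = int(self.headers.get('Content-Length', 0))
        body = self.rfile.read(length)
        request = urllib.request.Request(
            UPSTREAM + self.path, data=body, method='PUT')
        request.add_header('Authorization', 'Bearer ' + get_access_token())
        with urllib.request.urlopen(request) as response:
            self.send_response(response.status)
            self.end_headers()

http.server.HTTPServer(('localhost', 8080), AuthProxyHandler).serve_forever()

Packager's HTTP output would then be pointed at localhost:8080 instead of at the bucket directly.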

In the meantime, we can skip -J for s3:// URLs. It seems to still work for GCS.
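
Something along these lines where cloud_node.py assembles the upload command (a sketch only; upload_args and its parameters are made-up names, not the actual code):

from typing import List

def upload_args(temp_dir: str, bucket_url: str) -> List[str]:
    # Build the gsutil rsync command for the upload.
    args = ['gsutil', '-q',
            '-h', 'Cache-Control: no-store, no-transform',
            '-m', 'rsync', '-C', '-r']
    if not bucket_url.startswith('s3://'):
        # -J compresses files in transit, which helps with GCS but makes
        # S3's XML API reject the upload as gzip-encoded.
        args.append('-J')
    return args + [temp_dir, bucket_url]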

joeyparrish commented 3 years ago

(Tracking CloudNode deprecation now in #47)