yadayada / acd_cli

An unmaintained command line interface and FUSE filesystem for Amazon (Cloud) Drive

acd_cli amazon drive slow when streaming or directplay with plex #474

Closed amuteau closed 8 years ago

amuteau commented 8 years ago

Hello, I'm trying to play 1080p movies from a Linux server with an Amazon Cloud Drive mount. The movie freezes and I don't understand why; I have 100 Mb of bandwidth. When I copy a movie locally, everything works fine, but when I play the movie from the drive it's slow. Direct download (copying the file) = 90 Mb/s:

(screenshot: 2016-11-05 13:40:48)

Plex play = max 1-10 Mb/s (sometimes 0 b/s):

(screenshot: 2016-11-05 13:53:48)

I think this is because of the cache configuration of the Plex server. Can you help me?

EldonMcGuinness commented 8 years ago

While this seems very unlikely to have anything to do with acdcli, you could try to play the file in a media player app on your computer, directly from the cloud, and see what that yields. If it works fine, then it is likely a Plex issue; if not, you might be able to get more information from the media player app about what is happening, via a log or something similar.

I would also check your plex logs and head over to www.plex.tv and see if they can lend you a hand.

solracsf commented 8 years ago

I also use Amazon Drive with Plex, and sometimes I have the exact same problem... sometimes, with some big files.

Sometimes I can play any file without any issue. I have a 100/100 fiber connection at home and a 1 Gbps server.

Plex actually has a beta that also uses Amazon Drive, so I don't believe the problem comes from Amazon Drive; if it were a problem, they wouldn't launch a commercial product with those issues. See https://www.plex.tv/cloud/

See also https://support.plex.tv/hc/en-us/articles/201575036-Why-is-my-video-stream-buffering-

amuteau commented 8 years ago

I also think this is a Plex issue, something around the cache or the way Plex cuts a file up to stream it. I'm looking for a config to increase the size of these segments.

jxer commented 8 years ago

What program are you using to monitor bandwidth?

amuteau commented 8 years ago

I use iftop

amuteau commented 8 years ago

Follow the Plex discussion to see if there is a solution: https://forums.plex.tv/discussion/243820/acd-cli-amazon-drive-slow-when-streaming-or-directplay#latest

amuteau commented 8 years ago

I changed this in the configuration:

[read]
;maximal number of simultaneously opened chunks per file
open_chunk_limit = 100
;
;;sets the connection/idle timeout when creating or reading a chunk [seconds]
timeout = 10

[transfer]
;;sets the read/write chunk size for the local file system [bytes]
fs_chunk_size = 1310720
chunk_retries = 5
connection_timeout = 10
idle_timeout = 20

It seems to be better. Do you know why?
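A plausible reason, sketched with back-of-the-envelope arithmetic (the 8 GiB file size here is an assumption for illustration): a ten-times-larger fs_chunk_size means ten times fewer read calls per file, and every call carries fixed request overhead.

```python
# Rough arithmetic: how many chunk-sized reads it takes to cover a file.
# The 8 GiB file size is an assumed example; the two chunk sizes are the
# 128 KiB value seen in the debug logs and the fs_chunk_size set above.

def reads_needed(file_size: int, chunk_size: int) -> int:
    """Ceiling division: reads required to cover file_size bytes."""
    return -(-file_size // chunk_size)

movie = 8 * 1024 ** 3          # 8 GiB, a plausible 1080p remux

print(reads_needed(movie, 131072))    # default-sized reads: 65536
print(reads_needed(movie, 1310720))   # tuned fs_chunk_size:  6554
```

Each read that has to go out to Amazon pays connection and latency costs on top of the transfer itself, so an order of magnitude fewer reads can matter even when bandwidth is plentiful.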

ftruzzi commented 8 years ago

Have you tried transferring a file that plays slowly via (S)FTP? I have the same problem, and I noticed some files are slower than others, whether copying via FTP or streaming via Plex. The same thing happens using rclone + encfs. However, local read and copy is fast, even with on-the-fly decryption.

I'm trying different solutions, such as rclone + encfs (same results: some files transfer fast and others slow) or rclone + its 'crypt' functionality (still testing, will report back).

I have a 10 MB/s connection at home, so my expectation is to stream files from my VPS via ACD at about 8-10 MB/s. I don't see why it shouldn't be doable, since the VPS itself can read and write to ACD maxing out its connection!

amuteau commented 8 years ago

I don't think my problem is the connection. At home I have fiber (800 Mb download / 500 Mb upload), my VPS has a 100 Mb connection, and Amazon... I don't know, but enough.

Here is the architecture: [AMAZON DRIVE] > [VPS OVH] > [Home smart TV]

I can see dropped connections in the acd_cli logs:

16-11-07 11:45:35.674 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/F7iC_nEvR9y-40b1aXuSSQ/content"
16-11-07 11:45:35.680 [INFO] [requests.packages.urllib3.connectionpool] - Resetting dropped connection: content-na.drive.amazonaws.com
16-11-07 11:45:37.444 [INFO] [acdcli.api.backoff_req] - GET "https://content-na.drive.amazonaws.com/cdproxy/nodes/F7iC_nEvR9y-40b1aXuSSQ/content"
16-11-07 11:45:37.449 [INFO] [requests.packages.urllib3.connectionpool] - Resetting dropped connection: content-na.drive.amazonaws.com

So I agree that Plex makes a lot of requests, but I think acd_cli can't keep up with them.

Moreover, I can't configure fs_chunk_size in fuse.ini... the parameter has no effect:

16-11-07 13:14:18.032 [DEBUG] [acdcli.acd_fuse] - <- read 131072

solracsf commented 8 years ago

You should try with your computer. Why?

Device Limitations

Some apps can have other limitations that may affect playback. For instance, the Plex for Smart TVs and Plex for PlayStation app can experience memory-related constraints. If you try to play content at too high a bitrate, the app can run out of memory available to it, which can result either in crashing or in poor playback, which may manifest as buffering.

In these cases, lowering the video streaming quality in the app to the recommended values may help.

I have two Smart TVs: one LG, one Toshiba. My problems are on the Toshiba, not on the LG. My LG is more recent and has more available RAM. On the LG I can play Direct Stream, not on the Toshiba. Both are wired, not wireless.

amuteau commented 8 years ago

OK, but my smart TV is able to play a movie when the file is stored on my Plex server. The same file on the mounted Amazon drive freezes.


notDavid commented 8 years ago

This is just a guess, but I think this problem might be related to whether a particular .mkv file is web-optimized for streaming...

Look at the section where it says:

All this seeking around trying to find the moov wastes time and bandwidth.
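For background on what "web optimized" means (a sketch with synthetic data, not acd_cli code): in MP4-family containers, the moov box holds the index a player needs before it can start, so a streaming-friendly file puts moov before the mdat media data. The checker below parses only top-level box headers; the two sample "files" are hand-built stand-ins, not real movies.

```python
import struct

def top_level_boxes(data: bytes):
    """Yield the 4-char type of each top-level MP4 box, in file order."""
    pos = 0
    while pos + 8 <= len(data):
        size, box_type = struct.unpack_from(">I4s", data, pos)
        yield box_type.decode("ascii")
        if size < 8:      # sizes 0/1 mean "to EOF"/64-bit; not handled here
            break
        pos += size

def is_faststart(data: bytes) -> bool:
    """True if moov precedes mdat, i.e. the file is web-optimized."""
    order = list(top_level_boxes(data))
    return ("moov" in order and "mdat" in order
            and order.index("moov") < order.index("mdat"))

def box(box_type: bytes, payload: bytes) -> bytes:
    """Build a minimal MP4 box: 4-byte big-endian size + type + payload."""
    return struct.pack(">I", 8 + len(payload)) + box_type + payload

# Synthetic stand-ins: same boxes, different order.
optimized = box(b"ftyp", b"isom") + box(b"moov", b"\x00" * 16) + box(b"mdat", b"\x00" * 32)
unoptimized = box(b"ftyp", b"isom") + box(b"mdat", b"\x00" * 32) + box(b"moov", b"\x00" * 16)

print(is_faststart(optimized))    # True
print(is_faststart(unoptimized))  # False
```

With moov at the end, a player (or Plex) must first seek to the far end of the file before playback can begin, which over a high-latency FUSE mount is exactly the expensive pattern described above.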

amuteau commented 8 years ago

Maybe this is a FUSE issue. I'm trying to find a solution, but how can I control the FUSE configuration through acd_cli?

ftruzzi commented 8 years ago

Any updates on this? I'm having pretty much the same issue.

amuteau commented 8 years ago

You can try my configuration. It seems to work better: at the beginning the movie is slow to start and there are one or two freezes, but after 3 minutes the movie plays without any freeze until the end.

[root@ns3287942 ~]# cat  .config/acd_cli/fuse.ini 
[read]
;maximal number of simultaneously opened chunks per file
open_chunk_limit = 500
;
;;sets the connection/idle timeout when creating or reading a chunk [seconds]
timeout = 60
;
;[write]
;;number of buffered chunks in the write queue
;;the size of the chunks may vary (e.g. 512B, 4KB, or 128KB)
;buffer_size = 32
;
;;sets the timeout for putting a chunk into the queue [seconds]
;timeout = 20
;
[transfer]
;;sets the read/write chunk size for the local file system [bytes]
fs_chunk_size = 13107200
chunk_retries = 4
connection_timeout = 30
idle_timeout = 60

Let me know if it works for you.

jxer commented 8 years ago

I see the same as well: crap performance.

ftruzzi commented 8 years ago

For me it depends on the file: some stream at maximum speed and some are stuck at 300-500 KB/s.

However, a slow file can stream fast a few minutes later, or vice versa.

notDavid commented 8 years ago

I don't have any problems at all. The solution is simple: (re-)encode your files to mp4 with the web-optimized flag. The video is then optimized for streaming over the web...

jxer commented 8 years ago

It has nothing to do with the file. I can play one stream all day long, no matter the file, but if I fire up 2-3 streams it goes to hell; even using FTP off the mount causes it to back up from time to time.

ftruzzi commented 8 years ago

Yeah, some files are also slow over FTP, sometimes even while copying (downloading, not streaming). I can't really get my head around this; it's quite unpredictable behavior.

jxer commented 8 years ago

It's because there is some big latency on the ACD side. There is a difference between starting one big transfer and randomly accessing files; during full rclone sessions I have seen my bandwidth drop from 460 Mbit to 20 Mbit for 10-15 seconds.

I'm not sure acdcli is the problem, but good luck figuring it out; watching I/O stats, it does seem like it's waiting a long time.

jxer commented 8 years ago

I'm starting to believe this is simply congestion on the way to and from ACD; performance this morning is through the roof.

karbowiak commented 8 years ago

If acd_cli downloaded bigger chunks (say 8 MB) from Amazon, and then pre-buffered another 5-10 of those, we wouldn't be having any of these issues.

The issue is simply that it obeys FUSE and requests 128 KB chunks from Amazon. I asked about this in #452 :(

amuteau commented 8 years ago

I agree, extending the chunk size should be the solution.

charlymr commented 8 years ago

Guys, no matter what you do, I don't think it will get much better. I am using a different approach: I buffer what I really want to watch locally and leave the "archive" in the cloud. I know it is still accessible, but I will eventually sync it locally, as the streaming will not be performant. I have a very good internet connection.

The explanation I am seeing is the following: ACD is built on top of Amazon infrastructure (S3), and I believe they move files that are not used often onto a slower tier (Glacier). So whether your file is on the high-demand, medium-demand, or low-demand part of the infrastructure at the time you access it will make a big difference in performance.

So let's say you upload a 1080p movie today. Within the next 3-4 days it will probably stay in the high-demand tier, then move into medium and low demand. If you start accessing a file, it will move back up the chain; the more you access it over time, the more it will move, etc. But if you don't access it for a while, it will simply go to low demand, and performance will be the worst you can get the next time you try to access the file.

Another interesting thing is that they probably give priority depending on the type of file. I am pretty sure they will not give priority to MKV, as the service is not really intended to be used that way (otherwise they would not have the 20-minute media limitation). Then you will probably have encrypted files, with no type, so raw data; here I would say you just get standard performance. The highest priority is probably given to photos, since that is what they intended the service to be.

Anyway, that being said, it is pretty cool to have unlimited storage. But bear in mind that you do need to accept some limits. I mean, you cannot expect a high-demand unlimited cloud service for that price; otherwise all the others would already have closed down.

charlymr commented 8 years ago

And yeah, of course congestion will make a big difference. They cannot provide high-demand service to everyone at the same time, so depending on the time of day you will also see differences in performance.

msh100 commented 8 years ago

The explanation I am seeing is the following: ACD is built on top of Amazon infrastructure (S3), and I believe they move files that are not used often onto a slower tier (Glacier). So whether your file is on the high-demand, medium-demand, or low-demand part of the infrastructure at the time you access it will make a big difference in performance.

Almost definitely not the case. Files are accessible instantly (not a trait of Glacier). Chances are the contention is at the level of the proxies passing files from S3 to the client.

charlymr commented 8 years ago

OK, so maybe not Glacier, agreed. I used to have files where it was taking 2-3 hours when requesting access.

solracsf commented 8 years ago

Also, don't forget that ACD claims that:

3.2 Usage Restrictions and Limits. The Services are offered in the United States. We may restrict access from other locations. There may be limits on the types of content you can store and share using the Services, such as file types we do not support, and on the number or type of devices you can use to access the Services. We may impose other restrictions on use of the Services.

karbowiak commented 8 years ago

Sadly, the issue is the download chunk size, and the fact that it doesn't automatically download some number of chunks ahead of the current one; instead, FUSE requests chunks that are 128 KB in size, and it requests them only as the machine needs them.

With a bigger download chunk size, an LRU cache that stored downloaded chunks for later reuse in case they need to be accessed again, and the ability to download X chunks ahead of the current chunk, we'd not have any of these issues.

Sadly, we do not have bigger download chunks, it does not download any chunks ahead of the current chunk, and we do not have a read cache, so we do have these issues.

This makes me sad :( And I can't write Python, otherwise I would've dedicated time to writing it.
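The wish list above (bigger chunks, an LRU cache, readahead) can be sketched in a few lines of Python. This is a hypothetical design, not acd_cli's actual code; the fetch callback, chunk size, capacity, and readahead depth are all invented parameters.

```python
from collections import OrderedDict

class ChunkCache:
    """LRU cache of fixed-size chunks with simple readahead (a sketch).

    `fetch(offset, size)` is a caller-supplied callback that pulls bytes
    from the remote store; everything else here is hypothetical.
    """

    def __init__(self, fetch, chunk_size=8 * 1024 * 1024, capacity=10, readahead=2):
        self.fetch = fetch
        self.chunk_size = chunk_size
        self.capacity = capacity
        self.readahead = readahead
        self.chunks = OrderedDict()   # chunk index -> bytes, in LRU order

    def _load(self, index):
        if index in self.chunks:
            self.chunks.move_to_end(index)        # mark as recently used
        else:
            self.chunks[index] = self.fetch(index * self.chunk_size, self.chunk_size)
            while len(self.chunks) > self.capacity:
                self.chunks.popitem(last=False)   # evict least recently used
        return self.chunks[index]

    def read(self, offset, size):
        """Serve a small read from cached chunks, prefetching ahead."""
        first = offset // self.chunk_size
        last = (offset + size - 1) // self.chunk_size
        data = b"".join(self._load(i) for i in range(first, last + 1))
        for i in range(last + 1, last + 1 + self.readahead):
            self._load(i)                         # warm the cache ahead
        start = offset - first * self.chunk_size
        return data[start:start + size]

# Usage with a fake 8 MiB "remote" file and a fetch that counts requests:
remote = bytes(range(256)) * (8 * 1024 * 1024 // 256)
calls = []
def fetch(offset, size):
    calls.append(offset)
    return remote[offset:offset + size]

cache = ChunkCache(fetch, chunk_size=1024 * 1024)
cache.read(0, 128 * 1024)            # fetches chunk 0 plus 2 readahead chunks
cache.read(128 * 1024, 128 * 1024)   # served entirely from cache
```

The second 128 KB read costs no remote request at all, which is exactly the behavior missing from the plain FUSE path described above.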

charlymr commented 8 years ago

It seems to have changed back in May: https://github.com/osxfuse/osxfuse/commit/92d437837b6adf497bc9825dc5a4f17ce0b57f86

(On OSX)

ftruzzi commented 8 years ago

Streaming from the Plex web interface seems to improve the situation for me. The audio is transcoded but the video is directly streamed.

Try playing large files from the web interface. I guess that the transcoder acts as some kind of read cache and makes things better.

I've also opened a thread on the Plex forums: https://forums.plex.tv/discussion/245218/audio-only-transcodes-noticeably-faster-than-direct-play/p1?new=1

solracsf commented 8 years ago

@ftruzzi I can see the exact same behavior on my side. Sometimes the exact same file that freezes in the Win10 Plex app plays just fine on https://app.plex.tv/web/app

Also, scanning files helps a lot, as it helps Plex decode them better: parameter -a on https://support.plex.tv/hc/en-us/articles/201242707-Plex-Media-Scanner-via-Command-Line

ftruzzi commented 8 years ago

No improvements from my side. I'll try to recompile the kernel with different fuse read settings. Hope to have some better results by the end of the week.

See https://sourceforge.net/p/fuse/mailman/message/21762717/

EldonMcGuinness commented 8 years ago

@ftruzzi If the web UI plays fine, then how could it be a fuse/acdcli issue? Wouldn't it be using the same interface to read the files, that being acdcli?

@karbowiak With regards to your previous comment, wouldn't the readahead be a function of the client? If Plex is to read ahead of the currently needed chunk, that would not be controlled by acd or FUSE, unless one of them somehow specifically prevents it. Instead, it would be up to Plex to ask for the desired chunks ahead of time. I know this is how Kodi/XBMC handles buffering issues, by asking for packets in advance.

karbowiak commented 8 years ago

@EldonMcGuinness You are indeed correct, but seeing as Plex isn't likely to change anything for anyone who isn't working there (and even then, probably not), we're pretty much the ones who have to work around the problem.

A different thing is, though, that 128 KB is a tiny, tiny, tiny block, and when you add HTTPS overhead on top, and so forth, downloading the actual data is going to be a very small part of the total time. You're going to be spending far more time opening, negotiating, and requesting data. A bigger block size would fix that, meaning we could spend more time going at ludicrous speeds, rather than less :P

yadayada commented 8 years ago

A different thing is, though, that 128 KB is a tiny, tiny, tiny block, and when you add HTTPS overhead on top, and so forth, downloading the actual data is going to be a very small part of the total time. You're going to be spending far more time opening, negotiating, and requesting data.

The block size of 128KB (or whatever it may be) does not influence the size of the requested chunk, which is preset to 500MB. Data of that 500MB chunk is then read sequentially, beginning at the requested offset, as needed.

Sequential reading will be fast, reading at random offsets will be punished with high latency.
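The penalty yadayada describes can be modeled with a toy reader (an illustration, not acd_cli's implementation): one open request keeps serving as long as each read continues where the previous one ended, and any other offset forces a fresh request, which is where the latency comes from.

```python
class SequentialReader:
    """Toy model of a chunked remote reader: sequential reads reuse one
    open request; a read at any other offset pays for a new request."""

    def __init__(self, data: bytes):
        self.data = data
        self.pos = None        # offset the open stream has reached, if any
        self.requests = 0      # how many remote requests we had to open

    def read(self, offset: int, size: int) -> bytes:
        if self.pos != offset:          # non-sequential: reopen at offset
            self.requests += 1
            self.pos = offset
        chunk = self.data[offset:offset + size]
        self.pos += len(chunk)
        return chunk

remote = bytes(1000)

seq = SequentialReader(remote)
for off in range(0, 1000, 100):         # stream the file front to back
    seq.read(off, 100)

rnd = SequentialReader(remote)
for off in (0, 500, 100, 900, 200):     # seek around, as a player indexing does
    rnd.read(off, 100)

print(seq.requests)   # 1  (one request serves the whole sequential pass)
print(rnd.requests)   # 5  (every seek opens a new request)
```

With per-request latency to Amazon on the order of a second (as the "Resetting dropped connection" log above suggests), five seeks cost seconds while the sequential pass costs one round trip.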

karbowiak commented 8 years ago

The block size of 128KB (or whatever it may be) does not influence the size of the requested chunk, which is preset to 500MB. Data of that 500MB chunk is then read sequentially, beginning at the requested offset, as needed. Sequential reading will be fast, reading at random offsets will be punished with high latency.

From everything I've seen, it only does that with the download (dl) function, not with the FUSE mount.

That said, one way to speed up subsequent accesses would be to cache those blocks of data locally and check whether they're already on the system before downloading (basically a read cache). It would speed up library updating in Plex/Emby/whatever immensely :)

bdeluca commented 8 years ago

So I think the number of 128 KB read requests that can be in flight is limited by the max_background setting in FUSE; it's set to 12 by default. I increased this dramatically (to 1024) and now I can download much faster.

If I upload and download at the same time, it goes really slow again; the upload seems to be consuming all of max_background?

I wonder, though, if this is related to peak time on the Amazon servers; it is very late.

I need to debug more.

karbowiak commented 8 years ago

@bdeluca where did you set this max_background option? /etc/fuse.conf ? or somewhere else? :)

bdeluca commented 8 years ago

/etc/fuse.conf

ftruzzi commented 8 years ago

Recompiling the kernel as linked in my previous comment considerably improved streaming speeds (no more slow files; acd_cli started requesting 1 MB chunks from the FUSE mount).

However, a couple of days ago I had very slow performance (downloads at about 50-60KB/s) but that might have been an Amazon issue since speeds were the same when downloading from the web interface.

I haven't done enough testing for the moment, and I don't have much free time. Recompiling the kernel looks promising though.

bdeluca commented 8 years ago

@ftruzzi do you have your kernel patch?

ftruzzi commented 8 years ago

I have no kernel patches, but I only changed these two lines:

#define VM_MAX_READAHEAD 1024
#define FUSE_MAX_PAGES_PER_REQ 256

in the files include/linux/mm.h and fs/fuse/fuse_i.h, respectively.

I've tried with higher values (8M and 128M) and speeds were slow, but it was at the same time of that probable Amazon slowness I mentioned...

dellipse commented 8 years ago

Rather than making kernel patches, I tried to find a way to make Mr. Truzzi's changes with just configuration files. After some searching, I found that on Ubuntu 16.04 I could make the following changes to /etc/sysctl.conf and accomplish the same thing:

# This percentage value controls the tendency of the kernel to reclaim
# the memory which is used for caching of directory and inode objects.
#
# At the default value of vfs_cache_pressure=100 the kernel will attempt to
# reclaim dentries and inodes at a "fair" rate with respect to pagecache and
# swapcache reclaim. Decreasing vfs_cache_pressure causes the kernel to prefer
# to retain dentry and inode caches.
#
# Basically, once something is swapped it's probably not immediately needed
# any more, so increase the kernel's resistance a bit more trying to build up
# a large cache again.
vm.vfs_cache_pressure = 75

# Good to improve sequential reads (stop stuttering background music)
# Can also be implemented per disk using udev rules
# in KB, so only 9MB lost, so who cares nowadays?
vm.max-readahead=8192
vm.min-readahead=1024

Now I am getting sustained download speeds through the FUSE side of over 21 MB/s. That is an increase of over 10x the speed I was getting before on the same file. I have not tested uploads, since I use another tool to upload to ACD, but this is very promising.


jxer commented 8 years ago

Did you add all these settings? I have this file on Debian, but none of those settings are there.

bdeluca commented 8 years ago

Which Ubuntu are you using? 16.04 doesn't have those sysctls.


karbowiak commented 8 years ago

@dellipse not available in 16.10 with kernel 4.8

dellipse commented 8 years ago

I added those settings at the bottom of that file. They are not there "out of the box".

I am running Ubuntu 16.04.1 LTS with kernel 4.4.0-51-generic.

MK, are you sure this file does not work/exist in 16.10? It should, but if not, I am sure there is a replacement for it somewhere in systemd. (I do NOT want to get into that discussion; it is at least as bad as debating politics and religion, maybe as bad as both combined!)

karbowiak commented 8 years ago

@dellipse

I think you're misunderstanding me: /etc/sysctl.conf IS available, but the vm.max-readahead and vm.min-readahead settings aren't.

They're simply not available in the kernel that ships with 16.10 (which is kernel 4.8). I'll see if it's something that has to be enabled at compile time or something like that, but for now it seems I'm shit out of luck :P