astrada / google-drive-ocamlfuse

FUSE filesystem over Google Drive
https://astrada.github.io/google-drive-ocamlfuse/
MIT License

Good tool to use with plex? #247

Open Torsten85 opened 7 years ago

Torsten85 commented 7 years ago

Hey there, I've just stumbled upon this repository and it looks promising. I'm currently looking for a good way to use my Google Drive with Plex... the alternative (rclone) has a problem with the "downloadQuotaExceeded" error, which is some kind of Google abuse-detection "feature". Plex probes a lot of files at a lot of locations on a library scan, and that will trigger the abuse detection.

Does this tool somehow work around this error, or will it also get my Google account locked?

Thanks in advance

astrada commented 7 years ago

Sorry, but I don't use Plex, so I don't really have an answer. Personally, I never experienced that error during testing.

natoriousbigg commented 7 years ago

I've been testing GDO for about 24 hours now with Plex, did a few library scans, and haven't gotten banned yet. I had to make some changes to the config file and the mount settings to make Plex streaming stable.

Torsten85 commented 7 years ago

@scoopydude2002 Did you use any special settings besides "allow_other" and "stream large files"? What is your transfer speed? When I try to copy a single file from the mount to the local hard drive, I only get 3-5 MB/s, which is too low for high-bitrate movies.

natoriousbigg commented 7 years ago

@Torsten85 I'm getting about 10 MB/s transfer from the mount with these settings (with 2 video streams running in the background). Sometimes it's a lot higher due to memory caching. I don't know exactly which settings helped, but this is what's working for me right now. I'm about to play an 80 Mbps video without buffering.

Config file. Customized settings are marked with ##:

apps_script_format=json
apps_script_icon=
async_upload=true
cache_directory=/tmp/gdrive-cache
client_id=
client_secret=
connect_timeout_ms=5000
curl_debug_off=false
data_directory=
delete_forever_in_trash_folder=false
docs_file_extension=true
document_format=odt
document_icon=
download_docs=false
drawing_format=png
drawing_icon=
form_format=zip
form_icon=
fusion_table_format=desktop
fusion_table_icon=
keep_duplicates=false
large_file_read_only=true ##
large_file_threshold_mb=16
log_directory=
lost_and_found=false
low_speed_limit=0
low_speed_time=0
map_format=desktop
map_icon=
max_cache_size_mb=512000 ## 500GB
max_download_speed=0
max_memory_cache_size=10737418240 ## 10GB
max_retries=8
max_upload_chunk_size=1099511627776
max_upload_speed=0
memory_buffer_size=8388608 ## 8MB
metadata_cache_time=60
presentation_format=pdf
presentation_icon=
read_ahead_buffers=5 ##
read_only=true
shared_with_me=false
spreadsheet_format=ods
spreadsheet_icon=
sqlite3_busy_timeout=5000
stream_large_files=true ##
umask=0o002
verification_code=
xdg_base_directory=false
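
In case it helps: this config file lives at ~/.gdfuse/default/config by default (~/.gdfuse/LABEL/config when mounting with -label), and the filesystem has to be remounted for config changes to take effect, e.g.:

fusermount -u /mnt/gdrive
google-drive-ocamlfuse -o allow_other /mnt/gdrive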

google-drive-ocamlfuse mount settings:

google-drive-ocamlfuse -o allow_other,ro,atomic_o_trunc,large_read,default_permissions /mnt/gdrive

unionfs mount:

unionfs-fuse -o cow,allow_other,direct_io,atomic_o_trunc,large_read,default_permissions /mnt/local=rw:/mnt/gdrive=ro /mnt/union

Torsten85 commented 7 years ago

Thanks @scoopydude2002, that helps a lot. I'll try your config tonight.

I see you're using unionfs for local storage. Do you have some scripts (rclone?) in place to automatically upload your local folder to Google Drive?

Sorry if this gets a little bit off-topic.

natoriousbigg commented 7 years ago

I do use rclone to upload; I can post it in a bit. I did a full library scan with Plex and got 42,000 API hits (mostly drive.files.get) without getting banned so far:

(screenshot: API usage graph, 2017-01-30)

Torsten85 commented 7 years ago

@scoopydude2002 that's great info. May I ask how big your media library is?

natoriousbigg commented 7 years ago

My library is only 20TB right now. Anyway, here's my upload script. Make sure you configure it for your setup:

#!/bin/bash

today=`date '+%m-%d-%Y'`;

RCLONE_BIN="rclone"

LOCALDIR="/mnt/local"
LOCALDIR_TV="tv"
LOCALDIR_MOVIES="movies"

GDRIVE_RCLONE_REMOTE="gdrive"

# copy uploads a duplicate (the local file is kept as a cache);
# move deletes the local copy after a successful upload.
TV_UPLOAD_OPTIONS="--no-traverse --acd-upload-wait-per-gb 90s --bwlimit 20M --delete-after --verbose"
TV_MOVE_MIN_AGE="4w"
TV_COPY_MIN_AGE="1w"
MOVIES_UPLOAD_OPTIONS="--no-traverse --acd-upload-wait-per-gb 90s --bwlimit 20M --delete-after --verbose"
MOVIES_MOVE_MIN_AGE="52w"
MOVIES_COPY_MIN_AGE="1w"

echo "Starting TV move."
${RCLONE_BIN} move $TV_UPLOAD_OPTIONS --min-age $TV_MOVE_MIN_AGE --log-file="$(dirname $0)/${today}-tv_move.log" "${LOCALDIR}/$LOCALDIR_TV" "${GDRIVE_RCLONE_REMOTE}:$LOCALDIR_TV"

echo "Starting TV copy."
${RCLONE_BIN} copy $TV_UPLOAD_OPTIONS --min-age $TV_COPY_MIN_AGE --log-file="$(dirname $0)/${today}-tv_copy.log" "${LOCALDIR}/$LOCALDIR_TV" "${GDRIVE_RCLONE_REMOTE}:$LOCALDIR_TV"

echo "Starting Movies move."
${RCLONE_BIN} move $MOVIES_UPLOAD_OPTIONS --min-age $MOVIES_MOVE_MIN_AGE --log-file="$(dirname $0)/${today}-movies_move.log" "${LOCALDIR}/$LOCALDIR_MOVIES" "${GDRIVE_RCLONE_REMOTE}:$LOCALDIR_MOVIES"

echo "Starting Movies copy."
${RCLONE_BIN} copy $MOVIES_UPLOAD_OPTIONS --min-age $MOVIES_COPY_MIN_AGE --log-file="$(dirname $0)/${today}-movies_copy.log" "${LOCALDIR}/$LOCALDIR_MOVIES" "${GDRIVE_RCLONE_REMOTE}:$LOCALDIR_MOVIES"

echo "Upload Complete."

The "--acd-upload-wait-per-gb" option is only needed if you're uploading to ACD. If you're using GDrive it'll just be ignored.

Torsten85 commented 7 years ago

@scoopydude2002 Thanks. One question remains concerning your upload script: Why do you copy and move? Shouldn't just moving be enough?

natoriousbigg commented 7 years ago

@Torsten85 Notice the min-age values: I copy files once they're older than 1 week, and then move them once they're much older (4 weeks for TV, 52 weeks for movies). I want to keep files locally longer as a cache, but I don't want to risk losing them if the HD crashes.

Torsten85 commented 7 years ago

@scoopydude2002 ah ok. Thanks for your support! My setup is now pretty much like yours and it's working great most of the time. But approx. once a day, google-drive-ocamlfuse suddenly stops. The logs (curl.log & gdfuse.log) are both empty... did you experience the same?

natoriousbigg commented 7 years ago

@Torsten85 I'm getting frequent mount drops as well. You'll have to turn on debug mode ("-debug") to enable logging. I have it on, but I also have a script that checks the mount and remounts it, and whenever this happens the log gets overwritten. I wonder if it's related to issue #236 and the memory cache setting that we're using. I'll post it as a new issue if I can catch it in the log.

natoriousbigg commented 7 years ago

@Torsten85 I'm not sure what your setup is, but I actually use unionfs to stack my local drive on top of gdrive and then on top of ACD (unionfs-fuse /mnt/local=rw:/mnt/gdrive=ro:/mnt/acd=ro). With the above unionfs settings, whenever the google-drive-ocamlfuse mount drops, the ACD mount takes over as a failover mechanism. I didn't realize this was possible; it's just something I noticed and found useful. The full command looks roughly like the sketch below.
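
A hedged reconstruction of that three-branch command, reusing the options from the earlier unionfs-fuse example (the /mnt/union mountpoint is an assumption, not the commenter's actual path):

unionfs-fuse -o cow,allow_other,direct_io,atomic_o_trunc,large_read,default_permissions /mnt/local=rw:/mnt/gdrive=ro:/mnt/acd=ro /mnt/union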

Torsten85 commented 7 years ago

@scoopydude2002 you could simply adjust your remount script to do something like this before remounting:

cat curl.log >> curl.combined.log
cat gdfuse.log >> gdfuse.combined.log

That would keep all logs concatenated in one file... or am I missing something here?

stickenhoffen commented 7 years ago

I'm pretty sure the issue with mounts dropping is the multithreadness.

With a single thread it's smooth sailing, but of course it doesn't perform the same.

stickenhoffen commented 7 years ago

I think I just invented a new word.

Torsten85 commented 7 years ago

@scoopydude2002 would you share your cronjob script that checks if the mount still exists and remounts it if necessary? Would help a lot.

jmoriau commented 7 years ago

@scoopydude2002 I'm experiencing the same thing: random mount drops with "Transport endpoint is not connected".

natoriousbigg commented 7 years ago

@Torsten85 I can post it in a bit, but it's a complicated set of scripts. It'll only work if you have the same setup: local drive mounted over the gdrive mount over ACD. You can use acdcli or rclone to mount ACD. But I use rclone crypt, not encfs.

@jmoriau The mount does drop about once a day.

jmoriau commented 7 years ago

@scoopydude2002 yeah same, I added a cronjob to unmount/remount every morning.
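
A minimal sketch of such a cron entry; the schedule, mountpoint, and mount options here are assumptions, not jmoriau's actual job:

# lazily unmount (even if stale), then remount read-only
0 6 * * * fusermount -uz /mnt/gdrive; google-drive-ocamlfuse -o allow_other,ro /mnt/gdrive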

Tangoes commented 7 years ago

I recently installed it on my headless server, and with FUSE I get 30 Mbit/s speeds... very bad for Plex!

dany20mh commented 7 years ago

@Torsten85 @scoopydude2002 @jmoriau I have a problem with the mount drops too; it happens about every 1-2 days, and I still don't know the cause. I'd appreciate it if you guys shared how you handle the mount dropping and remounting.

Because when mine drops, the files aren't there, but the system still says the drive is mounted until I unmount it.

I use this code, but it's not working properly when the mount drops.

LOCALMOUNTPOINT="/Users/danial/mtng"

if mount | grep "on $LOCALMOUNTPOINT" > /dev/null; then
    echo "mounted"
else
    echo "not mounted"
fi

So I'd appreciate it if you guys could help me too.
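
One reason that check keeps passing after a drop: a dead FUSE mount usually still shows up in the mount table, so grepping the output of mount isn't enough. A minimal sketch of a more robust check that probes the mountpoint with a real filesystem call (the remount command at the end is an assumption based on this thread):

#!/bin/bash

LOCALMOUNTPOINT="/Users/danial/mtng"

# ls fails with "Transport endpoint is not connected" on a stale FUSE
# mount, even though mount(8) still lists it as mounted.
if ! ls "$LOCALMOUNTPOINT" > /dev/null 2>&1; then
    echo "mount is stale or missing, remounting"
    # fusermount exists on Linux; fall back to plain umount on macOS
    fusermount -uz "$LOCALMOUNTPOINT" 2>/dev/null || umount "$LOCALMOUNTPOINT" 2>/dev/null
    google-drive-ocamlfuse -o allow_other "$LOCALMOUNTPOINT"
fi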

ghost commented 7 years ago

I've been having a problem: I've been trying to get my PMS on a VPS to connect to Google Drive, but Plex won't see any subfolders of the mount, even though I know they are there thanks to ls. I'm sure it's a permission problem, but I have been unable to figure out how to get it working.

If anyone could help this poor Linux noob, I would greatly appreciate it.

dany20mh commented 7 years ago

@SuperGaco Make sure you are using allow_other
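
For non-root mounts, allow_other also requires user_allow_other to be enabled in /etc/fuse.conf (a general FUSE requirement, not specific to this tool), e.g.:

google-drive-ocamlfuse -o allow_other /mnt/gdrive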

ghost commented 7 years ago

@dany20mh that was it, thanks :D I feel like an idiot now.

dany20mh commented 7 years ago

@SuperGaco It happens to all of us, glad I could help.

ne0ark commented 7 years ago

Isn't large_read deprecated in newer kernels? Also, you might want to try auto_cache; not sure if it works with ocamlfuse (@astrada)? Anyone tried the direct_io mount option?

   auto_cache
          This option enables automatic flushing of the data cache on open(2). The cache will only be flushed if the modification time or the size of the file has changed.
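
For reference, passing it would look like the line below (untested with google-drive-ocamlfuse, as astrada notes further down; the mountpoint is assumed):

google-drive-ocamlfuse -o allow_other,auto_cache /mnt/gdrive
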
dany20mh commented 7 years ago

@ne0ark I never used any of those options, what are they exactly?

astrada commented 7 years ago

Isn't large_read deprecated in newer kernels?

Yes, it should now be a no-op.

Also, you might want to try auto_cache; not sure if it works with ocamlfuse (@astrada)?

It should work, but I never tried.

Anyone tried the direct_io mount option?

Not really. I tested it when I was trying to avoid downloading Google Docs during readdir (because the file size of a Google Doc is not known until I have the exported file), but it gave me problems with LibreOffice (it could not determine the correct file size).

ml3000 commented 7 years ago

Why do you mount as read only?

Animosity022 commented 7 years ago

So, are you getting the problem while using a unionfs type mount in addition?

I've been running solid for a number of days with plex scans, playing, analyzing and it's been working like a charm.

Last night, I was being bold and was going to add my mount using a unionfs to make this a bit smoother and less hands-on. As soon as I added that, I got multiple instances where my mount got that error and I had to restart the mount. I was tired, so I didn't end up saving the logs, but once I removed it, I went to bed and everything scanned superbly.

I had ~90k API calls over the last 5 hours and no ban that I can tell, with everything working well.

I'm basically doing this first:

# Mount GD Fuse
/home/felix/.opam/system/bin/google-drive-ocamlfuse /GD -o allow_other

# Mount the 3 Directories via rclone for the encrypt
/usr/bin/rclone mount \
--allow-other \
--read-only \
--default-permissions \
--uid 1000 \
--gid 1000 \
--umask 002 \
--acd-templink-threshold 0 \
--buffer-size 100M \
--timeout 5s \
--contimeout 5s \
--syslog \
--stats 1m \
-v \
media: /media &

as my data is encrypted.

and I was trying to do something like this:

unionfs -o allow_other /movies=RW:/media/Movies=RO /Movies/

/movies is my local RW branch, and the RO branch is my decrypted rclone mount of GD.

I'm super hesitant to touch anything, as we watched a number of movies and TV shows over the last few days with superb streaming, no buffering, and things just working flawlessly without the unionfs tossed in the mix.

tjustice86 commented 7 years ago

@scoopydude2002 Is there a how-to link for setting up my VPS like you have yours? I'm running into issues and I'm not too Linux-savvy.

GottZ commented 6 years ago

@natoriousbigg how do you get around sudden unmounts of ocaml-fuse? I've found it to be quite unstable in the long run. It doesn't even try to reconnect in such a state, but lags the whole system on each filesystem access... tbh, right now I'm even creating my own Google Drive FUSE wrapper to get around that, but I'd like to avoid spending too much time on it.

Fun fact: I even have Plex Cloud... it's horrible. Don't use it.

mjr0483 commented 5 years ago

What is 500GB in reference to here? I only have a 128GB SSD, and this value seems way off: max_cache_size_mb=512000 ## 500GB

hitech95 commented 4 years ago

Does this mean it will use up to 10GB of RAM?

max_memory_cache_size=10737418240 ## 10GB
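
For what it's worth, the arithmetic behind those two comments (a hedged reading based on the setting names, not an authoritative answer): the first value is in megabytes and bounds the on-disk cache under cache_directory, while the second is in bytes and bounds the in-memory cache, so both can be lowered to fit smaller hardware.

512000 MB / 1024        = 500 GiB  (on-disk cache)
10737418240 B / 1024^3  = 10 GiB   (RAM)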