madslundt / cloud-media-scripts

Upload and stream media from the cloud, with or without encryption. Cache all new and recently streamed media locally for quick access and fewer API calls.
MIT License

These scripts are created to keep your media synced between your cloud and local storage. All media is always encrypted before being uploaded, which also means that if you lose your encryption keys you cannot read your media.

Plexdrive version 4.0.0 and Rclone version 1.39 are used.

The config is currently set up for at least 1 TB of caching space and a decent internet connection. If you have a smaller drive or just want to optimize it, see the Optimize configuration section below.

Cloud-media-scripts is also included in a Docker image. Check it out here.

Easy install

git, curl and bash are needed to run the easy install.

sudo apt-get install git-core curl bash -y

Now run:

bash <( curl -Ls https://github.com/madslundt/cloud-media-scripts/raw/master/INSTALL )

By default this will place cloud-media-scripts in the directory ./cloud-media-scripts. An extra argument can be added to change this:

bash <( curl -Ls https://github.com/madslundt/cloud-media-scripts/raw/master/INSTALL ) [PATH]

This has only been tested on Ubuntu 16.04+. Please create an issue if you have any problems.

Upgrading

If you want to make sure you are using the latest version of cloud-media-scripts, or need to update it, just go into your cloud-media-scripts root folder and run INSTALL again: bash INSTALL. This will ask whether you want to upgrade and will NOT modify or change your config.

Your config, cloud folders, local folders, rclone config and plexdrive will NOT be overwritten.

Content

How does this work?

The following services are used to sync, encrypt/decrypt and mount media: Plexdrive, Rclone and UnionFS/MergerFS (each is described below).

This gives us a total of 5 directories: cloud_encrypt_dir, cloud_decrypt_dir, local_decrypt_dir, local_media_dir and plexdrive_temp_dir.

Cloud data is mounted to a local folder (cloud_encrypt_dir). This folder is then decrypted and mounted to a local folder (cloud_decrypt_dir).

A local folder (local_decrypt_dir) is created to contain media stored locally. The local folder (local_decrypt_dir) and the cloud folder (cloud_decrypt_dir) are then mounted together into a third folder (local_media_dir) with different permissions: the local folder is mounted Read/Write, while the cloud folder is mounted Read-only.

Every time new media is retrieved it should be added to local_media_dir or local_decrypt_dir. Data added to local_media_dir is automatically written to local_decrypt_dir because of the permissions and the pooling priority setup. At this point the media has not been uploaded to the cloud yet and only exists locally.

Running cloudupload uploads the files from local_decrypt_dir to the cloud. After the upload the file exists both locally and in the cloud, yet in local_media_dir it still appears only once. If move_ind is set to 1, files are moved to the cloud instead of copied.

Later, media is removed locally from local_decrypt_dir. This is done by running rmlocal, whose behaviour depends on the remove_files_based_on setting, which can be set to space, time or instant. The command moves files to the cloud and afterwards removes them locally. Media is then gone from local_decrypt_dir but still appears in local_media_dir because it is still accessible from the cloud.

If remove_files_based_on is set to space, files are only moved to the cloud when the local media size exceeds remove_files_when_space_exceeds GB, starting from the oldest accessed file, and at least freeup_atleast GB is freed. If it is set to time, only files older than remove_files_older_than are moved to the cloud. If it is set to instant, all media is moved to the cloud and then removed locally.

Media is always uploaded to the cloud before being removed locally.
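
Both steps can also be run by hand. A minimal sketch, assuming the scripts were installed to /opt/cloud-media-scripts (the path is only an example):

/opt/cloud-media-scripts/cloudupload   # encrypt and upload everything in local_decrypt_dir
/opt/cloud-media-scripts/rmlocal       # afterwards remove uploaded media locally, per remove_files_based_on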

UML diagram

Plexdrive

Plexdrive is used to mount Google Drive to a local folder (cloud_encrypt_dir).

Plexdrive version 4.0.0 requires a running MongoDB server. This is not included in the scripts but can either be installed from .deb packages or in a Docker container.
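
For example, a MongoDB instance for Plexdrive can be started with Docker like this (the container name and the data path on the host are just one possible choice):

docker run -d --name plexdrive-mongo -p 27017:27017 -v /opt/mongodb:/data/db mongo

Afterwards point the Mongo host and database variables in config at this instance (localhost:27017 with the port mapping above).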

Plexdrive creates two files: config.json and token.json. These are used to get access to Google Drive. They can either be set up via Plexdrive or by using the templates located in the plexdrive directory (copy the files, name them config.json and token.json and insert your Google API details).

Rclone

Rclone is used to encrypt, decrypt and upload files to the cloud. It mounts and decrypts the Plexdrive mount into a different folder (cloud_decrypt_dir), and it encrypts and uploads media from a local folder (local_decrypt_dir) to the cloud.

Rclone creates a config file: rclone.conf. This is used to get access to the cloud provider and holds the encryption/decryption keys. It can either be set up via Rclone or by using the template located in the rclone directory (just copy the file and name it rclone.conf).

Some have reported permission issues with the Rclone directory. If that occurs it can be fixed by setting the uid/gid variables under the # Mount user Id section in config.
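
As a sketch, 1000 is the typical id of the first user on Ubuntu; check yours with id -u and id -g and use those values:

# Mount user Id
uid=1000
gid=1000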

FS-Pooling

To pool the data together in your target media folder, either UnionFS or MergerFS is used to mount both cloud and local media to a local folder (local_media_dir).

This option can be set within config by changing pool_choice.

The reason for these permissions is that writes to the pooled folder (local_media_dir) do not go directly to the cloud folder but to the local media folder (local_decrypt_dir). That content is later encrypted and uploaded to the cloud by Rclone.
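
For orientation, the pool mount boils down to something like the following unionfs-fuse call; the paths are placeholders, and the scripts build the real command from the directories and pool_choice in config (MergerFS is invoked in a similar way):

unionfs-fuse -o cow,allow_other /mnt/local-decrypt=RW:/mnt/cloud-decrypt=RO /mnt/media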

Installation without easy install

  1. Change config to match your settings.
  2. Change paths to config in all script files.
  3. Run bash setup.sh and follow the instructions*.
  4. Run ./mount.remote to mount plexdrive and decrypt by using rclone.

To unmount run ./umount.remote

*If this doesn't work, follow the manual setup instructions here.

Rclone setup

Most of the configuration to set up is done through Rclone. Read their documentation here.

3 remotes are needed: a Google Drive remote, a crypt for the Google Drive remote (rclone_cloud_endpoint) and a crypt for the local directory (rclone_local_endpoint).

If needed, the Rclone documentation can be found here.

View my example for an rclone configuration here.
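
If that example is not at hand, here is a minimal sketch of what the three remotes could look like in rclone.conf. The remote names, the folder on Google Drive and the local path are placeholders, and the token and obscured passwords should be generated with rclone config rather than typed in by hand. The local crypt remote points at the Plexdrive mount (cloud_encrypt_dir) and must use the same passwords as the cloud crypt remote so both decrypt the same data:

[gdrive]
type = drive
client_id = <your client id>
client_secret = <your client secret>
token = <created by rclone config>

[gdrive-crypt]
type = crypt
remote = gdrive:encrypted
filename_encryption = standard
password = <created by rclone config>
password2 = <created by rclone config>

[local-crypt]
type = crypt
remote = /path/to/cloud_encrypt_dir
filename_encryption = standard
password = <same as gdrive-crypt>
password2 = <same as gdrive-crypt>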

It is a good idea to back up your Rclone configuration as well as your Plexdrive configuration and cache for an easier setup next time.

Manually

To install the necessary stuff manually do the following:

  1. Install unionfs-fuse.
  2. Install bc.
  3. Install GNU screen.
  4. Install Rclone 1.39.
  5. Install Plexdrive 4.0.0.
  6. Create the folders that local_decrypt_dir and plexdrive_temp_dir point to in the config file.
  7. Run the rclone binary installed in step 4 with the parameter --config=RCLONE_CONFIG config, where RCLONE_CONFIG is the variable set in the config file.
  8. Set up the Google Drive remote, the crypt for the Google Drive remote (rclone_cloud_endpoint) and the crypt for the local directory (rclone_local_endpoint).
  9. Run the plexdrive binary installed in step 5 with the parameters --config=PLEXDRIVE_DIR --mongo-database=MONGO_DATABASE --mongo-host=MONGO_HOST --mongo-user=MONGO_USER --mongo-password=MONGO_PASSWORD. Remember to match the parameters with the variables in the config file.
  10. Enter authorization to your Google Drive.
  11. Cancel Plexdrive by pressing CTRL+C, then run Plexdrive inside GNU screen: screen -dmS plexdrive PLEXDRIVE_BIN --config=PLEXDRIVE_DIR --mongo-database=MONGO_DATABASE --mongo-host=MONGO_HOST --mongo-user=MONGO_USER --mongo-password=MONGO_PASSWORD PLEXDRIVE_OPTIONS CLOUD_ENCRYPT_DIR (see the example after this list).
  12. Exit screen session by pressing CTRL+A then D.
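
For illustration, step 11 with the placeholders filled in could look like this; every path and credential below is just an example, so substitute the values from your own config, and any PLEXDRIVE_OPTIONS (for example --clear-chunk-max-size) go right before the mount point:

screen -dmS plexdrive /opt/plexdrive/plexdrive --config=/opt/plexdrive --mongo-database=plexdrive --mongo-host=localhost --mongo-user=plexdrive --mongo-password=secret /mnt/cloud-encrypt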

Setup cronjobs

My suggestions for cronjobs are in the file cron. These should be inserted via crontab -e. I suggest waiting a minute before starting Plex, to make sure the mount is up and running.

If you have a small local disk you may want to run the upload and local-removal jobs more often.

*_If 'space' is set it will only move content to the cloud, starting from the oldest accessed file, once media size has exceeded remove_files_when_space_exceeds, and will free up at least freeup_atleast. If 'time' is set it will only move files older than remove_files_older_than to the cloud. If 'instant' is set it will move all files to the cloud when run. A file is only deleted locally once it has been successfully moved to the cloud._

Media is never deleted locally before being uploaded successfully to the cloud.

Note: mountcheck is used to check whether the mount is up. I've had some problems where either Plexdrive or Rclone drops the mount; mountcheck will mount everything again if that happens.
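
A sketch of what such a crontab could look like; the schedules and the install path are only examples, and the suggested entries live in the cron file:

# check the mount every 5 minutes and remount if Plexdrive or Rclone dropped it
*/5 * * * * /opt/cloud-media-scripts/mountcheck
# upload new media nightly, then free up local space
0 2 * * * /opt/cloud-media-scripts/cloudupload
0 5 * * * /opt/cloud-media-scripts/rmlocal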

My setup

My setup with this is quite simple.

I have an Intel NUC with only a 128 GB SSD. It is connected to a 4 TB external hard drive that contains recently downloaded local media (local_decrypt_dir) and recently streamed media (the Plexdrive cache, plexdrive_temp_dir).

I'm running this with up to 1 TB of Plexdrive cache (--clear-chunk-max-size=1000G), removing files based on space (remove_files_based_on="space") when local_decrypt_dir exceeds 2 TB (remove_files_when_space_exceeds=2000) and freeing up at least 1 TB (freeup_atleast=1000).
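
Expressed as config values, that part of the setup is roughly the following; the Plexdrive cache size is passed through the Plexdrive options in config and is not shown here:

remove_files_based_on="space"
remove_files_when_space_exceeds=2000   # GB
freeup_atleast=1000                    # GB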

Optimize configuration (WIP)

Space

Right now the config assumes a drive of at least 1 TB.

To use these scripts on a smaller drive, make these changes to the config:

Plexdrive

Misc. config

Internet connection

Depending on your internet connection, you can optimize when Plexdrive downloads chunks.

Plexdrive

Upgrade

You can easily upgrade these scripts with the following command:

git pull origin master

Donate

If you want to support the project or just buy me a beer, I accept PayPal and Bitcoin.


BitCoin address: 18fXu7Ty9RB4prZCpD8CDD1AyhHaRS1ef3
