JustinGrote opened this issue 8 years ago
I had a similar problem when using "inotifywait" to detect new/changed files and upload them to ACD. I worked around it with this:
```bash
#!/bin/bash
FOLDER="/home/user/.local"
inotifywait -m -r "$FOLDER" -e close_write -e moved_to |
while read -r path action file; do
    echo "The file '$file' appeared in directory '$path' via '$action'"
    var=$(echo "$path" | sed -e "s@$FOLDER@@")
    sleep 5
    acd_cli ul -rsf "$path/$file" "/$var/"
done
```
It watches the .local folder for changes. When something happens, it takes the path and filename from inotifywait and builds a variable `var` with the relative path by stripping the .local prefix from the full path. Then it uploads to /[relative path]/.
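The prefix-stripping step can also be done without sed, using bash parameter expansion. A minimal sketch with made-up example values:

```shell
# Strip the watched root from a path reported by inotifywait.
# ${path#"$FOLDER"} removes $FOLDER from the front of $path.
FOLDER="/home/user/.local"
path="/home/user/.local/photos/2016"
var="${path#"$FOLDER"}"
echo "$var"   # -> /photos/2016
```

This avoids a fork per event and also avoids sed delimiter clashes if the folder name ever contains an `@`.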
Like I said, you can work around it by instantiating acd_cli in parallel, but I've seen nothing that says this is safe to do yet, and I saw the developer recommend against it.
@TerrorFactor - This is a really interesting option. I have been trying to keep a local copy replicated to ACD for backup purposes. In my case I am running the code below, but I find I can spend over an hour walking the disk each iteration just to learn nothing has changed. I am going to investigate using inotifywait as a daemon to accomplish the same effect without needing to thrash my drives every time this script runs (it's hourly now).
```bash
#!/bin/bash
date
if (ping -c 3 Chromecast &> /dev/null) || (ping -c 3 glasd0s &> /dev/null); then
    echo "Devices are online"
    if pgrep acd_cli; then
        echo " acd_cli is currently running, stopping it now"
        # Attempt to kill acd_cli
        ps -ef | grep acd_cli | awk '{print $2}' | xargs kill
    else
        echo "acd_cli is not currently running"
    fi
else
    echo "Devices are all offline"
    # Check to see if acd_cli is already in memory
    if pgrep acd_cli; then
        echo " acd_cli is currently running"
    else
        echo " acd_cli is not running, starting it now"
        /usr/local/bin/acd_cli sync
        /usr/local/bin/acd_cli upload -x 4 /newdata/data/ / 2> >(tee /home/cpare/acd.log >&2)
    fi
fi
```
Update - This modification has been in place for the last 24 hours and looks really good so far. I have this shell script scheduled to launch every hour in case the app exits.
```bash
#!/bin/bash
date
# Check to see if the inotifywait watcher is already in memory
if pgrep inotifywait; then
    echo " inotifywait watcher is currently running"
else
    echo " inotifywait watcher is not running, starting it now"
    /usr/local/bin/acd_cli sync
    ### Attempt to sync everything in case something was added before this app started
    /usr/local/bin/acd_cli upload -x 4 /newdata/data/ / 2> >(tee /home/cpare/acd.log >&2)
    FOLDER="/newdata/data"
    inotifywait -m -r "$FOLDER" -e close_write -e moved_to |
    while read -r path action file; do
        echo "The file '$file' appeared in directory '$path' via '$action'"
        var=$(echo "$path" | sed -e "s@$FOLDER@@")
        sleep 5
        #acd_cli ul -rsf "$path/$file" "/$var/"
        /usr/local/bin/acd_cli ul -rsf "$path/$file" /data/
    done
fi
```
I am using inotifywait myself, and I get around the thread issue by testing for a lock file once inotifywait reports a changed file; if it exists, I sleep for 1 second and repeat until the file is gone. Once that thread advances, it creates the lock file and does its business. I have not had an issue with two uploads attempting to begin at the same time, and it's been running for a while now. I have an older version up on GitHub if you want to improve upon it (trust me, you can - I am no guru!)
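The lock-file pattern described above can be sketched like this (a minimal version, not the poster's actual script; the lock path is made up). Using `mkdir` as the lock makes acquisition atomic, so two watcher threads can't both pass the check:

```shell
# Lock path is an assumption; $$ keeps it unique per process.
LOCK="${TMPDIR:-/tmp}/acd_upload.$$.lock"

acquire_lock() {
    # mkdir either creates the lock dir or fails atomically,
    # so only one process can hold the lock at a time.
    while ! mkdir "$LOCK" 2>/dev/null; do
        sleep 1   # another upload is in progress; wait and retry
    done
}

release_lock() {
    rmdir "$LOCK"
}

# Usage inside the inotifywait loop:
#   acquire_lock
#   acd_cli ul -rsf "$path/$file" "/$var/"
#   release_lock
```

A plain `touch`/`test -f` lock has a race between the check and the create; `mkdir` (or `flock`) does not.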
Hi, I tried this script too:
```bash
#!/bin/bash
FOLDER="/root/toupload"
inotifywait -m "$FOLDER" -e close_write -e moved_to |
while read -r path action file; do
    (
        acd_cli ul -rsf "$FOLDER/$file" /Daten/
    ) &
done
```
But it only moves files that are directly inside /root/toupload. When a folder appears inside it, like /root/toupload/Movies/Moviename/Moviename.mkv, it won't move the folder... or delete it... Why doesn't it upload the folder with its subfolders?
Any help on this?
Is this what you're trying to do:
You have a folder /root/toupload and you want newly created files moved into /Daten.
Like this:
/root/toupload/Movie/2010/Moviename.mkv uploads to /Daten/Movie/2010/Moviename.mkv?
Use sed to strip out the /root/toupload prefix and create a variable with the destination folder.
I think this will work - I pulled it out of a script I use, but haven't tested it. You should get the idea:
```bash
# $path is the directory reported by inotifywait (it ends with a trailing /)
dest_folder=$(sed 's:/root/toupload::g' <<< "$path")
acd_cli ul -rsf "$path$file" "/Daten/$dest_folder"
```
hm... just errors:
```
./autoupload.sh: line 6: s:/root/Daten::g: No such file or directory
16-08-07 07:30:54.610 [ERROR] [acd_cli] - Path "/root/Daten/Rom - S02E10 - De Patre Vostro.dts" does not exist.
```
I have this now:
```bash
#!/bin/bash
FOLDER="/root/Daten"
inotifywait -r -m $FOLDER -e close_write -e moved_to |
while read path action file; do
    (
        dest_temp=sed s:/root/Daten::g <<< $(dirname "${file}")
        acd_cli ul "$FOLDER/$file" "/Daten/$dest_folder"
    ) &
done
```
Can anyone help me again? I can't get it working; I'm not good at this :/
Send me an email, Debug stuff in bug reporting isn't quite the way to go :)
Got it working now, but only on one-word folders... as soon as there is a space in a folder name it breaks:
```
The file 'CLOSE_WRITE,CLOSE txt.txt' appeared in directory '/root/Daten/Daten/TV' via 'Shows/'
16-08-13 21:26:35.409 [CRITICAL] [acd_cli] - Could not resolve path "/Daten/TV".
```
The path is TV Shows :D
Put double quotes around the acd_cli arguments.
"$FOLDER/$file"
**Feature Summary**

Add an rsync-style `--relative` flag so that relative paths are preserved during the upload process.

**Example Source Dir Structure**

```
/my/file/library/dir1/file1.txt
/my/file/library/dir2/file2.txt
/my/file/library/dir4/sub4/file4.txt
/my/file/library/dir4/sub4/filenoupload.txt
/my/file/library/dir5/sub5/subfile5.txt
/my/file/library/dir5/file5.txt
```

**Command**

```
cd /my/file/library
acd_cli upload --relative dir1/file1.txt dir2/file2.txt dir4/sub4/file4.txt dir5 /ACDTest
```

**Resulting ACD structure**

```
/ACDTest/dir1/file1.txt
/ACDTest/dir2/file2.txt
/ACDTest/dir4/sub4/file4.txt
/ACDTest/dir5/sub5/subfile5.txt
/ACDTest/dir5/file5.txt
```

**Additional Thoughts**

Having a `-p` option similar to `acd_cli mkdir` to auto-create the dirs if they don't exist would be great; however, it's not a big deal, as it's pretty easy to pre-create them beforehand.

**Known Workarounds**

**Impetus**

I am working on a cache-destaging script for unionfs that uploads to ACD based on last modification date, so you can move files from local storage to ACD while transparently preserving the directory structure.
Unfortunately these files might be in multiple different directories. I'd like to be able to pull from multiple locations and have those all upload via acd_cli using acd_cli's concurrency logic and have it preserve the paths upon arrival. Currently attempting this causes them all to be flattened into the same folder.
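In the meantime, one per-file workaround is to compute each file's relative directory and upload into the matching remote directory, pre-creating it first. A rough, untested sketch (it assumes `acd_cli mkdir` and `acd_cli ul` as used elsewhere in this thread, and that parent remote dirs can be created or already exist):

```shell
# Relative directory of a path given relative to the library root,
# e.g. dir4/sub4/file4.txt -> dir4/sub4
rel_dir_for() {
    dirname "$1"
}

# Upload each file into $dest/<its relative dir>/ instead of flattening.
upload_relative() {
    local dest="$1"; shift
    local f rel
    for f in "$@"; do
        rel=$(rel_dir_for "$f")
        acd_cli mkdir "$dest/$rel"      # pre-create the remote dir
        acd_cli ul "$f" "$dest/$rel/"
    done
}

# Usage (from the library root):
#   cd /my/file/library
#   upload_relative /ACDTest dir1/file1.txt dir2/file2.txt dir4/sub4/file4.txt
```

This loses acd_cli's cross-file concurrency, which is exactly why a native `--relative` flag would be better.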
Love acd_cli, you've made my ACD life so much better, I wouldn't use ACD without it.