bbrown430 / plex-poster-set-helper

A tool to help upload sets of posters from ThePosterDB and MediUX to your Plex server in seconds!

Enhancement: Skip already downloaded assets #38

Open · morpheus2n2 opened 2 months ago

morpheus2n2 commented 2 months ago

I think it would be handy if the script kept a record of what it had already updated, so that when you run it again it skips assets it has already downloaded and pushed to Plex, and only updates anything that's new or wasn't pushed last time because it wasn't on Plex yet. This is handy if you are updating your server from a particular user, so you don't end up pulling the same assets multiple times :D

GenerlAce commented 2 months ago

@morpheus2n2 not sure if you're still using this, but I have a bash script that runs a bulk import for this program and comments out (#) each link after it runs, so everything old gets skipped. What's nice is that if it hits a bad link it won't stop; it will fail and then move on to the next link. I have the bash script placed in the root folder of the app, then use bulk_import.txt as normal.


#!/bin/bash

# File containing the URLs
url_file="bulk_import.txt"

# Loop through each line in the file
while IFS= read -r line || [ -n "$line" ]; do
    # Skip blank lines and lines that are commented out
    if [[ -z $line || $line == \#* ]]; then
        continue
    fi

    # Run the Python script with the URL
    echo "Running command for $line"
    python3 plex_poster_set_helper.py "$line"

    # Comment out the URL after processing
    sed -i "s|^$line|# $line|" "$url_file"

done < "$url_file"

echo "All commands have been executed."

David-Moment commented 2 months ago

If I have this right, I can update the bulk import to be, say:

https://theposterdb.com/user/fwlolx

https://theposterdb.com/user/RedHeadJedi

And it should update everything with the first user, then update only the missed ones with the second user, and so forth?

GenerlAce commented 2 months ago

No, this just runs in succession. So if you're doing a full bulk, it'll overwrite any duplicates as the list goes on, I believe. I'm not sure this program has the ability to skip dupes. My bash script just runs through the bulk list one link at a time and #'s the links as it goes.

morpheus2n2 commented 2 months ago

Cool, I was thinking more of when you just use the CMD with the site address, but no worries, as I have run into loads of issues with it: I can't make it progress past a certain page number for my favourite user, it will 100% fail on the same page.

GenerlAce commented 2 months ago

Ah, I understand what you mean. Yeah, that would be nice, like a reference table, so when X movie or Y show has been processed, it'll be skipped on all subsequent runs. That would be nice to cut down on unnecessary traffic to the websites too.
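
A minimal sketch of that reference-table idea at the wrapper level, assuming a hypothetical processed_urls.txt log file next to the script: check the log before each run and only append a URL after a successful import. This variant also sidesteps the known flaw of commenting out links that failed, but it assumes plex_poster_set_helper.py exits nonzero on failure, which may not hold.

#!/bin/bash

# Hypothetical sketch: skip URLs recorded in a processed log instead of
# commenting them out of bulk_import.txt. The log file name is a placeholder.
url_file="bulk_import.txt"
log_file="processed_urls.txt"
touch "$log_file"

while IFS= read -r line || [ -n "$line" ]; do
    # Skip blank lines and commented-out lines
    if [[ -z $line || $line == \#* ]]; then
        continue
    fi

    # Skip URLs already recorded on an earlier run
    if grep -qxF "$line" "$log_file"; then
        echo "Skipping already processed $line"
        continue
    fi

    echo "Running command for $line"
    # NOTE: assumes the helper exits nonzero on failure; if it always
    # exits 0, every attempted URL gets logged regardless of outcome
    if python3 plex_poster_set_helper.py "$line"; then
        echo "$line" >> "$log_file"
    fi
done < "$url_file"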

amanonymouse733 commented 1 month ago

@GenerlAce can you say more about how to use this? Should I be saving your script as a .txt in the root folder of the script and running it with the "bulk" command?

GenerlAce commented 1 month ago

Sure, basically this script replaces my use of the "bulk import" command. I was having issues with some links failing during bulk import, and then it wouldn't continue to the next sets. So this script uses the same bulk_import.txt style, but rather than running through everything in one command, it runs a single import per link, line by line, and hashes each link out after processing it. So even if a link fails, it will move on to the next poster set, and this way I also don't rerun already processed links.

amanonymouse733 commented 1 month ago

Thanks, but I should have been clearer: after setting up bulk_import.txt as normal, how am I executing your script? ELI5 please.

GenerlAce commented 1 month ago

Ah, no worries. It's just a bash script made with the help of ChatGPT. I'm running on Ubuntu Linux, so my steps are: cd into the plex-poster-helper directory, then create the script. I use the nano editor, so "nano run_poster_helper.sh", paste the script contents in, and save it. Make the script executable with "chmod +x run_poster_helper.sh", then to run it, just "./run_poster_helper.sh". As long as you have bulk_import.txt set up with your links, it'll run line by line. The only issue with the script is that even if a link fails, or there's a config error, it will still hashtag out the links. So make sure you can do a regular run of the plex-poster-helper Python script first, so you know the config and everything is good; then you can run it with the bash script. There may be better or more efficient ways to achieve this, but this was my current effort and so far it has worked well for my use case. Hope that helps more.
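
For reference, those steps as a shell session (the directory path is a placeholder for wherever you cloned the repo):

cd /path/to/plex-poster-set-helper   # your clone of the repo
nano run_poster_helper.sh            # paste the script contents, then save
chmod +x run_poster_helper.sh        # make it executable
./run_poster_helper.sh               # works through bulk_import.txt line by line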

amanonymouse733 commented 1 month ago

OK, so I thought as much, but when executing (using WSL/Ubuntu) I get:

Running command for // Star Trek: TNG Title Cards
Traceback (most recent call last):
  File "/mnt/c/Users/....plex_poster_set_helper.py", line 1, in <module>
    import requests
ModuleNotFoundError: No module named 'requests'
sed: preserving permissions for ‘./sedP4yNUr’: Operation not permitted
Running command for https://mediux.pro/sets/22505
Traceback (most recent call last):
  File "/mnt/c/Users/....plex_poster_set_helper.py", line 1, in <

module>
    import requests
ModuleNotFoundError: No module named 'requests'
sed: preserving permissions for ‘./sedDJyZCx’: Operation not permitted

Disregard, I'm an idiot - forgot to reinstall the requirements.
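
For anyone hitting the same ModuleNotFoundError under WSL: the Python dependencies have to be installed in that environment too. Assuming the repo ships its standard requirements file, something like:

pip3 install -r requirements.txt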