Closed: aussiehuddo closed this issue 2 years ago.
Have set this up on Unraid and when I run the docker it quickly stops again. The following errors were retrieved from the log:
[ERROR] - 2021-11-25 22:20:42 @ SqliteDatabaseProvider.<init>: Unable to locate database file @ /config/com.plexapp.plugins.library.db
[ERROR] - 2021-11-25 22:20:42 @ SqliteDatabaseProvider.<init>: Exiting tool... Please check your configured database path...
Looks like it can't find the database on the system. What path did you give to the docker container, and how did you set it up? Can you post a screenshot of your Unraid configuration? (Redact any API keys, though.)
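For reference, a plain docker setup would look roughly like this; the image name and the /plexdata and /config mount points follow the README defaults, and TMDB_API_KEY is the key the tool asks for, but double-check the current README rather than taking this verbatim:

docker run -d \
  -v "/mnt/user/appdata/plex/Library/Application Support/Plex Media Server":/plexdata \
  -v /mnt/user/appdata/updatetool:/config \
  -e TMDB_API_KEY=yourkey \
  mynttt/updatetool

The Unraid template fills in the same values through its form fields.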
Plex Data Directory: /mnt/user/appdata/plex/Library/Application Support/Plex Media Server/
I have the linuxserver Plex docker, could it be a permissions issue?
What does the command find /mnt/user/appdata/plex -name "*.db" return?
I just ran it in the Unraid Console (not the docker console).
/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.db
/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.library.blobs.db
/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Plug-in Support/Databases/com.plexapp.plugins.trakttv.db
What happens when you set "Override database location" to /plexdata/Plug-in Support/Databases?
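If you ever need to set that outside the Unraid template, it goes through the container environment. I'm assuming here that the variable name mirrors the template label, so verify it against the README:

-e "OVERRIDE_DATABASE_LOCATION=/plexdata/Plug-in Support/Databases"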
Docker is running now and looks to be updating ratings in the logs. Thanks.
Yeah, there was a bug where the tool would think a custom path was supplied when it was given an empty string through the docker template. It is fixed now, but you can leave your config as it is if you want.
Now I have other errors in the log; I think it is database corruption: https://pastebin.com/pj5MEyFx
Config is unchanged.
There's definitely corruption in the data for some of the items. I've never seen whatever that cryptic mess in there is before.
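If you want to inspect the file yourself, SQLite's built-in integrity check is the quickest test. Run it against a copy of the database while Plex is stopped; stock sqlite3 may complain about Plex's custom FTS tokenizer on some tables, which you can ignore for this purpose:

sqlite3 /path/to/copy/of/com.plexapp.plugins.library.db "PRAGMA integrity_check;"

It prints ok when the file structure is sound. Note this validates the database structure, not the contents of individual fields, so garbage inside the extra data fields can still pass.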
You can suppress the errors by adding DONT_THROW_ON_ENCODING_ERROR to the capabilities configuration on the docker container. The tool will then not attempt to parse the corrupt extra data fields and will replace them with an empty version instead. Since the data is corrupt to begin with, this has no effect on Plex; it wouldn't know how to interpret those fields anyway.
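On the docker side, capabilities are passed through the container environment. Assuming the variable is named CAPABILITIES as in the README (verify there), the flag would look like:

-e CAPABILITIES=DONT_THROW_ON_ENCODING_ERROR

In the Unraid template that corresponds to the capabilities field.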
I reverted to an old database and ran it again, this time with a TVDB v3 API key. It works well on movies and my smaller 4K TV library but seems to have an issue with my large TV library (850 shows, 40,000+ episodes).
These are the last lines of the log:
[INFO ] - 2021-12-06 19:40:05 @ ImdbPipeline.transformMetadata: Transformed entries for 36223 items(s).
[INFO ] - 2021-12-06 19:40:05 @ ImdbPipeline.updateDatabase: Updating 36223 via batch request...
[INFO ] - 2021-12-06 19:40:06 @ ImdbDatabaseSupport.testPlexSqliteBinaryVersion: Plex SQLite binary version: 3.35.5 | COMPILER=clang-11.0.1 | ENABLE_COLUMN_METADATA | ENABLE_FTS3 | ENABLE_FTS3_PARENTHESIS | ENABLE_ICU | ENABLE_RTREE | ENABLE_UNLOCK_NOTIFY | MAX_EXPR_DEPTH=2048 | OMIT_DEPRECATED | THREADSAFE=1 |
[INFO ] - 2021-12-06 19:40:06 @ ImdbDatabaseSupport.requestBatchUpdateOf: Running batch update for 8775 items with new plex agent.
2.5 hours later, Plex has not been updated and there is no change to the log file. Is this expected with a large library? How long should it take?
My JVM_MAX_HEAP value is 1g.
Is the docker still running? Any updates on it now?
I suspect that sqlite3 can't handle that many statements being transmitted over the console interface via Java. In that case it would probably make sense to chunk the updates so that at most 500 items are updated per run.
If the tool is still running - can you stop it and send me the database by mail so I can reproduce this on my system?
Nothing changed overnight. I have sent a Dropbox link to marc@herschel.io.
Yeah, I've received it. Will take a look over the next few days!
Looking into the source code, I'm pretty sure the memory is getting absolutely hammered: I'm keeping the return codes of every single query as debugging data. In theory the update should not only be lightning fast, it should also terminate after 60s in case it ever hangs. My guess is that something is going wrong with the memory, so I'd have to implement a flag that skips keeping this data for large batches of updates (> 1000).
Nothing to do with memory at all, although I have changed the query generation process to a lazy one to avoid excessive string creation.
Is the docker still running? If so, could you open the console and check via top or htop how much memory is consumed?
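If the container image doesn't ship top or htop, the same numbers are visible from the Unraid host via docker stats; the container name here is a placeholder:

docker stats updatetool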
Nevermind
Okay, the issue is actually a different one and has nothing to do with memory at all. When malformed SQL is supplied to the binary, the error detection did not work, so the tool hung indefinitely because stderr was never drained. I have now reordered the code so that it works. In your case it is very likely malformed SQL, probably malformed because of corrupt entries in the extra_values field. If you update to the newest version and let it run, it will report the malformed query to you, which you can paste here to further diagnose the issues with your database. The tool will exit, though, if it encounters malformed SQL.
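If you want to reproduce a reported query by hand, you can feed it to the same Plex SQLite binary the tool drives; it behaves like the stock sqlite3 shell. The binary path below assumes a standard Linux Plex install (the linuxserver container uses the same layout), and the database path is a placeholder:

echo "SELECT count(*) FROM metadata_items;" | "/usr/lib/plexmediaserver/Plex SQLite" /path/to/com.plexapp.plugins.library.db

Swap the SELECT for the malformed statement from the log and you should see the same error the tool reports.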
Ran it again and it completed with no errors. The additional info is definitely helpful.
One suggestion: when the log lists episodes without IMDB IDs, it would be helpful to include the series name and, if possible, the season and episode number. That would make it easy to see which episodes need updating on TVDB.
Cool that it works now! Can you open an issue for that request? That way I can track it better when I have time to work on the tool again.