Closed: freehck closed this issue 2 years ago
As it's a race condition problem, I suppose setting database.MAX_SCAN_THREADS to 1 will also help. Going to try.
Update: tested on tag v1.0.0.
With MAX_SCAN_THREADS=1 it works slowly but fine.
These lines in your message are from 2 log files:
scan.log:
INFO: 2022/07/11 17:23:29 file 550971.fb2 from f.fb2-550756-552903.zip has been added
INFO: 2022/07/11 17:23:30 file 186024.fb2 from f.fb2-185838-188548.zip has been added
opds.log:
INFO: 2022/07/11 17:23:31 Router --->URL: [/opds]
I'm not sure that
2022/07/11 17:23:33 database is locked (5) (SQLITE_BUSY)
panic: database is locked (5) (SQLITE_BUSY)
indicates a race in the scan goroutine.
Maybe it arises because of simultaneous requests to the HTTP server with the corresponding SELECTs. I need to investigate the problem.
Just now 3 big fb2 archives and a dozen separate epubs were loaded using the current build, and I see no such issue with MAX_SCAN_THREADS=3 on my 6-core Odroid SBC running Ubuntu 20.04.
Thanks for the comments.
I've got this error before and set the db busy timeout to a non-zero value by adding dsn+"?_pragma=busy_timeout%3d10000" for the sqlite driver.
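For reference, a minimal sketch of how such a DSN could be passed to the driver. It assumes the pure-Go modernc.org/sqlite driver registered under the name "sqlite"; the actual driver and wiring in this project may differ:

package main

import (
    "database/sql"
    "log"

    _ "modernc.org/sqlite" // assumed pure-Go SQLite driver; registers itself as "sqlite"
)

// openDB opens the SQLite database with a non-zero busy timeout (in ms),
// so concurrent writers wait for the lock instead of failing with SQLITE_BUSY.
func openDB(dsn string) *sql.DB {
    db, err := sql.Open("sqlite", dsn+"?_pragma=busy_timeout%3d10000")
    if err != nil {
        log.Fatal(err)
    }
    return db
}

func main() {
    db := openDB("books.db") // hypothetical database path, just for the example
    defer db.Close()
    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }
}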
These lines in your message are from 2 log files
It's okay. I set both to /dev/stdout in the config.
I'm not sure that <...> indicates a race in the scan goroutine.
Maybe it arises because of simultaneous requests to the HTTP server with the corresponding SELECTs.
I'm pretty sure it's the scan goroutine.
After filing this issue I changed threads to 1 and it's been running without any problems for 21 hours. It's still running fine.
And all these 21 hours the daemon has been handling a GET /opds request every 30 seconds (Docker health checks).
I can't catch the bug, but I've added a mutex in database.go:
func (db *DB) NewBook(b *model.Book) int64 {
db.mx.Lock()
defer db.mx.Unlock()
Hope this will help. Three goroutines with the mutex lock scan about 1.5 times faster than one goroutine without it.
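Just to make the pattern explicit, here is a self-contained sketch with placeholder types; the real DB struct and NewBook body in database.go are of course different:

package database

import (
    "database/sql"
    "sync"
)

// Book is a stand-in for the project's model.Book type.
type Book struct {
    Title string
}

// DB wraps the SQLite handle together with a mutex that serializes writes,
// so concurrent scan goroutines queue up instead of hitting SQLITE_BUSY.
type DB struct {
    conn *sql.DB
    mx   sync.Mutex
}

// NewBook inserts a book while holding the write lock and returns its row id.
func (db *DB) NewBook(b *Book) int64 {
    db.mx.Lock()
    defer db.mx.Unlock()

    res, err := db.conn.Exec(`INSERT INTO books (title) VALUES (?)`, b.Title)
    if err != nil {
        return 0
    }
    id, _ := res.LastInsertId()
    return id
}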
Yep, this must help. Will back up the db, rebuild, and try.
I've set threads to 7 and run the daemon. I get a pile of messages like:
WARNING: 2022/07/15 13:50:54 file 373342.fb2 from f.fb2-372449-375702.zip is in stock already and has been skipped
@vinser Is there a way to ensure that workers performing the scan task don't take the same zip archive from books/new?
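Something like the following would guarantee it (the names are made up for illustration and this is not the actual scanner code): a single goroutine lists books/new once and hands each archive to the workers over a channel, so every file is dispatched exactly once.

package main

import (
    "fmt"
    "os"
    "path/filepath"
    "sync"
)

// scanNew lists dir once and feeds every file to `threads` workers through a
// channel, so no two workers ever pick up the same archive.
func scanNew(dir string, threads int) error {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return err
    }

    paths := make(chan string)
    var wg sync.WaitGroup

    for i := 0; i < threads; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            for p := range paths {
                // A real worker would unpack the zip and add its books to the DB.
                fmt.Println("processing", p)
            }
        }()
    }

    for _, e := range entries {
        if !e.IsDir() {
            paths <- filepath.Join(dir, e.Name())
        }
    }
    close(paths)
    wg.Wait()
    return nil
}

func main() {
    if err := scanNew("books/new", 3); err != nil {
        fmt.Fprintln(os.Stderr, err)
    }
}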
Waited for a while: the number of warnings about files already being in stock has decreased dramatically. No failures anymore. The locks work well, so the issue is actually resolved.
Maybe you have this file in another archive or as a separate file too.
Steps to reproduce:
1) take a big archive, for example fb2.flibusta.net (just for tests, we don't condone piracy)
2) move all the zip files into books/new
3) wait
Restart helps