benmullins opened this issue 5 years ago (status: Open)
I apologize, I posted this issue a bit too hastily. I don't think it's the problem detection task, but the "delete old search results" task.
Manually triggering that task causes one of the regular several-minute bouts of high CPU usage that got me looking into what Hydra was up to in the first place. It also starts generating temp.db files (though the main nzbhydra.mv.db file doesn't grow very noticeably beyond the 15 GB size it reached on the previous run; maybe to 15.1 or 15.2 GB at most).
2019-06-18 02:39:51.542 INFO --- [0.0-5076-exec-1] org.nzbhydra.tasks.HydraTaskScheduler : [Search: 840714, Host: 192.168.1.50] Running task "Delete old search results" now
2019-06-18 02:52:31.485 WARN --- [ HydraTask4] o.h.engine.jdbc.spi.SqlExceptionHelper : SQL Error: 50200, SQLState: HYT00
2019-06-18 02:52:31.485 ERROR --- [ HydraTask4] o.h.engine.jdbc.spi.SqlExceptionHelper : Timeout trying to lock table {0}; SQL statement:
delete from SEARCHRESULT where FIRST_FOUND < DATEADD('SECOND', ?, DATE '1970-01-01') AND ID not in (select SEARCH_RESULT_ID from INDEXERNZBDOWNLOAD where SEARCH_RESULT_ID is not null) [50200-199]
2019-06-18 02:52:31.527 ERROR --- [ HydraTask4] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected error occurred in scheduled task.
javax.persistence.PessimisticLockException: could not execute statement
Traceback omitted; identical to the one in the post above.
This makes more sense given what the error said in the first place. Although the inflation of the database file only happens the first time, this 12 or so minutes of CPU thrashing followed by an error dump in the log happens every time the "delete old search results" task runs, i.e. about once an hour.
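For a rough sense of how much work that single statement has to do, a count along these lines could be run against a copy of the database while Hydra is stopped (a sketch only; the epoch-seconds cutoff is illustrative, the real task supplies its own value as a bind parameter):

```sql
-- Count how many rows the hourly cleanup would try to delete in one statement.
-- The cutoff below is illustrative; the real task passes it as a parameter.
SELECT COUNT(*)
FROM SEARCHRESULT
WHERE FIRST_FOUND < DATEADD('SECOND', 1560816000, DATE '1970-01-01')
  AND ID NOT IN (
    SELECT SEARCH_RESULT_ID
    FROM INDEXERNZBDOWNLOAD
    WHERE SEARCH_RESULT_ID IS NOT NULL
  );
```

A very large count here would at least be consistent with the delete holding the table lock long enough to hit the timeout seen in the log.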
There's already an issue about the database size but until now I never knew what caused the problem. I'm currently on vacation but will take a closer look when I'm back. Thanks for the analysis.
~Would you be willing to send me your database file (as in upload and send the URL)? Full disclaimer: it contains not only your full history but also your API keys. I will not look at the contents of the database itself, just the metadata. I do not care what you download and do not intend to steal anybody's API keys.~
Nevermind, I can reproduce this.
Seeing this same issue as well
Seeing the same thing
Same here as well. Went from 256 MB to 1.1 GB.
@SBBH 1.1GB is somewhat reasonable. In other cases the database may grow to 20 or 30 GB which certainly is not...
Have this as well. Size is at 22GB.
@naq90
Try this:
Add a new folder config in the hydra main folder. There, create a new file application.properties with this content:
spring.datasource.url=jdbc:h2:file:${nzbhydra.dataFolder:.}/database/nzbhydra;MAX_COMPACT_TIME=${main.databaseCompactTime:15000};WRITE_DELAY=5000;TRACE_MAX_FILE_SIZE=16;WRITE_DELAY=20000;RETENTION_TIME=1000
Restart Hydra, give it a minute and restart again.
Monitor the database file size over a couple of days.
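Spelled out as a commented properties file (a sketch; the comments are my reading of the H2 options, and note that WRITE_DELAY appears twice in the value exactly as given above):

```properties
# config/application.properties
# MAX_COMPACT_TIME    - max milliseconds H2 spends compacting the file when the database is closed
# WRITE_DELAY         - maximum delay in milliseconds before committed changes are flushed
# TRACE_MAX_FILE_SIZE - maximum size of the .trace.db file in MB
# RETENTION_TIME      - how long (ms) the MVStore keeps old chunks before their space can be reused
spring.datasource.url=jdbc:h2:file:${nzbhydra.dataFolder:.}/database/nzbhydra;MAX_COMPACT_TIME=${main.databaseCompactTime:15000};WRITE_DELAY=5000;TRACE_MAX_FILE_SIZE=16;WRITE_DELAY=20000;RETENTION_TIME=1000
```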
Same here on Windows 7 x64 with Hydra2 at the latest build as of today. Some time ago (not sure which build) my database just exploded, and it seems to refuse to compact despite my increasing the compact time to a much larger value (up to 15 minutes now); it still grows daily unless I manually compact it.
Collin, have you tried the steps I described above?
Yes - the db now gets corrupted on every shutdown and Hydra will not restart. After restoring the DB it starts fine, and then again ends in corruption and failure to restart. Could it be due to the last 2 builds having implemented a db update and then a rollback? I have iteratively updated through every build on Windows to date without missing a single one, so my db was updated 2 builds ago and then rolled back via the 10-06 build.
@CollinChaffin Please zip and upload your database file and send me the link via email to theotherp@gmx.de so I can test it myself. Disclaimer: The database not only contains your history but also indexer API keys. I'll remove the data before doing any tests and have no interest whatsoever in what you download or in your API keys as I have enough of them...
@theotherp I am using the latest docker container and have the same issue after every "delete old search results" task. I "shutdown defrag" the H2 db, stop and remove the container and recreate it afterwards. The container then runs fine, the db size is about 350 MB and the process behaves nicely.
As soon as I trigger the above-mentioned task, CPU and memory usage skyrocket and the db grows until it stops at 53 GB.
I have tried application.properties mounted in /app/hydra2/config/, but I had to remove WRITE_DELAY=5000; from the config because it complained about it being in there twice (so WRITE_DELAY=20000 is now my current setting). The issue remains.
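For clarity, the manual "shutdown defrag" mentioned above is just the following H2 statement, run directly against the database file while the container is stopped (for example via the H2 console or org.h2.tools.Shell); a sketch of the manual workaround, not something Hydra runs itself:

```sql
-- Closes the database and rewrites the file fully compacted.
-- Only run this while NZBHydra itself is stopped.
SHUTDOWN DEFRAG;
```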
2019-10-24 13:01:54.700 ERROR --- [ HydraTask2] o.h.engine.jdbc.spi.SqlExceptionHelper : Timeout trying to lock table {0}; SQL statement:
delete from SEARCHRESULT where FIRST_FOUND < DATEADD('SECOND', 1570704119, DATE '1970-01-01') AND ID not in (select SEARCH_RESULT_ID from INDEXERNZBDOWNLOAD where SEARCH_RESULT_ID is not null) [50200-199]
2019-10-24 13:01:54.742 ERROR --- [ HydraTask2] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected error occurred in scheduled task.
javax.persistence.PessimisticLockException: could not execute statement
at org.hibernate.internal.ExceptionConverterImpl.wrapLockException(ExceptionConverterImpl.java:273)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:108)
at org.hibernate.internal.ExceptionConverterImpl.convert(ExceptionConverterImpl.java:181)
at org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1604)
at org.nzbhydra.searching.cleanup.OldResultsCleanup.deleteOldResults(OldResultsCleanup.java:58)
at org.nzbhydra.searching.cleanup.OldResultsCleanup$$FastClassBySpringCGLIB$$cad1559f.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294)
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:98)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:186)
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:688)
at org.nzbhydra.searching.cleanup.OldResultsCleanup$$EnhancerBySpringCGLIB$$3d0fc2c8.deleteOldResults(<generated>)
at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
Caused by: org.hibernate.PessimisticLockException: could not execute statement
at org.hibernate.dialect.H2Dialect$3.convert(H2Dialect.java:353)
at org.hibernate.exception.internal.StandardSQLExceptionConverter.convert(StandardSQLExceptionConverter.java:42)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:113)
at org.hibernate.engine.jdbc.spi.SqlExceptionHelper.convert(SqlExceptionHelper.java:99)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:178)
at org.hibernate.engine.query.spi.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:107)
at org.hibernate.internal.SessionImpl.executeNativeUpdate(SessionImpl.java:1593)
at org.hibernate.query.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:292)
at org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1594)
at org.nzbhydra.searching.cleanup.OldResultsCleanup.deleteOldResults(OldResultsCleanup.java:58)
at org.nzbhydra.searching.cleanup.OldResultsCleanup$$FastClassBySpringCGLIB$$cad1559f.invoke(<generated>)
at org.springframework.cglib.proxy.MethodProxy.invoke(MethodProxy.java:218)
at org.springframework.aop.framework.CglibAopProxy$CglibMethodInvocation.invokeJoinpoint(CglibAopProxy.java:749)
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:163)
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:294)
Caused by: org.h2.jdbc.JdbcSQLTimeoutException: Timeout trying to lock table {0}; SQL statement:
delete from SEARCHRESULT where FIRST_FOUND < DATEADD('SECOND', 1570704119, DATE '1970-01-01') AND ID not in (select SEARCH_RESULT_ID from INDEXERNZBDOWNLOAD where SEARCH_RESULT_ID is not null) [50200-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:508)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:194)
at org.h2.command.Command.filterConcurrentUpdate(Command.java:333)
at org.h2.command.Command.executeUpdate(Command.java:274)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:200)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:154)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:175)
at org.hibernate.engine.query.spi.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:107)
at org.hibernate.internal.SessionImpl.executeNativeUpdate(SessionImpl.java:1593)
at org.hibernate.query.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:292)
at org.hibernate.query.internal.AbstractProducedQuery.executeUpdate(AbstractProducedQuery.java:1594)
at org.nzbhydra.searching.cleanup.OldResultsCleanup.deleteOldResults(OldResultsCleanup.java:58)
Caused by: org.h2.message.DbException: Row not found when trying to delete from index "PUBLIC.FKR5G21PDW3HHS1SEFVJY30TGMI_INDEX_C: 3952181201435365985" [90112-199]
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.mvstore.db.MVSecondaryIndex.remove(MVSecondaryIndex.java:236)
at org.h2.mvstore.db.MVTable.removeRow(MVTable.java:514)
at org.h2.command.dml.Delete.update(Delete.java:123)
at org.h2.command.CommandContainer.update(CommandContainer.java:133)
at org.h2.command.Command.executeUpdate(Command.java:267)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:200)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:154)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:175)
at org.hibernate.engine.query.spi.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:107)
at org.hibernate.internal.SessionImpl.executeNativeUpdate(SessionImpl.java:1593)
at org.hibernate.query.internal.NativeQueryImpl.doExecuteUpdate(NativeQueryImpl.java:292)
Caused by: org.h2.jdbc.JdbcSQLTransientException: Row not found when trying to delete from index "PUBLIC.FKR5G21PDW3HHS1SEFVJY30TGMI_INDEX_C: 3952181201435365985" [90112-199]
at org.h2.message.DbException.getJdbcSQLException(DbException.java:621)
at org.h2.message.DbException.getJdbcSQLException(DbException.java:427)
at org.h2.message.DbException.get(DbException.java:205)
at org.h2.message.DbException.get(DbException.java:181)
at org.h2.mvstore.db.MVSecondaryIndex.remove(MVSecondaryIndex.java:236)
at org.h2.mvstore.db.MVTable.removeRow(MVTable.java:514)
at org.h2.command.dml.Delete.update(Delete.java:123)
at org.h2.command.CommandContainer.update(CommandContainer.java:133)
at org.h2.command.Command.executeUpdate(Command.java:267)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdateInternal(JdbcPreparedStatement.java:200)
at org.h2.jdbc.JdbcPreparedStatement.executeUpdate(JdbcPreparedStatement.java:154)
at com.zaxxer.hikari.pool.ProxyPreparedStatement.executeUpdate(ProxyPreparedStatement.java:61)
at com.zaxxer.hikari.pool.HikariProxyPreparedStatement.executeUpdate(HikariProxyPreparedStatement.java)
at org.hibernate.engine.jdbc.internal.ResultSetReturnImpl.executeUpdate(ResultSetReturnImpl.java:175)
at org.hibernate.engine.query.spi.NativeSQLQueryPlan.performExecuteUpdate(NativeSQLQueryPlan.java:107)
The update to 2.7.6 should have recreated the database. Did that fix anything?
I'm having an issue where my nzbhydra.mv.db file will inflate hugely shortly after startup, seemingly after the first time the problem detection task runs. I've seen the other issues on here about large database files, but I haven't seen anyone mention this specific problem.
Basically, if I shut down or restart Hydra, the database file will be reduced in size to a little under 300 MB, but sometime within half an hour to a couple of hours at most after the program starts, it will begin to inflate to 15 GB (it always stops at 15 GB, every time). Once it gets started, this happens within around 10-12 minutes, hammering the CPU the entire time.
I turned on debug logging, and this is an example of what is logged, starting around the time the database starts inflating and ending when it gets to its final destination of 15 GB:
I've watched this happen a few times and it seems like the DB inflation is triggered immediately after Hydra runs these problem detection tasks, at which point large temporary db files start getting written to the database directory and (I presume) rolled into the main db file. The SQL exception and Java traceback seem to be identical each time.
Although I don't think it has anything to do with this, since memory usage was comfortably below 256 MB, I tried increasing the allocation to 512 MB because of the OOM detector task, and nothing changed.
And again, the database will be reduced back down to less than 300 MB if Hydra is shut down, but after starting again this cycle repeats. I've also tried running the manual shutdown defrag command mentioned in another issue, and while it did reduce my database size this inflation happened again afterward with no apparent change.
I'm using the linuxserver.io container, and it's up-to-date with the latest 2.6.11 release of Hydra.