ArtVentureX / sd-webui-agent-scheduler


Limits of the database? Getting frequent database is locked errors #151

Closed LankyPoet closed 1 year ago

LankyPoet commented 1 year ago

Hi, I love Agent Scheduler, thank you for making it. I am using the latest version with SD.Next (an up-to-date installation of the current master branch). I have a fast system: a 13700K processor, a 4090 GPU, 32GB of DDR5, all high-speed SSDs, etc.

I am generally doing nothing else on the system other than sometimes web browsing (with a different browser) when SD.Next is running.

I am very frequently encountering “database is locked” SQLite errors while using Agent Scheduler. The quick fix for this, per the forums, has been to exit, make a copy of the task scheduler database, delete the original, then restore the copy to the original location, as the OS won’t have locked that new version.
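Roughly, that workaround looks like the sketch below (the database path and filename are just what they appear to be on my install, so treat them as assumptions):

```python
import shutil
from pathlib import Path

# Assumed path/filename of the scheduler database on my install -- adjust to
# wherever your webui actually keeps it.
db = Path(r"C:\sdnext\task_scheduler.sqlite3")
backup = db.with_name(db.name + ".bak")

# Only do this after the WebUI has been shut down completely.
shutil.copy2(db, backup)   # make a copy of the database
db.unlink()                # delete the original file
shutil.move(backup, db)    # restore the copy under the original name
```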

Are there any limits or usage guidelines we should be following to prevent this from happening in the first place? I have seen it happen with 40, 100, or 250 jobs, so it isn’t specific to queue size. I sometimes add jobs relatively quickly (for example, changing the placement of a word in a prompt to see how different the end result is), but I’d think my system could handle that.

Please let me know how to avoid the locked database errors. Thank you very much!

LankyPoet commented 1 year ago

Update: It just happened again. I have tried running the WebUI from both IE and Firefox separately, but it makes no difference. Here is the exact info; I did review the web links it provided.

```
Exception deleting tasks from database: (sqlite3.OperationalError) database is locked
(Background on this error at: https://sqlalche.me/e/14/e3q8)
Exception in thread Thread-156 (execute_task):
Traceback (most recent call last):
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1094, in _commit_impl
    self.engine.dialect.do_commit(self.connection)
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\engine\default.py", line 686, in do_commit
    dbapi_connection.commit()
sqlite3.OperationalError: database is locked

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "C:\Users\Default.LivingRoomPC\AppData\Local\Programs\Python\Python310\lib\threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "C:\Users\Default.LivingRoomPC\AppData\Local\Programs\Python\Python310\lib\threading.py", line 953, in run
    self._target(*self._args, **self._kwargs)
  File "C:\sdnext\extensions-builtin\sd-webui-agent-scheduler\agent_scheduler\task_runner.py", line 410, in execute_task
    task = get_next_task()
  File "C:\sdnext\extensions-builtin\sd-webui-agent-scheduler\agent_scheduler\task_runner.py", line 512, in __get_pending_task
    deleted_rows = task_manager.delete_tasks(before=datetime.now() - timedelta(days=retention_days))
  File "C:\sdnext\extensions-builtin\sd-webui-agent-scheduler\agent_scheduler\db\task.py", line 336, in delete_tasks
    raise e
  File "C:\sdnext\extensions-builtin\sd-webui-agent-scheduler\agent_scheduler\db\task.py", line 331, in delete_tasks
    session.commit()
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\orm\session.py", line 1454, in commit
    self._transaction.commit(_to_root=self.future)
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\orm\session.py", line 839, in commit
    trans.commit()
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2469, in commit
    self._do_commit()
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2659, in _do_commit
    self._connection_commit_impl()
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2630, in _connection_commit_impl
    self.connection._commit_impl()
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1096, in _commit_impl
    self._handle_dbapi_exception(e, None, None, None, None)
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\engine\base.py", line 2134, in _handle_dbapi_exception
    util.raise_(
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\util\compat.py", line 211, in raise_
    raise exception
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\engine\base.py", line 1094, in _commit_impl
    self.engine.dialect.do_commit(self.connection)
  File "C:\sdnext\venv\lib\site-packages\sqlalchemy\engine\default.py", line 686, in do_commit
    dbapi_connection.commit()
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) database is locked
(Background on this error at: https://sqlalche.me/e/14/e3q8)
```

artventuredev commented 1 year ago

> The quick fix for this, per the forums, has been to exit, make a copy of the task scheduler database, delete the original, then restore the copy to the original location, as the OS won’t have locked that new version.

Have you encountered this issue before, or is it recent? Typically, this isn't due to a system limitation, but often occurs when the database is undergoing an update that hasn't yet been completed.

LankyPoet commented 1 year ago

It's been going on for as long as I have been scheduling jobs to run with Agent Scheduler. Is there a way I can check the status of any database update, or force that update to complete before processing any other jobs?

artventuredev commented 1 year ago

> Is there a way I can check the status of any database update

Usually, this shouldn't be the case. I suspect that the database might be quite large, which could cause updates to take longer than usual.

Could you provide a bit more information?
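For example, something along these lines would show the database file size and the row count of each table (the path and filename here are assumptions; point it at wherever your install keeps the scheduler's .sqlite3 file):

```python
import os
import sqlite3

# Assumed location of the scheduler database -- adjust to your install.
DB_PATH = r"C:\sdnext\task_scheduler.sqlite3"

print(f"file size: {os.path.getsize(DB_PATH) / 1024**3:.2f} GB")

con = sqlite3.connect(DB_PATH)
try:
    tables = [name for (name,) in con.execute(
        "SELECT name FROM sqlite_master WHERE type = 'table'")]
    for name in tables:
        (count,) = con.execute(f'SELECT COUNT(*) FROM "{name}"').fetchone()
        print(f"{name}: {count} rows")
finally:
    con.close()
```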

LankyPoet commented 1 year ago

I had 187 tasks in history, and the database was 5.61 GB (6,027,235,328 bytes).

I cleared history to see if that helps. Anything else?

artventuredev commented 1 year ago

Wow, 5.6GB for a small history is quite substantial. Do you frequently queue img2img tasks, or tasks with images as input (maybe ControlNet)? All inputs, including images, are stored in the database, which could result in a large database size.

Clearing the history can sometimes help, but maintaining a large queue could also contribute to the issue. I'll make some adjustments to minimize the likelihood of this problem occurring.
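For reference, one common SQLite-side mitigation for lock contention (shown as a rough sketch below, not necessarily the exact change that will land in the extension; the engine setup and database path are assumptions) is to enable WAL journaling and a busy timeout on each connection:

```python
from sqlalchemy import create_engine, event

# Illustrative only -- the real engine setup lives inside agent_scheduler.
engine = create_engine("sqlite:///task_scheduler.sqlite3")

@event.listens_for(engine, "connect")
def _set_sqlite_pragmas(dbapi_connection, connection_record):
    cursor = dbapi_connection.cursor()
    # WAL lets readers and a single writer coexist instead of blocking each other.
    cursor.execute("PRAGMA journal_mode=WAL")
    # Wait up to 30s for a competing lock before raising "database is locked".
    cursor.execute("PRAGMA busy_timeout=30000")
    cursor.close()
```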

LankyPoet commented 1 year ago

Thank you so much for your help. Should the database shrink since history was cleared? And yes, maybe 25% of the time I will use Reactor Fusion or FaceSwapLab with images to swap faces into the end result. I don't use img2img much; I typically use the txt2img tab for this.

artventuredev commented 1 year ago

> Should the database shrink since history was cleared?

Yes, it should.

> I will use Reactor Fusion or FaceSwapLab with images to swap faces

You could try cropping the face input images to a smaller size before queuing the task. I believe that Faceswap works on low-resolution input (128px x 128px), so there's no need for excessively large input images.
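For example, a small Pillow script along these lines (the filenames and the 512px cap are purely illustrative) would shrink a reference image before you queue it:

```python
from PIL import Image

def shrink_face_input(src: str, dst: str, max_side: int = 512) -> None:
    """Downscale a face reference image so less image data ends up
    stored in the scheduler database."""
    img = Image.open(src)
    img.thumbnail((max_side, max_side))  # resizes in place, keeps aspect ratio
    img.save(dst)

shrink_face_input("face_ref.png", "face_ref_small.png")
```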

LankyPoet commented 1 year ago

Great to know, thank you very much!