Open gaara4896 opened 6 years ago
index_all() queries 100 records per loop and indexes them into Whoosh. If the timeout is not being reached, maybe the packet size of those 100 records hits the max_allowed_packet limit (1 MB by default)?
https://dev.mysql.com/doc/refman/5.5/en/packet-too-large.html
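If that is the cause, checking the server's current limit from the app should confirm it quickly. A minimal sketch, assuming a flask_sqlalchemy `db` object is at hand (the helper name here is made up):

```python
from sqlalchemy import text

def show_max_allowed_packet(db):
    # Ask MySQL for the current max_allowed_packet value, in bytes.
    row = db.session.execute(
        text("SHOW VARIABLES LIKE 'max_allowed_packet'")
    ).fetchone()
    return int(row[1])

# Raising the limit is done on the server side, e.g. in my.cnf:
#   [mysqld]
#   max_allowed_packet = 64M
```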
The database I am using is MySQL. I have a few flask_sqlalchemy configuration options, which include the following:
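Something along these lines (the values here are illustrative, mainly the pool options that matter for a long-running job):

```python
# config.py -- illustrative values
SQLALCHEMY_DATABASE_URI = "mysql+pymysql://user:password@localhost/mydb"
SQLALCHEMY_TRACK_MODIFICATIONS = False

# Connection-pool settings relevant when index_all() runs for several minutes
SQLALCHEMY_POOL_SIZE = 10
SQLALCHEMY_POOL_RECYCLE = 280   # recycle connections before MySQL's wait_timeout
SQLALCHEMY_POOL_TIMEOUT = 20
```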
What I am doing is scheduling a manual run of the index_all(app) method as a cron-style Celery task, which looks something like this:
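Roughly the following (simplified sketch; the app factory, broker URL, and import path are illustrative, not the exact code):

```python
from celery import Celery
from flask_whooshalchemyplus import index_all  # bulk indexer; import path assumed

from myapp import create_app  # illustrative application factory

celery = Celery(__name__, broker="redis://localhost:6379/0")

@celery.task
def rebuild_search_index():
    # Run the bulk indexer inside an application context so the
    # flask_sqlalchemy session and the Whoosh config are available.
    app = create_app()
    with app.app_context():
        index_all(app)
```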
The traceback:
The error happens around 8 minutes after it starts indexing, far shorter than the timeout.
Is this a problem specific to MySQL? I am only having this problem with index_all().