Closed dehengxu closed 11 months ago
Hi @dehengxu, thanks a lot for your feedback! The result backend has been changed to use the default database, which adds new tables such as celery_taskmeta. If you work with Terraform stacks that contain many resources, it is important to increase the size of the blob in the result column.
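As a minimal sketch of how this works (the helper and its defaults are illustrative assumptions, not the project's actual code), Celery selects its SQLAlchemy result backend, which creates celery_taskmeta, via the `db+` prefix on the backend URL:

```python
# Hypothetical helper sketching how a "db+mysql" result backend URL is
# composed for Celery; the default values here are only illustrative.
def build_result_backend_url(user="root", passwd="123", server="db", db="restapi"):
    # The "db+" prefix tells Celery to use its SQLAlchemy backend,
    # which stores task results in the celery_taskmeta table.
    return f"db+mysql://{user}:{passwd}@{server}/{db}"

print(build_result_backend_url())  # db+mysql://root:123@db/restapi
```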
mysql> desc celery_taskmeta;
+-----------+--------------+------+-----+---------+----------------+
| Field | Type | Null | Key | Default | Extra |
+-----------+--------------+------+-----+---------+----------------+
| id | int | NO | PRI | NULL | auto_increment |
| task_id | varchar(155) | YES | UNI | NULL | |
| status | varchar(50) | YES | | NULL | |
| result | blob | YES | | NULL | |
| date_done | datetime | YES | | NULL | |
| traceback | text | YES | | NULL | |
| name | varchar(155) | YES | | NULL | |
| args | blob | YES | | NULL | |
| kwargs | blob | YES | | NULL | |
| worker | varchar(155) | YES | | NULL | |
| retries | int | YES | | NULL | |
| queue | varchar(155) | YES | | NULL | |
+-----------+--------------+------+-----+---------+----------------+
12 rows in set (0.00 sec)
Commands to change the column size:
ALTER TABLE celery_taskmeta MODIFY result MEDIUMBLOB;
ALTER TABLE celery_taskmeta MODIFY result LONGBLOB;
MEDIUMBLOB: can store up to 16 MB of data.
LONGBLOB: can store up to 4 GB of data.
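The exact limits follow from the length-prefix sizes MySQL uses for each BLOB type, which is why a large Terraform result can overflow the default BLOB column:

```python
# Maximum payload sizes for MySQL BLOB column types, in bytes.
BLOB_MAX = 2**16 - 1        # 64 KB - 1: the default `result` column type
MEDIUMBLOB_MAX = 2**24 - 1  # ~16 MB
LONGBLOB_MAX = 2**32 - 1    # ~4 GB

# A task result larger than BLOB_MAX cannot fit in the default column,
# which is why the MODIFY ... MEDIUMBLOB/LONGBLOB commands are suggested.
print(BLOB_MAX, MEDIUMBLOB_MAX, LONGBLOB_MAX)
```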
Thanks, I'll try it
I saw your update v2.25.0: the Celery worker uses MySQL as the result backend, but the API backend's get_data.py still uses Redis. Should the API backend read from Redis?
I configured the backend as db+mysql for the API backend, but the connection was refused, because get_data.py only reads task status from Redis.
Hi @dehengxu Both the workers and the API backend use MySQL as the result backend, so changing the configuration affects both. However, Redis is also used as a broker, a cache, and a "semaphore" to prevent race conditions; the latter is the case in ./src/shared/helpers/get_data.py. Task behavior is altered through Celery in sld-api-backend/config/celery_config.py, which is designed so you can pass environment variables to modify it:
BROKER_USER = os.getenv("BROKER_USER", "")
BROKER_PASSWD = os.getenv("BROKER_PASSWD", "")
BROKER_SERVER = os.getenv("BROKER_SERVER", "redis") # use rabbit or redis
BROKER_SERVER_PORT = os.getenv(
"BROKER_SERVER_PORT", "6379"
) # use port 6379 for redis or 5672 for RabbitMQ
BROKER_TYPE = os.getenv("BROKER_TYPE", "redis") # use amqp for RabbitMQ or redis
# Result backend config
BACKEND_TYPE = os.getenv("BACKEND_TYPE", "db+mysql")
BACKEND_USER = os.getenv("BACKEND_USER", "root")
BACKEND_PASSWD = os.getenv("BACKEND_PASSWD", "123")
BACKEND_SERVER = os.getenv("BACKEND_SERVER", "db")
BACKEND_DB = os.getenv("BACKEND_DB", "restapi")
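To illustrate the override mechanism (the host name below is hypothetical), setting the environment variables before the worker starts is enough to switch the broker, since the config only reads them through os.getenv:

```python
import os

# Simulate overrides that would normally be set in the deployment
# environment (e.g. docker-compose); "rabbitmq.internal" is hypothetical.
os.environ["BROKER_TYPE"] = "amqp"
os.environ["BROKER_SERVER"] = "rabbitmq.internal"
os.environ["BROKER_SERVER_PORT"] = "5672"

# Same pattern as celery_config.py: env var wins, else the default.
BROKER_TYPE = os.getenv("BROKER_TYPE", "redis")
BROKER_SERVER = os.getenv("BROKER_SERVER", "redis")
BROKER_SERVER_PORT = os.getenv("BROKER_SERVER_PORT", "6379")

broker_url = f"{BROKER_TYPE}://{BROKER_SERVER}:{BROKER_SERVER_PORT}//"
print(broker_url)  # amqp://rabbitmq.internal:5672//
```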
Regards
Both the src.shared.helpers.get_data module and the src.worker.tasks.terraform_worker module use Redis as the result backend:
r = redis.Redis(
host=settings.BACKEND_SERVER,
port=6379,
db=settings.BACKEND_DB,
charset="utf-8",
decode_responses=True,
)
This always creates a Redis client, so even if I change the BACKEND config to MySQL, the connection fails.
Do you mean these should be BROKER_SERVER and BROKER_DB?
@dehengxu Okay, I understand. I will release a hotfix as soon as possible.
Hi @dehengxu It has been fixed in version v2.26.1. Now the cache configuration is managed through these variables:
CACHE_USER: str = os.getenv("SLD_CACHE_USER", "")
CACHE_PASSWD: str = os.getenv("SLD_CACHE_PASSWD", "")
CACHE_SERVER: str = os.getenv("SLD_CACHE_SERVER", "redis")
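A sketch of how these new variables could feed the Redis cache client, separately from the result backend (the `cache_url` helper and the port/db choices are illustrative assumptions, not the project's actual code):

```python
import os

# v2.26.1 reads the cache/"semaphore" Redis settings from SLD_CACHE_*,
# independently of the BACKEND_* result-backend settings.
CACHE_USER = os.getenv("SLD_CACHE_USER", "")
CACHE_PASSWD = os.getenv("SLD_CACHE_PASSWD", "")
CACHE_SERVER = os.getenv("SLD_CACHE_SERVER", "redis")

def cache_url(user, passwd, server):
    # Hypothetical helper: builds a redis:// URL, adding credentials
    # only when a password is configured. Port 6379 and db 0 are assumed.
    auth = f"{user}:{passwd}@" if passwd else ""
    return f"redis://{auth}{server}:6379/0"

print(cache_url(CACHE_USER, CACHE_PASSWD, CACHE_SERVER))
```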
Thank you very much for your collaboration, and I hope you can continue contributing. Regards
Maybe make the deploy enter a "deleting" state, and delete the deploy from the DB after the resource is destroyed.