hsanderr opened this issue 2 years ago
We had problems with Kafka as well. In our case, the storage for the "logs" volume wasn't enough. We fixed it by increasing the storage from 200Mi to several GiB (the "logs" volumeClaimTemplate in thirdparty.yml). It looks pretty stable now, but we are still monitoring it from time to time to see whether it becomes problematic again.
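For anyone hitting the same thing, this is roughly what the change looks like in thirdparty.yml. This is only a sketch: the surrounding StatefulSet fields are abbreviated, the access mode is assumed, and 5Gi is just an example value in the "several GiB" range we settled on.

```yaml
# Sketch of the "logs" volumeClaimTemplate in thirdparty.yml (abbreviated).
volumeClaimTemplates:
  - metadata:
      name: logs
    spec:
      accessModes:
        - ReadWriteOnce
      resources:
        requests:
          storage: 5Gi   # was 200Mi; pick a size that fits your retention settings
```

Note that Kubernetes does not allow changing volumeClaimTemplates on an existing StatefulSet in place, so you may need to either expand the existing PVCs directly (if your storage class supports volume expansion) or recreate the StatefulSet.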
Are there any further findings on this problem? We are facing the same issue with our AKS deployment of thingsboard-pe.
It seems very strange to me, especially given the following topic configuration:
value: "js_eval.requests:100:1:delete --config=retention.ms=60000 --config=segment.bytes=26214400 --config=retention.bytes=104857600,tb_transport.api.requests:30:1:delete --config=retention.ms=60000 --config=segment.bytes=26214400 --config=retention.bytes=104857600,tb_rule_engine:30:1:delete --config=retention.ms=60000 --config=segment.bytes=26214400 --config=retention.bytes=104857600"
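If it helps, this is roughly how you can check which retention and segment settings are actually in effect on the broker. It is only a sketch: the namespace, the location of the Kafka CLI scripts inside the container, and the listener address all depend on your image and deployment.

```bash
# Describe the effective config of one of the ThingsBoard topics.
# Assumes the Kafka CLI tools are on the PATH inside tb-kafka-0 and the broker
# listens on localhost:9092; the "thingsboard" namespace is an assumption.
kubectl exec -it tb-kafka-0 -n thingsboard -- \
  kafka-configs.sh --bootstrap-server localhost:9092 \
  --entity-type topics --entity-name tb_rule_engine --describe
```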
For more information: our storage usage (percent used) shows that we needed to increase both the logs and the app-logs volumes.
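For monitoring, something along these lines can be used to keep an eye on how full the volumes actually are. Again a sketch: the namespace is an assumption, and df simply lists all mounts inside the pod, so look for the mount point that thirdparty.yml assigns to the "logs" volume.

```bash
# List the persistent volume claims and their requested sizes.
kubectl get pvc -n thingsboard

# Check actual usage inside the Kafka pod; find the mount of the "logs" volume.
kubectl exec -it tb-kafka-0 -n thingsboard -- df -h
```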
I have installed TB via microservices on Azure Kubernetes Service (we followed this guide). It worked for a few days, but suddenly I was no longer able to send HTTP requests to the platform. I haven't changed any .yml file. When I run "kubectl get pods", I get:
tb-kafka-0 logs: logs-tb-kafka-0.txt
tb-node-1 logs: logs-tb-node-1.txt
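For reference, the pod status and the attached logs were collected with commands roughly like the ones below (the "thingsboard" namespace is an assumption on my part and may differ in your deployment):

```bash
# Check pod status, then dump the logs of the two suspect pods to files.
kubectl get pods -n thingsboard
kubectl logs tb-kafka-0 -n thingsboard > logs-tb-kafka-0.txt
kubectl logs tb-node-1 -n thingsboard > logs-tb-node-1.txt
```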
Can anyone help me with this?