rudokemper closed this issue 2 months ago
FWIW, I do not see this when I run locally from the main branch, with the following change to `docker-compose-non-dev.yml`:

`x-superset-image: &superset-image guardiancr.azurecr.io/superset-docker:3.0.3_20240822-1541`
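For context, that line replaces the image anchor near the top of `docker-compose-non-dev.yml`; the services then reference it via the YAML alias. A minimal sketch (the registry path and tag above are specific to my setup, and the service layout here is abbreviated from the real file):

```yaml
# docker-compose-non-dev.yml (abbreviated sketch)
x-superset-image: &superset-image guardiancr.azurecr.io/superset-docker:3.0.3_20240822-1541

services:
  superset:
    image: *superset-image  # each Superset service reuses the shared anchor
```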
I do see the "UserWarning: Using the in-memory storage for tracking rate limits as no storage was explicitly specified." warning exactly once, from the `superset_init` service of all places, but that init does finish.
As for memory usage when I run locally: I've got 40GB, and `superset_app` uses 0.5% of that while `superset_worker` uses 2% (on first start, before I interact with the app).
Thanks, that's helpful. I have also seen the UserWarning in the prod server logs (but similarly, it finishes).
I have made the same changes but nevertheless get this error. At least now I know it's a local issue on my end. I'll try to figure it out and leave a solution in the documentation if it's relevant enough.
Well! I tried to run it again today, in the interest of diagnosing any memory limits in my local Docker setup, and this time it worked just fine. Maybe my RAM was being exhausted by other processes running yesterday? No idea as of yet. But I'm now unblocked, and it's sufficiently clear that this is just a local problem on my machine to close this issue :shrug:
I'm encountering an issue where my `superset_app` container is terminated due to a timeout error when running locally (using `docker-compose -f docker-compose-non-dev.yml up`). It does not seem related to recent changes like 16e90704a8e6fba31e0b4032aae176b96a162342 or a2e4c7dc3997a148e4984f33a1e3abd032592e25, as the same error occurs when running the codebase from prior commits.

Logs:
Signal 9 is a SIGKILL, so it seems like the worker is being terminated for exceeding a memory limit (i.e. OOM-killed).
I experimented with increasing memory limits and reservations in the Docker Compose config like this:
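(The exact snippet I used isn't shown above; the following is a sketch of the kind of change I mean, with hypothetical values — `deploy.resources` is honored by Compose v2 even outside Swarm mode:)

```yaml
# Sketch: per-service memory limit/reservation in docker-compose-non-dev.yml
# (4g/2g are illustrative values, not a recommendation)
services:
  superset:
    deploy:
      resources:
        limits:
          memory: 4g        # hard cap; exceeding it triggers the OOM killer
        reservations:
          memory: 2g        # soft guarantee requested from the host
```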
A bit unsure on next steps, so logging this for now. It is blocking me from testing changes to our Superset deployment locally before deploying to production.