When running the cache warmup, a large number of concurrent requests are exchanged between the `extractor`, `splash`, and `lighthouse` containers. Depending on the concurrency level, this can put considerable memory pressure on the Docker host.
In addition, long-running Python applications are always prone to memory leaks.
To mitigate both issues, it would be good to set memory limits in the docker-compose files, so that a container either gets killed by Docker when it exceeds its limit, or the application inside the container simply fails to allocate more memory once the limit is reached (i.e. it has a chance to work around the issue). The second approach would be preferable.
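As a rough sketch, per-service limits could be declared in the compose file along these lines (service names taken from above; the concrete limit and reservation values are placeholders that would need tuning against real warmup workloads):

```yaml
services:
  extractor:
    deploy:
      resources:
        limits:
          memory: 1g        # hard cap: Docker OOM-kills the container beyond this
        reservations:
          memory: 512m      # soft target used when the host is under memory pressure
  splash:
    deploy:
      resources:
        limits:
          memory: 2g
  lighthouse:
    deploy:
      resources:
        limits:
          memory: 1g
```

Note that the hard limit alone only gives the "kill the container" behaviour; for the preferred second approach, the application inside the container would additionally have to notice failed allocations (or track its own usage) and back off.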