Closed: godber closed this issue 5 months ago
After further discussions with Peter and Joseph there are a number of other things that limit asset size:
It's possible that our choice of zip archives for assets makes them not streamable ... so we might be a bit stuck there too.
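The streamability concern can be illustrated with Python's stdlib `zipfile` (a sketch for illustration only, not Teraslice code): a zip's central directory and End of Central Directory record live at the *end* of the archive, so a reader must seek backwards before it can list or extract entries, which is why a zip generally cannot be consumed as a forward-only stream.

```python
import io
import zipfile

# Build a small zip archive in memory (a stand-in for an asset bundle;
# the file names and contents here are made up for illustration).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("asset/asset.json", '{"name": "example", "version": "1.0.0"}')
    zf.writestr("asset/index.js", "module.exports = {};")
data = buf.getvalue()

# The End of Central Directory signature (PK\x05\x06) sits after every
# entry (entries start with PK\x03\x04), so the index is at the tail.
eocd_offset = data.rfind(b"PK\x05\x06")

# A forward-only wrapper that refuses to seek, like a network stream.
class ForwardOnly(io.RawIOBase):
    def __init__(self, b: bytes):
        self._b = io.BytesIO(b)
    def readable(self) -> bool:
        return True
    def read(self, n: int = -1) -> bytes:
        return self._b.read(n)
    def seekable(self) -> bool:
        return False

# zipfile needs to seek to the tail index first, so this fails.
try:
    zipfile.ZipFile(ForwardOnly(data))
    streamable = True
except Exception:
    streamable = False
```

With both checks together: the entry data comes first, the index last, and a non-seekable stream cannot be opened at all.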
Regardless, we are, at the very least, going to look at reducing overall memory usage during the asset load process so that larger assets can be handled.
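One direction such a reduction could take, sketched in Python with the stdlib `zipfile` module rather than Teraslice's actual code: decompress archive members in fixed-size chunks instead of materializing each whole member in memory, so peak usage tracks the chunk size rather than the member size.

```python
import io
import zipfile

# Build a zip with one sizeable member (a stand-in for an asset file;
# the name and size here are made up for illustration).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("asset/data.bin", b"x" * (1 << 20))  # 1 MiB payload

def extract_chunked(zip_bytes: bytes, name: str, chunk_size: int = 64 * 1024) -> int:
    """Stream one member out of the archive chunk_size bytes at a time.

    Peak memory stays near chunk_size, instead of the full member size
    that ZipFile.read(name) would allocate. Returns total bytes read.
    """
    total = 0
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as zf:
        with zf.open(name) as member:
            while chunk := member.read(chunk_size):
                total += len(chunk)  # real code would write/hash each chunk
    return total

total = extract_chunked(buf.getvalue(), "asset/data.bin")
```

The same idea applies regardless of language: replace whole-buffer reads with incremental ones anywhere the asset bytes are handled.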
Steps to recreate this issue locally:

1. Mock up a local Teraslice in Kubernetes by running `yarn k8s:minio --asset-storage='s3'`.
2. Upload the 60MB zipped asset using `earl`, or add the zipped asset to the autoload folder to skip this step.
3. Create and register a job that uses the 60MB asset, then start the job.
4. Run `kubectl get pods -n ts-dev1` to view all the running pods in the namespace.
5. The pod whose name starts with `ts-exc` should be seen restarting with a status of `OOMKilled`.
The changes in https://github.com/terascope/teraslice/pull/3598 are sufficient to resolve this issue.
We have been doing further testing of the S3-backed asset store, and we recently tested with an internal asset that was 60MB zipped. Unzipped, the asset had the following composition:
It should be sufficient to create a mock asset with roughly the same characteristics and get a job to start up with this asset. The execution controller should then OOM when run in k8s using the default memory limit of `512MB`. We tested increasing the memory limit (to 6GB) and the execution controller did not OOM. Here is some of the log output:
If necessary, I can supply the internal asset separately.
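For anyone reproducing the 6GB test, the raised limit can be expressed in the job definition when running on the Kubernetes backend. This is a minimal sketch only: the job name, asset name, and operations are placeholders, and the `memory_execution_controller` key (a byte count applied to the execution controller pod) is an assumption about the Teraslice job schema that should be checked against the docs for your version.

```json
{
  "name": "large-asset-test",
  "assets": ["example-large-asset"],
  "memory_execution_controller": 6442450944,
  "operations": [
    { "_op": "test-reader" },
    { "_op": "noop" }
  ]
}
```

Here `6442450944` is 6GiB; without such a setting the pod falls back to the default 512MB limit described above.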
cc @busma13