Lithops upgrade

Set up the lithops config in ~/.lithops/config. The configs should differ between environments (prod/staging).
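For reference, a minimal sketch of what a per-environment ~/.lithops/config might contain. All values are placeholders; the section and key names follow lithops' AWS backend configuration, but check them against the lithops docs for the version being installed:

```yaml
# Placeholder sketch of ~/.lithops/config for a staging environment
lithops:
    backend: aws_lambda
    storage: aws_s3

aws:
    region: eu-west-1          # placeholder region

aws_lambda:
    execution_role: arn:aws:iam::123456789012:role/lithops-role   # placeholder ARN
    runtime: metaspace-aws-lambda:3.2.0.a

# storage (aws_s3) settings omitted -- copy them from the existing env config
```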
Update the lithops version in requirements.txt.
Commit this change and deploy it on staging.
Run pip install -r requirements.txt in the local envs.
Select the staging config and build Docker images for Lambda and EC2.
For the Lambda function:
Create the Docker image and upload it to ECR: cd engine/docker/lithops_aws_lambda && ./build_and_push.sh 3.2.0.a
On the staging instance, change runtime to metaspace-aws-lambda:3.2.0.a in the lithops -> aws_lambda section of config.json.
On the staging instance, manually run the load_ds step [1] to create Lambda functions for the different amounts of RAM (512, 1024, 2048, 4096, 8192 MB).
For each Lambda function, change the CloudWatch log group to /aws/lambda/staging-lithops.
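The log-group change can be scripted rather than clicked through per function. A hedged sketch using boto3's update_function_configuration and its LoggingConfig parameter; the function names in the usage comment are placeholders, so list the real lithops-created functions first:

```python
# Target log group for all staging lithops Lambda functions (from the step above)
LOG_GROUP = '/aws/lambda/staging-lithops'

def log_group_update_args(function_name, log_group=LOG_GROUP):
    """Build kwargs for lambda.update_function_configuration that point a
    function's logs at the shared staging log group."""
    return {
        'FunctionName': function_name,
        'LoggingConfig': {'LogGroup': log_group},
    }

# Usage (needs boto3 and AWS credentials; names below are hypothetical --
# find the real ones with `aws lambda list-functions`):
# import boto3
# client = boto3.client('lambda')
# for fn in ['lithops-runtime-512', 'lithops-runtime-1024']:  # placeholders
#     client.update_function_configuration(**log_group_update_args(fn))
```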
For EC2:
Create the Docker image and upload it to ECR: cd engine/docker/lithops_aws_ec2 && ./build_and_push.sh 3.2.0.a
Create EC2 instances with the different amounts of RAM (32, 64, 128, 256 GB) based on Ubuntu 22.04 and run this [2].
On the staging instance, change runtime to metaspace-aws-ec2:3.2.0.a in the lithops -> aws_ec2 section of config.json, and also update instance_id and ec2_instances.
On the staging instance, manually run the load_ds step [1] to run lithops on each created EC2 instance (32, 64, 128, 256 GB).
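Running load_ds against each instance amounts to pointing the lithops aws_ec2 section at one instance_id at a time. A hedged sketch; the instance IDs are placeholders for the instances created above, and config_for_instance is a hypothetical helper (instance_id is how lithops' aws_ec2 backend selects an existing instance):

```python
# RAM in GB -> EC2 instance id (all IDs are placeholders)
EC2_INSTANCES = {
    32: 'i-0aaaaaaaaaaaaaaaa',
    64: 'i-0bbbbbbbbbbbbbbbb',
    128: 'i-0cccccccccccccccc',
    256: 'i-0dddddddddddddddd',
}

def config_for_instance(lithops_config, instance_id):
    """Return a copy of the lithops config bound to one EC2 instance,
    leaving the original config untouched."""
    cfg = dict(lithops_config)
    cfg['aws_ec2'] = {**cfg.get('aws_ec2', {}), 'instance_id': instance_id}
    return cfg

# Usage with the names from [1] (not runnable outside the engine env):
# for ram_gb, instance_id in EC2_INSTANCES.items():
#     executor = Executor(config_for_instance(config['lithops'], instance_id), perf)
#     job = ServerAnnotationJob(executor, Dataset.load(DB(), ds_id), perf, use_cache=False)
#     job.pipe.load_ds(use_cache=False)
```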
[1]
It is easiest to iteratively hardcode the runtime memory after this block:
from sm.engine.annotation_lithops.executor import Executor
from sm.engine.annotation_lithops.io import load_cobj, load_cobjs
from sm.engine.dataset import Dataset
from sm.engine.config import SMConfig
from sm.engine.utils.perf_profile import NullProfiler
from sm.engine.annotation_lithops.annotation_job import ServerAnnotationJob
from sm.engine.db import DB, ConnectionPool
from sm.engine.es_export import ESExporter
from sm.engine.util import init_loggers

# Load the engine config
config = SMConfig()
config.set_path('/opt/dev/metaspace/metaspace/engine/conf/config.json')
config = config.get_conf()

# Set up the lithops executor, logging, and DB/ES connections
perf = NullProfiler()
executor = Executor(config['lithops'], perf)
init_loggers(config['logs'])
db = DB()
es = ESExporter(db)
connection_pool = ConnectionPool(config['db'])
connection_pool.__enter__()  # keep the pool open for the interactive session
globals().update(locals())  # expose names when pasting into a REPL

# Run the load_ds step for a sample dataset
ds_id = '2023-09-11_13h55m06s'
job = ServerAnnotationJob(executor, Dataset.load(DB(), ds_id), perf, use_cache=False)
pipe = job.pipe
pipe.clean()
pipe.load_ds(use_cache=False)
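The "iteratively hardcode runtime memory" note above can be sketched concretely: build one lithops config per Lambda memory size and re-run load_ds with each. A hedged sketch; configs_for_sizes is a hypothetical helper, while runtime_memory is the standard lithops aws_lambda setting:

```python
# Lambda memory sizes to create functions for (from the upgrade steps)
MEMORY_SIZES_MB = [512, 1024, 2048, 4096, 8192]

def configs_for_sizes(lithops_config, sizes=MEMORY_SIZES_MB):
    """Yield (size, config) pairs with runtime_memory hardcoded per size,
    leaving the original config untouched."""
    for mb in sizes:
        cfg = dict(lithops_config)
        cfg['aws_lambda'] = {**cfg.get('aws_lambda', {}), 'runtime_memory': mb}
        yield mb, cfg

# Usage with the names from the block above (not runnable outside the engine env):
# for mb, cfg in configs_for_sizes(config['lithops']):
#     executor = Executor(cfg, perf)
#     job = ServerAnnotationJob(executor, Dataset.load(DB(), ds_id), perf, use_cache=False)
#     job.pipe.load_ds(use_cache=False)
```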