aws-neuron / aws-neuron-sdk

Powering AWS purpose-built machine learning chips. Blazing fast and cost-effective, natively integrated into PyTorch and TensorFlow, and integrated with your favorite AWS services.
https://aws.amazon.com/machine-learning/neuron/

[TorchServe] Load multiple models on a single node #460

Closed · RobinFrcd closed this issue 2 years ago

RobinFrcd commented 2 years ago

I currently have 4 models and, as suggested in https://github.com/aws/aws-neuron-sdk/issues/441, I've put everything in a single handler.

The initialize method looks like this:

self.models = dict()
model_dir = properties.get("model_dir")
# Pick up every compiled TorchScript file in the model dir and infer its
# type from the file name.
for model_file in glob.glob(os.path.join(model_dir, "*.pt")):
    if "struct" in model_file:
        model_type = ModelType.structure
    elif "contour" in model_file:
        model_type = ModelType.contour
    elif "view" in model_file:
        model_type = ModelType.view
    print(f"🏋️‍ Loading {model_file} (Type: {model_type})")
    self.models[model_type] = torch.jit.load(model_file, map_location=self.device)
    self.models[model_type].eval()
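
For reference, the snippet assumes a small ModelType enum keying the loaded models; a minimal sketch (the member names are taken from the code above, everything else is hypothetical) could look like:

from enum import Enum, auto

class ModelType(Enum):
    # One member per compiled model, matched against the .pt file names above.
    structure = auto()
    contour = auto()
    view = auto()

One caveat with the loop above: if a .pt file matches none of the three patterns, model_type is either undefined (on the first iteration) or silently reuses the previous iteration's value, so an explicit else branch may be worth adding.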

When launching the .mar locally (with the JIT files), everything runs fine. But when launching it on Inferentia, I get:

2022-08-02T21:15:37,334 [INFO ] W-9001-cv_models_220802_n-stdout MODEL_LOG - 🏋️‍ Loading /home/model-server/tmp/models/3e8087402ac8440faeee0b711778e4f8/brain_contours_220729.neuron.ts.pt (Type: contour)
2022-08-02T21:15:39,196 [INFO ] epollEventLoopGroup-4-1 ACCESS_LOG - /10.24.39.101:47602 "GET /metrics HTTP/1.1" 200 4
2022-08-02T21:15:39,197 [INFO ] epollEventLoopGroup-4-1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:b59f0c1b8864,timestamp:1659474939
2022-08-02T21:15:54,067 [INFO ] epollEventLoopGroup-4-1 ACCESS_LOG - /10.24.39.101:47602 "GET /metrics HTTP/1.1" 200 5
2022-08-02T21:15:54,070 [INFO ] epollEventLoopGroup-4-1 TS_METRICS - Requests2XX.Count:1|#Level:Host|#hostname:b59f0c1b8864,timestamp:1659474939
2022-08-02T21:15:54,508 [WARN ] W-9000-cv_models_220802_n-stderr MODEL_LOG - 2022-Aug-02 21:15:54.0505    33:33    ERROR  TDRV:dmem_alloc                              Failed to alloc HOST memory: 2097152
2022-08-02T21:15:54,509 [WARN ] W-9000-cv_models_220802_n-stderr MODEL_LOG - 2022-Aug-02 21:15:54.0507    33:33    ERROR  TDRV:init_one_var                            Failed to allocate chunk pointer
2022-08-02T21:15:54,509 [WARN ] W-9000-cv_models_220802_n-stderr MODEL_LOG - 2022-Aug-02 21:15:54.0507    33:33    ERROR  TDRV:kbl_model_add                           create_io_chunks() error
2022-08-02T21:15:54,509 [WARN ] W-9000-cv_models_220802_n-stderr MODEL_LOG - 2022-Aug-02 21:15:54.0507    33:33    ERROR  NMGR:dlr_kelf_stage                          Failed to load subgraph
2022-08-02T21:15:54,509 [WARN ] W-9000-cv_models_220802_n-stderr MODEL_LOG - 2022-Aug-02 21:15:54.0507    33:33    ERROR  NMGR:stage_kelf_models                       Failed to stage graph: kelf-a.json to NeuronCore
2022-08-02T21:15:54,510 [WARN ] W-9000-cv_models_220802_n-stderr MODEL_LOG - 2022-Aug-02 21:15:54.0507    33:33    ERROR  NMGR:kmgr_load_nn_post_metrics               Failed to load NN: 1.11.4.0+97f99abe4-/tmp/tmpevjxk1ea, err: 4
2022-08-02T21:15:54,909 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG - Backend worker process died.
2022-08-02T21:15:54,910 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG - Traceback (most recent call last):
2022-08-02T21:15:54,911 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 210, in <module>
2022-08-02T21:15:54,913 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -     worker.run_server()
2022-08-02T21:15:54,914 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 181, in run_server
2022-08-02T21:15:54,914 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -     self.handle_connection(cl_socket)
2022-08-02T21:15:54,915 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 139, in handle_connection
2022-08-02T21:15:54,916 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -     service, result, code = self.load_model(msg)
2022-08-02T21:15:54,916 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/ts/model_service_worker.py", line 104, in load_model
2022-08-02T21:15:54,917 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -     service = model_loader.load(
2022-08-02T21:15:54,918 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/ts/model_loader.py", line 151, in load
2022-08-02T21:15:54,920 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -     initialize_fn(service.context)
2022-08-02T21:15:54,920 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -   File "/home/model-server/tmp/models/3e8087402ac8440faeee0b711778e4f8/handler.py", line 73, in initialize
2022-08-02T21:15:54,921 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -     self.models[model_type] = self._load_torchscript_model(model_file)
2022-08-02T21:15:54,922 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/ts/torch_handler/base_handler.py", line 115, in _load_torchscript_model
2022-08-02T21:15:54,922 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -     return torch.jit.load(model_pt_path, map_location=self.device)
2022-08-02T21:15:54,923 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/torch_neuron/jit_load_wrapper.py", line 13, in wrapper
2022-08-02T21:15:54,924 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -     script_module = jit_load(*args, **kwargs)
2022-08-02T21:15:54,924 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -   File "/home/venv/lib/python3.8/site-packages/torch/jit/_serialization.py", line 162, in load
2022-08-02T21:15:54,925 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG -     cpp_module = torch._C.import_ir_module(cu, str(f), map_location, _extra_files)
2022-08-02T21:15:54,925 [INFO ] W-9000-cv_models_220802_n-stdout MODEL_LOG - RuntimeError: Could not load the model status=4 message=Allocation Failure
2022-08-02T21:15:54,964 [INFO ] epollEventLoopGroup-5-1 org.pytorch.serve.wlm.WorkerThread - 9000 Worker disconnected. WORKER_STARTED
2022-08-02T21:15:54,970 [DEBUG] W-9000-cv_models_220802_n org.pytorch.serve.wlm.WorkerThread - System state is : WORKER_STARTED

On my local machine, the models take about 3 GB of RAM. On Inferentia I'm running an inf1.xlarge with 7.5 GB of RAM available, so I guess it's not a RAM issue here.
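
One detail worth noting from the log: two workers (W-9000 and W-9001) are starting for the same archive, and each TorchServe worker is a separate backend process that runs initialize, so every worker loads its own copy of all four Neuron models on both the host and the NeuronCores. A minimal sketch of pinning the archive to a single worker through the TorchServe management API (assuming the default management port 8081; cv_models is a placeholder, use whatever GET /models reports for your archive):

import json
from urllib import request

# Ask TorchServe to run exactly one backend worker for this archive so the
# four models are only loaded once. "cv_models" is a placeholder name.
url = "http://localhost:8081/models/cv_models?min_worker=1&max_worker=1"
with request.urlopen(request.Request(url, method="PUT")) as resp:
    print(json.loads(resp.read()))

The same limit can usually be set up front with default_workers_per_model=1 in config.properties.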

I've built my models with:

neuron-cc==1.11.4.0+97f99abe4
torch-neuron==1.10.2.2.3.0.0
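
As a quick sanity check, it can also help to confirm that the packages inside the serving container line up with the versions used at compile time, since a mismatch between compile time and serve time can surface as load errors. A small sketch using only the standard library (the distribution names are the ones listed above, plus torch):

from importlib.metadata import version, PackageNotFoundError  # Python 3.8+

# Print the versions actually installed in the serving environment.
for pkg in ("torch", "torch-neuron", "neuron-cc"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "not installed")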

Am I doing something wrong here? Thanks!

RobinFrcd commented 2 years ago

Well, restarting the ECS instance solved the issue. I'm not sure what the problem was here :thinking:
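
If this comes back, one thing to check before rebooting (a sketch, Linux-only; the module name comes from the traceback above) is whether stale backend worker processes are still alive and holding memory:

import subprocess

# List any leftover TorchServe backend workers still running on the host.
out = subprocess.run(
    ["pgrep", "-af", "model_service_worker"],
    capture_output=True, text=True,
)
print(out.stdout or "no stray workers found")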