Project-MONAI / MONAILabel

MONAI Label is an intelligent open source image labeling and learning tool.
https://docs.monai.io/projects/label
Apache License 2.0

Run monai-label in kubernetes #1728

Open ism93 opened 3 months ago

ism93 commented 3 months ago

Hello, is it possible to install monai-label on Kubernetes? I'm trying to make it work via a classic Deployment, but I'm encountering the following problems in my pods:

[k8sgpu-01|monai-dev] imarson_adm@pc12:~/monai$ k logs monai-label-6cd85976c5-6rjtl

=============
== PyTorch ==
=============

NVIDIA Release 24.03 (build 85286408)
PyTorch Version 2.3.0a0+40ec155e58

Container image Copyright (c) 2024, NVIDIA CORPORATION & AFFILIATES. All rights reserved.

Copyright (c) 2014-2024 Facebook Inc.
Copyright (c) 2011-2014 Idiap Research Institute (Ronan Collobert)
Copyright (c) 2012-2014 Deepmind Technologies (Koray Kavukcuoglu)
Copyright (c) 2011-2012 NEC Laboratories America (Koray Kavukcuoglu)
Copyright (c) 2011-2013 NYU (Clement Farabet)
Copyright (c) 2006-2010 NEC Laboratories America (Ronan Collobert, Leon Bottou, Iain Melvin, Jason Weston)
Copyright (c) 2006 Idiap Research Institute (Samy Bengio)
Copyright (c) 2001-2004 Idiap Research Institute (Ronan Collobert, Samy Bengio, Johnny Mariethoz)
Copyright (c) 2015 Google Inc.
Copyright (c) 2015 Yangqing Jia
Copyright (c) 2013-2016 The Caffe contributors
All rights reserved.

Various files include modifications (c) NVIDIA CORPORATION & AFFILIATES. All rights reserved.

This container image and its contents are governed by the NVIDIA Deep Learning Container License. By pulling and using the container, you accept the terms and conditions of this license: https://developer.nvidia.com/ngc/nvidia-deep-learning-container-license

NOTE: CUDA Forward Compatibility mode ENABLED. Using CUDA 12.4 driver version 550.54.14 with kernel driver version 535.129.03. See https://docs.nvidia.com/deploy/cuda-compatibility/ for details.

NOTE: The SHMEM allocation limit is set to the default of 64MB. This may be insufficient for PyTorch. NVIDIA recommends the use of the following flags: docker run --gpus all --ipc=host --ulimit memlock=-1 --ulimit stack=67108864 ...

=> And my pod is in CrashLoopBackOff. Do you have any advice for running it?

Thank you

diazandr3s commented 3 months ago

Hi @ism93,

I'd recommend first making sure the MONAI Label Docker container works on this instance. Have you tried that? What's the use case here? How are you planning to orchestrate the different pods including MONAI Label?
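For reference, a minimal smoke test on a single GPU node could look roughly like the sketch below. It is based on the public MONAI Label README, so treat the image tag, the sample radiology app, the Task09_Spleen dataset and the deepedit model as assumptions to adapt to your setup:

```bash
# Pull and enter the MONAI Label container (image/tag assumed; adjust as needed)
docker run --gpus all --rm -ti --ipc=host -p 8000:8000 projectmonai/monailabel:latest bash

# Inside the container: download a sample app and dataset, then start the server
monailabel apps --download --name radiology --output apps
monailabel datasets --download --name Task09_Spleen --output datasets
monailabel start_server --app apps/radiology \
    --studies datasets/Task09_Spleen/imagesTr \
    --conf models deepedit
```

If this works but the same image crash-loops under Kubernetes, the difference is usually in how the pod spec maps the docker run flags (GPU access, shared memory), which the log above already hints at.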

Please let us know,

ism93 commented 3 months ago

Hi, thank you for your answer. I can't run it with Docker because my GPU cluster runs on Kubernetes. Yes, I'm trying to run the monai-label image.

Thank you

diazandr3s commented 3 months ago

Hi @ism93,

As far as I know, Docker is used to create container images and run individual containers, while Kubernetes uses those same container images as its basic unit of deployment and manages them across multiple hosts.

I'd suggest you first confirm that the MONAI Label Docker container works on your end before orchestrating more containers.
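If the container itself is fine, a CrashLoopBackOff like the one above often comes down to the flags the NVIDIA banner recommends (--gpus all, --ipc=host, larger shared memory) not being translated into the pod spec. A hypothetical Deployment sketch follows; the image, app path, PVC name and shm size are placeholders, and the start_server flags should be checked against `monailabel start_server --help`:

```yaml
# Hypothetical Deployment sketch -- image, paths, PVC name and sizes are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monai-label
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monai-label
  template:
    metadata:
      labels:
        app: monai-label
    spec:
      # hostIPC: true            # alternative, closer analog of `--ipc=host`
      containers:
        - name: monai-label
          image: projectmonai/monailabel:latest   # assumption: official image; swap in your own build
          command: ["bash", "-c"]
          args:
            # flags are illustrative; check `monailabel start_server --help`
            - monailabel start_server --app /opt/apps/radiology --studies /data/studies
          ports:
            - containerPort: 8000
          resources:
            limits:
              nvidia.com/gpu: 1          # GPU via the NVIDIA device plugin (counterpart of `--gpus all`)
          volumeMounts:
            - name: dshm
              mountPath: /dev/shm        # enlarge shared memory beyond the 64 MB default from the log
            - name: studies
              mountPath: /data/studies
      volumes:
        - name: dshm
          emptyDir:
            medium: Memory
            sizeLimit: 8Gi               # placeholder size
        - name: studies
          persistentVolumeClaim:
            claimName: monai-studies     # placeholder PVC holding your images
```

It is also worth checking `kubectl describe pod` and `kubectl logs --previous` on the crashing pod, since the banner alone doesn't show the actual error that terminates the process.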