Open farhan888 opened 7 months ago
@farhan888 The easiest way is to use a Docker container. You can launch multiple containers on the same host.
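For anyone landing here, the container-per-environment approach looks roughly like the sketch below. It is a hedged example, not an official recipe: the container names, ports, and model-store paths are arbitrary, and it assumes the `pytorch/torchserve` image from Docker Hub with one `.mar` file per model store.

```shell
# Sketch: two isolated TorchServe instances on one host, each with its own
# model store (and, if needed, its own custom image with a different torch).
docker run -d --name ts-model-a -p 8080:8080 -p 8081:8081 \
  -v "$(pwd)/model-store-a:/home/model-server/model-store" \
  pytorch/torchserve:latest

docker run -d --name ts-model-b -p 9080:8080 -p 9081:8081 \
  -v "$(pwd)/model-store-b:/home/model-server/model-store" \
  pytorch/torchserve:latest
```

The host ports are remapped so the two inference/management APIs do not collide; conflicting torch versions would require building a separate image per model, which is the overhead the comments below object to.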
@lxning I have been following that approach. However, creating a new container for every environment is quite redundant when the difference is only a single package (in my case it is usually torch). Adding this feature would make life a lot easier: we could simply transfer the .mar files to the one container where TorchServe is already serving. Thanks for the suggestion though.
I agree with what @farhan888 is pointing out. If this can be enabled in PyTorch, it will help us a lot.
🚀 The feature
Presently, TorchServe operates within a single Python environment, limiting its flexibility for users who rely on multiple Python environments across their machine learning projects. This proposal aims to extend TorchServe's capabilities by introducing support for Python virtual environments. Enabling virtual environments would let users deploy models in isolated Python environments, ensuring compatibility, version control, and proper dependency management.
Motivation, pitch
I am currently facing a challenge deploying two distinct machine learning models through TorchServe. The models were developed by two different developers, each using their own conda environment. As a result, each model requires its own specific dependencies, including different torch versions and site-packages, to function properly.
The issue stems from TorchServe's current limitation of running within a single conda environment. Consequently, deploying these models concurrently becomes problematic, as TorchServe cannot accommodate each model's distinct dependency requirements.
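For reference, TorchServe does offer a partial workaround today: per-model pip dependency installation. The sketch below is a hedged outline (the model name, file names, and handler are placeholders); it relies on the documented `install_py_dep_per_model` config option and the `-r`/`--requirements-file` flag of `torch-model-archiver`.

```shell
# 1) In config.properties, ask TorchServe to install each model's
#    pip dependencies when the model is loaded:
#      install_py_dep_per_model=true
#
# 2) Bundle a requirements.txt into the .mar when archiving
#    (all names below are illustrative):
torch-model-archiver --model-name model_a --version 1.0 \
  --serialized-file model_a.pt --handler handler.py \
  -r requirements.txt
```

Note that this installs the packages into TorchServe's single serving environment rather than an isolated interpreter, so it cannot resolve conflicting torch versions between models — which is exactly why this virtual-environment feature is being requested.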