TheFoundryVisionmongers / nuke-ML-server

A Nuke client plug-in which connects to a Python server to allow Machine Learning inference in Nuke.

Support multiple locations for models #18

Open ldmoser opened 3 years ago

ldmoser commented 3 years ago

Hi there! Thank you so much for sharing this implementation.

The current code lists the models under the directory in which the server was launched, and that seems somewhat restrictive.

Would it be possible for you to add support for multiple locations for the models that the server can see?

Ideally I would like to launch the server and specify, perhaps in an environment variable, multiple directories where it could search for models, separated by ":" (following Linux conventions).
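Purely as a sketch, the lookup could be as simple as this (the variable name ML_SERVER_MODEL_PATH and the "models" fallback are my own placeholders, not anything the server currently reads):

```python
# Hypothetical sketch: gather model directories from a colon-separated
# environment variable, falling back to the current "models" directory.
# ML_SERVER_MODEL_PATH is an assumed name, not part of the actual server.
import os

def discover_model_dirs():
    roots = os.environ.get("ML_SERVER_MODEL_PATH", "models").split(":")
    found = {}
    for root in roots:
        if not os.path.isdir(root):
            continue
        for name in sorted(os.listdir(root)):
            path = os.path.join(root, name)
            # First hit wins, mirroring PATH-style precedence.
            if os.path.isdir(path) and name not in found:
                found[name] = path
    return found
```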

And perhaps "baseModel.py" should live in a separate directory, so that one could point to its location using PYTHONPATH when launching the server; that would make it possible for all custom nodes, wherever they are, to import the base class.
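Again just a sketch of what a custom model could look like under that scheme; the base class name here is assumed from the file name, and the path in the comment is a placeholder:

```python
# Hypothetical custom model living outside the server tree. With the
# directory containing baseModel.py on PYTHONPATH, e.g.
#   export PYTHONPATH=/path/to/nuke-ml-api:$PYTHONPATH
# the import resolves regardless of where this file lives.
from baseModel import Model  # class name assumed; use whatever baseModel.py exports

class MyFacilityModel(Model):
    """Site-specific model, importable without copying baseModel.py around."""
```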

How do you like that?

ringdk commented 3 years ago

Hello! Glad you're enjoying the ML Server!

Regarding model locations, baseModel.py and other nodes: we've gone with the current structure (and not added custom locations) mainly because it's simpler to see where the models are, and because locations can become confusing once you use Docker containers (which seems to be how most people are using the ML Server).

Although we haven't tried it, it's probably not difficult to either wrap the server or manually add a "sys.path.append" to your custom code. I've used symlinks in the past when I've needed to drop in new models to test, being mindful to ensure the paths are available and correct once inside the Docker container.
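For instance, a wrapper along these lines might work (the paths and the launch step are placeholders, since we haven't tried it ourselves):

```python
# Hypothetical wrapper: extend sys.path with site-specific locations
# before handing control to the server's usual entry point.
import sys

sys.path.append("/shared/nuke-ml/models")   # placeholder path
sys.path.append("/home/artist/dev/models")  # placeholder path

# ...then start the server as it would normally be started.
```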

Thanks again!


ldmoser commented 3 years ago

Hi, thanks!

Do you have an impression of whether the people using Docker containers are using them in the context of VFX pipelines, or are single users? I have a VFX pipeline in mind, hence the need for more versatility in the configuration.

I think the os.walk() call will not be affected by any changes to sys.path. So the only solution is to build the "models" dir on the fly, with symlinks to all the models this particular server is supposed to see. That may be adequate for individuals playing with some machine learning models, but for integration into a VFX pipeline it just doesn't seem to scale well: it's hard to debug and maintain.
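To spell out the workaround (directory names are placeholders), something like this would have to run before every server launch, which is exactly where the debugging and maintenance burden comes from:

```python
# Sketch of the on-the-fly symlink workaround: populate a fresh "models"
# directory with links to every model this server instance should see.
import os

def build_models_dir(models_dir, source_dirs):
    os.makedirs(models_dir, exist_ok=True)
    for src in source_dirs:
        for name in os.listdir(src):
            target = os.path.join(src, name)
            link = os.path.join(models_dir, name)
            # Skip anything already linked; stale or colliding names
            # would need manual cleanup between runs.
            if os.path.isdir(target) and not os.path.lexists(link):
                os.symlink(target, link)
```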

About the baseModel.py location: I understand it makes it easier for a single user to just test and modify on top of it. But in order to rely on this software in a pipeline, we should be able to install it as is, without local changes. In that spirit, it would greatly help to separate what is API from what are model examples that one may not need installed.

Does that help motivate implementing the changes?

And if not, would you be open to considering merge requests that improve the package in those directions?

dekekincaid commented 3 years ago

@ringdk unfortunately we can't use Docker for this.

@ldmoser We may just need Dmytro to patch it in our branch.

dekekincaid commented 3 years ago

But yes, following up on Lucio's comment, it would be nice to make this more flexible for both Docker and local installs. Docker is great for services, and we use it all the time for that, but for what is essentially going to be just another process on a render farm, using Docker adds complexity that is not necessarily a great solution for us, and I suspect for other mid/large facilities. It also makes it very hard for us to burst to the cloud.

dkorolov commented 3 years ago

From a practical point of view, for use not only as a playground but also in production, it would be good to separate the base Nuke ML server code and API from the model code. Then the base server code would have a minimum of dependencies, and would not become outdated with every update of the machine learning packages.