TheFoundryVisionmongers / nuke-ML-server

A Nuke client plug-in which connects to a Python server to allow Machine Learning inference in Nuke.
Apache License 2.0

Training Capabilities? #9

Open samhodge opened 5 years ago

samhodge commented 5 years ago

Is it possible to do training from the nuke-ML-server?

The examples only run inference, using Caffe2 from Facebook.

Is it possible to pass labels or other ground-truth data to the ML model and have it learn from the toolset? At the moment it seems to be inference only.

Where would the model checkpoints be stored?

Would the data rate be adequate?

ringdk commented 5 years ago

Great question!

This is high on our list of features, and we're hoping to have a release of something this week.

Re: passing data back to the model, yes, it's possible. We haven't explored this a lot yet but it's where we expect all of the interesting work to be.

Re: checkpoints, the TensorFlow training example we'll be adding will include a 'checkpoints' folder.
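For illustration, a minimal sketch of how weights might be saved into and restored from a 'checkpoints' folder with `tf.train.Checkpoint`; the object names and folder layout here are assumptions, not the template's actual code:

```python
import tensorflow as tf

# Any trainable objects can be tracked; here a tiny model and optimizer.
model = tf.keras.Sequential(
    [tf.keras.layers.Conv2D(8, 3, padding="same", input_shape=(None, None, 3))])
optimizer = tf.keras.optimizers.Adam()

ckpt = tf.train.Checkpoint(model=model, optimizer=optimizer)
manager = tf.train.CheckpointManager(ckpt, directory="checkpoints", max_to_keep=5)

# Called periodically during training; writes checkpoints/ckpt-N files.
save_path = manager.save()
print("Saved checkpoint to", save_path)

# At inference time, restore the most recent checkpoint.
ckpt.restore(manager.latest_checkpoint)
```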

Re: data rate, from our tests it seems training in Docker is roughly on par with training outside Docker.

johannabar commented 5 years ago

Hi @samhodge,

We have just added training capabilities through the trainingTemplateTF model. You can find instructions on how to train models and run inference with this template here.

It basically enables image-to-image training of an encoder-decoder model as follows (a rough sketch of this workflow is included after the steps):

  1. Place your ground-truth and input images in the data folder,
  2. Launch the training inside your Docker container,
  3. Run inference with your trained model in Nuke through the nuke-ML-server.
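For illustration, here is a minimal, hypothetical sketch of that image-to-image workflow in tf.keras; the data/input and data/groundtruth subfolders, layer sizes, and training settings are assumptions and do not mirror the template's actual code:

```python
import os
import numpy as np
import tensorflow as tf

def load_image_pairs(data_dir="data", size=(256, 256)):
    """Load matching (input, ground-truth) image pairs from the data folder."""
    inputs, targets = [], []
    for name in sorted(os.listdir(os.path.join(data_dir, "input"))):
        x = tf.keras.preprocessing.image.load_img(
            os.path.join(data_dir, "input", name), target_size=size)
        y = tf.keras.preprocessing.image.load_img(
            os.path.join(data_dir, "groundtruth", name), target_size=size)
        inputs.append(np.asarray(x, dtype=np.float32) / 255.0)
        targets.append(np.asarray(y, dtype=np.float32) / 255.0)
    return np.stack(inputs), np.stack(targets)

def build_encoder_decoder(channels=3):
    """A toy single-scale encoder-decoder (the template's model is multi-scale)."""
    inp = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = tf.keras.layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    x = tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
    out = tf.keras.layers.Conv2DTranspose(channels, 3, strides=2, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inp, out)

if __name__ == "__main__":
    x_train, y_train = load_image_pairs()
    model = build_encoder_decoder()
    model.compile(optimizer="adam", loss="mae")
    model.fit(x_train, y_train, batch_size=4, epochs=10)
    # Save the trained weights so they can be picked up for inference.
    os.makedirs("checkpoints", exist_ok=True)
    model.save_weights("checkpoints/final.ckpt")
```
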
samhodge commented 5 years ago

Is there scope to train something beyond a simple encoder-decoder?

johannabar commented 5 years ago

At the moment, the multi-scale encoder-decoder is the only supported model. We are planning on adding new model building blocks to the template along the way.

In the meantime, you are always free to modify or add your own models inside the model_builder.py file!
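For illustration, a hypothetical example of the kind of builder function you might add; the name, signature, and architecture are illustrative only and do not match the actual interface of model_builder.py:

```python
import tensorflow as tf

def build_residual_model(channels=3, filters=32, num_blocks=4):
    """A simple residual image-to-image network, as an alternative to the
    multi-scale encoder-decoder."""
    inp = tf.keras.Input(shape=(None, None, channels))
    x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(inp)
    for _ in range(num_blocks):
        skip = x
        x = tf.keras.layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = tf.keras.layers.Conv2D(filters, 3, padding="same")(x)
        x = tf.keras.layers.Add()([x, skip])  # residual connection
        x = tf.keras.layers.Activation("relu")(x)
    out = tf.keras.layers.Conv2D(channels, 3, padding="same", activation="sigmoid")(x)
    return tf.keras.Model(inp, out)
```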