phildow opened 5 years ago
Optionally separate the directories. Also allow the train and predict directories to be the same, in which case they share variables:
```
model.json
model/
  predict.pb
  train.pb
  variables/
    variables.data-00000-of-00001
    variables.index
```
Then allow a model to rewrite itself in place! We ship a centrally trained model with both prediction and training graphs and shared variables. The client uses it for prediction, but it also uses it for training and writes the results of training back to its variables directory, immediately using the locally improved model for local predictions. The variables can later be shipped back to the server for a federated round.
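The rewrite-in-place cycle can be sketched in plain Python. This is a toy simulation, not the actual bundle format: the variables directory holds a JSON file standing in for the checkpoint, the "model" is a single weight, and all helper names (`load_variables`, `save_variables`, `train_step`) are hypothetical.

```python
import json
import os
import tempfile

def load_variables(model_dir):
    # Load the shared variables from the bundle's variables/ directory.
    with open(os.path.join(model_dir, "variables", "weights.json")) as f:
        return json.load(f)

def save_variables(model_dir, variables):
    # Rewrite the variables in place so the next prediction uses them.
    with open(os.path.join(model_dir, "variables", "weights.json"), "w") as f:
        json.dump(variables, f)

def predict(model_dir, x):
    # "Prediction graph": apply the current weight to the input.
    return load_variables(model_dir)["w"] * x

def train_step(model_dir, x, y, lr=0.1):
    # "Training graph": one gradient step on squared error, then write back.
    variables = load_variables(model_dir)
    grad = (variables["w"] * x - y) * x
    variables["w"] -= lr * grad
    save_variables(model_dir, variables)

# Ship a centrally trained model, predict, train locally, predict again.
model_dir = tempfile.mkdtemp()
os.makedirs(os.path.join(model_dir, "variables"))
save_variables(model_dir, {"w": 0.0})

before = predict(model_dir, 2.0)
for _ in range(50):
    train_step(model_dir, 2.0, 4.0)  # local data follows y = 2x
after = predict(model_dir, 2.0)
# The locally improved variables are immediately used for prediction;
# they could now be shipped back to the server for a federated round.
```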
A flag in model.json can control this behavior, and the federated workflow is slightly different in this case: training is immediate, and a federated round takes place independently of training. Currently training takes place only in response to a federated task. It's continuous local learning vs. one-shot local learning.
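Such a flag might look like the following in model.json. The key name and values here are an assumption for illustration, mirroring the two modes described above; the spec would need to settle on actual names:

```json
{
  "local_learning": "continuous"
}
```

where `"continuous"` means the client trains and updates its variables immediately, and `"one-shot"` means training happens only in response to a federated task.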
The current spec (here) only supports one of `predict`, `train`, or `eval` in a model bundle, but we want model bundles to be able to support multiple modes with different inputs and outputs as required. Add optional `predict`, `train`, and `eval` fields to the root dictionary, themselves dictionaries, which take `model`, `inputs`, and `outputs` keys.
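A sketch of what the root dictionary might look like with two modes present. The `model`, `inputs`, and `outputs` keys and the `.pb` filenames come from the proposal above; the shapes of the input/output descriptors are illustrative assumptions, not part of the current spec:

```json
{
  "predict": {
    "model": { "file": "model/predict.pb" },
    "inputs":  [ { "name": "image",  "type": "float32", "shape": [224, 224, 3] } ],
    "outputs": [ { "name": "scores", "type": "float32", "shape": [10] } ]
  },
  "train": {
    "model": { "file": "model/train.pb" },
    "inputs": [
      { "name": "image", "type": "float32", "shape": [224, 224, 3] },
      { "name": "label", "type": "int32",   "shape": [1] }
    ],
    "outputs": [ { "name": "loss", "type": "float32", "shape": [1] } ]
  }
}
```

Each mode declares its own graph and its own inputs and outputs, so training can take a label and emit a loss while prediction does not.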
We may use this functionality in federated learning when we deploy bundles that support both training and prediction.