Note: Supersedes #58. The commit history on that branch got messed up, so I squashed the changes onto a new branch.
Adds a server that evaluates Inferno scripts with ML features (i.e. Hasktorch integration), with support for loading models and configuration for a model store and cache. At the moment, it only supports a single model store (so no support for per-user model stores).
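For context, the configuration roughly covers the model store location and a local model cache. A minimal sketch of what such a config shape could look like (all names below are illustrative assumptions, not the actual types added in this PR):

```haskell
-- Hypothetical configuration shape for the evaluation server.
-- These record and field names are illustrative only.
data ServerConfig = ServerConfig
  { modelStore :: ModelStoreConfig  -- where model artifacts are fetched from
  , modelCache :: CacheConfig       -- local cache for downloaded models
  , port       :: Int               -- port the evaluation server listens on
  }

newtype ModelStoreConfig = ModelStoreConfig
  { storePath :: FilePath  -- single, global model store (no per-user stores yet)
  }

data CacheConfig = CacheConfig
  { cachePath    :: FilePath  -- directory for cached model files
  , maxCacheSize :: Integer   -- eviction threshold in bytes
  }
```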
This is used in inferno-ml-deploy.
Note that this is largely a proof-of-concept at this point; an actual production system might look quite different.