AIforGoodSimulator / model-server

MIT License


Install

To start, install the required packages with

pip install -r requirements.txt

or first create and activate a virtual environment.

Go to the root of the repo and copy the .env.default file to .env (or configure the corresponding environment variables).

You also need to fill in the .env variable REDIS_URL with the details of the Redis server. You can use the details of a local Redis instance, or request the development Redis parameters from:

[pardf](https://github.com/pardf)

[kariso2000](https://github.com/kariso2000)

[billlyzhaoyh](https://github.com/billlyzhaoyh)

[titorenko](https://github.com/titorenko)

[TensorMan](https://github.com/TensorMan)
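Once REDIS_URL is filled in, you can sanity-check the value before starting the server. A minimal sketch using only the standard library — the .env format and the REDIS_URL variable name come from above; the parsing helper and sample value are purely illustrative:

```python
from urllib.parse import urlparse

def parse_env(text: str) -> dict:
    """Parse KEY=VALUE lines from .env-style content, skipping comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip().strip('"').strip("'")
    return env

# Illustrative .env content; a real file would hold your Redis details.
sample = 'REDIS_URL="redis://localhost:6379/0"\n# comment line\n'
env = parse_env(sample)
parsed = urlparse(env["REDIS_URL"])
assert parsed.scheme == "redis" and parsed.port == 6379
```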

Command line execution

To get command-line help:

python ai4good/runner/console_runner.py -h

Example execution (default camp is used):

python ai4good/runner/console_runner.py --profile custom --save_plots --save_report
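The flags used above suggest a CLI along these lines. This is a hypothetical argparse reconstruction — the actual parser lives in console_runner.py and may differ in defaults and help text:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical sketch of the runner's CLI, inferred from the flags above.
    p = argparse.ArgumentParser(description="AI4Good model runner")
    p.add_argument("--profile", default="baseline",  # default name is assumed
                   help="model profile to run")
    p.add_argument("--save_plots", action="store_true",
                   help="write plots to fs/figs")
    p.add_argument("--save_report", action="store_true",
                   help="write CSV report to fs/reports")
    return p

args = build_parser().parse_args(
    ["--profile", "custom", "--save_plots", "--save_report"])
assert args.profile == "custom" and args.save_plots and args.save_report
```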

The CSV report is saved in fs/reports, plots in fs/figs, and the model result cache in fs/model_results.

Parameters are currently stored in fs/params, with profile configuration in code; the plan is to move these to some kind of database in the future.
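The fs/ layout described above can be captured with pathlib. The directory names come from the text; the helper and its file-naming scheme are our own illustration:

```python
from pathlib import Path

FS_ROOT = Path("fs")
REPORTS_DIR = FS_ROOT / "reports"       # CSV reports
FIGS_DIR = FS_ROOT / "figs"             # saved plots
CACHE_DIR = FS_ROOT / "model_results"   # model result cache
PARAMS_DIR = FS_ROOT / "params"         # parameters

def report_path(profile: str) -> Path:
    # Hypothetical naming scheme for a profile's CSV report.
    return REPORTS_DIR / f"{profile}.csv"

assert report_path("custom") == Path("fs/reports/custom.csv")
```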

Webapp

The webapp can be started from PyCharm by running the server.py main method, or from the terminal:

waitress-serve --port 8050 --host 0.0.0.0 ai4good.webapp.server:flask_app

Note that waitress is for local development only; gunicorn is used for production deployment.

Azure deployment

First add the Azure remote:

git remote add azure https://ai4good-sim2.scm.azurewebsites.net:443/ai4good-sim2.git

Note down the deployment credentials from Deployment Center / Deployment Credentials on the Azure portal for the AI4Good-Sim2 app service.

Now run

git push azure master

and enter the credentials when prompted.

Docker

Change directory

cd model-server

Build:

docker build -t model-server .

Test:

docker run model-server python -m unittest discover -s ai4good/ -p "test_*.py"

Run Example:

docker run model-server python ai4good/runner/console_runner.py --profile custom --save_plots --save_report

Container Command Line:

docker run -it model-server /bin/bash

Design overview

Model-server consists of the following top-level packages:

Tests

Use the run_tests script (run_tests.cmd on Windows, run_tests.sh on Linux/macOS) to execute all tests.
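The run_tests scripts presumably wrap Python's unittest discovery, the same mechanism the Docker test command uses. An equivalent programmatic sketch, discovering from the current directory instead of ai4good/:

```python
import unittest

# Mirrors `python -m unittest discover -s ai4good/ -p "test_*.py"`
# from the Docker section, but starts discovery from the current directory.
loader = unittest.TestLoader()
suite = loader.discover(start_dir=".", pattern="test_*.py")
print("discovered", suite.countTestCases(), "tests")
unittest.TextTestRunner(verbosity=1).run(suite)
```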

Instances

Different instances of the model server are available:

The following workflow should be used for development:

Please contact kariso2000 on Slack for a user ID and password.

FAQ

Will the web server be a separate container? Yes

Where do we intend to save the results and graphs? Volume/NFS? Bucket storage

Are we going to run this on ACI via docker context? or AKS? AKS

How will new models and their dependencies be added/integrated into the model server? The model server is just a framework; you can add an inception file to run your model on it. All the commands are already in the GitHub documentation.
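To make the "framework" answer concrete, a new model would typically implement a small interface that the runner can invoke. This is purely an illustrative sketch — the real base class, method names, and registration mechanism live in the ai4good package and will differ:

```python
from abc import ABC, abstractmethod

class Model(ABC):
    """Illustrative model interface; the real one is defined in ai4good."""
    @abstractmethod
    def run(self, params: dict) -> dict:
        ...

class MyEpidemicModel(Model):
    def run(self, params: dict) -> dict:
        # A real model would simulate here; we just echo the profile.
        return {"profile": params.get("profile", "baseline"), "status": "ok"}

result = MyEpidemicModel().run({"profile": "custom"})
assert result["status"] == "ok" and result["profile"] == "custom"
```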

Or will this just become the base image to build a multistage container for new models? Yes

Orca is not available via pip and requires some dependency management. Alternatives? Which container or AKS image does not have orca available? We need to know the name.