Search before asking
What happened
I read a post on Medium about the Lizhi Machine Learning Platform & Apache DolphinScheduler: https://medium.com/@DolphinScheduler/a-formidable-combination-of-lizhi-machine-learning-platform-dolphinscheduler-creates-new-paradigm-e445938f1af
Inspired by it, I tried to build something similar. Figure 1 shows the training workflow startup screen.
In this workflow, I implemented four algorithms (SVM, LR, LGBM, XGBoost) using the APIs of scikit-learn, LightGBM, and XGBoost. Each algorithm's parameters can be filled into the value of the "params" key. In this case, the LGBM parameters are "n_estimators=200;num_leaves=20".
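The "params" value is a semicolon-separated "key=value" string, as in the LGBM example above. A minimal sketch of how such a string could be parsed before being passed to an estimator (the function name and the type-coercion rules are my assumptions, not code from the workflow):

```python
def parse_params(raw: str) -> dict:
    """Turn 'n_estimators=200;num_leaves=20' into keyword arguments."""
    params = {}
    for pair in filter(None, raw.split(";")):
        key, _, value = pair.partition("=")
        key = key.strip()
        # Coerce numeric values so they can be passed straight to the estimator.
        try:
            params[key] = int(value)
        except ValueError:
            try:
                params[key] = float(value)
            except ValueError:
                params[key] = value.strip()
    return params
```

The resulting dict can then be unpacked into a constructor, e.g. `lightgbm.LGBMClassifier(**parse_params("n_estimators=200;num_leaves=20"))`.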
The experiment tracking module is backed by MLflow. The picture below shows the experiment report. I register the model on every run.
Once the model is trained, run the deployment workflow, like this:
We can deploy the version 2 model to the k8s cluster.
Then we can see the deployment and pods.
At the same time, we can access the service through the interface.
By the way, we can also connect the training workflow to the deployment workflow as a sub-workflow, like this.
What you expected to happen
None
How to reproduce
None
Anything else
The above workflows are based on the Shell task, but that is too complex for ML engineers. I hope to write new task types that make these workflows easier for users to build.
The training workflow contains one task. The code is as follows:
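The issue does not include the task's actual script, so the following is only a minimal sketch of what the single training task might run. The CLI flags, experiment name, dataset, and function names are all my assumptions; the MLflow calls match the tracking and model-registration behavior described above.

```python
# Sketch of the training task's script (invoked by the Shell task as, e.g.:
#   python train.py --algorithm lgbm --params "n_estimators=200;num_leaves=20").
import argparse

SUPPORTED = ("svm", "lr", "lgbm", "xgboost")

def build_model(algorithm, params):
    """Lazily import the chosen library so only the one in use is required."""
    if algorithm == "svm":
        from sklearn.svm import SVC
        return SVC(**params)
    if algorithm == "lr":
        from sklearn.linear_model import LogisticRegression
        return LogisticRegression(**params)
    if algorithm == "lgbm":
        from lightgbm import LGBMClassifier
        return LGBMClassifier(**params)
    if algorithm == "xgboost":
        from xgboost import XGBClassifier
        return XGBClassifier(**params)
    raise ValueError(f"unknown algorithm: {algorithm}")

def main(argv=None):
    parser = argparse.ArgumentParser()
    parser.add_argument("--algorithm", choices=SUPPORTED, required=True)
    parser.add_argument("--params", default="",
                        help='e.g. "n_estimators=200;num_leaves=20"')
    args = parser.parse_args(argv)

    # Split the semicolon-separated "params" value into keyword arguments.
    params = {}
    for pair in filter(None, args.params.split(";")):
        key, _, value = pair.partition("=")
        try:
            params[key] = int(value)
        except ValueError:
            try:
                params[key] = float(value)
            except ValueError:
                params[key] = value

    # Heavy dependencies are imported here so the module loads without them.
    import mlflow
    import mlflow.sklearn
    from sklearn.datasets import load_iris  # placeholder dataset
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    mlflow.set_experiment("training-workflow")  # assumed experiment name
    with mlflow.start_run():
        model = build_model(args.algorithm, params)
        model.fit(X_train, y_train)
        acc = accuracy_score(y_test, model.predict(X_test))
        mlflow.log_params(params)
        mlflow.log_metric("accuracy", acc)
        # Register a new model version on every run, as described above.
        mlflow.sklearn.log_model(model, "model",
                                 registered_model_name=args.algorithm)
```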
The deployment workflow contains two tasks.
The code of the "build docker" task is as follows:
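The actual "build docker" code is not included in the issue; a plausible minimal Dockerfile, assuming the image serves the registered model with `mlflow models serve` (the base image, package list, and paths are all assumptions):

```dockerfile
# Hypothetical image for serving the trained model.
FROM python:3.9-slim

RUN pip install --no-cache-dir mlflow scikit-learn lightgbm

# Assumes the model artifacts were exported into ./model before the build.
COPY model /opt/model

EXPOSE 5000
CMD ["mlflow", "models", "serve", "-m", "/opt/model", \
     "--host", "0.0.0.0", "--port", "5000"]
```

The task would then build and push the image with something like `docker build -t <registry>/<model>:2 . && docker push <registry>/<model>:2` (registry and tag are placeholders).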
The code of the "create deployment" task, which deploys the model to the k8s cluster, is as follows:
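Again, the real manifest is not shown in the issue; a minimal sketch of the kind of Deployment and Service the task might apply (image name, labels, replica count, and ports are assumptions):

```yaml
# Hypothetical manifest applied by the "create deployment" task.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: lgbm-model
spec:
  replicas: 2
  selector:
    matchLabels:
      app: lgbm-model
  template:
    metadata:
      labels:
        app: lgbm-model
    spec:
      containers:
        - name: model-server
          image: my-registry/lgbm-model:2   # the version-2 model image
          ports:
            - containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
  name: lgbm-model
spec:
  selector:
    app: lgbm-model
  ports:
    - port: 80
      targetPort: 5000
```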
Version
dev
Are you willing to submit PR?
Code of Conduct