jieguangzhou closed this issue 2 years ago
Thank you for your feedback; we have received your issue. Please wait patiently for a reply.
#troubleshooting
Is anybody interested in the AI task about DolphinScheduler (DS)?
I added a GridSearch feature to the example, like this, so that we can search for the best parameters of a model.
After the model is trained, we can also see the parameter search report on the MLflow dashboard, like this:
The value of the key "search_params" above is "max_depth=[5, 10];n_estimators=[100, 200]" for XGBoost.
We can also search more parameters; all of the available parameters can be found in each algorithm's API. For example:
- svm: "kernel=['linear', 'poly', 'rbf'];C=[0.5, 1.0]"
- lr: "penalty=['l1', 'l2'];C=[0.5, 1.0]"
- lightgbm: "max_depth=[5, 10];n_estimators=[100, 200]"
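As an illustration of how such a "search_params" string could be parsed into a parameter grid, here is a minimal sketch; the helper name `parse_search_params` is hypothetical, not the actual task code:

```python
from ast import literal_eval

def parse_search_params(spec: str) -> dict:
    """Turn 'max_depth=[5, 10];n_estimators=[100, 200]' into a dict of
    candidate value lists. Hypothetical helper, not the actual task code."""
    grid = {}
    for pair in spec.split(";"):
        key, _, value = pair.partition("=")  # split at the first '='
        grid[key.strip()] = literal_eval(value.strip())
    return grid

grid = parse_search_params("max_depth=[5, 10];n_estimators=[100, 200]")
# grid == {'max_depth': [5, 10], 'n_estimators': [100, 200]}
```

A dict in this shape can be passed as `param_grid` to sklearn's `GridSearchCV`.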
The above work tries to build an MLOps system using DolphinScheduler as the orchestration system. I think it would be cool if we added more and more popular machine learning tools to DolphinScheduler.
Hi, we are planning to initiate our exploration of AIOps at Apache SkyWalking community. Very interesting to see the discussions here.
Now I'm also looking at DolphinScheduler to handle our workflow orchestration, and for now we may go in the same direction as yours. I feel like integration with MLflow's functionality would be a good way to boost the ML-developer experience to the next level.
Hi, good to see you join the discussion. I just read about your discussion (https://github.com/apache/skywalking/discussions/8883). It should be a great project.
I think DolphinScheduler will be able to schedule AIOps scenarios in the near future. I am enriching its scheduling features in the field of artificial intelligence, and an MVP product is being implemented.
We can keep talking about that. BTW, I might do some experiments with this data set, but I can't access it right now. https://github.com/CloudWise-OpenSource/GAIA-DataSet
Looking great! I am very optimistic about the prospects of this. As I said in the mail thread, I think machine learning is also another kind of orchestration, and most machine learning source data or training samples come from data warehouses or data lakes, which we already support in the current version. If DolphinScheduler could support machine learning tasks, then users could finish their jobs in one single tool instead of several.
I'll be happy to follow this and provide help. Also happy to integrate and test the outcomes in the new SkyWalking ecosystem AIOps project.
Search before asking
Description
I have seen a Machine Learning Platform post on Medium. The post talks about the Lizhi Machine Learning Platform & Apache DolphinScheduler. https://medium.com/@DolphinScheduler/a-formidable-combination-of-lizhi-machine-learning-platform-dolphinscheduler-creates-new-paradigm-e445938f1af
Inspired by it, I tried to do something similar. MLflow, sklearn, LightGBM, XGBoost, and DolphinScheduler are used. Figure 1 shows the training workflow startup screen.
In this workflow, I implemented four algorithms (SVM, LR, LGBM, XGBoost) using the APIs of sklearn, LightGBM, and XGBoost. Each algorithm's parameters can be filled in as the value of the key "params". In this case, the parameters of LGBM are "n_estimators=200;num_leaves=20".
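As a sketch of how such a scalar "params" string could be turned into model keyword arguments (the helper name is hypothetical, not the actual task code):

```python
from ast import literal_eval

def parse_params(spec: str) -> dict:
    """Turn 'n_estimators=200;num_leaves=20' into constructor kwargs.
    Hypothetical helper, not the actual task code."""
    params = {}
    for pair in spec.split(";"):
        if not pair.strip():  # tolerate empty segments
            continue
        key, _, value = pair.partition("=")
        params[key.strip()] = literal_eval(value.strip())
    return params

params = parse_params("n_estimators=200;num_leaves=20")
# params == {'n_estimators': 200, 'num_leaves': 20}
# e.g. model = lightgbm.LGBMClassifier(**params), assuming lightgbm is installed
```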
The experiment tracking module is supported by MLflow. The picture below shows the report of the experiment.
I register the model every time I run it.
![image](https://user-images.githubusercontent.com/31528124/164980244-74ed3f86-d9d2-4f82-9263-daa4ab5ddc3a.png)
Once the model is trained, run the deployment workflow, like this:
We can deploy the version 2 model to the k8s cluster.
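A minimal sketch of what such a Deployment manifest might look like, assuming the model image was built with `mlflow models build-docker` (the name, labels, registry, and image tag below are illustrative, not the actual ones used):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: xgboost-model-v2            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: xgboost-model
  template:
    metadata:
      labels:
        app: xgboost-model
    spec:
      containers:
        - name: model-server
          image: registry.example.com/xgboost-model:2   # hypothetical image
          ports:
            - containerPort: 8080   # mlflow build-docker images serve on 8080
```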
And then we can see the deployment and pods: ![image](https://user-images.githubusercontent.com/31528124/164980439-bb6634fd-e777-4aa2-927e-948c83cb6006.png)
At the same time, we can access the service through the interface: ![image](https://user-images.githubusercontent.com/31528124/164980478-b84c088a-1980-4fe1-8a29-2a728b587599.png)
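For illustration, the body of such a request could be built like this. The column names are made up; recent MLflow scoring servers accept the "dataframe_split" orient shown here, while older versions expect {"columns": ..., "data": ...} at the top level:

```python
import json

def build_invocation_payload(columns, rows):
    """Build a JSON body for an MLflow model server's /invocations endpoint.
    Hypothetical helper; the feature names used below are illustrative."""
    return json.dumps({"dataframe_split": {"columns": columns, "data": rows}})

payload = build_invocation_payload(["f0", "f1"], [[5.1, 3.5], [6.2, 2.9]])
# POST the payload with Content-Type: application/json, e.g.
#   curl -X POST http://<service-ip>/invocations \
#        -H 'Content-Type: application/json' -d "$payload"
```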
BTW, we can also connect the training workflow with the deployment workflow as a sub-workflow, like this.
The training workflow contains one task. The code is as follows
The deployment workflow contains two tasks.
The code of the "build docker" workflow is as follows
The code of the "create deployment" workflow which deploys the model to the k8s cluster is as follows
The above workflow is based on the Shell task, but it is too complex for ML engineers. I hope to write new task types that make these workflows easier for users to build.
Future work:
Use case
No response
Related issues
No response
Are you willing to submit a PR?
Code of Conduct