apache / dolphinscheduler

Apache DolphinScheduler is a modern data orchestration platform, agile to create high-performance workflows with low-code
https://dolphinscheduler.apache.org/
Apache License 2.0

[Discussion][Machine Learning] Support AI task and the open source project about MLops #9724

Closed: jieguangzhou closed this issue 2 years ago

jieguangzhou commented 2 years ago

Search before asking

What happened

I saw a Machine Learning Platform post on Medium. The post talks about the Lizhi Machine Learning Platform and Apache DolphinScheduler: https://medium.com/@DolphinScheduler/a-formidable-combination-of-lizhi-machine-learning-platform-dolphinscheduler-creates-new-paradigm-e445938f1af

Inspired by it, I tried to build something similar. Figure 1 shows the training workflow startup screen.

(Figure 1: training workflow startup screen)

In this workflow, I implemented four algorithms (SVM, LR, LGBM, XGBoost) using the APIs of scikit-learn, LightGBM, and XGBoost. Each algorithm's parameters can be filled in via the value of the "params" key; in this case, the LGBM parameters are "n_estimators=200;num_leaves=20".
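For reference, here is a minimal sketch of how such a "params" string could be parsed into keyword arguments for the chosen estimator; the helper function and the estimator below are illustrative, not the code in the linked repository.

# Illustrative only: turn "n_estimators=200;num_leaves=20" into estimator kwargs.
from lightgbm import LGBMClassifier

def parse_params(params: str) -> dict:
    """Split 'key=value;key=value' pairs and cast numeric values."""
    kwargs = {}
    for pair in filter(None, params.split(";")):
        key, value = pair.split("=", 1)
        for cast in (int, float, str):
            try:
                kwargs[key] = cast(value)
                break
            except ValueError:
                continue
    return kwargs

model = LGBMClassifier(**parse_params("n_estimators=200;num_leaves=20"))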

The experiment tracking module is backed by MLflow. The screenshots below show the experiment report; the model is registered every time the workflow runs.
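As a rough illustration of this tracking-and-registration pattern, here is a minimal, self-contained sketch (the experiment and model names are assumptions for the example, not taken from the project):

# Minimal MLflow sketch: log a run and register the trained model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

mlflow.set_experiment("iris-training")  # plays the role of ${experiment_name}
X_train, X_test, y_train, y_test = train_test_split(*load_iris(return_X_y=True))

with mlflow.start_run():
    model = LogisticRegression(max_iter=200).fit(X_train, y_train)
    mlflow.log_metric("accuracy", model.score(X_test, y_test))
    # Registering under a fixed name creates a new model version on every run.
    mlflow.sklearn.log_model(model, "model", registered_model_name="iris-lr")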

Once the model is trained, run the deployment workflow, like this:

(screenshot: deployment workflow startup screen)

We can deploy version 2 of the model to the k8s cluster.

And then we can see the deployment and pods in the cluster, as in the sketch below.
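As an illustration, the same check can be done programmatically with the kubernetes Python client (equivalent to `kubectl get deployments` / `kubectl get pods`; the default namespace is an assumption):

# List deployments and pods in the default namespace via the kubernetes client.
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()
core = client.CoreV1Api()
for dep in apps.list_namespaced_deployment("default").items:
    print("deployment:", dep.metadata.name, "ready:", dep.status.ready_replicas)
for pod in core.list_namespaced_pod("default").items:
    print("pod:", pod.metadata.name, pod.status.phase)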

At the same time, we can access the service through its HTTP interface.
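For example, a hypothetical request against the forwarded port, assuming the image exposes the MLflow /invocations scoring endpoint (the exact payload format depends on the MLflow/MLServer version; the feature names and port are placeholders):

# Send one record to the model server exposed by `kubectl port-forward`.
# Replace 8080 with the forwarded ${deployment_port} if it differs.
import requests

payload = {"dataframe_split": {"columns": ["f0", "f1", "f2", "f3"],
                               "data": [[5.1, 3.5, 1.4, 0.2]]}}
resp = requests.post("http://127.0.0.1:8080/invocations", json=payload)
print(resp.json())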

BTW, we can also connect the training workflow with the deployment workflow as a sub-workflow, like this.

(screenshot: training and deployment workflows connected via a sub-workflow)

What you expected to happen

None

How to reproduce

None

Anything else

The above workflow is based on the Shell task, but it is too complex for ML engineers. I hope to write new task types that make these steps easier for users.

The training workflow contains one task. The code is as follows:

# ${...} placeholders are substituted by DolphinScheduler task parameters
data_path=${data_path}
export MLFLOW_TRACKING_URI=${MLFLOW_TRACKING_URI}
echo $data_path

# Run the MLflow project from the repository with the selected algorithm and parameters
repo=https://github.com/jieguangzhou/mlflow_sklearn_gallery.git
mlflow run $repo -P algorithm=${algorithm} -P data_path=$data_path -P params="${params}" -P param_file=${param_file} -P model_name=${model_name} --experiment-name=${experiment_name}

echo "training finish"

The deployment workflow contains two tasks.

(screenshot: deployment workflow with the "build docker" and "create deployment" tasks)

The code of the "build docker" task is as follows:

# Use minikube's Docker daemon so the built image is available inside the cluster
eval $(minikube -p minikube docker-env)
export MLFLOW_TRACKING_URI=${MLFLOW_TRACKING_URI}
image_name=mlflow/${model_name}:${version}
echo $image_name
# Build an MLServer-based serving image for the registered model version
mlflow models build-docker -m "models:/${model_name}/${version}" -n $image_name --enable-mlserver

The code of the "create deployment" task, which deploys the model to the k8s cluster, is as follows:

# Kubernetes resource names must be lowercase
version_lower=$(echo "${version}" | tr '[:upper:]' '[:lower:]')
kubectl apply -f - << END
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mlflow-${model_name}-$version_lower
spec:
  selector:
    matchLabels:
      app: mlflow
  replicas: 3 # tells the deployment to run 3 pods matching the template
  template:
    metadata:
      labels:
        app: mlflow
    spec:
      containers:
      - name: mlflow-iris
        image: mlflow/${model_name}:${version}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8080

---
apiVersion: v1
kind: Service
metadata:
  name: mlflow-${model_name}-$version_lower
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: mlflow
END

# Give the pods a moment to start, then expose the model server locally
sleep 5s

kubectl port-forward deployment/mlflow-${model_name}-$version_lower ${deployment_port}:8080

Version

dev

Are you willing to submit PR?

Code of Conduct

github-actions[bot] commented 2 years ago

Thank you for your feedback, we have received your issue. Please wait patiently for a reply.

jieguangzhou commented 2 years ago

I created this with the wrong issue type.