project-codeflare / codeflare-sdk

An intuitive, easy-to-use Python interface for batch resource requesting, access, job submission, and observation, simplifying the developer's life while enabling access to high-performance compute resources, either in the cloud or on-prem.
Apache License 2.0

[Feature] Integration test case with Data Science Pipeline, CodeFlare and KubeRay #425

Open yuanchi2807 opened 9 months ago

yuanchi2807 commented 9 months ago

Name of Feature or Improvement

Create an integration test case to validate DSP, CodeFlare and KubeRay implementation.

Describe the Solution You Would Like to See

Test environment assumptions:

  1. Data Science Pipeline v1.
  2. The Ray cluster shall consist of no more than 2 worker pods, with 2 CPU cores and less than 6 GB of memory available per pod.
  3. An integration test run shall take less than 20 minutes in total.
  4. S3 storage may be available, if needed.
  5. Free of proprietary intellectual property.
  6. Public data only.

Proposed test case: clustering text documents with k-means, based on the example from the scikit-learn documentation:

https://scikit-learn.org/stable/auto_examples/text/plot_document_clustering.html
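The core of that example can be sketched in a few lines (a minimal sketch assuming scikit-learn is installed; the tiny corpus here is a stand-in for the 20 newsgroups data the example actually loads):

```python
# Minimal sketch of the document-clustering technique the test case is
# based on: TF-IDF features followed by k-means, as in the scikit-learn
# example. The four toy documents stand in for the 20 newsgroups data.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

docs = [
    "the spacecraft entered orbit",
    "rocket launch delayed by weather",
    "the goalkeeper saved the penalty",
    "striker scores in the final match",
]
X = TfidfVectorizer().fit_transform(docs)               # sparse TF-IDF matrix
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)                                       # one cluster id per doc
```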

Data Science Pipeline stages:

  1. Download the test data (https://scikit-learn.org/stable/auto_examples/text/plot_document_clustering.html#loading-text-data).
  2. Launch a Ray cluster with two worker pods.
  3. The Ray driver launches two Ray actors, one per pod. The first actor runs TfidfVectorizer followed by KMeans clustering and evaluation; the second runs HashingVectorizer followed by KMeans clustering and evaluation.
  4. The Ray driver collects the evaluation results from the two actors and reports a summary.
  5. The Ray cluster is stopped and shut down.
  6. The pipeline run completes.
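The driver/actor fan-out in stages 3 and 4 can be sketched without any cluster at all. The stdlib-only stand-in below deliberately assumes neither Ray nor scikit-learn: each "actor" is a thread-pool worker running a hypothetical `run_variant` (a toy 1-D k-means reporting its inertia) in place of the real vectorizer-plus-KMeans actors, and the "driver" fans out two variants and collects their results.

```python
# Stdlib-only sketch of stages 3-4: a "driver" fans two clustering
# variants out to two workers, then gathers their evaluation results.
# In the real test each worker is a Ray actor on its own pod running
# TfidfVectorizer or HashingVectorizer plus KMeans; here each worker
# runs a toy 1-D k-means and reports its inertia (sum of squared
# distances to the nearest center).
from concurrent.futures import ThreadPoolExecutor

def run_variant(name, points, k=2, iters=10):
    centers = list(points[:k])          # naive init: first k points
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:                # assign each point to its nearest center
            groups[min(range(k), key=lambda i: abs(p - centers[i]))].append(p)
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    inertia = sum(min(abs(p - c) for c in centers) ** 2 for p in points)
    return name, round(inertia, 4)

def drive(data):
    # Stage 3: fan the two variants out; stage 4: collect and summarize.
    with ThreadPoolExecutor(max_workers=2) as pool:
        futures = [pool.submit(run_variant, name, data)
                   for name in ("tfidf_variant", "hashing_variant")]
        return dict(f.result() for f in futures)

summary = drive([0.1, 0.2, 0.15, 5.0, 5.2, 4.9])
print(summary)
```

In the real pipeline the same pattern appears with `@ray.remote` actor classes and `ray.get` in place of the thread pool.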

Expected test assets:

  1. A DSP pipeline YAML to deploy and kick off test runs.
  2. A test image containing Ray and the document-clustering code.
  3. A CodeFlare image to deploy the test image.
  4. Preconfigured credentials and ConfigMaps in the test environment.
yuanchi2807 commented 9 months ago

Cross posting from https://github.com/opendatahub-io/data-science-pipelines/issues/179

A prototype following the above solution design can be found here:

https://github.com/yuanchi2807/dsp_codeflare_int_testing

The Ray application image can be pulled from quay.io/yuanchichang_ibm/integration_testing/dsp_codeflare_int_testing:0.1

The pipeline definition yet_another_ray_integration_test.py is adapted from https://github.com/diegolovison/ods-ci/blob/ray_integration/ods_ci/tests/Resources/Files/pipeline-samples/ray_integration.py to point to the custom image and invoke docker_clustering_driver.py through the Ray Jobs API.

Please feel free to comment.

anishasthana commented 9 months ago

fyi @sutaakar

sutaakar commented 9 months ago

At first glance it looks fine to me. I will try to run it this week. I am waiting for feedback from Diego, as he has more experience with Pipelines.

yuanchi2807 commented 9 months ago

> At first glance it looks fine to me. I will try to run it this week. I am waiting for feedback from Diego, as he has more experience with Pipelines.

My prototype is meant to test the waters and can be extended to lengthen the pipeline.