avast / wanna-ml

Complete MLOps framework for Vertex-AI
MIT License

[FeatureRequest]: consume wanna.yaml from stdin like kubectl apply #128

Open racinmat opened 3 months ago

racinmat commented 3 months ago

Contact Details

No response

Is your feature request related to a problem? Please describe

I'd like to parameterize wanna.yaml for grid-search experiments. The YAML natively supports environment variables, but the environment is process-global rather than thread-safe, so I can't use it to run parameterized jobs in parallel (and since e.g. job creation is blocking, running them sequentially is slow).
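To illustrate the problem: environment variables live in one process-wide mapping, so when two parallel "jobs" each set their own value before rendering a templated config, the later write clobbers the earlier one. A minimal Python sketch (variable names are illustrative, not from wanna-ml):

```python
import os
from string import Template

# A templated config line, as wanna.yaml would use ${region}:
template = Template("region: ${REGION}")

# Simulate two parallel runs that share the process environment.
os.environ["REGION"] = "europe-west1"   # "job A" sets its region
os.environ["REGION"] = "us-central1"    # "job B" overwrites it before A renders
rendered_for_a = template.substitute(os.environ)
print(rendered_for_a)  # job A now renders with job B's region
```

This is why per-invocation input (stdin or a private temp file) is needed for parallel runs, while env-var substitution only works when runs are strictly sequential.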

Describe the solution you'd like

Supporting something like

```shell
wanna job run -f - <<EOF
wanna_project:
  name: wanna-mlops
  version: 0.0.0
  authors:
    - matej.racinsky@gendigital.com

gcp_profiles:
  - profile_name: mlops-dev
  - ...

docker:
#  cloud_build: true
  cloud_build: false
  ...

jobs:
  - name: nvidia-driver
    worker:
      container:
        docker_image_ref: data
        command: []
      args: []
      machine_type: "n1-highmem-2"  # the cheapest machine with gpu
      gpu:
        accelerator_type: NVIDIA_TESLA_T4
        count: 1
    enable_web_access: true
    region: ${region}
    ...
EOF
```

Describe alternatives you've considered

As a workaround I am creating temporary files. I also tried env vars, but that did not work for parallel runs because the environment is process-wide (not thread-safe) on Windows, e.g.:

```yaml
jobs:
  - name: nvidia-driver
    worker:
      container:
        docker_image_ref: data
        command: []
      args: []
      machine_type: "n1-highmem-2"  # the cheapest machine with gpu
      gpu:
        accelerator_type: NVIDIA_TESLA_T4
        count: 1
    enable_web_access: true
    region: ${region}
    network: projects/.../global/networks/${region}-...
    subnet: projects/.../regions/${region}/subnetworks/${region}-...
```
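The temp-file workaround can be made parallel-safe by rendering the template with a local mapping (instead of `os.environ`) and writing each render to its own private file. A sketch, assuming a hand-rolled `string.Template` render; the `wanna job run -f` invocation in the comment is the hypothetical target, not tested here:

```python
import tempfile
from string import Template

# Minimal stand-in for the templated part of wanna.yaml; ${region} is the parameter.
TEMPLATE = Template("""jobs:
  - name: nvidia-driver
    region: ${region}
""")

def render_job_config(region: str) -> str:
    """Render with a local mapping (not os.environ) into a private temp file,
    so parallel renders can never clash. Returns the temp file path."""
    rendered = TEMPLATE.substitute(region=region)
    with tempfile.NamedTemporaryFile(
        "w", suffix=".yaml", delete=False, encoding="utf-8"
    ) as f:
        f.write(rendered)
        return f.name

# Each parallel worker would then invoke the CLI on its own file, e.g.:
# subprocess.run(["wanna", "job", "run", "-f", render_job_config("europe-west1")])
```

Stdin support would remove the file-management step entirely, but this shows why the temp-file route is at least correct under parallelism where env vars are not.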

Additional context

No response