RedHatQuickCourses / rhods-pipelines

Automation using Data Science Pipelines
https://redhatquickcourses.github.io/rhods-pipelines/

Environment variable in the Data Science Project #22

Closed. adelton closed this issue 6 months ago

adelton commented 7 months ago

The page https://redhatquickcourses.github.io/rhods-pipelines/rhods-pipelines/1.33/chapter1/elyra-pipelines.html#_set_up_the_workbench says:

Now we will start a workbench.

  1. Before starting a workbench, you need to set an environment variable in the Data Science Project to prevent SSL certificate verification errors. Add a new environment variable and set its value as follows:

    PIPELINES_SSL_SA_CERTS=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt

    Consult the blog article at https://www.goglides.dev/bkpandey/certificateverifyfaile-during-pipeline-run-g8i for more details.

    If you fail to set this environment variable correctly, you will see an error when you run the pipeline:

    ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain...

  2. In the OpenShift AI dashboard, create a new data workbench and enter the following details:

    Name: fraud-detection-workbench [...]
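
As a sanity check, once the workbench is running the setting can be confirmed from a notebook cell. The following sketch relies only on the variable name and path quoted above; everything else is illustrative:

    import os

    # Variable name and expected path as quoted from the course text above.
    ca_path = os.environ.get("PIPELINES_SSL_SA_CERTS")

    if ca_path is None:
        print("PIPELINES_SSL_SA_CERTS is not set in this workbench")
    elif not os.path.isfile(ca_path):
        print(f"PIPELINES_SSL_SA_CERTS points to a missing file: {ca_path}")
    else:
        print(f"CA bundle found at {ca_path}")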

The quoted step 1 suggests that the environment variable can somehow be set globally in the Data Science Project, outside of creating a workbench. However, no such setting seems to be available:

[screenshot: Screenshot_2024-04-04_13-10-20]

The course should be more explicit about where the user can find the UI to set the environment variable.

Or, if the environment variable is meant to be set for a specific workbench, then it is set in the Create workbench form, below the Name and other parameters. In that case, setting the environment variable should not be described in step 1, before the step that actually creates the workbench.

[screenshot: Screenshot_2024-04-04_13-13-33]
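
For completeness, one way to check where such a variable actually ends up is to inspect the workbench's underlying custom resource. This is only a sketch: it assumes that OpenShift AI workbenches are represented as Notebook custom resources in the kubeflow.org/v1 API group and that the Data Science Project namespace is called fraud-detection, neither of which is stated in this issue.

    from kubernetes import client, config

    # Assumed names: the namespace and the Notebook CRD coordinates may
    # differ in your cluster; adjust them before running.
    NAMESPACE = "fraud-detection"

    config.load_kube_config()
    api = client.CustomObjectsApi()

    notebooks = api.list_namespaced_custom_object(
        group="kubeflow.org", version="v1", namespace=NAMESPACE, plural="notebooks"
    )

    for nb in notebooks.get("items", []):
        nb_name = nb["metadata"]["name"]
        for container in nb["spec"]["template"]["spec"]["containers"]:
            # Variables set directly appear under "env"; variables added through
            # the dashboard form may instead be attached via "envFrom".
            for env in container.get("env", []):
                print(nb_name, container["name"], env.get("name"), env.get("value"))
            for env_from in container.get("envFrom", []):
                print(nb_name, container["name"], "envFrom:", env_from)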

rsriniva commented 6 months ago

fixed in PR #25