Creating a Custom Environment in Azure ML using my compute-cluster instead of Serverless #35636
I need to know how to create my custom environment while forcing it to use my compute cluster, which already has "No public IP" set to true.
I have tried creating the custom environment in code:

```python
from azure.ai.ml.entities import Environment

my_env = Environment(
    name=env_name,
    image="mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04:latest",
    conda_file=path,
    description="I'm using azure sdk version 2.0",
)
my_env = ml_client.environments.create_or_update(my_env)
```
I have also tried forcing it by running the code in a pipeline with:

```python
@dsl.pipeline(
    compute=my_cluster
)
```

but that didn't work: the default "prepare image" job runs on serverless compute, which currently has the parameter ServerlessComputeNoPublicIP=False (the default). My Azure ML workspace is behind a virtual network, so I think there would be no problem creating the custom environment if the build ran on my compute cluster instead (what do you think?).
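For reference, one possible way to route environment image builds to an existing cluster is the workspace-level `image_build_compute` setting, which (as I understand it) tells Azure ML which compute cluster to use when building environment images instead of falling back to serverless. A minimal sketch, assuming SDK v2 (`azure-ai-ml`) and placeholder subscription/workspace names:

```python
# Sketch (assumption): point workspace image builds at an existing
# compute cluster via image_build_compute, so "prepare image" jobs
# run on the no-public-IP cluster instead of serverless compute.
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",       # placeholder
    resource_group_name="<resource-group>",    # placeholder
    workspace_name="<workspace-name>",         # placeholder
)

ws = ml_client.workspaces.get("<workspace-name>")
ws.image_build_compute = "my-cluster"  # name of the existing compute cluster
ml_client.workspaces.begin_update(ws).result()
```

This is a sketch rather than a verified fix for the no-public-IP scenario; whether the image build succeeds still depends on the cluster having network access to the container registry.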
I have tried changing ServerlessComputeNoPublicIP to True, but I got this error:
Another option in my case: if I have to use serverless compute as-is for the "prepare image" job, how can I prevent the job from downloading the snapshot into the storage account (disable it, or something similar)? I am asking because I get the following error:

The request is blocked by the storage account's virtual network rules, etc.
One more question: can the ServerlessComputeNoPublicIP parameter be set from the AML template when it is deployed?
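For context on that last question, recent API versions of the workspace ARM resource appear to expose a `serverlessComputeSettings` block where this can be set at deployment time. A minimal sketch of the relevant fragment, assuming a preview (or later) API version that supports it and placeholder parameter names:

```json
{
  "type": "Microsoft.MachineLearningServices/workspaces",
  "apiVersion": "2023-08-01-preview",
  "name": "[parameters('workspaceName')]",
  "properties": {
    "serverlessComputeSettings": {
      "serverlessComputeCustomSubnet": "[parameters('subnetId')]",
      "serverlessComputeNoPublicIP": true
    }
  }
}
```

The exact property names and the minimum supported API version should be checked against the current ARM template reference before relying on this.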
Thanks in advance for your help!