hashicorp / terraform-provider-aws


[Enhancement]: one to one mapping with sagemaker jumpstart model creation #35011

Open Bryson14 opened 7 months ago

Bryson14 commented 7 months ago

Terraform Core Version

1.6.5

AWS Provider Version

5.31.0

Affected Resource(s)

SageMaker Endpoint Configuration (aws_sagemaker_endpoint_configuration).

Expected Behavior

When creating a JumpStart endpoint through SageMaker Studio, you can deploy an LLM (like Mistral) on a managed endpoint. There are a few hacks you have to do to get this to work with Terraform, because the values for these JumpStart images and S3 locations are not published. But by deploying a model in Studio, then using the AWS CLI to get the model's primary_container.environment and model_data_source, Terraform can copy them.
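
For example, a minimal boto3 sketch of that lookup (the model name is a placeholder for whatever Studio generated):

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")
model = sm.describe_model(ModelName="your_model_name")

# these two values get copied into the Terraform aws_sagemaker_model resource below
print(model["PrimaryContainer"]["Environment"])
print(model["PrimaryContainer"]["ModelDataSource"])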

The issue is that aws_sagemaker_endpoint_configuration does not support the configuration that SageMaker Studio creates by default.

Here is the endpoint configuration created by Studio (from describe-endpoint-config):

{
    "EndpointConfigName": "jumpstart-mistral-1703083557499",
    "EndpointConfigArn": "arn:aws:sagemaker:us-east-1:XXXX:endpoint-config/jumpstart-mistral-1703083557499",
    "ProductionVariants": [
        {
            "VariantName": "AllTraffic",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.g5.2xlarge",
            "ManagedInstanceScaling": {
                "Status": "ENABLED",
                "MinInstanceCount": 1,
                "MaxInstanceCount": 20
            }
        }
    ],
    "CreationTime": "2023-12-20T14:45:57.903000+00:00",
    "ExecutionRoleArn": "arn:aws:iam::XXXX:role/service-role/AmazonSageMaker-ExecutionRole-20231215T143587",
    "EnableNetworkIsolation": true
}
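
For comparison, the SageMaker CreateEndpointConfig API accepts this shape directly; a hedged boto3 sketch (names and ARNs are placeholders):

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")
sm.create_endpoint_config(
    EndpointConfigName="jumpstart-mistral-example",
    ExecutionRoleArn="arn:aws:iam::XXXX:role/service-role/AmazonSageMaker-ExecutionRole-XXXX",
    EnableNetworkIsolation=True,
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.g5.2xlarge",
            # the block that aws_sagemaker_endpoint_configuration cannot express:
            "ManagedInstanceScaling": {
                "Status": "ENABLED",
                "MinInstanceCount": 1,
                "MaxInstanceCount": 20,
            },
        }
    ],
)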

Actual Behavior

With Terraform, it is not possible to specify ManagedInstanceScaling:

 "ManagedInstanceScaling": {
                "Status": "ENABLED",
                "MinInstanceCount": 1,
                "MaxInstanceCount": 20
            }

It is also not possible to specify EnableNetworkIsolation.

This is the endpoint configuration created by Terraform:

{
    "EndpointConfigName": "chat-bot-sagemaker-config",
    "EndpointConfigArn": "arn:aws:sagemaker:us-east-1:XXX:endpoint-config/chat-bot-sagemaker-config",
    "ProductionVariants": [
        {
            "VariantName": "mistral-7b-variant",
            "ModelName": "llm-mistral-7b-instruct-model",
            "InitialInstanceCount": 1,
            "InstanceType": "ml.m5.2xlarge",
            "InitialVariantWeight": 1.0,
            "EnableSSMAccess": false
        }
    ],
    "CreationTime": "2023-12-19T22:54:12.219000+00:00",
    "EnableNetworkIsolation": false
}

Relevant Error/Panic Output Snippet

No response

Terraform Configuration Files

# README
# These values for environment and model data source were found by deploying a
# JumpStart endpoint with SageMaker Studio, then copying the values from that model
# using `aws sagemaker describe-model --model-name your_model_name`.
# Without these, the endpoint will fail to deploy; check the CloudWatch logs for the reason.
# The standard way to deploy an endpoint is with boto3/sagemaker or the Python CDK.
# There are no resources online describing where to find the env and model data source info.
resource "aws_sagemaker_model" "mistral_sagemaker_model" {
  name               = "llm-mistral-7b-instruct-model"
  execution_role_arn = aws_iam_role.sagemaker_trust_role.arn

  primary_container {
    image = var.sagemaker_mistral_public_image
    mode  = "SingleModel"
    environment = {
      ENDPOINT_SERVER_TIMEOUT        = "3600"
      HF_MODEL_ID                    = "/opt/ml/model"
      MAX_BATCH_PREFILL_TOKENS       = "8191"
      MAX_INPUT_LENGTH               = "8191"
      MAX_TOTAL_TOKENS               = "8192"
      MODEL_CACHE_ROOT               = "/opt/ml/model"
      SAGEMAKER_ENV                  = "1"
      SAGEMAKER_MODEL_SERVER_WORKERS = "1"
      SAGEMAKER_PROGRAM              = "inference.py"
      SM_NUM_GPUS                    = "1"
    }

    model_data_source {
      s3_data_source {
        s3_uri = "s3://jumpstart-cache-prod-us-east-1/huggingface-llm/huggingface-llm-mistral-7b-instruct/artifacts/inference-prepack/v1.0.0/",
        s3_data_type = "S3Prefix"
        compression_type = "None"
      }
    }
  }

  tags = {
    Application = var.app_name

    ENDPOINT_SERVER_TIMEOUT        = "3600"
    HF_MODEL_ID                    = "/opt/ml/model"
    MAX_BATCH_PREFILL_TOKENS       = "8191"
    MAX_INPUT_LENGTH               = "8191"
    MAX_TOTAL_TOKENS               = "8192"
    MODEL_CACHE_ROOT               = "/opt/ml/model"
    SAGEMAKER_ENV                  = "1"
    SAGEMAKER_MODEL_SERVER_WORKERS = "1"
    SAGEMAKER_PROGRAM              = "inference.py"
    SM_NUM_GPUS                    = "1"
  }
}

resource "aws_sagemaker_endpoint_configuration" "config" {
  name = "chat-bot-sagemaker-config"

  production_variants {
    variant_name           = "mistral-7b-variant"
    model_name             = aws_sagemaker_model.mistral_sagemaker_model.name
    initial_instance_count = 1
    instance_type          = var.sagemaker_inference_compute_size
  }
  # NOTE: there is no argument available here for network isolation or managed instance scaling

  tags = {
    Application = var.app_name
  }
}

resource "aws_sagemaker_endpoint" "endpoint" {
  name                 = "sagemaker-mistral-inference-ep"
  endpoint_config_name = aws_sagemaker_endpoint_configuration.config.name

  tags = {
    Application = var.app_name
  }
}

resource "aws_iam_role" "sagemaker_trust_role" {
  name = "sagemaker_role"

  assume_role_policy = <<-EOF
  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Action": "sts:AssumeRole",
        "Principal": {
          "Service": "sagemaker.amazonaws.com"
        },
        "Effect": "Allow",
        "Sid": ""
      }
    ]
  }
  EOF

  tags = {
    Application = var.app_name
  }
}

resource "aws_iam_role_policy_attachment" "sagemaker_full_access" {
  role       = aws_iam_role.sagemaker_trust_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonSageMakerFullAccess"
}

resource "aws_iam_role_policy_attachment" "s3_read_write_access" {
  role       = aws_iam_role.sagemaker_trust_role.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonS3FullAccess"
}

Steps to Reproduce

Run the standard terraform init, plan, and apply, then compare the endpoint configuration deployed by Terraform against the one deployed through the SageMaker Studio UI.
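
One way to make the comparison concrete (a sketch; the config names are the ones from the JSON above):

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

studio = sm.describe_endpoint_config(EndpointConfigName="jumpstart-mistral-1703083557499")
tf = sm.describe_endpoint_config(EndpointConfigName="chat-bot-sagemaker-config")

# present for the Studio config, absent for the Terraform one
print(studio["ProductionVariants"][0].get("ManagedInstanceScaling"))
print(tf["ProductionVariants"][0].get("ManagedInstanceScaling"))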

Debug Output

No response

Panic Output

No response

Important Factoids

No response

References

No response

Would you like to implement a fix?

None

justinretzolk commented 6 months ago

Hey @Bryson14 👋 Thank you for taking the time to raise this! As a heads up, we consider adding additional arguments to existing resources to be an enhancement, so I've updated the labels with that in mind.

deepakbshetty commented 6 months ago

Hi @Bryson14, are you able to provide the ECR image you used for sagemaker_mistral_public_image? Also, a working example, either via the CLI or elsewhere, would be great. Most models I have tried do not support managed instance scaling, so it's blocking me from writing a test case to enable this feature.

I have used the example here - https://repost.aws/questions/QUODaQEyKNTbqWLYszAIYCIg/creating-jumpstart-sagemaker-endpoint-with-terraform-fails-with-model-needs-flash-attention - and the acceptance test fails with:

endpoint_configuration_test.go:162: Step 1/3 error: Error running apply: exit status 1

    Error: creating SageMaker Endpoint Configuration: ValidationException: ManagedInstanceScaling is not supported with the given EndpointConfig setup.
            status code: 400, request id: 32f1694c-6389-43f9-9bea-5245a1497bfd

      with aws_sagemaker_endpoint_configuration.test,
      on terraform_plugin_test.tf line 54, in resource "aws_sagemaker_endpoint_configuration" "test":
      54: resource "aws_sagemaker_endpoint_configuration" "test" {

dkhundley commented 6 months ago

Isn't enabling the network isolation done in the SageMaker model and not the endpoint config?

deepakbshetty commented 6 months ago

Yes. When a model is specified in an endpoint config, VPC/subnet details and network isolation cannot be specified; the two are mutually exclusive. The endpoint config inherits the VPC config and network isolation from the model definition.
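
For illustration, a hedged boto3 sketch of where network isolation actually lives (names and image URI are placeholders); Terraform already exposes this switch on aws_sagemaker_model as enable_network_isolation:

import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")
sm.create_model(
    ModelName="example-model",
    ExecutionRoleArn="arn:aws:iam::XXXX:role/sagemaker_role",
    EnableNetworkIsolation=True,  # set on the model, inherited by the endpoint config
    PrimaryContainer={"Image": "XXXX.dkr.ecr.us-east-1.amazonaws.com/example:latest"},
)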

RLashofRegas commented 1 month ago

Not sure if it helps the maintainers, but it may at least help someone who stumbles on this issue later. To address the point above that "the values for these JumpStart images and S3 locations are not published": I was able to retrieve these programmatically like this:

(VS Code Jupyter notebook script formatting)

# %%
from sagemaker.jumpstart.notebook_utils import list_jumpstart_models
from sagemaker import image_uris, model_uris

# %%
region = "us-west-2"  # Your region.
instance_type = "ml.g5.2xlarge"  # Your desired instance type. Note image will be different for gpu vs cpu instances.

# %%
# find model_id for a given search string
[m for m in list_jumpstart_models(region=region) if "mistral" in m]

# %%
model_id = "huggingface-llm-mistral-7b-instruct"

# %%
# find latest version of model_id
[m for m in list_jumpstart_models(filter=f"model_id=={model_id}", list_versions=True, region=region)]

# %%
model_version = "3.1.0"

# %%
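# retrieve the ECR image URI for the inference container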
image_uris.retrieve(framework=None, instance_type=instance_type, image_scope="inference", model_id=model_id, model_version=model_version, region=region)

# %%
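# retrieve the S3 URI of the prepacked model artifacts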
model_uris.retrieve(instance_type=instance_type, model_scope="inference", model_id=model_id, model_version=model_version, region=region)
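
The image URI and model URI returned can then be plugged into var.sagemaker_mistral_public_image and the s3_uri in the model_data_source block of the Terraform config above.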