idealo / terraform-aws-opensearch

Terraform module to provision an OpenSearch cluster with SAML authentication.
Apache License 2.0

Module should not create role mapping for master_user_arn if advanced_security_options_internal_user_database_enabled is enabled #34

Closed insider89 closed 1 year ago

insider89 commented 1 year ago

I am using the following configuration to create an OpenSearch cluster:

data "aws_secretsmanager_secret" "opensearch_admin" {
  name = "dev/opensearch/admin"
}

data "aws_secretsmanager_secret_version" "opensearch_admin" {
  secret_id = data.aws_secretsmanager_secret.opensearch_admin.id
}

locals {
  admin_credentials = jsondecode(data.aws_secretsmanager_secret_version.opensearch_admin.secret_string)
}

module "opensearch" {
  source  = "idealo/opensearch/aws"
  version = "1.4.0"

  cluster_name                                             = "opensearch"
  cluster_domain                                           = data.terraform_remote_state.dns.outputs.public_hosted_zone
  custom_endpoint_certificate_arn                          = data.terraform_remote_state.dns.outputs.acm_certificate_arn
  cluster_domain_private                                   = false
  cluster_version                                          = "2.5"
  hot_instance_type                                        = "m5.large.elasticsearch"
  hot_instance_count                                       = 1
  availability_zones                                       = 1
  subnet_ids                                               = [data.terraform_remote_state.vpc.outputs.database_subnets[0]]
  vpc_enabled                                              = true
  warm_instance_enabled                                    = false
  master_instance_enabled                                  = false
  saml_enabled                                             = false
  advanced_security_options_internal_user_database_enabled = true
  advanced_security_options_master_user_name               = local.admin_credentials.username
  advanced_security_options_master_user_password           = local.admin_credentials.password
  master_user_arn                                          = ""
  ebs_enabled                                              = true
  ebs_volume_size                                          = "20"
  security_group_ids                                       = [module.opensearch_sg.security_group_id]
}

provider "elasticsearch" {
  url         = module.opensearch.cluster_endpoint
  aws_region  = "eu-east-1"
  healthcheck = false
}

module "opensearch_sg" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "~> 4.17"

  name        = "opensearch"
  description = "Allow RDS port 443 within VPN ${data.terraform_remote_state.vpc.outputs.vpc_id}"
  vpc_id      = data.terraform_remote_state.vpc.outputs.vpc_id

  # ingress
  ingress_with_cidr_blocks = [
    {
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      description = "Opensearch access from within VPC ${data.terraform_remote_state.vpc.outputs.vpc_id}"
      cidr_blocks = data.terraform_remote_state.vpc.outputs.vpc_cidr_block
    },
  ]
}

But it fails with the following error:

╷
│ Error: HTTP 403 Forbidden: Permission denied. Please ensure that the correct credentials are being used to access the cluster.
│
│   with module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_arn["all_access"],
│   on .terraform/modules/opensearch/role_mapping.tf line 16, in resource "elasticsearch_opensearch_roles_mapping" "master_user_arn":
│   16: resource "elasticsearch_opensearch_roles_mapping" "master_user_arn" {
│
╵
╷
│ Error: HTTP 403 Forbidden: Permission denied. Please ensure that the correct credentials are being used to access the cluster.
│
│   with module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_arn["security_manager"],
│   on .terraform/modules/opensearch/role_mapping.tf line 16, in resource "elasticsearch_opensearch_roles_mapping" "master_user_arn":
│   16: resource "elasticsearch_opensearch_roles_mapping" "master_user_arn" {

master_user_arn is empty, but the module still tries to create a role mapping for it.

> terraform version
Terraform v1.4.6
on darwin_arm64
+ provider registry.terraform.io/hashicorp/aws v4.67.0
+ provider registry.terraform.io/phillbaker/elasticsearch v2.0.7
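The fix the issue title asks for amounts to guarding the mapping resource so it is skipped when no ARN is set. A minimal sketch of what that guard could look like inside the module's role_mapping.tf — the variable wiring is an assumption, not the module's actual code:

```hcl
# Hypothetical guard: only create the ARN-based mappings when a master_user_arn
# is actually provided and the internal user database is not in use.
resource "elasticsearch_opensearch_roles_mapping" "master_user_arn" {
  for_each = (
    var.advanced_security_options_internal_user_database_enabled || var.master_user_arn == ""
  ) ? toset([]) : toset(["all_access", "security_manager"])

  role_name     = each.key
  backend_roles = [var.master_user_arn]
}
```

With an empty `for_each`, Terraform plans zero instances of the resource, so the plan above would no longer try to map an empty backend role.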
steveteuber commented 1 year ago

Hi @insider89,

Thank you for opening this issue. I think the problem is that you need to pass the username and password in the provider configuration as well:

provider "elasticsearch" {
  url         = module.opensearch.cluster_endpoint
  aws_region  = "eu-east-1"
  healthcheck = false
  username    = local.admin_credentials.username
  password    = local.admin_credentials.password
}
insider89 commented 1 year ago

Hi @steveteuber. I just tried adding the username and password, but got the same result:

data "aws_caller_identity" "current" {}

data "aws_secretsmanager_secret" "opensearch_admin" {
  name = "dev/opensearch/admin"
}

data "aws_secretsmanager_secret_version" "opensearch_admin" {
  secret_id = data.aws_secretsmanager_secret.opensearch_admin.id
}

locals {
  admin_credentials = jsondecode(data.aws_secretsmanager_secret_version.opensearch_admin.secret_string)
}

data "aws_region" "current" {}

module "opensearch" {
  source  = "idealo/opensearch/aws"
  version = "1.4.0"

  cluster_name                                             = "opensearch"
  cluster_domain                                           = data.terraform_remote_state.dns.outputs.public_hosted_zone
  custom_endpoint_certificate_arn                          = data.terraform_remote_state.dns.outputs.acm_certificate_arn
  cluster_domain_private                                   = false
  cluster_version                                          = "2.5"
  hot_instance_type                                        = "m5.large.elasticsearch"
  hot_instance_count                                       = 1
  availability_zones                                       = 1
  subnet_ids                                               = [data.terraform_remote_state.vpc.outputs.database_subnets[0]]
  vpc_enabled                                              = true
  warm_instance_enabled                                    = false
  master_instance_enabled                                  = false
  saml_enabled                                             = false
  advanced_security_options_internal_user_database_enabled = true
  advanced_security_options_master_user_name               = local.admin_credentials.username
  advanced_security_options_master_user_password           = local.admin_credentials.password
  advanced_security_options_enabled                        = true
  ebs_enabled                                              = true
  ebs_volume_size                                          = "20"
  security_group_ids                                       = [module.opensearch_sg.security_group_id]
}

provider "elasticsearch" {
  url         = module.opensearch.cluster_endpoint
  aws_region  = data.aws_region.current.name
  healthcheck = false
  username    = local.admin_credentials.username
  password    = local.admin_credentials.password
}

module "opensearch_sg" {
  source  = "terraform-aws-modules/security-group/aws"
  version = "~> 4.17"

  name        = "opensearch"
  description = "Allow RDS port 443 within VPN ${data.terraform_remote_state.vpc.outputs.vpc_id}"
  vpc_id      = data.terraform_remote_state.vpc.outputs.vpc_id

  # ingress
  ingress_with_cidr_blocks = [
    {
      from_port   = 443
      to_port     = 443
      protocol    = "tcp"
      description = "Opensearch access from within VPC ${data.terraform_remote_state.vpc.outputs.vpc_id}"
      cidr_blocks = data.terraform_remote_state.vpc.outputs.vpc_cidr_block
    },
  ]
}

And the error:

Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_arn["all_access"] will be created
  + resource "elasticsearch_opensearch_roles_mapping" "master_user_arn" {
      + backend_roles = [
          + "",
        ]
      + id            = (known after apply)
      + role_name     = "all_access"
    }

  # module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_arn["security_manager"] will be created
  + resource "elasticsearch_opensearch_roles_mapping" "master_user_arn" {
      + backend_roles = [
          + "",
        ]
      + id            = (known after apply)
      + role_name     = "security_manager"
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_arn["all_access"]: Creating...
module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_arn["security_manager"]: Creating...
╷
│ Error: HTTP 403 Forbidden: Permission denied. Please ensure that the correct credentials are being used to access the cluster.
│
│   with module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_arn["security_manager"],
│   on .terraform/modules/opensearch/role_mapping.tf line 16, in resource "elasticsearch_opensearch_roles_mapping" "master_user_arn":
│   16: resource "elasticsearch_opensearch_roles_mapping" "master_user_arn" {
│
╵
╷
│ Error: HTTP 403 Forbidden: Permission denied. Please ensure that the correct credentials are being used to access the cluster.
│
│   with module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_arn["all_access"],
│   on .terraform/modules/opensearch/role_mapping.tf line 16, in resource "elasticsearch_opensearch_roles_mapping" "master_user_arn":
│   16: resource "elasticsearch_opensearch_roles_mapping" "master_user_arn" {
│
╵
Releasing state lock. This may take a few moments...
steveteuber commented 1 year ago

Now, the role mappings are no longer created when using the internal user database. Could you try it again, please?

egarbi commented 1 year ago

Now, the role mappings are no longer created when using the internal user database. Could you try it again, please?

I think you still need to map the role; it just should be done with the internal user instead of master_user_arn. That's the way I have it, so it should just be a conditional to decide which one to map. See the screenshot of my cluster:

[Screenshot: 2023-05-23 at 10:16:00]

The user admin is the internal user defined by advanced_security_options_master_user_name.

Or maybe that's done automatically while creating the cluster; in that case, ignore my message. @insider89 can confirm after applying. I cannot test it myself at the moment.
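The conditional mapping described above could look roughly like the following inside the module. This is a sketch, not the module's actual code — a resource named master_user_name did later ship (it appears in the v1.4.2 plan output further down), but the exact condition and variable names here are assumptions:

```hcl
# Hypothetical counterpart to the ARN mapping: when the internal user database
# is enabled, map the internal master user name to the admin roles instead.
resource "elasticsearch_opensearch_roles_mapping" "master_user_name" {
  for_each = var.advanced_security_options_internal_user_database_enabled ? toset(["all_access", "security_manager"]) : toset([])

  role_name = each.key
  users     = [var.advanced_security_options_master_user_name]
}
```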

insider89 commented 1 year ago

@egarbi Could you please advise what additional configuration is required to map the user instead of master_user_arn?

egarbi commented 1 year ago

@egarbi Could you please advise what additional configuration is required to map the user instead of master_user_arn?

Well, I guess you can pass this variable without having to touch the code (using version 1.4.0):

role_mappings = {
  all_access = {
    users = [var.advanced_security_options_master_user_name]
  }
  security_manager = {
    users = [var.advanced_security_options_master_user_name]
  }
}

But again, I'm not sure if that action is actually done automatically when you pass advanced_security_options_internal_user_database_enabled = true on cluster creation. If that's the case, then the latest version 1.4.1 will just work, since @steveteuber merged the fix.

insider89 commented 1 year ago

@steveteuber @egarbi I can confirm that I don't have an error during applying in v1.4.1.

But in the UI I don't see any mapping (should there be one?). Please note that I didn't apply the mapping suggested by @egarbi. [Screenshot: 2023-05-23 at 11:47:59]

insider89 commented 1 year ago

BTW, in the View role and identities page I see the following mapping: [Screenshot: 2023-05-23 at 11:56:19] [Screenshot: 2023-05-23 at 11:55:13]

And I found it in Roles: [Screenshot: 2023-05-23 at 11:59:53]

steveteuber commented 1 year ago

@egarbi I think you are right. But if I remember correctly, the role mappings are created automatically. Just to be sure that the role mappings are present, I'll create a new pull request.

steveteuber commented 1 year ago

I think this should work: #36. Are there any objections?

insider89 commented 1 year ago

@steveteuber When trying to upgrade from 1.4.1 to 1.4.2, I get the 403 error again:

  # module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["all_access"] will be created
  + resource "elasticsearch_opensearch_roles_mapping" "master_user_name" {
      + id        = (known after apply)
      + role_name = "all_access"
      + users     = (sensitive value)
    }

  # module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["security_manager"] will be created
  + resource "elasticsearch_opensearch_roles_mapping" "master_user_name" {
      + id        = (known after apply)
      + role_name = "security_manager"
      + users     = (sensitive value)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["all_access"]: Creating...
module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["security_manager"]: Creating...
╷
│ Error: HTTP 403 Forbidden: Permission denied. Please ensure that the correct credentials are being used to access the cluster.
│
│   with module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["security_manager"],
│   on .terraform/modules/opensearch/role_mapping.tf line 31, in resource "elasticsearch_opensearch_roles_mapping" "master_user_name":
│   31: resource "elasticsearch_opensearch_roles_mapping" "master_user_name" {
│
╵
╷
│ Error: HTTP 403 Forbidden: Permission denied. Please ensure that the correct credentials are being used to access the cluster.
│
│   with module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["all_access"],
│   on .terraform/modules/opensearch/role_mapping.tf line 31, in resource "elasticsearch_opensearch_roles_mapping" "master_user_name":
│   31: resource "elasticsearch_opensearch_roles_mapping" "master_user_name" {

On 1.4.1 everything looks good.

steveteuber commented 1 year ago

Hm, I thought so. Is it possible to create indices with your current credentials?

insider89 commented 1 year ago

@steveteuber Yes, I can create indices with current credentials.

steveteuber commented 1 year ago

Strange, then it should be possible to create/modify the role mappings... Does anyone else have an idea?

egarbi commented 1 year ago

Strange, then it should be possible to create/modify the role mappings... Does anyone else have an idea?

If I run this against my cluster endpoint, it works without issues:

curl -XPUT -u 'admin:password' "https://production.logs.example.com/_plugins/_security/api/rolesmapping/all_access" -H 'Content-Type: application/json' -d'
  {
    "backend_roles" : [],
    "hosts" : [],
    "users" : [ "admin" ]
  }'
{"status":"OK","message":"'all_access' updated."}

And this is basically the same thing the provider/module should do.

egarbi commented 1 year ago

It looks like it is related to this.

@insider89 Can you try adding sign_aws_requests = false to the provider definition? It should then look something like:

provider "elasticsearch" {
  url         = module.opensearch.cluster_endpoint
  aws_region  = data.aws_region.current.name
  healthcheck = false
  sign_aws_requests = false
  username    = local.admin_credentials.username
  password    = local.admin_credentials.password
}
insider89 commented 1 year ago

@egarbi With sign_aws_requests = false, I get another error (it looks like the provider thinks it is talking to an old Elasticsearch version):

Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["all_access"] will be created
  + resource "elasticsearch_opensearch_roles_mapping" "master_user_name" {
      + id        = (known after apply)
      + role_name = "all_access"
      + users     = (sensitive value)
    }

  # module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["security_manager"] will be created
  + resource "elasticsearch_opensearch_roles_mapping" "master_user_name" {
      + id        = (known after apply)
      + role_name = "security_manager"
      + users     = (sensitive value)
    }

Plan: 2 to add, 0 to change, 0 to destroy.

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["all_access"]: Creating...
module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["security_manager"]: Creating...
╷
│ Error: ElasticSearch version 2.5.0 is older than 6.0.0 and is not supported, flavor: 0.
│
│   with module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["security_manager"],
│   on .terraform/modules/opensearch/role_mapping.tf line 31, in resource "elasticsearch_opensearch_roles_mapping" "master_user_name":
│   31: resource "elasticsearch_opensearch_roles_mapping" "master_user_name" {
│
╵
╷
│ Error: ElasticSearch version 2.5.0 is older than 6.0.0 and is not supported, flavor: 0.
│
│   with module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["all_access"],
│   on .terraform/modules/opensearch/role_mapping.tf line 31, in resource "elasticsearch_opensearch_roles_mapping" "master_user_name":
│   31: resource "elasticsearch_opensearch_roles_mapping" "master_user_name" {
│
╵
Releasing state lock. This may take a few moments...
egarbi commented 1 year ago

@egarbi With sign_aws_requests = false, I get another error (it looks like the provider thinks it is talking to an old Elasticsearch version):

Not sure if it's related, but make sure you have compatibility mode (with Elasticsearch) enabled on your Amazon OpenSearch cluster. This tricks incompatible clients into believing they are talking to an ES cluster.
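Compatibility mode is set through the domain's advanced options. A sketch of what that looks like on a raw aws_opensearch_domain resource — whether and how the idealo module exposes these options is an assumption here:

```hcl
# Sketch: "override_main_response_version" makes the domain report an
# Elasticsearch 7.10 version string to clients that check the version,
# which is what older ES clients and providers expect.
resource "aws_opensearch_domain" "example" {
  domain_name    = "opensearch"
  engine_version = "OpenSearch_2.5"

  advanced_options = {
    "override_main_response_version" = "true"
  }
}
```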

insider89 commented 1 year ago

@egarbi Could you please point me to where I can check that? (I didn't find any such property in my configuration.)

insider89 commented 1 year ago

@egarbi It looks like the Elasticsearch Terraform provider still does not support OpenSearch v2, only v1: https://github.com/phillbaker/terraform-provider-elasticsearch/issues/323

With the following workaround I was able to successfully apply v1.4.2:

provider "elasticsearch" {
  url                   = module.opensearch.cluster_endpoint
  aws_region            = data.aws_region.current.name
  healthcheck           = false
  sign_aws_requests     = false
  elasticsearch_version = "7.17.7"
  username              = local.admin_credentials.username
  password              = local.admin_credentials.password
}
egarbi commented 1 year ago

@insider89 thanks for the feedback. @steveteuber you can now close this one

steveteuber commented 1 year ago

Ok, I will close this issue for now. Thanks for your help!

bala-auvaria commented 1 year ago
provider "elasticsearch" {
  url                   = module.opensearch.cluster_endpoint
  aws_region            = data.aws_region.current.name
  healthcheck           = false
  sign_aws_requests     = false
  elasticsearch_version = "7.17.7"
  username              = local.admin_credentials.username
  password              = local.admin_credentials.password
}

terraform apply works fine, but terraform destroy shows the error below:

module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["all_access"]: Refreshing state... [id=all_access]
╷
│ Error: elastic: Error 403 (Forbidden)
│ 
│   with module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name["all_access"],
│   on .terraform/modules/opensearch/role_mapping.tf line 31, in resource "elasticsearch_opensearch_roles_mapping" "master_user_name":
│   31: resource "elasticsearch_opensearch_roles_mapping" "master_user_name" {
│
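One way to get past a 403 during destroy — a suggestion not confirmed in this thread: since the role mappings live inside the domain, they are deleted together with it anyway, so the mapping resources can be dropped from Terraform state so destroy does not try to refresh or delete them over HTTP.

```console
# Hypothetical workaround: remove all instances of the mapping resource from
# state (this does not touch the cluster), then destroy the rest as usual.
terraform state rm 'module.opensearch.elasticsearch_opensearch_roles_mapping.master_user_name'
terraform destroy
```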