hashicorp / terraform

Terraform enables you to safely and predictably create, change, and improve infrastructure. It is a source-available tool that codifies APIs into declarative configuration files that can be shared amongst team members, treated as code, edited, reviewed, and versioned.
https://www.terraform.io/

Support recursive merging of maps - deepmerge function #24987

Closed bruceharrison1984 closed 2 years ago

bruceharrison1984 commented 4 years ago

Current Terraform Version

v0.12.16

Use-cases

The merge function doesn't honor maps with nested properties as I would expect. I would expect to be able to overwrite a single nested value, but instead nested keys that are missing from the later argument are dropped from the output. It behaves as if the second argument's root-level properties simply overwrite those of the first argument.

variable object_one {
  type = map
  default = {
    user = {
      employer = "Theater"
      identity = {
        first_name = "bill"
        last_name  = "shakespeare"
      }
      location = {
        address = "1234 ABC Ln."
        country = "United States"
      }
    }
  }
}

variable object_two {
  type = map
  default = {
    user = {
      employer = "Theater"
      location = {
        address = "789 Algebra St."
      }
    }
  }
}

output user_output {
  value = merge(var.object_one, var.object_two)
}
Output:

user_output = {
  "user" = {
    "employer" = "Theater"
    "location" = {
      "address" = "789 Algebra St."
    }
  }
}

The problem I see here is that the entire identity block from object_one is dropped, because merge only compares top-level keys: the second map's user value replaces the first one wholesale.
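For illustration only (this is a Python sketch, not Terraform internals), modeling merge as a shallow dictionary update reproduces the same loss:

```python
# Hypothetical Python model of Terraform's merge(): later arguments
# replace earlier ones at the TOP level only, like dict unpacking.
object_one = {
    "user": {
        "employer": "Theater",
        "identity": {"first_name": "bill", "last_name": "shakespeare"},
        "location": {"address": "1234 ABC Ln.", "country": "United States"},
    }
}
object_two = {
    "user": {
        "employer": "Theater",
        "location": {"address": "789 Algebra St."},
    }
}

# Shallow merge: the entire "user" value from object_two wins,
# so the nested "identity" map from object_one disappears.
merged = {**object_one, **object_two}
```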

Attempted Solutions

The issue can be worked around by merging smaller chunks of maps and then building a larger map from the resulting merges. This may not be possible depending on the source of the maps (JSON/YAML files, for example).

The other option is duplicating the data in both places to ensure that data is not dropped on merge.

variable object_one {
  type = map
  default = {
    user = {
      employer = "Theater"
      identity = {
        first_name = "bill"
        last_name  = "shakespeare"
      }
      location = {
        address = "1234 ABC Ln."
        country = "United States"
      }
    }
  }
}

variable object_two {
  type = map
  default = {
    user = {
      employer = "Theater"
      identity = {
        first_name = "bill"
        last_name  = "shakespeare"
      }
      location = {
        address = "789 Algebra St."
        country = "United States"
      }
    }
  }
}

output user_output {
  value = merge(var.object_one, var.object_two)
}
Outputs:

user_output = {
  "user" = {
    "employer" = "Theater"
    "identity" = {
      "first_name" = "bill"
      "last_name" = "shakespeare"
    }
    "location" = {
      "address" = "789 Algebra St."
      "country" = "United States"
    }
  }
}

This works fine, but it kinda defeats the purpose of doing the merge in the first place.

Proposal

The output should be a true merge of the maps in a recursive fashion, rather than a simple root comparison. Using the first example, I expected the following output:

Outputs:

user_output = {
  "user" = {
    "employer" = "Theater"
    "identity" = {
      "first_name" = "bill"
      "last_name" = "shakespeare"
    }
    "location" = {
      "address" = "789 Algebra St."
      "country" = "United States"
    }
  }
}
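A minimal sketch of the requested behavior, in Python for illustration (not a proposed Terraform implementation): keys whose values are both maps recurse, while everything else is overwritten by the later argument.

```python
# Illustrative recursive merge: when both sides of a key are maps,
# recurse; otherwise the later value wins.
def deep_merge(a, b):
    out = dict(a)
    for key, value in b.items():
        if isinstance(out.get(key), dict) and isinstance(value, dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value
    return out

object_one = {
    "user": {
        "employer": "Theater",
        "identity": {"first_name": "bill", "last_name": "shakespeare"},
        "location": {"address": "1234 ABC Ln.", "country": "United States"},
    }
}
object_two = {
    "user": {"employer": "Theater", "location": {"address": "789 Algebra St."}}
}

# identity survives, address is overridden, country is preserved
result = deep_merge(object_one, object_two)
```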

Use case for this is when JSON/YAML files are used for holding configuration data based on workspaces or some other condition, the maps cannot be reliably merged without lots of duplication of values in the 'child' configuration data.

The introduction of a deepmerge function would allow for this behavior, and preserve any existing deployments that rely on the existing merge function.

References

The current documentation doesn't mention nested properties at all, so if this work is rejected, it may be worth making a note of this behavior in the documentation.

bruceharrison1984 commented 4 years ago

I just found another issue on the topic: https://github.com/hashicorp/terraform/issues/24236

So i think proposing a new function for this behavior is in-line with the conclusion of that issue.

bruceharrison1984 commented 4 years ago

PR: https://github.com/hashicorp/terraform/pull/25032

KyleKotowick commented 3 years ago

While waiting for that PR to be merged, I built a module to do this in pure Terraform.

GitHub: https://github.com/Invicton-Labs/terraform-null-deepmerge
Terraform Registry: https://registry.terraform.io/modules/Invicton-Labs/deepmerge/null/latest

apparentlymart commented 3 years ago

Hi @bruceharrison1984! Thanks for this feature request, and for the proposed solution for it.

One reason we've been reluctant to offer a function like this so far is that there doesn't seem to be consistent expectations about what it should do across everyone who has proposed things like this in the past. (That includes both discussions about the merge function behavior, along with related things such as proposals that variable default should merge recursively, such as in #16517.)

Once we have a function like this in the language its behavior will be frozen for compatibility reasons, so I'd like to make sure of what problem we're aiming to solve and try to pick a set of behaviors that best solve the problem. To start though, I did a small survey of some similar functions in other ecosystems, and also compared those with the implementation you proposed in #25032:

| Scenario | #25032 | npm deepmerge | lodash `_.merge` | Rails `Hash#deep_merge` |
|---|---|---|---|---|
| any V into null | V | V | V | V |
| null into any V | nothing | null | null | null |
| primitive V into sequence S | V | V | V | V |
| sequence S into primitive V | S | S | S | S |
| primitive V into mapping M | V | V | V | V |
| mapping M into primitive V | M | M | M | M |
| primitive V into set S | V | n/a | n/a | V |
| set S into primitive V | S | n/a | n/a | S |
| sequence S into sequence T | S | concat(T, S) | deep merge elements by index | S |
| set S into set T | S | n/a | n/a | S |
| sequence S into set T | S | n/a | n/a | S |
| set T into sequence S | T | n/a | n/a | T |

(lodash `_.defaultsDeep` seems to be, for our purposes, equivalent to `_.merge` but with the argument order inverted. Therefore I didn't include it as a separate column above, but I did review it.)

The good news is that the differences were not as severe as I expected: aside from the fact that JavaScript doesn't have a "set" type (which is why some of the cells contain "n/a"), the only significant difference between these four implementations is in the treatment of merging sequences into other sequences. Here we see three different possibilities represented: overwrite the earlier sequence entirely (#25032 and Rails), concatenate the two sequences (npm deepmerge), or deep-merge the elements pairwise by index (lodash).
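For concreteness, the sequence-merging behaviors from the table can be sketched in Python (illustrative only; `dict_merge` is a shallow stand-in for the element-level merge):

```python
# Illustrative sketches of the three sequence-merge strategies.
from itertools import zip_longest

_MISSING = object()

def dict_merge(a, b):
    # stand-in for the element-level merge; shallow is enough for this demo
    return {**a, **b}

def seq_overwrite(t, s):
    # #25032 and Rails Hash#deep_merge: the later sequence wins
    return list(s)

def seq_concat(t, s):
    # npm deepmerge: concatenate the two sequences
    return list(t) + list(s)

def seq_merge_by_index(t, s, merge):
    # lodash _.merge: deep-merge elements pairwise by index
    out = []
    for a, b in zip_longest(t, s, fillvalue=_MISSING):
        if b is _MISSING:
            out.append(a)
        elif a is _MISSING:
            out.append(b)
        elif isinstance(a, dict) and isinstance(b, dict):
            out.append(merge(a, b))
        else:
            out.append(b)
    return out

t = [{"x": 1}, {"y": 2}]
s = [{"x": 9}]
# seq_overwrite(t, s)                  -> [{"x": 9}]
# seq_concat(t, s)                     -> [{"x": 1}, {"y": 2}, {"x": 9}]
# seq_merge_by_index(t, s, dict_merge) -> [{"x": 9}, {"y": 2}]
```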

This feature request doesn't include a "root" use-case ... that is, you stated that you want to merge values from JSON/YAML files but you didn't state what the goal of such a merge would be, and thus it's tough to say which of the above behaviors would be most appropriate for what you are aiming to achieve.

Considering the use-case of applying defaults values in partial data structures though, it actually feels to me like none of those possibilities really meets the need. It's common for data structures in Terraform to include lists and maps of nested objects, like the following example variable:

variable "networks" {
  type = object({
    base_cidr_block = string
    subnets = list(
      object({
        # This is using an experimental new feature in Terraform 0.14
        # to mark this attribute as optional.
        new_bits = optional(number)
        number   = number
      })
    )
  })
}

Our friends who want to concisely write out default values for variables have typically expressed that they want to be able to specify defaults for attributes in nested objects too, but none of the sequence-merging behaviors above really meet that need. The closest is the lodash _.merge case, but even that falls short because it requires the module author to predict how many items will be in the subnets list and duplicate the default for each of them:

locals {
  networks = deepmerge({
    subnets = [
      # assume there will always be five subnets given
      { new_bits = 8 },
      { new_bits = 8 },
      { new_bits = 8 },
      { new_bits = 8 },
      { new_bits = 8 },
    ]
  }, var.networks)
}

To meet this use-case it seems like instead what we want is a sort of prototypical example of what one subnet object should look like by default, which the function could then apply to all of the elements:

locals {
  networks = defaults(var.networks, {
    # "subnets" is a list of objects in the input variable,
    # but only a single object for the sake of specifying
    # the defaults. Terraform would merge the default
    # object with each of the elements in the input.
    subnets = { new_bits = 8 }
  })
}
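The prototype idea above can be sketched in Python (illustrative only; the `defaults`-style semantics here are an assumption, not Terraform's actual behavior): when the input is a list and the default is a single "prototype" object, the prototype is applied to every element instead of merging by index.

```python
# Hypothetical sketch of a defaults()-style function: a single prototype
# object is applied to each list element; null/missing attributes fall
# back to the default value.
def apply_defaults(value, defaults):
    if isinstance(value, list) and isinstance(defaults, dict):
        return [apply_defaults(elem, defaults) for elem in value]
    if isinstance(value, dict) and isinstance(defaults, dict):
        out = dict(value)
        for key, dval in defaults.items():
            if key in out and out[key] is not None:
                out[key] = apply_defaults(out[key], dval)
            else:
                out[key] = dval
        return out
    return defaults if value is None else value

networks = {
    "base_cidr_block": "10.0.0.0/16",
    "subnets": [{"number": 1, "new_bits": None}, {"number": 2, "new_bits": 4}],
}
# One prototype fills in new_bits for every subnet that lacks it.
filled = apply_defaults(networks, {"subnets": {"new_bits": 8}})
# filled["subnets"] == [{"number": 1, "new_bits": 8}, {"number": 2, "new_bits": 4}]
```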

To help consider how best to move forward with this feature request, I'm curious to hear more about your underlying problem that is motivating you to want to merge together data structures from different JSON/YAML files, to see if it is compatible with or at least similar to the default-value-merging use-case we've had requests about before. Since the focus of the Terraform language isn't data structure manipulation, we'd ideally like to add only a single function that addresses this family of use-cases, but we'll have to consider what all of the use-cases are first before we can see if that's possible.


Another relatively-minor difference in your implementation vs. the others I reviewed is that it causes attributes with null values to be omitted entirely in the result, rather than left as null, as you noted in the documentation.

Due to how Terraform's structural type system is designed, objects with null values are important for recognizing type conformance, so if we were to move forward with that design we would need to ensure that it's able to preserve nulled attributes, to ensure that the result's type would always be a subtype of all of the input types.

This is more of an implementation detail than anything else, so I wanted to keep it separate from the discussion about the use-cases, but thought it worth noting now mostly so I don't forget about it in subsequent discussion.

KyleKotowick commented 3 years ago

Hi @apparentlymart , thank you for the very in-depth research. Since I'm particularly interested in this function, I'll provide a use case for it that I use a lot.

I use a branch-based system for multiple AWS accounts (each branch deploys to a different account) for a dev/test/prod system. Since each environment has different configuration values (e.g. instance types, domain names, etc.), but much of the configuration is consistent, I use a framework where there is a "global" config map (values used everywhere, and defaults), and then branch-specific config maps. I built a deepmerge module (see my comment before yours) and use it for this, where I do a deep merge of the global config map with the branch-specific config map, with values from the latter overwriting the former. This provides me with a single output config map which I then reference in all of my resources. By doing so, I can keep the same resource code and just modify the branch configs as required, and it all works perfectly.

So for example, my config might look like so:

locals {
  config_global = {
    app_name = {
      pretty = "My Application"
      code = "my-application"
    }
    rds = {
      database_engine = "aurora-mysql"
      database_name = "mydb"
    }
  }

  config-dev = {
    rds = {
      instance_type = "db.t3.small"
    }
    root_domain = "dev.mydomain.com"
  }

  config-test = {
    rds = {
      instance_type = "db.t3.small"
    }
    root_domain = "test.mydomain.com"
  }

  config-prod = {
    rds = {
      instance_type = "db.t3.large"
    }
    root_domain = "mydomain.com"
  }

  branch_configs = {
    dev = local.config-dev
    test = local.config-test
    prod = local.config-prod
  }

  config_branch = local.branch_configs[module.git_info.branch]

  config = module.deepmerge-config.merged
}

module "git_info" {
  source = "Invicton-Labs/git-info/external"
}

module "deepmerge-config" {
  source = "Invicton-Labs/deepmerge/null"
  maps = [
    local.config_global,
    local.config_branch
  ]
}

Then in my resources, I would just reference local.config.rds.instance_type and it would have the correct value for the currently checked-out branch. Being able to do the same thing without having to use my deepmerge module would be fantastic.

I built the deepmerge module to merge maps but overwrite (not append) sets/lists, although I don't have a particular reason for that. Ideally, there would be a boolean function argument for whether it should overwrite or append set/list subfields.
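For illustration, this global-plus-branch pattern can be sketched in Python (not Terraform): maps merge recursively, while lists and scalars are simply overwritten by the branch config.

```python
# Illustrative sketch of the global + branch-specific config merge,
# with maps merged recursively and everything else overwritten.
def deep_merge(a, b):
    out = dict(a)
    for key, value in b.items():
        if isinstance(out.get(key), dict) and isinstance(value, dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value  # lists/scalars: branch config wins
    return out

config_global = {
    "app_name": {"pretty": "My Application", "code": "my-application"},
    "rds": {"database_engine": "aurora-mysql", "database_name": "mydb"},
}
config_dev = {
    "rds": {"instance_type": "db.t3.small"},
    "root_domain": "dev.mydomain.com",
}

# The dev config only adds/overrides what differs; the rest carries over.
config = deep_merge(config_global, config_dev)
```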

apparentlymart commented 3 years ago

Hi @kkotowick! Thanks for sharing the use-case details.

You mentioned that your module currently overwrites sets and lists, making it similar to #25032 and Rails Hash#deep_merge in my earlier table. I'd be curious to hear if that specific behavior is important for your use-case, or merely a case of "it doesn't matter what it does so I just arbitrarily choose this option".

The reason I'd like to dig into that is that the function growing to have multiple optional behaviors is something I'd specifically like to avoid if at all possible, and so I'd like to first exhaust the possibility of solving this whole family of "applying default values into a partial data structure" with one function, and if your use-case considers that whole row of the table to be "don't care" then that will eliminate one of the constraints on moving in that direction.

Thanks again!

KyleKotowick commented 3 years ago

For my particular use-case, I'd rather it overwrite sets/lists. I might use sets/lists for a list of availability zones to deploy instances in, for example, and I don't want to accidentally have it deploy to both the global/default ones and the branch-specific ones (possibly doubling costs unintentionally).

For cases where I want it to merge things instead of overwrite, I explicitly use a map instead of a set/list. If I need to do that and key/value pairs make no sense, I might just use numbered keys so that the indexing functionality would be the same as if I were indexing a list.

I can see potential cases where it would be useful to append sets/lists, but I can't think of any where it would be useful but I couldn't use a map instead of a list/set. Of course, user choice is great so that's why I suggest an argument to let the user choose.
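The numbered-keys workaround can be sketched in Python (illustrative only): because the "list" is really a map, a deep merge recurses into it per key instead of replacing it wholesale.

```python
# Illustrative deep merge where maps recurse and lists overwrite.
def deep_merge(a, b):
    out = dict(a)
    for key, value in b.items():
        if isinstance(out.get(key), dict) and isinstance(value, dict):
            out[key] = deep_merge(out[key], value)
        else:
            out[key] = value  # lists overwrite wholesale
    return out

# A real list is replaced entirely by the branch config...
merged_list = deep_merge(
    {"azs": ["us-east-1a", "us-east-1b"]},
    {"azs": ["us-east-1c"]},
)["azs"]  # ["us-east-1c"]

# ...but a map with numbered keys is merged element by element.
merged_map = deep_merge(
    {"azs": {"0": "us-east-1a", "1": "us-east-1b"}},
    {"azs": {"1": "us-east-1c"}},
)["azs"]  # {"0": "us-east-1a", "1": "us-east-1c"}
```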

bruceharrison1984 commented 3 years ago

Hi @apparentlymart, I'm glad to hear that you are interested in implementing something like this!

My use case is similar to @kkotowick, and it sounds like I agree with his implementation as well. I'm just going to steal a quote because I think it was put succinctly:

I use a branch-based system for multiple AWS accounts (each branch deploys to a different account) for a dev/test/prod system. Since each environment has different configuration values (e.g. instance types, domain names, etc.), but much of the configuration is consistent, I use a framework where there is a "global" config map (values used everywhere, and defaults), and then branch-specific config maps. .... By doing so, I can keep the same resource code and just modify the branch configs as required, and it all works perfectly.

So my desired behavior would be the same as what @kkotowick described: merge maps recursively, with later values overwriting earlier ones, and overwrite rather than append lists.

We merge config maps just like @kkotowick, so this sounds like a common pattern that people have converged on. It also looks like we are using the values in similar ways.

First we use the current merge to combine them in variables.tf

  ##compile settings from ./env templates
  settings = merge(yamldecode(file("${path.module}/env/settings.yml")), 
              yamldecode(file("${path.module}/env/settings.${terraform.workspace}.yml")))

settings.yml

DataWarehouse:
  ScaleSize: DW100c
  Tags:
    power_policy: auto_off ##required for auto-shutoff in the evenings
  Database:
    MaxCapacity: 32212254720

current settings.dev.yml (requires duplicating all settings under DataWarehouse, otherwise they are wiped out):

DataWarehouse:
  ScaleSize: DW100c
  Tags:
    power_policy: auto_off_and_on ##required for auto-shutoff in the evenings
  Database:
    MaxCapacity: 32212254720

desired settings.dev.yml (only include values that are different):

DataWarehouse:
  Tags:
    power_policy: auto_off_and_on ##required for auto-shutoff in the evenings / turn back on in the morning

We consume the values in the following way:

resource "azurerm_sql_database" "mdw" {
  name                             = "${local.resource_prefix}-dw"
  resource_group_name              = azurerm_resource_group.mdw.name
  location                         = azurerm_resource_group.mdw.location
  server_name                      = azurerm_sql_server.mdw.name
  edition                          = "DataWarehouse"
  requested_service_objective_name = local.settings["DataWarehouse"]["ScaleSize"]

  tags = local.settings["DataWarehouse"]["Tags"]
}

One other option that hasn't yet been mentioned would be if Terraform natively supported this type of config-map/variable merging. So variables... but on steroids. Arguably larger and more difficult to implement, but that would remove the necessity of this function (at least in this case).

Again, thanks for shining a light on this and let me know if you need any more information. @jbergknoff may also want to weigh in since he also contributed to the PR.

bjorges commented 3 years ago

Something like the puppet lookup merge behaviours would be great! https://puppet.com/docs/puppet/6.17/function.html#lookup

limratechnologies commented 3 years ago

Terraform 0.14 is released but not seeing this merged?

jspiro commented 3 years ago

Soooo what's holding this up?

Our use case is that we use YAML + for_each as a declarative interface for everything. The YAML gets rather deep and complicated, by necessity, and merging becomes a rat's nest like this because we can't do it automatically.

All I want to do is add a single value to an existing list!

branch_protection = merge(local.monorepo_branch_protection, {
    master = merge(local.monorepo_branch_protection.master, {
      required_status_checks = merge(local.monorepo_branch_protection.master.required_status_checks, {
        contexts = concat(local.monorepo_branch_protection.master.required_status_checks.contexts, [
          "codeowners-validator"
        ])
      })
    })
  })
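A hypothetical deep merge with list-concatenation semantics would collapse that nesting into a single call. Here is a Python sketch (not Terraform; the status-check names are made up for illustration):

```python
# Illustrative deep merge where dicts merge recursively, lists
# concatenate, and anything else is replaced by the later value.
def deep_merge_concat(a, b):
    if isinstance(a, dict) and isinstance(b, dict):
        out = dict(a)
        for key, value in b.items():
            out[key] = deep_merge_concat(out[key], value) if key in out else value
        return out
    if isinstance(a, list) and isinstance(b, list):
        return a + b
    return b

# Hypothetical existing config (check names are invented for the demo).
monorepo_branch_protection = {
    "master": {"required_status_checks": {"contexts": ["ci/build", "ci/test"]}}
}

# Adding one value to a deeply nested list takes a single call.
branch_protection = deep_merge_concat(
    monorepo_branch_protection,
    {"master": {"required_status_checks": {"contexts": ["codeowners-validator"]}}},
)
```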
bruceharrison1984 commented 3 years ago

@jspiro The Hashi folks seem to be handling this very delicately so they get it right the first time. There is a bit more info in the original feature request thread.

I don't see this getting merged anytime soon 😢

osterman commented 3 years ago

We've taken a different approach by creating a Terraform provider that handles deep merging of JSON/YAML strings using mergo.

https://github.com/cloudposse/terraform-provider-utils

terraform {
  required_providers {
    utils = {
      source = "cloudposse/utils"
    }
  }
}

locals {
  yaml_data_1 = file("${path.module}/data1.yaml")
  yaml_data_2 = file("${path.module}/data2.yaml")
}

data "utils_deep_merge_yaml" "example" {
  inputs = [
    local.yaml_data_1,
    local.yaml_data_2
  ]
}

output "deep_merge_output" {
  value = data.utils_deep_merge_yaml.example.output
}

We're using strings to escape the challenges of working with complex types. Instead of reading files, one can also just call yamlencode or jsonencode on the HCL objects and use the respective data source (e.g. utils_deep_merge_yaml or utils_deep_merge_json).

schollii commented 3 years ago

@KyleKotowick

I built a module to do this in pure Terraform

I have tried it and it seems to work well, and the approach seems sound, at least for maps. (I have not checked how you handle lists, but based on your approach I'm guessing that you overwrite existing elements and append new ones, which is actually what I would expect in general: if the lists are the same length, the earlier one's items should be overwritten; if the later one is longer, its extra items should be appended -- cc @apparentlymart.)

The installation was not easy because of a dependency but I raised an issue in your project with a workaround, in case others are interested in using this.

KyleKotowick commented 3 years ago

> The installation was not easy because of a dependency but I raised an issue in your project with a workaround, in case others are interested in using this.

Sorry about that, the new release (v0.1.1) should be good now.

gibsonje commented 3 years ago

I've tried various solutions in this thread, and @KyleKotowick's module-based solution worked perfectly for my use case, merging maps that were 1-3 levels deep. It's the lightest-weight answer to the problem, being a pure HCL module that I can reference from GitHub without having to set up a required provider. I had trouble with the provider referenced in this thread.

So far for my use case if a deep merge function existed in terraform and worked exactly how this module works I'd be happy.

limratechnologies commented 2 years ago

When is it going to be supported?

mbainter commented 2 years ago

I wonder if those could be sidestepped by just improving the YAML parser's capabilities? Support for the draft YAML merge key ("<<", anchor merging) would help here. To use the example above:

settings_defaults.yaml:

DataWarehouse: &DataWarehouseDefaults
  ScaleSize: DW100c
  Tags:
    power_policy: auto_off ##required for auto-shutoff in the evenings
  Database:
    MaxCapacity: 32212254720
settings.yml:

DataWarehouse:
  <<: *DataWarehouseDefaults

settings_dev.yaml:

DataWarehouse:
  <<: *DataWarehouseDefaults
  Tags:
    power_policy: auto_off_and_on ##required for auto-shutoff in the evenings / turn back on in the morning
Then you just read those in, combine them in a simple Terraform template, and run yamldecode against them, and YAML itself can manage the merge in accordance with the specification.

raffraffraff commented 2 years ago

Hey, I just stumbled onto this issue and realized that some of you might be interested in the Hiera5 provider.

What it is: Hiera is a YAML key/value store created by PuppetLabs to help users separate data from Puppet code.
What it does: it uses known facts about your environment (Terraform vars like region and environment) to perform hierarchical data lookups.

To use it, you would need to add the provider (obviously) and create a directory to store your YAML files. Here's an example:

hiera
├── hiera.yaml
├── common
│   ├── cloudflare.yaml
│   ├── vpc.yaml
│   ├── eks.yaml
├── environment
│   ├── dev.yaml
│   ├── prod.yaml
│   └── test.yaml
├── region
│   ├── eu-west-1.yaml
│   ├── us-east-1.yaml
│   ├── us-gov-east-1.yaml
│   └── us-west-2.yaml
└── workspace
    ├── prod_us-east-1
    │   ├── aurora.yaml
    │   └── overrides.yaml
    └── usgov.prod_us-west-2
        └── eks.yaml

The file hiera.yaml tells Hiera how to perform lookups by providing it with a hierarchy section that looks something like this:

hierarchy:
  - name: Workspace
    glob: workspace/%{workspace}/*.yaml
  - name: Environment
    path: environment/%{environment}.yaml
  - name: Region
    path: region/%{region}.yaml
  - name: Common
    glob: common/*.yaml

My personal routine is to create workspaces that encode 'facts' about the environment that I'm deploying into. Eg: prod_us-east-1 means that it's a production VPC in the us-east-1 region of AWS. I parse these out of the terraform.workspace and feed them into the Hiera provider as hiera variables %{workspace}, %{environment}, %{region}. So when I ask Hiera for a large, complex hash that describes my VPC configuration, it goes to the Hiera directory and assembles my hash by merging values in order of importance. So data in 'Common' will be overwritten by data in 'Region', etc. Best part: Hiera can do deep merges.

Here's an example of how I'd use this in a module:

data "hiera5_json" "s3_buckets" { key = "s3_buckets" }
locals {
  s3_buckets = try(jsondecode(data.hiera5_json.s3_buckets.value),{})
}
resource "aws_s3_bucket" "this" {
    for_each   = local.s3_buckets
    bucket     = each.value["name"]
    acl        = try(each.value["acl"],"private")
    tags       = try(each.value["tags"],[])
}

If I wanted to ensure that an S3 bucket existed for each production 'workspace', I'd simply add the following YAML to ./hiera/environment/prod.yaml:

s3_buckets:
    my-bucket-%{environment}-%{region}:
        acl: private

NOTE: Key names can contain vars, so this would create a unique bucket in each region, avoiding clashing on globally-named resources. This also exposes the ability to easily define buckets, queues, VPCs etc without having to know how to write Terraform code. You just have to write modules that act on data sourced from Hiera.

[Edit: apologies for all the edits, and if anyone wants to see an example codebase that uses this, that they can clone and run, let me know and I'll make one]

Pangstar commented 2 years ago

I also have this use case of deep merging configurations between base configurations and environment specific implementations and would like to see this functionality get in.

It seems a little sad that this ticket is almost 2 years old and people are still left hanging. If a decision has already been made not to add it natively to terraform then should this ticket still be open?

bruceharrison1984 commented 2 years ago

Probably not.

I've abandoned and closed the PR I put up; it's hopelessly out of date, and Hashi has given little guidance on the issue. From what I can tell, Hashi Cloud is the future, not local deployments. Hashi Cloud deals with workspaces in an entirely different manner (terraform.workspace is always "default"), so the use case most of us seem to have during local deploys doesn't seem to be the one that Hashi prefers.

schollii commented 2 years ago

I've been using Kyle Kotowick's deepmerge module for almost a year and it works nicely, no surprises so far: https://registry.terraform.io/modules/Invicton-Labs/deepmerge

This capability has really simplified configuration, each of my modules now has a defaults and various overrides and I merge them with this module.

schollii commented 2 years ago

One caveat though is that infracost does not currently properly parse Kyle's deepmerge module (infracost does its own HCL parsing), see https://github.com/infracost/infracost/issues/1968 for details (but if you don't plan on using infracost then Kyle's deepmerge module is a definite win).

schollii commented 2 years ago

@apparentlymart you hit an important use case here that I have experienced:

To meet this use-case it seems like instead what we want is a sort of prototypical example of what one subnet object should look like by default, which the function could then apply to all of the elements:

I have rolled my own solution that does this by loading the prototypes, then generating a map using the actual keys (via https://github.com/schollii/terraform-local-tf-object-create and https://github.com/schollii/terraform-local-tf-object-query) and merging all (using Kyle's deepmerge but might have been able to use cloudposse's provider too).

pathob commented 1 year ago

@bruceharrison1984 Sad to see this closed after this long discussion and with 127 thumbs up...

philip-harvey commented 1 year ago

This really should have been marked as a bug since the merge function doesn't merge maps, it just overwrites and you never know which one will be the final result.

schollii commented 1 year ago

For literals (numbers, strings, bools) and lists (which get overwritten in all deep-merge schemes that I've seen), merge() behaves as documented: if two maps have the same key, the literal or list value gets overwritten, and keys that are in only one of the two maps end up in the final map. It's only when the shared key's value is a map or object that merge() does not behave as most would expect, overwriting the map or object instead of propagating the merge operation to the next level.

So for map values that are maps or objects, it could make sense to think of merge()'s behavior as a bug, but HC will almost surely say it works this way by design and is therefore not a bug; plus (and more importantly), "fixing" it would break backward compatibility, since by now too many people rely on merge()'s shallow behavior.

The only appropriate way ahead is to specify a deepmerge() function that is acceptable to HC; I have tried in #31815

apparentlymart commented 1 year ago

For our purposes here "bug" specifically means that a feature is not working as designed or as documented.

Shallow and deep merge are separate operations that are both useful in different situations, so this issue represented a request to change Terraform's design to include a function that does something one might describe as a "deep merge", not to correct a difference between the existing merge function's implementation and its design.

With that said: given that it still isn't clear exactly what "deep merge" means (there is no consistent definition of it across other languages for us to draw from, and no clear criteria for us to decide which of the interpretations is the best one) I think this one seems best served by introducing the ability for providers to contribute new functions to Terraform (#2771) and then provider developers can write the deep merge functions they need and the different variants can coexist in different namespaces.

In the meantime it's possible to implement something that is functionally equivalent to a function in a provider using a data source, at the expense of the syntax being far less convenient to use. For that reason I suggest that anyone for which this is a very pressing problem can develop a provider with a data source implementing the desired logic and then, once Terraform supports provider-contributed functions in a later release, also expose the same logic as a function to make it more convenient to use.

github-actions[bot] commented 1 year ago

I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.