samschlegel opened this issue 4 years ago
Can also confirm that the `-state-out` file is never written to, and if I give it a bogus path for `-state`, it doesn't error at all. It seems like `-state` and `-state-out` are just completely broken with import when using a remote backend.
Does this need more info to be triaged? As it stands, the `-state`/`-state-out` flags are completely broken when using a remote backend with `terraform import`.
Hi @samschlegel,
The `-state` and `-state-out` options are supported only for the local backend, as a legacy way to override its settings for backward compatibility with earlier versions of Terraform that didn't yet have the "backend" concept. They are not intended for use in new systems.
I'm going to label this issue as a bug to start, but it's likely that the outcome will be for these options to generate an explicit error when used with a non-local backend, rather than to extend support for them to other backends, because they are remnants of a legacy workflow that is no longer actively maintained and may well be removed altogether in a future Terraform release.
If you prefer, we could consider this issue instead as an enhancement request for the use-case of temporarily working offline. It's not a use-case we've seen before, but if there's enough interest in it then we could consider other ways to achieve it that are built to work well with the backend architecture rather than relying on these legacy options. I think we'd want to consider whether such a thing can be used safely: it seems likely to lead to a forked state unless the team communicates out-of-band and effectively creates a human-driven "lock" on the state; perhaps this explicit "work offline" mechanism should be able to take a lock for the duration of working offline to make it explicit that nobody else should do anything that would create a new state snapshot.
In the short term, I think the closest thing to what you want would be to use `terraform state pull` to pull the state locally and then temporarily unconfigure the remote backend in your configuration. You can run `terraform init -reconfigure` to switch to the new backend configuration (or the absence of one) without any state migration. Once you're finished, you can restore the original remote backend configuration, run `terraform init -reconfigure` one more time to reactivate it, and then use `terraform state push` to upload the new state snapshot you've generated locally.
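The workaround above, sketched as a command sequence (a rough illustration, not an officially supported workflow; the local snapshot filename and the backend-block edits are assumptions on my part):

```
# 1. Snapshot the current remote state to a local file.
terraform state pull > terraform.tfstate

# 2. Manually remove (or comment out) the backend block in your
#    configuration, then reinitialize without migrating state:
terraform init -reconfigure

#    ... work offline against the local terraform.tfstate ...

# 3. Manually restore the backend block in the configuration, then:
terraform init -reconfigure

# 4. Upload the locally produced snapshot back to the remote backend.
terraform state push terraform.tfstate
```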
Thanks for the reply!
The `-state` and `-state-out` options are supported only for the local backend, as a legacy way to override its settings for backward compatibility with earlier versions of Terraform that didn't yet have the "backend" concept. They are not intended for use in new systems.
Ah okay that makes sense.
I'm going to label this issue as a bug to start, but it's likely that the outcome will be for these options to generate an explicit error when used with a non-local backend, rather than to extend support for them to other backends, because they are remnants of a legacy workflow that is no longer actively maintained and may well be removed altogether in a future Terraform release.
If these flags are legacy and don't support remote backends, then some kind of warning, confirmation, or preferably an error when attempting to use them with a remote backend would be sufficient. Is `state mv` also intended not to support these flags when using a remote backend?
If you prefer, we could consider this issue instead as an enhancement request for the use-case of temporarily working offline. It's not a use-case we've seen before, but if there's enough interest in it then we could consider other ways to achieve it that are built to work well with the backend architecture rather than relying on these legacy options.
I think we could probably treat this as two separate issues: one for the "bug" that it silently ignores these flags when using remote backends without indication, and one for an enhancement to make it support this for offline use cases.
I think we'd want to consider whether such a thing can be used safely: it seems likely to lead to a forked state unless the team communicates out-of-band and effectively creates a human-driven "lock" on the state; perhaps this explicit "work offline" mechanism should be able to take a lock for the duration of working offline to make it explicit that nobody else should do anything that would create a new state snapshot.
Ah I was going to say shouldn't lineage/serial solve this, but I think I was under the impression that lineage was some kind of hash of the previous state, not a uuid generated with the initial state. That's a good callout as this very well could have bitten us if we rolled this out.
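For context, `lineage` and `serial` live at the top level of a state snapshot: `lineage` is a UUID generated when the state is first created (not a hash of the previous state), and `serial` increments on each write, so together they catch only some divergent histories. A rough illustration of those fields (all values made up):

```json
{
  "version": 4,
  "terraform_version": "0.13.5",
  "serial": 12,
  "lineage": "6e8f7f3a-1c2d-4b5e-9a0f-123456789abc"
}
```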
In the short term, I think the closest thing to what you want would be to use `terraform state pull` to pull the state locally and then temporarily unconfigure the remote backend in your configuration. You can run `terraform init -reconfigure` to switch to the new backend configuration (or the absence of one) without any state migration. Once you're finished, you can restore the original remote backend configuration, run `terraform init -reconfigure` one more time to reactivate it, and then use `terraform state push` to upload the new state snapshot you've generated locally.
Yeah, I started down this road but it felt pretty gross. I think the road we're going to take is to just ask for confirmation before running all the imports and then run them in sequence.
I just ran into this issue, and thought it might be useful to quickly outline what I was trying to do and why it surprised me.
I had just imported a resource (using the aws provider, if it matters at all) into the remote state, and wanted to check if a different type of import id would also work. So I did:
```
terraform state pull > state.json
terraform state rm -state=state.json aws_resource_type.resource_name
terraform import -state=state.json aws_resource_type.resource_name import_id
```
I was expecting terraform to try to import into the local state file so that I could observe whether the import succeeds and, if so, what the imported values are. In this case the resource fortunately did exist in the remote state, so terraform errored out, but it caused quite a few head-scratching moments afterwards.
I had expected this to work, as the help text on `terraform import` says that it will write to the source state file, which in this case should have been the local path specified:
```
-state=PATH      Path to the source state file. Defaults to the configured
                 backend, or "terraform.tfstate"

-state-out=PATH  Path to the destination state file to write to. If this
                 isn't specified, the source state file will be used. This
                 can be a new or existing path.
```
I don't really know if I'd need the working-offline type of use-case, but at least the proposed error would be very helpful to identify this limitation, preferably with a note in the help text as well.
Hi all!
I just wanted to add some clarity here, since my last comment left it a bit ambiguous what this issue was representing while I was awaiting the response from @samschlegel.
We now have this labelled as a bug, and the bug it's intending to represent is that the `-state` and `-state-out` options are only for the situation where the configuration contains no `backend` block at all; indeed, they are a legacy way to specify state storage locations from before the concept of a backend existed.
Therefore I think the solution to this bug would include:
- If either `-state` or `-state-out` is set on the command line, check if there is an explicit active backend (regardless of type) and return an error if so, explaining that these are legacy options that are incompatible with backends. I think this error should appear even in situations where there's an explicit `backend "local"` block, because in that case the backend configuration should be the authority for where state snapshots are written, not the command line. (`terraform init -backend-config="..."` provides the backend-oriented way to vary this when needed.)
- Nice to have: if these options are used without a `backend` block, accept them but generate a warning saying that they are deprecated in favor of the local backend options. I expect it will be hard to implement such a warning universally, because not all of Terraform's subcommands have existing support for emitting warnings, but it should at least be possible to do this for `terraform import`, because it already uses lots of internal machinery that can generate error and warning diagnostics, and so this would just be one additional warning from its perspective.
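To make the first proposed check concrete, here is a minimal sketch in Go (Terraform's implementation language). The function name, signature, and error wording are hypothetical illustrations, not Terraform's actual internals:

```go
package main

import (
	"errors"
	"fmt"
)

// checkLegacyStateFlags is a hypothetical sketch of the proposed check:
// reject -state/-state-out whenever an explicit backend block is active,
// regardless of the backend's type (including backend "local").
func checkLegacyStateFlags(statePath, stateOutPath string, hasBackendBlock bool) error {
	if (statePath != "" || stateOutPath != "") && hasBackendBlock {
		return errors.New(
			"-state and -state-out are legacy options for use without a backend block " +
				"and are incompatible with a configured backend; " +
				"use the backend configuration to control where state is stored")
	}
	return nil
}

func main() {
	// Simulating `terraform import -state=temp.tfstate ...` against a
	// configuration that has an explicit backend block.
	if err := checkLegacyStateFlags("temp.tfstate", "", true); err != nil {
		fmt.Println("Error:", err)
	}
}
```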
As far as I'm aware we've not heard a lot of demand for a "working offline" mechanism in the meantime since my last comment, so I'd suggest that we put that part aside for now. However, if anyone would like to discuss use-cases around that further then I'd suggest to open a Feature Request issue. I expect that in such an issue we'd be more keen to talk about the use-cases and constraints motivating the request rather than a suggested implementation, at least for initial discussion while we figure out how this might interact with other existing design constraints.
Terraform Version
Terraform Configuration
Happens with GCS as the configured backend, but I imagine it would happen with any of them.
Expected Behavior
`terraform import` should not persist state to the remote backend when `-state`/`-state-out` are used.
Actual Behavior
`terraform import` persisted state to the remote backend.
Steps to Reproduce
```
terraform state pull > temp.tfstate
terraform import -state=temp.tfstate <some_address> <some_id>
terraform import <some_address> <some_id>
```
The second import will error with `Error: Resource already managed by Terraform`, because the first one modified the remote state.
Additional Context
We want to import without modifying the remote state so that we can do dry runs and avoid having to lock and hit the network on each import invocation. This bug led to us importing a bunch of resources during testing that have to be manually removed from the state.