Hi @bitemyapp 👋 Apologies for the long delay in responding here. Is this still an issue with recent versions of the framework?
I'm not on that team or project any longer, and I'm about to have my third kid in three years, but you never know: a miracle might happen and I may be able to find time for a repro.
I ended up settling on using maps for everything IIRC. The weird marshaling issues (int64 and all that) I worked around by updating to what was a pre-release at the time: v0.6.1-0.20220311204952-c584343531cb
Before I spend time attempting a repro, could you tell me what the API actually guarantees me here? I had questions in the original posting like:
Now, if you tell me that the list type value has to be made stable by the provider, that's fine, but I'd like to know why it worked fine before but doesn't now. I'm going to look into the ordered set type now too.
Which behavior is actually aberrant and which is acceptable here? What responsibility does the provider have in this?
The stuff that used to be a list now uses tfsdk.MapNestedAttributes:
"parent_datasets": {
Required: true,
Attributes: tfsdk.MapNestedAttributes(schemaCoriariaParentDatasets("parent_datasets."), tfsdk.MapNestedAttributesOptions{}),
},
"validations": {
Required: true,
Attributes: tfsdk.MapNestedAttributes(schemaCoriariaValidations(), tfsdk.MapNestedAttributesOptions{}),
},
This made the collections stable between plan/apply cycles.
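For context, here is a minimal sketch of what the corresponding Go data model might look like on the provider side. The struct and field names are hypothetical (the real model isn't shown in this thread); only the tfsdk tags need to line up with the schema attribute names, and it assumes the pre-1.0 types API in use at the time:

import "github.com/hashicorp/terraform-plugin-framework/types"

// Hypothetical resource model for the schema above. With map-nested
// attributes, the collections are keyed by name rather than by position,
// so plan/apply comparisons no longer depend on element ordering.
type coriariaDatasetData struct {
    DatasetName    types.String `tfsdk:"dataset_name"`
    ParentDatasets types.Map    `tfsdk:"parent_datasets"`
    Validations    types.Map    `tfsdk:"validations"`
}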
Sidebar: please add proper sum types (enums with cases that can carry extra data around). A lot of the ugliness here was due to having to emulate enums with optional attributes and manual enforcement of exclusivity, e.g.:
func schemaCoriariaSliceRange(parent string) map[string]tfsdk.Attribute {
    return map[string]tfsdk.Attribute{
        "unitary": {
            Type:     types.BoolType,
            Optional: true,
            // ConflictsWith doesn't work in plugin framework, not sure
            // if it's merely unimplemented or if they intend to do something
            // different but if it's gone entirely that would be unfortunate.
            // ConflictsWith is how I was enforcing enum value exclusivity.
            // ConflictsWith: []string{parent + "temporal", parent + "numeric"},
        },
        "temporal": {
            Optional: true,
            // ConflictsWith: []string{parent + "unitary", parent + "numeric"},
            Attributes: tfsdk.SingleNestedAttributes(schemaCoriariaTemporalSliceRange(parent + "temporal.")),
        },
        "numeric": {
            Optional: true,
            // ConflictsWith: []string{parent + "unitary", parent + "temporal"},
            Attributes: tfsdk.SingleNestedAttributes(schemaCoriariaNumericSliceRange()),
        },
        "most_recent": {
            Type:     types.BoolType,
            Optional: true,
        },
    }
}
You can see in the comments that they were meant to be exclusive, but ConflictsWith wasn't working so I wasn't able to leverage that either.
For comparison, here's what the same type looks like in Golang (which also doesn't have sum types):
type SliceRange struct {
    Enum            *SliceRangeEnum
    SliceRangeClass *SliceRangeClass
}

type SliceRangeClass struct {
    Temporal *Temporal `json:"temporal,omitempty"`
    Numeric  *Numeric  `json:"numeric,omitempty"`
}

type SliceRangeEnum string

const (
    SliceRangeUnitary    SliceRangeEnum = "unitary"
    SliceRangeMostRecent SliceRangeEnum = "most_recent"
)

type Temporal struct {
    End      TemporalSliceBound `json:"end"`
    Interval TemporalInterval   `json:"interval"`
    Start    TemporalSliceBound `json:"start"`
}

type TemporalSliceBound struct {
    Fixed    *string           `json:"fixed,omitempty"`
    Relative *TemporalInterval `json:"relative,omitempty"`
}

type TemporalInterval struct {
    IntervalQuantity int64        `json:"interval_quantity"`
    IntervalType     IntervalType `json:"interval_type"`
}

type IntervalType string

const (
    Day     IntervalType = "day"
    Hour    IntervalType = "hour"
    Month   IntervalType = "month"
    Quarter IntervalType = "quarter"
    Year    IntervalType = "year"
)
And here's the same type in Rust:
/// Describes the range at which slices should exist
pub enum SliceRange {
    Temporal {
        start: TemporalSliceBound,
        end: TemporalSliceBound,
        interval: TemporalInterval,
    },
    Numeric { start: i32, end: i32, interval: i32 },
    Unitary,
    MostRecent,
}

pub enum TemporalSliceBound {
    Fixed(NaiveDateTime),
    Relative(TemporalInterval),
}

pub enum TemporalInterval {
    Hour(i32),
    Day(i32),
    Month(i32),
    Quarter(i32),
    Year(i32),
}
I generated JSON Schemas for the API types using the schemars crate, fed those JSON Schemas into quicktype.io to generate the Golang client, then created a parallel hierarchy of types that adapted the client API type representation to the tfsdk schema types. I mention all this in case it spares someone similarly situated to myself some pain.
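To make that adapter layer a bit more concrete, here is a rough sketch of the pattern. The type and function names are made up for illustration, and it assumes the pre-1.0 framework types API (types.Int64/types.String struct literals) in use at the time:

import "github.com/hashicorp/terraform-plugin-framework/types"

// apiTemporalInterval mirrors the shape of the quicktype-generated client type.
type apiTemporalInterval struct {
    IntervalQuantity int64  `json:"interval_quantity"`
    IntervalType     string `json:"interval_type"`
}

// tfTemporalInterval is the parallel framework-facing shape used in the tfsdk model.
type tfTemporalInterval struct {
    IntervalQuantity types.Int64  `tfsdk:"interval_quantity"`
    IntervalType     types.String `tfsdk:"interval_type"`
}

// apiToTF adapts the client representation to the framework representation.
func apiToTF(in apiTemporalInterval) tfTemporalInterval {
    return tfTemporalInterval{
        IntervalQuantity: types.Int64{Value: in.IntervalQuantity},
        IntervalType:     types.String{Value: in.IntervalType},
    }
}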
I think it took me about three months to finish the provider and get it running bug-free in production, and I was working on virtually nothing else, going as fast as I possibly could.
I also mention all this because, in order to change those collection types from maps to lists to see if the lists have a stable ordering now, I would need to change:
So if you tell me "list ordering should be stable now," I will happily believe you and close this ticket with relish.
Thank you for the response, @bitemyapp.
In Terraform's type system, cty, and subsequently the type system of providers on this side of the Terraform plugin protocol, a list type is defined as an ordered collection of elements of the same type; see the cty.List specification. For an unordered collection of elements of the same type, that is considered a set type.
Terraform has various rules/constraints around value handling as part of the resource instance change lifecycle. In particular with list types, it is expected that the configuration, plan, and state values all match in terms of ordering. If there is a remote system involved with how the provider handles data (such as potential for ordering transformations after Create, Read, or Update operations), it is the responsibility of the provider to either normalize the list ordering to match the given configuration/state, use a set type, or (with the framework's attr package interfaces) potentially implement a custom type that translates the behaviors of the remote system to match Terraform's rules. In the custom type situation, #70 is likely very relevant in that discussion.
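As a rough illustration of the normalization option (a sketch only, not framework API; the element type and function name are placeholders), a provider can reorder what the remote system returned so it matches the ordering already present in configuration/state before saving it:

// normalizeOrder returns the elements from apiElems reordered so that any
// element also present in stateElems keeps its position from state; elements
// that are new in the API response are appended at the end in API order.
// Assumes elements can be compared by a stable string key.
func normalizeOrder(stateElems, apiElems []string) []string {
    inAPI := make(map[string]bool, len(apiElems))
    for _, e := range apiElems {
        inAPI[e] = true
    }
    out := make([]string, 0, len(apiElems))
    used := make(map[string]bool, len(apiElems))
    for _, e := range stateElems {
        if inAPI[e] && !used[e] {
            out = append(out, e)
            used[e] = true
        }
    }
    for _, e := range apiElems {
        if !used[e] {
            out = append(out, e)
        }
    }
    return out
}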
The framework's list type implementation is very straightforward in the sense that it is simply a Go slice of elements ([]attr.Value, to be precise). That slice is never reordered in the framework or through its various translations to/from the Terraform plugin protocol. We have never heard other bug reports of something amiss with that ordering handling. The closest issues in prior (0.12.0 or earlier) versions of the framework were with plan modification handling of set types, where the framework was errantly using schema path-based logic instead of mutating the Go slice elements based on index. Now everything, even for sets, is consistently based on Go slice indexing under the hood. If you are seeing behavior that suggests the framework's translation of a Go slice of any type to the framework's list type is reordering elements, we would love to hear more.
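For example, a list value is built from an ordinary Go slice and carries that slice's order end to end (a minimal sketch, assuming the pre-1.0 types API discussed in this thread):

import (
    "github.com/hashicorp/terraform-plugin-framework/attr"
    "github.com/hashicorp/terraform-plugin-framework/types"
)

// The framework preserves this slice order as-is; no sorting happens on the
// way to or from the plugin protocol.
var parentDatasets = types.List{
    ElemType: types.StringType,
    Elems: []attr.Value{
        types.String{Value: "some_other_dataset"},
        types.String{Value: "another_dataset"},
        types.String{Value: "log_dataset"},
    },
}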
To answer your sidebar items, there are quite a few options provider developers have (some developed since you last mentioned you were actively developing the provider). I'll start with enumeration validation/types.
The first is the simplest, where a framework-defined primitive type is used, such as Type: types.StringType, and the attribute also declares validators, such as Validators: []tfsdk.AttributeValidator{/*...*/}. The terraform-plugin-framework-validators Go module contains some common-use validators which can solve this particular need, such as stringvalidator.OneOf(). For example:
// This attribute will only accept configuration values of "one", "two", or "three".
tfsdk.Attribute{
    // Required: true or Optional: true
    Type: types.StringType,
    Validators: []tfsdk.AttributeValidator{
        stringvalidator.OneOf("one", "two", "three"),
    },
}
If implementing enumeration-based attributes in that manner is tedious across resources, that validation logic can be wrapped into a custom attribute type that implements the xattr.TypeWithValidate interface. Those custom type attributes can also declare additional Validators on top of the type's own validation. If the attribute type needs to carry extra data anyway, implementing a custom type may already be required for your use case.
In more complex validation cases, such as validation across multiple attributes, it's likely that performing the validation at the data source, provider, or resource schema level is easier. For example, resources can implement declarative "entire configuration" validation via the resource.ResourceWithConfigValidators interface:
func (r ExampleResource) ConfigValidators(ctx context.Context) []resource.ConfigValidator {
    return []resource.ConfigValidator{/*...*/}
}
An imperative style is available via the resource.ResourceWithValidateConfig interface. The nice part about the declarative style is that the terraform-plugin-framework-validators Go module also provides some common use case validators there as well, such as resourcevalidator.Conflicting():
// This resource will not accept configuration values for both "attribute_one" and "attribute_two".
func (r ExampleResource) ConfigValidators(ctx context.Context) []resource.ConfigValidator {
    return []resource.ConfigValidator{
        resourcevalidator.Conflicting(
            path.MatchRoot("attribute_one"),
            path.MatchRoot("attribute_two"),
        ),
    }
}
Speaking of "ConflictsWith type logic, it's also supported as an attribute validator schemavalidator.ConflictsWith()
, if that is easier to reason about. Since the attribute validator accepts path expression parameters, it is possible to reference relative paths with it.
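Applied to the slice_range schema from earlier (a sketch only, reusing the attribute names from that snippet and assuming the schemavalidator package from terraform-plugin-framework-validators plus the framework's path package), the exclusivity might look like:

"unitary": {
    Type:     types.BoolType,
    Optional: true,
    Validators: []tfsdk.AttributeValidator{
        // Relative path expressions: step up to the parent object,
        // then reference the sibling attributes that must not be set.
        schemavalidator.ConflictsWith(
            path.MatchRelative().AtParent().AtName("temporal"),
            path.MatchRelative().AtParent().AtName("numeric"),
        ),
    },
},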
If you have other questions about available functionality for providers built on the framework, I would recommend reaching out in the Plugin Development section of HashiCorp Discuss, where there are more folks able to answer and discuss topics, rather than the issue tracker here, which is only monitored by a few framework maintainers.
To summarize, it sounds like:
Good-o, about what I expected. Not having sum types (enums whose cases can contain non-primitive values) does a lot of violence to people's data models in practice, and I dearly wish support for them would become more common.
Here's an example of the Terraform HCL the schema is supporting:
resource "coriaria_dataset" "my_dataset" {
dataset_name = "my_dataset"
backend = {
catalog_id = "5121234567"
database_name = "my_prod_database"
table_name = "my_dataset"
s3_data_availability_type = "batch"
backend_check_strategy = "once"
}
builder_kind = {
athena_builder = {
query = "my_dataset.athena"
query_argument_type = "daily_incremental"
database_name = "my_prod_database"
table_name = "my_dataset"
bucket_name = "my-prod-database"
key = "athenaDB/my_dataset/"
compression = "snappy"
serde = "orc"
temp_database = "tmp"
temp_bucket = "my-tmp-bucket"
}
}
slice_range = {
temporal = {
start = {
fixed = "2022-07-07T00:00:00"
}
end = {
relative = {
interval_type = "day"
interval_quantity = 1
}
}
interval = {
interval_type = "day"
interval_quantity = 1
}
}
}
parent_datasets = {
(coriaria_dataset.some_other_dataset.dataset_name) = {
dependency_link = {
unbounded_upstream = true
}
}
(coriaria_dataset.another_dataset.dataset_name) = {
dependency_link = {
identical = true
}
}
(coriaria_dataset.log_dataset.dataset_name) = {
dependency_link = {
bounded_upstream = {
relative_lower_bound = {
interval_type = "day"
interval_quantity = 1
}
relative_upper_bound = {
interval_type = "day"
interval_quantity = 0
}
}
}
}
}
validations = {
"my_dataset count matches log_dataset" = {
faceoff_validation = {
template_name = "my_dataset_matches_log_dataset.yaml"
validation_kind = "faceoff"
faceoff_argument_type = "daily"
database_name = "my_database"
}
}
"my_dataset matches primary key count" = {
faceoff_validation = {
template_name = "my_dataset_primary_key_check.yaml"
validation_kind = "faceoff"
faceoff_argument_type = "daily"
database_name = "my_database"
}
}
}
}
Part of the reason it was so painful is that there were no public examples of a Terraform provider with a more complex schema/model. I'd like to open-source the provider or make a facsimile that demonstrates the techniques so that I can spare others the toil I went through.
Closing as I don't think there's a bug here, other than my feature request for enum cases that can contain non-primitive values. Cheers and thank you for your time 🙏
I'm going to lock this issue because it has been closed for 30 days ⏳. This helps our maintainers find and focus on the active issues. If you have found a problem that seems similar to this, please open a new issue and complete the issue template so we can capture all the details necessary to investigate further.
Incidentally, I was using HEAD of this repository in the course of debugging a very confusing issue with a non-null int64 value getting nulled out on the way through parsing and validation during terraform plan. It looked fine in the tfstate too. Now I've bumped into an issue with lists not being stable on HEAD. I didn't see this with v0.5.0, I think.
Now, if you tell me that the list type value has to be made stable by the provider, that's fine, but I'd like to know why it worked fine before but doesn't now. I'm going to look into the ordered set type now too.
Update: This never happened before tonight, but now it's happening with these versions too:
Update 2: I added stable sorting to the parent dataset list in the API and in the provider, and I'm still getting this error.
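For reference, the stable sort mentioned here was along these lines (a sketch; the element type and field are placeholders rather than the actual provider code):

import "sort"

// ParentDataset is a placeholder for the provider's parent dataset element.
type ParentDataset struct {
    DatasetName string
}

// sortParentDatasets orders the slice deterministically; SliceStable keeps
// equal elements in their original relative order across runs.
func sortParentDatasets(parents []ParentDataset) {
    sort.SliceStable(parents, func(i, j int) bool {
        return parents[i].DatasetName < parents[j].DatasetName
    })
}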
Module version
Relevant provider source code
Some elision, but tried to isolate it to the relevant part.
Terraform Configuration Files
Debug Output
The take-away from the debug output is that the API responded with a differently ordered list:
Expected Behavior
The plan should've been clean and matched the state after apply.
Actual Behavior
Steps to Reproduce
I think you could repro it with any list type.