Closed: ravilr closed this issue 4 months ago.
For 1.), any objections to adding support for mergo's MergeWithOverride behavior, supported by native P&T through `keepMapValues: false`, into function-p-and-t as an additional toField patch policy, like below?
```diff
 // ToFieldPath patch policies.
 const (
-	ToFieldPathPolicyReplace     ToFieldPathPolicy = "Replace"
-	ToFieldPathPolicyMergeObject ToFieldPathPolicy = "MergeObject"
-	ToFieldPathPolicyAppendArray ToFieldPathPolicy = "AppendArray"
+	ToFieldPathPolicyReplace                 ToFieldPathPolicy = "Replace"
+	ToFieldPathPolicyMergeObject             ToFieldPathPolicy = "MergeObject"             // equivalent to keepMapValues: true
+	ToFieldPathPolicyMergeObjectWithOverride ToFieldPathPolicy = "MergeObjectWithOverride" // equivalent to keepMapValues: false
+	ToFieldPathPolicyAppendArray             ToFieldPathPolicy = "AppendArray"
 )
```
And for 2.), looking for help on why there is a difference in the deduplication of slice items in the AppendSlice merge behavior between native and function P&T.
We use almost exactly the same code in crossplane and here, and we map the new toFieldPathPolicy to the exact same merge policies under the hood, see here.
> For 1.), any objections to adding support for mergo's MergeWithOverride behavior, supported by native P&T through `keepMapValues: false`, into function-p-and-t as an additional toField patch policy, like below?
We already set WithOverride, see here.
> the default `Replace` toField merge in function-p&t isn't equivalent to native P&T's `keepMapValues: false`; the non-overlapping map attributes from the destination are lost with `Replace`.
`keepMapValues`, as per the docs, "Specifies that already existing values in a merged map should be preserved", so with it set to `false`, the fact that already existing fields are preserved is actually a bug on Crossplane's side; it shouldn't happen AFAICT. You should set `MergeObject` if that's what you want.
> `AppendArray` in function-p&t doesn't seem to de-duplicate slices on merge, like native P&T does.
Switching the array to strings makes it deduplicate the array as needed. I think there is some difference in how we deserialize resources and therefore the types these are assigned at runtime, resulting here in `12345678` being a `float64` in the source array and an `int64` in the destination one. I'll have to dig further into why this is the case, or fix it in the deduplication code.
Regarding why `AppendArray` doesn't work as expected in the case above:
Digging a bit further, we have some inconsistencies around how we parse things and therefore the types they result to.
`json.Unmarshal` converts `12345678` to an `int64`, but we do not go through that when reading observed/desired input resources, which results in `12345678` being parsed as a `float64` instead.
But then we do parse resource bases using `json.Unmarshal`, resulting in an `int64`, which, when we get to appending to the slice, is compared against a `float64` and considered different here.
Properly parsing the base while maintaining the original `float64` is not enough, as we still overwrite it with the `int64` on the first patch, because here we still round-trip using `json.Unmarshal` before storing the new result.
So the right solution would be to always round-trip through `json.Unmarshal`, avoiding the optimization introduced here, so that we can be sure we are always comparing apples to apples.
> So the right solution would be to always round-trip through `json.Unmarshal`, avoiding the optimization introduced here, so that we can be sure we are always comparing apples to apples.
I can live with that, but it's a little unfortunate.
> Avoid floating-point values as much as possible, and never use them in spec. Floating-point values cannot be reliably round-tripped (encoded and re-decoded) without changing, and have varying precision and representations across languages and architectures.
In a bunch of places we use https://github.com/kubernetes-sigs/json, which tries to avoid numbers ever being deserialized to `float64`. I'm guessing maybe we have an issue where `protojson` is deserializing to `float64` when we'd rather it deserialize to `int64`?
Regarding the `Replace` issue, it looks like we did miss a fourth option, previously triggered by setting `mergeOptions` to be non-`nil`, e.g. `{}` was enough, which indeed led to not unsetting the dst and merging `WithOverride` here.
So, @ravilr, please do open a PR with your change `ToFieldPathPolicyMergeObjectWithOverride` 🙏
FWIW, we could use a better name than `MergeObjectWithOverride` if we can think of one (we don't need to leak mergo's API). I'm fine with `MergeObjectWithOverride` if we think it's the best/clearest way to express this, though.
It seems like there are two scenarios we're missing in the new API.
With the old API you could express the following scenarios with `mergeOptions`:

1. `nil`
2. `{keepMapValues: true}`
3. `{appendSlice: true}`
4. `{}`[^1]
5. `{appendSlice: true, keepMapValues: true}`
With the new API you can only express the following scenarios:

1. `Replace`
2. `MergeObject`
3. `AppendArray`
We've discussed scenario 4 in this issue. We probably need to support scenario 5 too.
I think there are two other issues.
First, the new API reads as if it's not recursive. For example, `toFieldPath: AppendArray` sounds like a policy you'd apply to a field path that is an array. However, it should probably actually be `AppendArrays` (plural), because it would apply to all arrays recursively.
Second, `AppendArray` sounds like it only affects arrays, but it actually affects objects too. For example, `toFieldPath: AppendArray` could be applied to a field path containing only an object (with no nested arrays), and the result would actually be the same as the proposed `MergeObjectWithOverride`. 🙃 i.e. we'd merge the object, overriding fields.
Given that:
I think we should consider a breaking change to fix this API. I take responsibility for the churn; I proposed the broken API based on an incorrect understanding of how `mergeOptions` actually worked.
FWIW the motivation behind changing the API at all was:
[^1]: If an object key has a slice value, mergo appends the slice instead of overriding the key.
Here's one fairly wordy option that isn't another breaking change:

- `Replace`
- `MergeObjects`
- `ForceMergeObjectsAppendArrays`
- `ForceMergeObjects`
- `MergeObjectsAppendArrays`
We could keep the existing policies and mark them deprecated to avoid breaking folks:

- `MergeObject` == `MergeObjects`
- `AppendArray` == `ForceMergeObjectsAppendArrays`
Another thing I noticed (even though we don't have any such usages currently in our compositions) was that native P&T passes down mergeOptions to wildcarded ToFieldPath patches: https://github.com/crossplane/crossplane/blob/v1.15.1/internal/controller/apiextensions/composite/composition_patches.go#L191, whereas here it currently doesn't: https://github.com/crossplane-contrib/function-patch-and-transform/blob/v0.4.0/patches.go#L82
Shall I update #102 to address the above, also?
> Shall I update #102 to address the above, also?
Please do, yes, definitely a bug
@ravilr Are you planning to tackle this? I could either cut a new release of this function ASAP or wait until we have everything identified in this issue fixed.
> Shall I update #102 to address the above, also?

> @ravilr Are you planning to tackle this?
Pushed https://github.com/crossplane-contrib/function-patch-and-transform/pull/103 for this. PTAL.
> I could either cut a new release of this function ASAP or wait until we have everything identified in this issue fixed.
Thanks. I'm fine waiting to cut a release until the other remaining issue of slice deduplication on appendSlice is also addressed.
@phisco opened https://github.com/crossplane-contrib/function-patch-and-transform/pull/105 to address the slice deduplication issue while merging values containing integers. PTAL.
@phisco @jbw976 This feature looks pretty complete now. Please can we release a new image so we can start to use it?
What happened?
Seeing some compatibility issues in merge patch behavior between native P&T and the latest function-p-and-t:v0.4.0. Looking for guidance on how to make the migration of compositions from native P&T to function-p-and-t seamless.
See repro below. To summarize, as per my understanding/observation from the resources below:
1. function-p-and-t doesn't have an equivalent of `policy.mergeOptions.keepMapValues: false`. The function-p&t default `toFieldPath: Replace` will replace the dst with the src. The newly added `toFieldPath: MergeObject`, which is equivalent to `keepMapValues: true`, isn't what is needed here. With `keepMapValues: false`, mergo's overwrite configuration gets enabled in native P&T (https://github.com/darccio/mergo/blob/v1.0.0/merge.go#L322), which isn't possible in function-p-and-t today.
2. The AppendArray merge option in function-p-and-t isn't resulting in de-duplication of slice items, so there are duplicate slice items after merge, whereas in native P&T, `appendSlice: true` results in deduplication of slice items, so there are no duplicates in the destination after merge. Any idea why we see this difference in behavior between function and native?

How can we reproduce it?
The desired MR resource rendered by function-P&T isn't the same as the one rendered by native P&T; see the vimdiff in the image below:

The default `Replace` toField merge in function-p&t isn't equivalent to native P&T's `keepMapValues: false`; the non-overlapping map attributes from the destination are lost with `Replace`.

What environment did it happen in?
Crossplane version: v1.15.1
function-patch-and-transform: v0.4.0