daisukebe opened 1 year ago
@ZeDRoman we should consider whether we really want these properties to be maps: one_or_many_map_property seems unnecessary and can cause confusion because it gives two different ways to represent a single-entry map. API clients doing reconciliation loops may be tripped up by this as well.

@r-vasquez looks like what's happened here is that two map-type configuration properties were added in 23.1.x, without picking up that rpk cluster config import/export/edit don't handle maps.

It's a concern that automated tests in cluster_config_test.py didn't pick this up, because they are meant to do a roundtrip of import/export to check for exactly this kind of thing -- perhaps this happened because those new properties were empty and therefore survived a roundtrip in that state.
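To make the "one or many" ambiguity concrete: a one_or_many property accepts either a bare single value or a list of values, so a single-entry map can be written two ways. A hypothetical YAML sketch (the field names group_name, clients_prefix, and quota are assumed from the config::client_group_quota type, not taken from this thread):

# Form 1: a bare single entry
kafka_client_group_byte_rate_quota:
  group_name: metrics-group
  clients_prefix: kafka
  quota: 1048576

# Form 2: the same entry written as a one-element list
kafka_client_group_byte_rate_quota:
  - group_name: metrics-group
    clients_prefix: kafka
    quota: 1048576

A reconciliation loop that writes one form and reads back the other sees a spurious diff on every pass, which is the confusion described above.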
Thanks for the context @jcsp
I see that the schema returns the 2 properties that follow the same pattern:
{
  "meta.name": "kafka_client_group_fetch_byte_rate_quota",
  "meta.type": "array",
  "meta.items.type": "config::client_group_quota"
},
{
  "meta.name": "kafka_client_group_byte_rate_quota",
  "meta.type": "array",
  "meta.items.type": "config::client_group_quota"
}
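As an aside, this schema can be inspected directly; assuming the default Admin API port 9644 on localhost, something like the following should return it (the exact response layout may differ by version):

curl -s http://localhost:9644/v1/cluster_config/schema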
To identify those, is it safe to parse properties based on the presence of ::? Something like:

meta.type == array && strings.Contains(meta.items.type, "::")
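As a sketch of what that check could look like in Go (schemaProperty is a hypothetical stand-in for however rpk decodes the schema entries above, not rpk's actual type):

package main

import (
	"fmt"
	"strings"
)

// schemaProperty mirrors the fields shown in the schema snippet above.
type schemaProperty struct {
	Name      string
	Type      string
	ItemsType string
}

// isInternalStructArray reports whether a property is an array of a
// Redpanda-internal struct type such as "config::client_group_quota".
func isInternalStructArray(p schemaProperty) bool {
	return p.Type == "array" && strings.Contains(p.ItemsType, "::")
}

func main() {
	p := schemaProperty{
		Name:      "kafka_client_group_byte_rate_quota",
		Type:      "array",
		ItemsType: "config::client_group_quota",
	}
	fmt.Println(isInternalStructArray(p)) // prints: true
}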
W.r.t. the test: yes, both properties have null as their default, so they survived the roundtrip. I'll add those properties to the test.
To identify those, is it safe to parse properties based on the presence of ::? Something like:
In general, no, but we might need to make this some kind of special hack for these properties. This is the first time someone has added something to cluster config that wasn't a basic type, so it is relying on whoever consumes the schema to know what "config::client_group_quota" means, which isn't great.
The type being reported as array is a bug. Basically these two properties have bogus schema information :-(
ISTM that we should avoid trying to hack around this in rpk at the moment, leave this undocumented, and fix this first in redpanda and then in rpk?
The "one or many" part of one_or_many_map_property seems unnecessary and can cause confusion because it gives two different ways to represent a single entry map. API clients doing reconciliation loops may be tripped up by this as well.
I used a similar approach to the simple one_or_many_property property. Maybe @dotnwat can respond to this comment
FYI, I've made conversion_binding<> for a similar case where I needed the configuration data in a set, but @jcsp has pointed out the downsides of having a set in a cluster property. So I've ended up with a vector in the property and a bitmap in the binding (it could be a set as well). Not in dev yet, but I hope to merge #10285 soon.
I used a similar approach to the simple one_or_many_property property. Maybe @dotnwat can respond to this comment
What's the question @ZeDRoman? Generally I agree with others that one_or_many_property should be discouraged / removed if possible. Historically it was added purely as a curiosity of the YAML API. I probably should have removed it before it ended up infecting things.
Is there a consensus that this can be fixed? I hesitate to document e.g. the per-client/group quotas (which are a useful answer to Kafka's user quotas feature we lack) when there's this weird asterisk that you have to use the Admin API to configure them :/
Is there a consensus that this can be fixed? I hesitate to document e.g. the per-client/group quotas (which are a useful answer to Kafka's user quotas feature we lack) when there's this weird asterisk that you have to use the Admin API to configure them :/
I am thinking about the best solution. I will answer later.
Thank you
Did this get done?
Nope, it did not.
Can we find even a basic/short-term solution to this (it presents to the user as a significant bug)?
We are chasing 5-7 such short-term things, unfortunately, and risk that they won't land by v23.3.1 anyway. So yes, we can, but not for another couple of months or so.
@piyushredpanda I wonder if we'll experience this with the config constraints. @BenPope see above, will that be a map type?
Hey, have there been any updates on this, or does anyone have any known workarounds, especially when running in k8s with the Redpanda Helm chart?
My team is building a metrics pipeline using Redpanda, and rate-limiting is a pretty important feature for us. But once we set kafka_client_group_byte_rate_quota via the Helm chart, we can no longer make any config changes until we exec into a Redpanda pod and basically re-run the script from the redpanda-configuration job, except after the export we have to manually remove the rate-limit values.
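For anyone hitting the same wall, a rough sketch of that manual workaround (the pod name redpanda-0 and the file path are assumptions; flags may differ by rpk version):

# Exec into a broker pod (the pod name is deployment-specific).
kubectl exec -it redpanda-0 -- bash
# Export the current config, hand-edit the file to remove the
# kafka_client_group_byte_rate_quota entry that rpk mangles, then re-import.
rpk cluster config export --filename /tmp/config.yaml
rpk cluster config import --filename /tmp/config.yaml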
cc @chrisseto for clarifying the Helm behavior.
But @rl0nergan FYI, 24.2 will add support for setting client quotas via the Kafka AlterQuotas API, using a node-wide unit rather than a per-core one, which may prove easier to manage. The configs will be deprecated but will remain for 1-2 releases.
That's more or less expected behavior given this bug. The chart uses export and import to set configs in bulk. If RPK itself isn't playing nicely with this field, there's not much the helm chart can do.
figured that would be the case, thanks for confirming
Version & Environment
Redpanda version: (use rpk version): e133de2f

When a config value is a map, rpk incorrectly represents the value, which causes the Redpanda server to fail to parse it. An example property is kafka_client_group_byte_rate_quota.

What went wrong?
When the property has some valid values, rpk cluster config edit can't change anything, because Redpanda fails to parse the bad YAML. In the example below, I fail to change default_topic_partitions to 11 due to the validation error for kafka_client_group_byte_rate_quota, which is untouched.

In the editor, the property values look like this.
What should have happened instead?
It should be something like this, which can be parsed by Redpanda.
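The original report's screenshots aren't reproduced here, but the well-formed value would be a YAML list of structured entries. A hypothetical sketch, with field names again assumed from the config::client_group_quota type:

kafka_client_group_byte_rate_quota:
  - group_name: metrics-group
    clients_prefix: kafka
    quota: 1048576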
How to reproduce the issue?
By default, it's empty.
Adding one rule: prefix: kafka
Adding a second rule: prefix: redpanda
Additional information
Redpanda runs v23.1.7 and complains below
JIRA Link: CORE-1286