Closed zyxw59 closed 3 years ago
The use case here is a REST server which receives JSON objects, converts them to Avro based on a pre-existing schema, and sends them to a Kafka queue. The schema looks something like this:
```json
{
  "name": "event",
  "type": "record",
  "fields": [
    {
      "name": "event_id",
      "doc": "unique uuid for the event",
      "type": "string"
    },
    {
      "name": "properties",
      "type": [
        "null",
        {
          "type": "record",
          "name": "Properties",
          "fields": [
            {
              "name": "tab",
              "type": "string"
            }
          ]
        }
      ],
      "default": null
    }
  ]
}
```
An example JSON object for this schema could be:
```json
{
  "event_id": "3756d9d1-78b3-4c45-9190-d9ca9c523cb7",
  "properties": {
    "tab": "some tab name"
  }
}
```
When the JSON object is converted to Avro with `avro_rs::types::Value::from`, the `properties` field is represented as an Avro map. If the `properties` field in the schema were just a record, this would be fine, since `Value::resolve` already resolves maps as records (https://github.com/flavray/avro-rs/blob/main/src/types.rs#L682). But since the field is a union, `resolve` first has to resolve the union, which uses `Value::validate` to find the appropriate union variant. Since `Value::validate` does not accept a `Value::Map` for a `Schema::Record`, this fails, and the Avro object cannot be resolved against the schema.
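The failure mode can be sketched with a minimal, self-contained simplification (these are hypothetical stand-ins, not the real `avro-rs` types): union resolution picks the first variant the value validates against, and a strict `validate` that rejects a map against a record means no variant ever matches.

```rust
use std::collections::HashMap;

// Simplified stand-ins for avro-rs's Schema and Value (illustrative only).
#[derive(Debug, Clone)]
enum Schema {
    Null,
    String,
    Record(Vec<(String, Schema)>),
    Union(Vec<Schema>),
}

#[derive(Debug, Clone)]
enum Value {
    Null,
    String(String),
    Map(HashMap<String, Value>),
    Record(Vec<(String, Value)>),
}

// Mirrors the strictness described above: a Map falls through to the
// catch-all arm and does NOT validate against a Record schema.
fn validate(value: &Value, schema: &Schema) -> bool {
    match (value, schema) {
        (Value::Null, Schema::Null) => true,
        (Value::String(_), Schema::String) => true,
        (Value::Record(fields), Schema::Record(expected)) => {
            fields.len() == expected.len()
                && fields
                    .iter()
                    .zip(expected)
                    .all(|((name, v), (ename, s))| name == ename && validate(v, s))
        }
        (_, Schema::Union(variants)) => variants.iter().any(|s| validate(value, s)),
        _ => false,
    }
}

// Union resolution: pick the first variant the value validates against.
// A Map never validates against the Record variant, so this returns None
// even though the map could in principle be coerced to the record.
fn resolve_union(value: &Value, variants: &[Schema]) -> Option<Schema> {
    variants.iter().find(|s| validate(value, s)).cloned()
}

fn main() {
    // ["null", {"type": "record", "fields": [{"name": "tab", ...}]}]
    let union = vec![
        Schema::Null,
        Schema::Record(vec![("tab".to_string(), Schema::String)]),
    ];

    // The JSON object {"tab": "some tab name"} arrives as a Map.
    let mut map = HashMap::new();
    map.insert("tab".to_string(), Value::String("some tab name".to_string()));
    let value = Value::Map(map);

    // No union variant matches, so resolution of the union fails.
    assert!(resolve_union(&value, &union).is_none());
    println!("Map matched no union variant");
}
```

In the real crate the coercion from map to record lives in `Value::resolve`, but union resolution never reaches it because `Value::validate` has already rejected every variant.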
More generally, I think there is the question of whether `value.resolve(schema).is_ok()` implies `value.validate(schema)`, and vice versa.
Got it! Thank you for the details! This looks good to me. 🙂
This is necessary for resolving maps as records in unions.