The context for this issue is in aws-sdk-kotlin#1220. Essentially, we made a bad assumption that flat collections would always be serialized sequentially. In reality, services return flat collections interspersed with other XML elements.
Our original approach to deserialization closely followed kotlinx.serialization: we have common Serializer and Deserializer interfaces, each format we want to support (XML, JSON, etc.) implements them, and codegen is then the same regardless of format. The issues are that (1) we end up duplicating information already in the model (field traits), and (2) we have to bend over backwards to make each format work within the interface instead of just creating a runtime type that more closely matches the medium. We discussed our options for addressing this as a team and decided to refactor the way we do XML deserialization to more closely match that of Go and Rust. This is something we had discussed prior to GA and just didn't have time to do. Rather than implement a one-off workaround tailored specifically to this issue, we're going to move to the desired end state, which is to generate serializers/deserializers specific to each format (starting with just XML deserialization).
This is a large PR, so I'm going to summarize the important bits for easier review, particularly because a lot of this PR is net-new test code.
The guts of XmlParserGenerator were replaced: it no longer uses the common struct/union deserializer and instead generates code directly against the lower-level serde-xml types. See Codegen Output below for example output and differences.
Among the biggest structural differences: we generate dedicated functions for list/map deserialization rather than doing it inline. This is partly for readability, but it also maintains correct deserializer state by construction, since lists and maps require one or more inner loops of their own. Flattened collections accumulate values into the member, so every time a flattened member tag is hit we append to the previous collection rather than replacing it (fixing the bug in aws-sdk-kotlin#1220).
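The accumulation fix can be illustrated with a small sketch (the names here are hypothetical, not the actual generated code): each time a flattened member tag is encountered, the newly parsed values are merged into the builder's existing collection instead of overwriting it.

```kotlin
// Hypothetical builder standing in for a generated response builder.
class Builder {
    var versions: List<String>? = null
}

// Merge newly parsed values into the member rather than assigning over it, so
// flattened entries interspersed with other elements all survive.
fun accumulateVersions(builder: Builder, parsed: List<String>) {
    builder.versions = (builder.versions ?: emptyList()) + parsed
}

fun main() {
    val builder = Builder()
    accumulateVersions(builder, listOf("v1")) // first run of <Version> tags
    // ...other elements such as <Delimiter> are processed in between...
    accumulateVersions(builder, listOf("v2")) // later run of <Version> tags
    check(builder.versions == listOf("v1", "v2")) // both runs retained
}
```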
XmlTagReader - a new type that sits on top of XmlStreamReader and provides some small conveniences for iterating tokens and maintaining deserializer state.
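As an illustration only (hypothetical names, not the real XmlTagReader API), the convenience such a reader provides is scoping: you pull child tags one at a time, and can drop whatever remains of the current element so the enclosing deserializer's position stays consistent.

```kotlin
// Illustrative stand-in only: the real XmlTagReader wraps XmlStreamReader and
// works over actual XML tokens; here children are simple (name, text) pairs.
class TagReader(private val children: MutableList<Pair<String, String>>) {
    /** Next child tag as (name, text), or null when this element is exhausted. */
    fun nextTag(): Pair<String, String>? = children.removeFirstOrNull()

    /** Discard any unconsumed children, leaving the reader in a consistent position. */
    fun drop() = children.clear()
}

fun main() {
    val reader = TagReader(mutableListOf("Name" to "bucket", "Unknown" to "ignored"))
    check(reader.nextTag() == ("Name" to "bucket"))
    reader.drop()                   // skip the rest of this element's children
    check(reader.nextTag() == null) // reader is exhausted; the parent can continue
}
```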
Removed all previous XML deserializer unit tests in favor of a new module, tests/codegen/serde-tests. This new module overlaps a bit with the existing protocol tests, but iteration time is quicker and it is independent of the protocols. It also has greater coverage than our previous unit tests; writing it surfaced some bugs in how we were generating nested lists/maps inside union types, as well as #1040. The idea is the same as protocol tests: test with the generated code rather than hand-writing tests that mimic the structure of generated code. This approach is both easier to write tests for and more accurate, since it exercises the real codegen output rather than hand-written approximations of what we expect codegen to produce.
The serde-benchmarks project contained a companion module, serde-codegen-support, that implemented a custom dummy protocol for JSON + XML. There wasn't anything benchmark-specific there, so I refactored it to remove the notion of benchmarks and moved it to tests/serde as a common module that can be reused by both the serde benchmarks and the new codegen integration tests.
Codegen Output
For the S3 ListObjectVersions output type:
Previously
private fun deserializeListObjectVersionsOperationBody(builder: ListObjectVersionsResponse.Builder, payload: ByteArray) {
    val deserializer = XmlDeserializer(payload)
    val COMMONPREFIXES_DESCRIPTOR = SdkFieldDescriptor(SerialKind.List, XmlSerialName("CommonPrefixes"), Flattened)
    val DELETEMARKERS_DESCRIPTOR = SdkFieldDescriptor(SerialKind.List, XmlSerialName("DeleteMarker"), Flattened)
    val DELIMITER_DESCRIPTOR = SdkFieldDescriptor(SerialKind.String, XmlSerialName("Delimiter"))
    val ENCODINGTYPE_DESCRIPTOR = SdkFieldDescriptor(SerialKind.Enum, XmlSerialName("EncodingType"))
    val ISTRUNCATED_DESCRIPTOR = SdkFieldDescriptor(SerialKind.Boolean, XmlSerialName("IsTruncated"))
    val KEYMARKER_DESCRIPTOR = SdkFieldDescriptor(SerialKind.String, XmlSerialName("KeyMarker"))
    val MAXKEYS_DESCRIPTOR = SdkFieldDescriptor(SerialKind.Integer, XmlSerialName("MaxKeys"))
    val NAME_DESCRIPTOR = SdkFieldDescriptor(SerialKind.String, XmlSerialName("Name"))
    val NEXTKEYMARKER_DESCRIPTOR = SdkFieldDescriptor(SerialKind.String, XmlSerialName("NextKeyMarker"))
    val NEXTVERSIONIDMARKER_DESCRIPTOR = SdkFieldDescriptor(SerialKind.String, XmlSerialName("NextVersionIdMarker"))
    val PREFIX_DESCRIPTOR = SdkFieldDescriptor(SerialKind.String, XmlSerialName("Prefix"))
    val VERSIONIDMARKER_DESCRIPTOR = SdkFieldDescriptor(SerialKind.String, XmlSerialName("VersionIdMarker"))
    val VERSIONS_DESCRIPTOR = SdkFieldDescriptor(SerialKind.List, XmlSerialName("Version"), Flattened)
    val OBJ_DESCRIPTOR = SdkObjectDescriptor.build {
        trait(XmlSerialName("ListVersionsResult"))
        trait(XmlNamespace("http://s3.amazonaws.com/doc/2006-03-01/"))
        field(COMMONPREFIXES_DESCRIPTOR)
        field(DELETEMARKERS_DESCRIPTOR)
        field(DELIMITER_DESCRIPTOR)
        field(ENCODINGTYPE_DESCRIPTOR)
        field(ISTRUNCATED_DESCRIPTOR)
        field(KEYMARKER_DESCRIPTOR)
        field(MAXKEYS_DESCRIPTOR)
        field(NAME_DESCRIPTOR)
        field(NEXTKEYMARKER_DESCRIPTOR)
        field(NEXTVERSIONIDMARKER_DESCRIPTOR)
        field(PREFIX_DESCRIPTOR)
        field(VERSIONIDMARKER_DESCRIPTOR)
        field(VERSIONS_DESCRIPTOR)
    }
    deserializer.deserializeStruct(OBJ_DESCRIPTOR) {
        loop@while (true) {
            when (findNextFieldIndex()) {
                COMMONPREFIXES_DESCRIPTOR.index -> builder.commonPrefixes =
                    deserializer.deserializeList(COMMONPREFIXES_DESCRIPTOR) {
                        val col0 = mutableListOf<CommonPrefix>()
                        while (hasNextElement()) {
                            val el0 = if (nextHasValue()) { deserializeCommonPrefixDocument(deserializer) } else { deserializeNull(); continue }
                            col0.add(el0)
                        }
                        col0
                    }
                DELETEMARKERS_DESCRIPTOR.index -> builder.deleteMarkers =
                    deserializer.deserializeList(DELETEMARKERS_DESCRIPTOR) {
                        val col0 = mutableListOf<DeleteMarkerEntry>()
                        while (hasNextElement()) {
                            val el0 = if (nextHasValue()) { deserializeDeleteMarkerEntryDocument(deserializer) } else { deserializeNull(); continue }
                            col0.add(el0)
                        }
                        col0
                    }
                DELIMITER_DESCRIPTOR.index -> builder.delimiter = deserializeString()
                ENCODINGTYPE_DESCRIPTOR.index -> builder.encodingType = deserializeString().let { EncodingType.fromValue(it) }
                ISTRUNCATED_DESCRIPTOR.index -> builder.isTruncated = deserializeBoolean()
                KEYMARKER_DESCRIPTOR.index -> builder.keyMarker = deserializeString()
                MAXKEYS_DESCRIPTOR.index -> builder.maxKeys = deserializeInt()
                NAME_DESCRIPTOR.index -> builder.name = deserializeString()
                NEXTKEYMARKER_DESCRIPTOR.index -> builder.nextKeyMarker = deserializeString()
                NEXTVERSIONIDMARKER_DESCRIPTOR.index -> builder.nextVersionIdMarker = deserializeString()
                PREFIX_DESCRIPTOR.index -> builder.prefix = deserializeString()
                VERSIONIDMARKER_DESCRIPTOR.index -> builder.versionIdMarker = deserializeString()
                VERSIONS_DESCRIPTOR.index -> builder.versions =
                    deserializer.deserializeList(VERSIONS_DESCRIPTOR) {
                        val col0 = mutableListOf<ObjectVersion>()
                        while (hasNextElement()) {
                            val el0 = if (nextHasValue()) { deserializeObjectVersionDocument(deserializer) } else { deserializeNull(); continue }
                            col0.add(el0)
                        }
                        col0
                    }
                null -> break@loop
                else -> skipValue()
            }
        }
    }
}
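For contrast, here is a hedged sketch of roughly what the new-style output looks like. This is heavily simplified with hypothetical names (SimpleTagReader, ResponseBuilder, deserializeVersionList); the actual generated code targets the serde-xml runtime types. The key shapes are a top-level dispatch on tag names, a dedicated function for list members, and accumulation for flattened collections.

```kotlin
// Simplified stand-in for the runtime tag reader (hypothetical, for illustration).
class SimpleTagReader(private val tags: MutableList<Pair<String, String>>) {
    fun nextTag(): Pair<String, String>? = tags.removeFirstOrNull()
}

class ResponseBuilder {
    var versions: List<String>? = null
    var delimiter: String? = null
}

// In the real output, list members get a dedicated deserialize function so the
// inner loop and its reader state live outside the top-level dispatch.
fun deserializeVersionList(builder: ResponseBuilder, text: String) {
    // Flattened member: append, never replace, so entries interspersed with
    // other elements all survive (the aws-sdk-kotlin#1220 fix).
    builder.versions = (builder.versions ?: emptyList()) + text
}

fun deserializeListObjectVersions(reader: SimpleTagReader): ResponseBuilder {
    val builder = ResponseBuilder()
    loop@ while (true) {
        val (name, text) = reader.nextTag() ?: break@loop
        when (name) {
            "Version" -> deserializeVersionList(builder, text)
            "Delimiter" -> builder.delimiter = text
            else -> { /* unrecognized tag: skip and continue */ }
        }
    }
    return builder
}

fun main() {
    val reader = SimpleTagReader(mutableListOf(
        "Version" to "v1",
        "Delimiter" to "/",
        "Version" to "v2", // flattened entry appearing after an interspersed element
    ))
    val out = deserializeListObjectVersions(reader)
    check(out.versions == listOf("v1", "v2"))
    check(out.delimiter == "/")
}
```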
Effect on Artifact Sizes

The 1.0.64 S3 release was 5,072,329 bytes. Local builds are coming in at 5,039,276 bytes (~0.6% smaller).
Benchmarks
I've updated the benchmarks; they are included inline here for easy review. The tl;dr is that the generated deserializers add less overhead on top of raw token lexing than before and as a result are faster.
The lexer internals didn't change, so those results are nearly the same as the prior baseline. The deserialize benchmark came in at 33.566 ms/op compared to the prior 90.697 ms/op (~63% faster).
Binary Compatibility
This change intentionally breaks binary compatibility on a few @InternalApi APIs:
XmlDeserializer - removed completely as it's no longer used
parseRestXmlError / parseEc2QueryError - removed erroneous suspend
XmlToken and XmlToken.QualifiedName - renamed fields to improve readability

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.