This PR reduces allocations in the `Record` & `Array` readers. While profiling the memory footprint of deserialization with the following snippet:
```csharp
Fixture fixture = new Fixture();

var data = fixture
    .Build<User>()
    .With(u => u.Offerings, fixture.CreateMany<Offering>(50).ToList)
    .CreateMany(1000)
    .ToArray();

var serialized = AvroConvert.Serialize(data);
Console.WriteLine($"Serialized {serialized.Length}");
Console.ReadLine();

var deserialized = AvroConvert.Deserialize<User[]>(serialized);
Console.WriteLine($"Deserialized {deserialized.Length} users");
```
I noticed a few allocations that could be removed, in particular the closure and `ReadOnlyCollection<string>` allocations coming from `ResolveRecord`.
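As a minimal sketch of the two per-record allocations in question (the names below are illustrative, not the actual AvroConvert internals):

```csharp
using System;
using System.Collections.Generic;
using System.Collections.ObjectModel;

// Hypothetical record resolver showing the allocation patterns and their fixes.
public sealed class RecordResolverSketch
{
    private readonly List<string> _fieldNames = new List<string> { "Id", "Name" };
    private ReadOnlyCollection<string> _cachedFieldNames;

    // Before: AsReadOnly() allocates a fresh ReadOnlyCollection<string>
    // wrapper on every call in the per-record hot path.
    public ReadOnlyCollection<string> FieldNamesAllocating() => _fieldNames.AsReadOnly();

    // After: allocate the wrapper once and reuse it.
    public ReadOnlyCollection<string> FieldNamesCached()
        => _cachedFieldNames ??= _fieldNames.AsReadOnly();

    // Before: the lambda captures 'this' and 'index', so the compiler
    // allocates a closure object each time a getter is built.
    public Func<string> MakeGetterCapturing(int index) => () => _fieldNames[index];

    // After: pass the state explicitly; nothing is captured, so no
    // closure object is allocated.
    public static string GetField(List<string> fieldNames, int index)
        => fieldNames[index];
}
```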
I also avoided resizing lists by setting their initial capacity on creation.
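A sketch of the capacity change, assuming the reader knows the item count up front (Avro encodes arrays in blocks prefixed with a count; the helper names here are illustrative):

```csharp
using System;
using System.Collections.Generic;

public static class ListCapacitySketch
{
    // Before: List<T> starts empty and doubles its backing array as items
    // are added, allocating and copying several times for e.g. 50 items.
    public static List<T> ReadItemsGrowing<T>(Func<T> readItem, int count)
    {
        var list = new List<T>();
        for (int i = 0; i < count; i++) list.Add(readItem());
        return list;
    }

    // After: a single backing-array allocation of exactly 'count' slots.
    public static List<T> ReadItemsPresized<T>(Func<T> readItem, int count)
    {
        var list = new List<T>(count);
        for (int i = 0; i < count; i++) list.Add(readItem());
        return list;
    }
}
```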
As a result, the memory footprint improved, along with a ~9% improvement in `Deserialize` execution time:
Before:

| Method      | Mean     | Error    | StdDev   | Allocated |
|-------------|---------:|---------:|---------:|----------:|
| Deserialize | 62.26 ms | 1.244 ms | 1.574 ms | 30.76 MB  |

After:

| Method      | Mean     | Error    | StdDev   | Allocated |
|-------------|---------:|---------:|---------:|----------:|
| Deserialize | 56.59 ms | 1.108 ms | 1.820 ms | 27.34 MB  |
For reference, the benchmark code:
```csharp
[MemoryDiagnoser(displayGenColumns: false)]
[HideColumns("RatioSD")]
public class ReaderBenchmarks
{
    private byte[] _serializedUsers;

    [GlobalSetup]
    public void Setup()
    {
        var fixture = new Fixture();
        var data = fixture
            .Build<User>()
            .With(u => u.Offerings, fixture.CreateMany<Offering>(50).ToList)
            .CreateMany(1000)
            .ToArray();
        _serializedUsers = AvroConvert.Serialize(data);
    }

    [Benchmark]
    public int Deserialize() => AvroConvert.Deserialize<User[]>(_serializedUsers).Length;
}
```