Open mitsuhiko opened 2 years ago
Unsurprisingly, a lot of the perf impact comes from allocations and deallocations. Compared to miniserde, a big source of overhead appears to be `Option<T>`, which in the deserialization case requires an allocation at all times, whereas miniserde gets away with an unsafe cast of its visitor because it knows it can always borrow the original visitor. We don't have that luxury.
The performance is indeed mostly trash because of excessive allocations. Annoyingly, it's unclear to me how this could be avoided entirely with this design. The only real option I see is to reuse allocations somehow, but that might significantly complicate the implementation. The most frustrating case right now is definitely that `Option<T>` needs to allocate, which is doubly annoying if the `T` itself is a sink that allocates. For instance, `Option<Vec<T>>` will currently allocate even if it is not used at all, for no good reason.
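To illustrate the waste (all names here are hypothetical, not the crate's API): if the sink for an `Option<Vec<u64>>` is built eagerly, the `Some` and its inner `Vec` are materialized before any element is seen, even when the input is `null`. A lazy variant only touches the heap once a value actually arrives.

```rust
// Hypothetical sink for Option<Vec<u64>>; illustrative names only.
#[derive(Default)]
struct LazyOptionVecSink {
    slot: Option<Vec<u64>>,
}

impl LazyOptionVecSink {
    // Eager approach (the behavior complained about above): the inner
    // Vec is allocated up front, before any input is seen.
    fn eager() -> Self {
        LazyOptionVecSink {
            slot: Some(Vec::with_capacity(16)),
        }
    }

    // Lazy approach: only allocate once a value is actually pushed.
    fn push(&mut self, value: u64) {
        self.slot.get_or_insert_with(Vec::new).push(value);
    }
}

fn main() {
    // `null` input: the lazy sink never allocates.
    let lazy = LazyOptionVecSink::default();
    assert!(lazy.slot.is_none());

    // The eager sink allocated backing storage for nothing.
    let eager = LazyOptionVecSink::eager();
    assert!(eager.slot.as_ref().unwrap().capacity() >= 16);

    // With actual values the lazy sink behaves identically.
    let mut lazy = LazyOptionVecSink::default();
    lazy.push(1);
    lazy.push(2);
    assert_eq!(lazy.slot, Some(vec![1, 2]));
}
```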
I'm not sure the design of the library can be optimized much without sacrificing the dynamic dispatch. The only potential option I see is to use a custom allocator for the states. There are a lot of temporary allocations, and I see some room for improvement there. However, this also requires passing the state to functions that currently do not have it. For instance, `deserialize_into` currently does not get the state, yet it creates all the boxed sink handles.
The JSON serializer/deserializer currently demonstrates that the performance of the entire system is pretty abysmal. Running the same benchmark as with serde/miniserde yields significantly worse results: