Closed: dirkbolte closed this issue 9 months ago
Thank you for getting in touch. The `ConversionService` and its components are part of the core framework, so I think it would make sense to raise the performance concern there. I'm not sure whether it can be completely addressed there, as both the source type and the processing are outside the core framework, but I will definitely do so.
I have a collection with > 10k documents. For one use case I need to fetch all of them. To minimize the amount of data that is processed, I already added a projection along with a `ReadConverter` for the whole document to optimize document creation (aligned with https://docs.spring.io/spring-data/mongodb/reference/mongodb/mapping/custom-conversions.html#mongo.custom-converters.reader ). The code looks similar to this:
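A minimal sketch of such a whole-document `@ReadingConverter`; the entity and its fields are hypothetical stand-ins for the actual projection:

```java
import org.bson.Document;
import org.springframework.core.convert.converter.Converter;
import org.springframework.data.convert.ReadingConverter;

// Hypothetical projected entity; the real fields depend on the use case.
record MyProjection(String id, String name) {}

@ReadingConverter
class MyProjectionReadConverter implements Converter<Document, MyProjection> {

    @Override
    public MyProjection convert(Document source) {
        // Build the entity directly from the raw BSON document,
        // bypassing the reflective mapping machinery.
        return new MyProjection(
                source.getObjectId("_id").toHexString(),
                source.getString("name"));
    }
}
```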
This is registered via
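Converters of this kind are typically registered as custom conversions; a sketch of such a configuration (bean and class names are illustrative):

```java
import java.util.List;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.core.convert.MongoCustomConversions;

@Configuration
class MongoConversionConfig {

    @Bean
    MongoCustomConversions customConversions() {
        // Registers the whole-document converter with Spring Data MongoDB.
        return new MongoCustomConversions(List.of(new MyProjectionReadConverter()));
    }
}
```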
I still found the query on a result set of this size to be slow. I was able to narrow it down to the actual converter resolution.
The conversion goes through `GenericConversionService`, which calls `getConverter` twice. The expensive part is the handling of the converter cache, which compares `ConverterCacheKey`s with `org.bson.Document` as the source type. Calling `org.springframework.core.convert.TypeDescriptor#equals` for `org.bson.Document` ~20k times takes about 50% of the overall processing time (the same comparison, performed the same number of times for my target entity, takes significantly less), evaluated with the IntelliJ profiler. The main contributors are the checks for `isCollection` and `isArray`, and the logic within `isMap`.

As mitigation, I created a custom repository implementation and built the query object myself. This approach returned the plain document, so that I could call the converter myself (same code). It took somewhere between 5 and 10% of the CPU time of the initial approach.
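The described mitigation (fetching raw `Document`s and applying the converter directly, so the per-document converter lookup is skipped) might be sketched like this, assuming a `MongoTemplate` is available; the class and collection names are hypothetical:

```java
import java.util.List;
import org.bson.Document;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.query.Query;

class MyProjectionRepositoryImpl {

    private final MongoTemplate mongoTemplate;
    private final MyProjectionReadConverter converter = new MyProjectionReadConverter();

    MyProjectionRepositoryImpl(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    List<MyProjection> findAllProjected() {
        // Fetch the raw BSON documents; targeting Document.class means no
        // entity mapping happens inside Spring Data.
        List<Document> raw = mongoTemplate.find(new Query(), Document.class, "myCollection");
        // Apply the converter directly, avoiding the repeated
        // GenericConversionService#getConverter resolution per document.
        return raw.stream().map(converter::convert).toList();
    }
}
```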
Is there a way for me to improve the converter resolution to avoid the repeated lookups, or for the `MappingMongoConverter` to optimize the comparison or conversion?