Make sure that:

- [x] You have created a feature request first to discuss your contribution intent. Please reference the feature request ticket number in the pull request.
- [x] You use the code formatters provided here and have them applied to your changes. Don't submit any formatting-related changes.
- [x] You submit test cases (unit or integration tests) that back your changes.
Issue description
Following up on the previous change #640, the field-name lookup is still not optimal for use cases where the result contains a large chunk of data. Take this R2DBC example use case:
```java
Mono.from(connectionFactory.create())
    .flatMapMany(connection -> connection
        .createStatement("SELECT firstname FROM PERSON WHERE age > $1")
        .bind("$1", 42)
        .execute())
    .flatMap(result -> result
        .map((row, rowMetadata) -> row.get("firstname", String.class)))
    .doOnNext(System.out::println)
    .subscribe();
```
Imagine the result set returns 1 million rows. Because the row-level column-name index map is currently computed on the fly per row, the same mapping is computed 1 million times, which is largely wasted work, since the mapping only needs to be computed once. By contrast, pgjdbc computes the mapping once per result set.
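The idea can be sketched roughly as follows (the class and method names here, such as `RowMetadataSketch` and `indexOf`, are hypothetical illustrations, not the driver's actual API): build the name-to-index map once when the result-set metadata arrives, then let every row share that single immutable map.

```java
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: the name->index map is built once per result set,
// not once per row, so N rows share a single lookup table.
final class RowMetadataSketch {
    private final Map<String, Integer> nameIndex;

    RowMetadataSketch(List<String> columnNames) {
        Map<String, Integer> index = new LinkedHashMap<>();
        for (int i = 0; i < columnNames.size(); i++) {
            // keys are stored lower-cased; the first occurrence of a name wins
            index.putIfAbsent(columnNames.get(i).toLowerCase(), i);
        }
        this.nameIndex = Map.copyOf(index); // immutable, safe to share across rows
    }

    int indexOf(String name) {
        Integer i = nameIndex.get(name.toLowerCase());
        if (i == null) throw new IllegalArgumentException("No such column: " + name);
        return i;
    }

    public static void main(String[] args) {
        RowMetadataSketch md = new RowMetadataSketch(List.of("firstname", "age"));
        System.out.println(md.indexOf("FIRSTNAME")); // case-insensitive lookup -> 0
        System.out.println(md.indexOf("age"));       // -> 1
    }
}
```

Each row then holds a reference to the shared metadata and calls `indexOf` without rebuilding the map, so the per-row cost drops to a single hash lookup.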
Another small change I made is to avoid caching per-case-variant matches and instead default the match to lower case. Since the metadata object now owns the column-name lookup cache, and each row may be processed concurrently, a mutable cache could run into concurrency problems when multiple threads try to populate it at the same time. The cache is therefore now read-only.
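The reasoning behind making the cache read-only can be illustrated with a small, self-contained demo (this is not driver code, just a sketch of the thread-safety argument): an immutable map built once can be read from many threads with no synchronization, whereas a cache that rows populate lazily would need locking or a `ConcurrentMap` to avoid races.

```java
import java.util.List;
import java.util.Map;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch: concurrent reads of an immutable, lower-case-keyed lookup map
// are safe without any synchronization.
public class ReadOnlyLookupDemo {
    public static void main(String[] args) throws Exception {
        // Built once, never mutated afterwards.
        Map<String, Integer> nameIndex = Map.of("firstname", 0, "age", 1);

        ExecutorService pool = Executors.newFixedThreadPool(4);
        // 100 concurrent lookups against the shared read-only map.
        List<Future<Integer>> results = pool.invokeAll(
            java.util.Collections.nCopies(100,
                (Callable<Integer>) () -> nameIndex.get("firstname")));
        pool.shutdown();

        for (Future<Integer> f : results) {
            if (f.get() != 0) throw new AssertionError("unexpected index");
        }
        System.out.println("100 concurrent lookups, all returned index 0");
    }
}
```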
[resolves #636]
New Public APIs
Additional context