hadighattas opened this issue 3 weeks ago
hi - this issue looks awfully similar to ones we already have, because the problem is coming from the backend: currently the chunk sizes are not really configurable, and if you're retrieving a huge amount of data it will be memory intensive (or crash with an OOM, depending on how limited the memory is).
You can give setting `CLIENT_RESULT_CHUNK_SIZE=16` a shot ([parameter reference](https://docs.snowflake.com/en/sql-reference/parameters#client-result-chunk-size)), but I can imagine it won't change the situation very much.
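If you want to try it, the parameter is session-scoped, so it can be set with a plain `ALTER SESSION` statement through the connector's standard ADO.NET surface. A minimal sketch, assuming placeholder connection-string values:

```csharp
// Sketch: lowering CLIENT_RESULT_CHUNK_SIZE for the current session before
// running the large query. Connection string values are placeholders.
using Snowflake.Data.Client;

using var conn = new SnowflakeDbConnection();
conn.ConnectionString = "account=<account>;user=<user>;password=<password>";
conn.Open();

using (var cmd = conn.CreateCommand())
{
    // 16 is the value suggested above (the parameter is expressed in MB).
    cmd.CommandText = "ALTER SESSION SET CLIENT_RESULT_CHUNK_SIZE = 16";
    cmd.ExecuteNonQuery();
}
```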
As a workaround while the backend issue is sorted out, you can add LIMIT ... OFFSET ... to your SELECT to 'paginate' through the huge result set, which should keep the memory usage under a chosen level.
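A rough sketch of that workaround is below; the table, columns, and page size are placeholders you would adapt to your schema and memory budget.

```csharp
// Sketch: paginating a large result set with LIMIT ... OFFSET through the
// connector's ADO.NET API. Table, columns, and page size are hypothetical.
using System.Data;
using Snowflake.Data.Client;

const int pageSize = 1_000_000; // tune to your memory budget

using var conn = new SnowflakeDbConnection();
conn.ConnectionString = "account=<account>;user=<user>;password=<password>";
conn.Open();

long offset = 0;
while (true)
{
    using var cmd = conn.CreateCommand();
    // A stable ORDER BY is needed so pages neither overlap nor skip rows.
    cmd.CommandText =
        $"SELECT id, payload FROM my_table ORDER BY id LIMIT {pageSize} OFFSET {offset}";

    long rowsInPage = 0;
    using (IDataReader reader = cmd.ExecuteReader())
    {
        while (reader.Read())
        {
            // process reader["id"], reader["payload"] here
            rowsInPage++;
        }
    }

    if (rowsInPage < pageSize)
        break; // last page reached
    offset += pageSize;
}
```

Note that large OFFSET values make each page progressively more expensive to compute on the server side, so keyset-style pagination (filtering on the last seen key) may be preferable for very deep result sets.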
What version of .NET driver are you using?
4.1.0
also tried to build and use the latest master, including the fixes mentioned in https://github.com/snowflakedb/snowflake-connector-net/issues/1004
What operating system and processor architecture are you using?
macOS 14.7 ARM
What version of .NET framework are you using?
.NET Standard 2.0
What did you do?
Pulling a lot of data is memory intensive. We tried pulling 100M rows and the memory usage averages around ~800-900 MB for a unit test; forcing garbage collection does not change anything. This test uses the Dapper unbuffered API, which fully supports streaming. The memory profiler indicates that almost all of the allocated objects are in `SFReusableChunk` and `BlockResultData`.
Profiler screenshots
Reproducing this issue is pretty straightforward; we pulled 100M records with a query against sample data, along the lines of the sketch below.
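The exact query is not reproduced here; the sketch below is an illustrative stand-in that assumes the shared `SNOWFLAKE_SAMPLE_DATA` database. The relevant part is Dapper's unbuffered mode (`buffered: false`), which yields rows one at a time instead of materializing the whole result set.

```csharp
// Hypothetical reproduction sketch: stream a very large result set through
// Dapper's unbuffered API. The sample-data table below is an assumption; any
// sufficiently large table shows the same allocation pattern in the profiler.
using Dapper;
using Snowflake.Data.Client;

using var conn = new SnowflakeDbConnection();
conn.ConnectionString = "account=<account>;user=<user>;password=<password>";
conn.Open();

var rows = conn.Query(
    "SELECT * FROM SNOWFLAKE_SAMPLE_DATA.TPCH_SF100.LINEITEM",
    buffered: false); // unbuffered: rows are streamed, not held in a list

long count = 0;
foreach (var _ in rows)
{
    count++; // memory still climbs because chunks are buffered inside the driver
}
```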
Memory buffer (chunk) sizes should be more conservative and/or configurable.
Reproducing this is pretty straightforward; I can provide a reproduction if necessary.