Closed: prochac closed this 1 month ago
hi - thanks for submitting this issue. can you please share the actual code that produces high memory usage for you in gosnowflake? if it's not shareable, then a minimal reproduction application which, when run, leads to the same issue?
asking because other drivers have the same problem when someone reads the whole resultset into memory before working on it, which is why it would be great to see your approach. Thank you so much in advance!
edit: also, about the difference between the first and the second screenshots - both seem to have gosnowflake-related stacks, but the memory usage is very different. Is one of the gosnowflake versions working well for you? If so, which version has the low memory usage (bottom screenshot) and which has the high one (upper screenshot)? Thank you!
I spent some time trying to reproduce the same memory pattern in our test environment, without success.
But now, after letting it sit for a moment, I realised we may have one legacy method that iterates over all results to render an HTTP response. That would also explain the irregularity of the memory pattern, since it doesn't happen continuously; I only noticed it in our monitoring because it's uncommon.
I will check tomorrow, and hopefully we can blame our legacy code 😁
appreciate the reproduction efforts a lot 👍 the recent finding you mentioned indeed sounds promising. we'll keep this issue open; please let us know how it went once you've had a bit more time.
Sorry it took so long... priorities
Yes, it was the legacy endpoint.
We noticed a high memory usage peak with the Snowflake driver.
The code isn't complex: it just iterates over the SQL rows returned from the cursor. The code is shared with other database drivers, but only Snowflake causes the high memory peaks.
Please answer these questions before submitting your issue. In order to accurately debug the issue this information is required. Thanks!
1. What version of GO driver are you using?
v1.10.0
2. What operating system and processor architecture are you using?
x86_64 GNU/Linux
3. What version of GO are you using?
1.22.3
4. Server version:
Not sure - we're an ETL platform, and I'm not sure which pipeline caused it yet. IMO irrelevant.
5. What did you do?
Simple: `(*sql.DB).QueryContext` with an iterator over `*sql.Rows`.
6. What did you expect to see?
No memory peak, like with the other DB drivers we share code with.
7. Can you set logging to DEBUG and collect the logs?
Not very motivated to do that in production.