Closed: michaelkleinhenz closed this PR 5 years ago
[test]
Merging #2359 into master will decrease coverage by 0.06%. The diff coverage is 65.11%.
```diff
@@            Coverage Diff             @@
##           master    #2359      +/-   ##
==========================================
- Coverage    69.9%   69.84%    -0.07%
==========================================
  Files         171      171
  Lines       16928    16976      +48
==========================================
+ Hits        11834    11857      +23
- Misses       3936     3952      +16
- Partials     1158     1167       +9
```
| Impacted Files | Coverage Δ | |
|---|---|---|
| controller/search.go | 70.24% <60.86%> (-1.91%) | :arrow_down: |
| controller/workitem.go | 78.82% <82.35%> (-0.4%) | :arrow_down: |
| workitem/workitem_repository.go | 67.79% <0%> (-0.23%) | :arrow_down: |
Continue to review the full report at Codecov.

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

Powered by Codecov. Last update 0999ce9...fbc1661. Read the comment docs.
I am not getting how this PR ensures that streaming is happening.
I remember using the http.Flusher interface to stream: https://github.com/baijum/lipsum/blob/master/lipsum.go
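For reference, flush-based streaming with http.Flusher looks roughly like this. This is a minimal sketch, not code from this PR; the handler and the "chunk" payloads are made up for illustration:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"net/http/httptest"
)

// streamHandler writes the response in pieces and flushes after each
// piece, so the client starts receiving data before the handler returns.
func streamHandler(w http.ResponseWriter, r *http.Request) {
	flusher, ok := w.(http.Flusher)
	if !ok {
		http.Error(w, "streaming unsupported", http.StatusInternalServerError)
		return
	}
	for i := 0; i < 3; i++ {
		fmt.Fprintf(w, "chunk %d\n", i)
		flusher.Flush() // push the buffered bytes to the client now
	}
}

// streamedBody spins up a test server, performs one request against the
// streaming handler, and returns the full body it received.
func streamedBody() string {
	srv := httptest.NewServer(http.HandlerFunc(streamHandler))
	defer srv.Close()

	resp, err := http.Get(srv.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		panic(err)
	}
	return string(body)
}

func main() {
	fmt.Print(streamedBody())
}
```

With a real client each flushed piece arrives as a separate HTTP chunk; the test server here just lets us collect the concatenated result.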
@baijum the key here is that the prior implementation retrieved all matching WIs from the database at once, converted them to CSV, and returned the result to the client. This version instead retrieves one chunk of the matching WIs at a time, converts that chunk, and returns it to the client before fetching the next one. It is not "technically" streaming, but it is the best way to describe the change.
In general, this makes it possible to return large CSV datasets to the client without running into memory issues on the WIT node. And yes, there may be some caching and spooling involved at the lower levels, but I'd expect the HTTP implementation in the Go standard library to be rather robust about that. The key is that we're using the ResponseWriter incrementally instead of in one go.
@kwk changed the size of the generated test data to 242. This reduced the test time to less than a second on my notebook.