ankush981 opened this issue 5 years ago
This is currently the default behavior. qsv reads the file into memory and then starts to parse the CSV into an array of objects; after that, the array is processed according to the SQL statement. I think reading files into memory isn't that problematic; if I recall correctly, Node.js supports files up to 2 GB out of the box.
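For reference, a minimal sketch of that in-memory flow as I understand it (not the actual qsv code; it assumes the csv-parse package and a hypothetical data.csv):

```js
const fs = require('fs');
const { parse } = require('csv-parse/sync');

// 1. the whole file is read into memory as one string
const raw = fs.readFileSync('data.csv', 'utf8');

// 2. the string is parsed into an array of row objects, also fully in memory
const rows = parse(raw, { columns: true });

// 3. the SQL statement is applied to that array, e.g. "select Country from table"
const result = rows.map(row => ({ Country: row.Country }));

// 4. the whole result is rendered at once
console.table(result);
```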
The problem occurs in one of three places: 1. parsing the CSV, 2. executing the SQL statement, or 3. rendering the result into a table. I'm not sure yet whether we are exceeding the size limit of an array or whether there is some kind of memory leak 🤔
Related #22
So I tested on the 28 MB StackOverflow results file, and have the following observations:
select Country from table
produces rapidly scrolling output and finishes in a few seconds (no problems here).
select * from table limit 10
Such queries are also reasonably fast. Even with limit 100, this works fine.
select * from table
causes everything to freeze, and roughly speaking, I saw memory usage jumping from 100 MB to 200 MB to 600 MB.
I have a feeling that your point "3. rendering the result into a table" might just be the bottleneck. I've come across such issues before where displaying to the terminal is slow (because the terminal is buffered, I guess?).
Any idea how this can be examined/confirmed?
You can have a look at memory usage with the Chrome DevTools, for example in VS Code, or with ndb as a standalone tool. I'm not really experienced with the memory debugging tools, though.
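If a full profiler is too much, a cheap first step could be logging process.memoryUsage() around each stage to see where the jump happens (just a sketch; the stage labels are placeholders):

```js
// Log heap usage and resident set size with a label.
function logMem(label) {
  const { heapUsed, rss } = process.memoryUsage();
  console.error(
    `${label}: heapUsed=${(heapUsed / 1e6).toFixed(1)} MB rss=${(rss / 1e6).toFixed(1)} MB`
  );
}

logMem('before parse');
// ... parse the csv ...
logMem('after parse');
// ... execute the sql ...
logMem('after sql');
// ... render the table ...
logMem('after render');
```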
But what I've seen is that the file's content, in the state before it gets parsed, doesn't get garbage collected. I think that's not a huge issue as long as files are small, because the memory gets allocated only once, so it's not a classic memory leak that keeps growing, but nevertheless it should be removed.
The other thing I found is a huge collection of strings that are ready to be rendered (ANSI escape codes already applied). As far as I can tell, this is the point where we're running out of memory, because a lot of ANSI escape sequences are added to the data.
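To illustrate the overhead (just an example, not the actual rendering code): wrapping every cell in escape codes looks roughly like this, and it means a second, larger copy of all the data sits in memory while the table is built:

```js
// Wrapping a cell in a color code adds 9 bytes and creates a new string.
const green = s => `\x1b[32m${s}\x1b[0m`;

const cell = 'India';
console.log(cell.length, '->', green(cell).length); // 5 -> 14
// Multiply that by every cell in a large result set and the pre-rendered
// strings can easily dwarf the parsed data itself.
```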
Another point in favor of point 3 is that the SELECT statement is currently applied last, so in the case of select country from table the earlier stages still operate on the complete table in memory without causing any problems; the only difference to select * is how much data reaches the rendering step. Relevant code.
So, to make it work, my idea would be to try streaming the results to the terminal instead of trying to dump the complete thing at once.
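A sketch of what I mean by streaming to the terminal (it assumes rows are already available one at a time, and just shows respecting stdout backpressure instead of building one giant string):

```js
// Write one row at a time; if stdout's buffer is full, wait for 'drain'
// before writing the next row instead of queueing everything in memory.
async function writeRow(line) {
  if (!process.stdout.write(line + '\n')) {
    await new Promise(resolve => process.stdout.once('drain', resolve));
  }
}
```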
I've just created #29 so we can test a version that supports streams. If we get this working we can support files of (theoretically) unlimited size.
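Roughly, the end-to-end streaming version could look something like this (a sketch assuming the csv-parse stream API and the select Country from table example; the real implementation in #29 may differ):

```js
const fs = require('fs');
const { Transform } = require('stream');
const { parse } = require('csv-parse');

// Error handling omitted for brevity.
fs.createReadStream('data.csv')        // read the file in chunks
  .pipe(parse({ columns: true }))      // emit one row object at a time
  .pipe(new Transform({
    objectMode: true,
    transform(row, _enc, done) {
      // per-row projection, i.e. "select Country from table"
      done(null, `${row.Country}\n`);
    },
  }))
  .pipe(process.stdout);               // print rows as they arrive
```

This way only a handful of rows are buffered at any time, so memory use should stay flat regardless of file size.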
For smaller files (say, less than 50 MB?), maybe we should have an option to read them entirely into memory and then run operations on them. Would that speed things up? If yes, maybe we should ask this as a first question when a file is loaded, and caution the end user that importing into memory might take a long time.
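One way that option could be wired up (a sketch; the 50 MB threshold and function name are placeholders, not an agreed design):

```js
const fs = require('fs');

const IN_MEMORY_LIMIT = 50 * 1024 * 1024; // 50 MB, arbitrary cut-off

// Decide whether to offer the "load everything into memory" prompt.
function shouldOfferInMemoryMode(file) {
  return fs.statSync(file).size <= IN_MEMORY_LIMIT;
}
```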