harelba / q

q - Run SQL directly on delimited files and multi-file sqlite databases
http://harelba.github.io/q/
GNU General Public License v3.0

Is the entire file loaded into memory to be queried? Would prefer to read line by line, not load the whole file (10 GB file) #313

Open · therobyouknow opened this issue 1 year ago

therobyouknow commented 1 year ago

I have a 10 GB TSV file that I'd like to query using SQL commands.

As a TSV (tab-separated values) file, it is spreadsheet-like in that it has heading columns and rows; it's effectively a single-table database.

At 10 GB, I'd prefer not to read the whole file into memory at once, both because of the time that takes and because of my machines' memory limits (though I do have 16 GB, 24 GB and 32 GB machines).

Can you advise whether, when running queries on the TSV, the file is loaded into memory entirely, all at once?
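
For what it's worth, here is one way I could check this empirically, assuming GNU time is available at /usr/bin/time (the sample file name below is just illustrative):

```bash
# Take a small sample of the large TSV, keeping the header row.
head -n 100001 original-file.tsv > sample.tsv

# GNU time's -v flag reports "Maximum resident set size",
# i.e. the peak memory used by the q process.
# -t sets tab as the delimiter, -H skips the header row.
/usr/bin/time -v q -t -H "select count(*) from sample.tsv"
```

If the reported peak memory grows roughly in proportion to the sample size, the whole file is presumably being loaded.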

harelba commented 1 year ago

Hi, sorry for the late reply.

q does load the data into memory for processing, but it has an automatic caching feature which might help for large files.

If you run q with the -C readwrite parameter, a cache for each queried file will be generated automatically (if the cache file doesn't already exist). This makes the first execution slow, but all subsequent executions of q on that file will be extremely fast.

In order to create the cache file, you could run q -t -C readwrite "select count(*) from original-file.tsv". This will create another file called original-file.tsv.qsql alongside the original.
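
As a concrete sketch, using the same file name as above (the ls output is illustrative):

```bash
# One-time preparation: parse the TSV and write a cache file
# next to it. For a 10 GB file this first run is the slow step.
q -t -C readwrite "select count(*) from original-file.tsv"

# A sibling sqlite-based cache file now exists:
ls original-file.tsv*
# original-file.tsv  original-file.tsv.qsql
```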

After this preparation step (which will take a considerable time for a 10 GB file), you will be able to do either of the following:

- Keep querying original-file.tsv with -C read (or -C readwrite); q will detect the original-file.tsv.qsql cache and use it instead of re-parsing the TSV.
- Query the generated original-file.tsv.qsql file directly, without any -C flag.
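
For instance, both of these should now run quickly (a sketch, assuming the cache file from the step above exists):

```bash
# Option 1: keep using the original file name; with -C read,
# q detects original-file.tsv.qsql and uses it instead of
# re-parsing the TSV.
q -t -C read "select count(*) from original-file.tsv"

# Option 2: query the generated .qsql cache file directly.
q "select count(*) from original-file.tsv.qsql"
```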

Hope that will help. I'd appreciate it if you could write down your impressions of the speedup here after testing this.

therobyouknow commented 1 year ago

Thank you @harelba

> Hope that will help. I'd appreciate it if you could write down your impressions of the speedup here after testing this.

I will let you know! Thank you!