Open maharshi4697 opened 2 years ago
Update:
When I reduced the chunk size it ran.
But as you can see, 15 mins is too long. Plus, it took 80 GB of memory at one point.
I looked into your problem a bit.
Even with this small sample of data you provide, given the high duplication level of the join indices, the join will produce a ~300M row dataframe! (and one column of it, the join index, has to be held in memory in the current implementation). The fact that this works at all is great, if you ask me!
One way to do this would be to do the join, export the dataframe to disk as hdf5, then continue working with it.
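Roughly, that approach would look like this (a sketch only: the join column `team_id` and the output file name are my assumptions, not taken from your code):

```python
import vaex

# Load the two CSVs (file names taken from the attachments in this issue)
scenarios = vaex.open('sample_scenarios.csv')
teams = vaex.open('sample_teams.csv')

# Do the expensive join once; allow_duplication is needed when the join keys
# repeat, which is what blows the result up to ~300M rows
joined = scenarios.join(teams, on='team_id', how='left', allow_duplication=True)

# Materialize the result to a memory-mappable HDF5 file...
joined.export_hdf5('scenarios_teams_joined.hdf5')

# ...and continue working with the exported file from now on
df = vaex.open('scenarios_teams_joined.hdf5')
```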
JovanVeljanoski
Apologies for missing your comment, and thank you for getting back to me super fast! This is not a subset but the entire data. What is confusing to me at this point is that even though it is a 300 million row dataframe, Vaex is advertised to handle close to a billion rows with ease. So what is the exact problem here?
In principle, Vaex can handle as much data as you can hold on disk, so billions of rows indeed. This assumes that the data is on disk, in a memory-mappable format.
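For example, something along these lines (a sketch; `convert=True` writes an HDF5 copy of each CSV next to it, so later reads are memory-mapped instead of loaded into RAM):

```python
import vaex

# The first call converts the CSV to sample_scenarios.csv.hdf5 and memory-maps it;
# subsequent calls just open the existing HDF5 file
scenarios = vaex.from_csv('sample_scenarios.csv', convert=True)
teams = vaex.from_csv('sample_teams.csv', convert=True)
```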
When you do a join, you are essentially creating a new dataset of sorts, and in order to "link" the "left" dataset to the "right" one, we need to create a single column (this is the joining key) which has to be held in memory.
So for example, if you can afford to do the join once, export the joined dataframe to disk in HDF5/Arrow/Parquet format, and read that back, you can do all subsequent operations rather fast.
I think this can be improved; the current workaround for now might be:

```python
import os

# Limit vaex to a single worker thread (set before importing vaex so it is picked up)
os.environ['VAEX_NUM_THREADS'] = '1'

import vaex
```
This way we keep memory usage as low as possible. I do think, though, that the memory usage is still excessive, so I am keeping this open.
These are the 2 datasets that I primarily want to use Vaex for.
sample_scenarios.csv sample_teams.csv
It is able to read the data, but the kernel dies when trying to join. I am currently using a 2019 MacBook Pro with 16 GB of RAM, but I need to run this in a low-cost AWS environment.
According to what is advertised, Vaex should easily be able to join this data. Am I doing something wrong?
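A minimal sketch of the kind of join in question (the shared key column `team_id` is a placeholder for the actual column the two files have in common):

```python
import vaex

# Reading the CSVs works fine
scenarios = vaex.open('sample_scenarios.csv')
teams = vaex.open('sample_teams.csv')

# This is the step where the kernel dies on 16 GB of RAM
joined = scenarios.join(teams, on='team_id', how='left', allow_duplication=True)
```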