Open AKhazane opened 1 year ago
Hello! I'm using an open-source skip-gram template to train ID embeddings from data stored in Parquet format (a few columns, all int64) with petastorm, but the embedding representations I get are noticeably worse than when I load the data directly through a plain, non-distributed PyTorch DataLoader, with everything else held constant (batch size, learning rate, data, and so on). I'm wondering if I'm simply loading the data with the library incorrectly. Here's a snippet of my code that uses petastorm.

My only hunch at this point is that I'm not properly shuffling the data between epochs, so the rows within each row group are seen in the same sequential order every epoch, unlike the PyTorch DataLoader, which shuffles the entire dataset (read from a CSV file) before each epoch. Is there a straightforward way to add per-epoch shuffling in petastorm, given that the data is spread across multiple Parquet files?

I could be sending you down the wrong track, but if it is to do with shuffling, then I believe Petastorm only shuffles within a Parquet file when it loads the data, not across files. Might be worth checking what Hugging Face Datasets does as well, since it also relies on Parquet as the storage format.
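In case a concrete sketch helps: if I remember the petastorm API right, the knobs that get you closer to a global shuffle are `shuffle_row_groups` and `shuffle_row_drop_partitions` on the reader, plus `shuffling_queue_capacity` on `petastorm.pytorch.DataLoader`, which adds a row-level shuffle buffer. The dataset URL, batch size, worker count, and buffer size below are placeholders for illustration, not code from this issue:

```python
from petastorm import make_batch_reader
from petastorm.pytorch import DataLoader

# Placeholder values for illustration only; swap in your own dataset URL,
# batch size, worker count, and shuffle-buffer size.
DATASET_URL = "file:///path/to/parquet_dataset"
NUM_EPOCHS = 10

for epoch in range(NUM_EPOCHS):
    # Re-creating the reader each epoch re-randomizes the row-group read order.
    with DataLoader(
        make_batch_reader(
            DATASET_URL,
            num_epochs=1,
            shuffle_row_groups=True,        # randomize the order in which row groups are read
            shuffle_row_drop_partitions=2,  # split each row group so its rows are not read back to back
            workers_count=4,
        ),
        batch_size=1024,
        shuffling_queue_capacity=100_000,   # row-level shuffle buffer inside the loader
    ) as train_loader:
        for batch in train_loader:
            # batch is a dict mapping column names to torch tensors
            ...  # skip-gram forward/backward pass goes here
```

Even with all of that, the shuffle is only approximate (shuffled row groups plus a bounded in-memory buffer), so it still isn't the full-dataset shuffle a CSV-backed PyTorch DataLoader gives you.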