🐛 Bug
I realise no one is maintaining this anymore, but I'm writing this up for anyone who might come across a similar issue, since it was hard to debug:
With the default binarized dataset type in `fairseq-preprocess` (mmap), it is possible to get integer overflow errors when processing big datasets. The key snippet of code is in `fairseq/data/indexed_dataset.py`:
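Roughly, the pointer computation there looks like the sketch below (a from-memory paraphrase, not an exact copy of the fairseq source; the real method lives on `MMapIndexedDataset.Index` and derives `dtype_size` from the dataset's dtype):

```python
def _get_pointers(sizes, dtype_size):
    # Sketch of the mmap index writer's pointer computation:
    # pointers[i] is the byte offset of element i in the data file.
    address = 0
    pointers = []
    for size in sizes:
        pointers.append(address)
        # If `size` is np.int32, `address` is promoted to np.int32 here
        # and can silently wrap around on large datasets.
        address += size * dtype_size
    return pointers
```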
For some reason, when using multiple workers it is possible for some of the values in `sizes` to be `np.int32` rather than `int`. I have not worked out why this is. However, for large enough datasets this can lead to integer overflow: adding an `np.int32` to a Python `int` yields an `np.int32`, so `address` is silently promoted to `np.int32` and wraps around once the running byte offset exceeds 2**31 - 1.
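A minimal reproduction of the promotion (the sizes are made up; anything whose running byte total passes 2**31 - 1 will do):

```python
import numpy as np

dtype_size = 2  # bytes per token, e.g. np.int16
sizes = [np.int32(600_000_000)] * 3  # np.int32 values, as returned by a worker

address = 0  # starts as a plain Python int
for size in sizes:
    # int + np.int32 -> np.int32: `address` is silently promoted,
    # then wraps (numpy emits a RuntimeWarning) past 2**31 - 1.
    address += size * dtype_size

print(type(address), address)  # <class 'numpy.int32'> -694967296, not 3600000000
```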
The fix is just to change the accumulation from `address += size * dtype_size` to:

```python
address += int(size * dtype_size)
```

so that `address` stays a plain Python `int` (arbitrary precision) instead of being promoted to `np.int32`.
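With the cast in place, the running total keeps Python's arbitrary precision (same made-up sizes as above):

```python
import numpy as np

dtype_size = 2
sizes = [np.int32(600_000_000)] * 3

address = 0
for size in sizes:
    # int(...) converts the np.int32 product back to a Python int before
    # accumulating. (The per-element product must itself still fit in
    # int32, which it does for realistic sentence lengths.)
    address += int(size * dtype_size)

print(type(address), address)  # <class 'int'> 3600000000
```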