facebookresearch / fairseq

Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
MIT License

Overflow issue with Fairseq Preprocess for large datasets #5532

Open henrycharlesworth opened 3 months ago

henrycharlesworth commented 3 months ago

🐛 Bug

I realise no one is maintaining this anymore, but I'm posting this for anyone who runs into a similar issue, which was hard to debug:

With the default binarized dataset type in fairseq-preprocess (mmap), it is possible to hit integer overflow errors when processing large datasets. The key snippet of code is in fairseq/data/indexed_dataset.py:

@staticmethod
def _get_pointers(sizes):
    # `dtype` is captured from the enclosing scope (the dataset's element dtype)
    dtype_size = dtype().itemsize
    address = 0
    pointers = []

    for size in sizes:
        pointers.append(address)
        address += size * dtype_size  # overflows if `size` is np.int32

    return pointers

For some reason, when using multiple workers, some of the values in sizes can be np.int32 rather than Python int; I have not worked out why. For large enough datasets this leads to integer overflow: once address absorbs an np.int32, the running sum is accumulated in 32 bits and silently wraps past 2**31 - 1.

The fix is just to change the accumulation line to:

address += int(size * dtype_size)
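Putting the cast into the loop above keeps address a Python int (which has arbitrary precision), so the pointers stay correct past the int32 limit. A self-contained sketch of the patched helper, with dtype_size passed in explicitly for the sake of the example:

```python
import numpy as np

def _get_pointers(sizes, dtype_size=2):
    # patched version: casting each increment to a Python int prevents
    # `address` from ever becoming a NumPy int32 scalar
    address = 0
    pointers = []
    for size in sizes:
        pointers.append(address)
        address += int(size * dtype_size)
    return pointers

# same pathological input as before: np.int32 sizes totalling ~4 GB
sizes = [np.int32(100_000)] * 20_000
pointers = _get_pointers(sizes)
print(pointers[-1])  # prints 3999800000, well past the int32 limit
```

Note that each individual size * dtype_size product must still fit in int32 for this cast to be safe, which holds here because per-sentence sizes are small; only the running total overflows.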