libffcv / ffcv

FFCV: Fast Forward Computer Vision (and other ML workloads!)
https://ffcv.io
Apache License 2.0

Performance worse with ffcv #262

Open samuelstevens opened 1 year ago

samuelstevens commented 1 year ago

I am using ffcv loaders with Hugging Face's accelerate for single-node multi-GPU training on 8x A100 GPUs with 16 GB of memory each.

When using ffcv, my training loop is slower, I have to use a lower batch size (48, down from what I could fit without ffcv), and I also see what looks like a memory leak (CUDA OOM after two epochs).

Writing .beton

import numpy as np

import ffcv


class NumpyLabels:
    """Wraps a dataset so labels come back as compact int16 numpy arrays."""

    def __init__(self, dataset):
        self.dataset = dataset

    def __len__(self):
        return len(self.dataset)

    def __getitem__(self, i):
        img, labels = self.dataset[i]
        # Convert the torch label tensor to numpy for the DatasetWriter.
        return (img, labels.numpy().astype(np.int16))

def main():
    dataset = NumpyLabels(HierarchicalImageFolder("/mnt/10tb/data/train"))

    writer = ffcv.writer.DatasetWriter(
        "/mnt/10tb/data/train.beton",
        {
            "image": ffcv.fields.RGBImageField(max_resolution=192),
            "label": ffcv.fields.NDArrayField(
                # int16 is fine for predicting up to 10000 classes
                # 2 ^ 16 = 65,536
                shape=(7,),
                dtype=np.dtype("int16"),
            ),
        },
        num_workers=32,
    )

    writer.from_indexed_dataset(dataset, chunksize=1000)


if __name__ == "__main__":
    main()
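A quick sanity check (not part of the original script) on why int16 suffices here: the signed range tops out at 32,767, which comfortably covers class indices up to 10,000, and labels round-trip through the narrower dtype without loss.

```python
import numpy as np

# int16 is signed: the usable range is -32768..32767, so up to 32,767
# class indices fit (not 65,536 -- that would be the unsigned 2**16 range).
info = np.iinfo(np.int16)
print(info.min, info.max)  # -32768 32767

# A label vector with class indices below 10,000 round-trips safely.
labels = np.array([0, 9999, 123, 4567, 42, 7], dtype=np.int64)
assert (labels <= info.max).all()
assert np.array_equal(labels.astype(np.int16).astype(np.int64), labels)
```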

Dataloader

def _dataloader(self, accelerator):
    return ffcv.loader.Loader(
        "/mnt/10tb/data/train.beton",
        batch_size=48,
        num_workers=16,
        order=ffcv.loader.OrderOption.RANDOM,
        os_cache=True,
        distributed=True,
        drop_last=False,
        pipelines={
            "image": [
                ffcv.fields.decoders.SimpleRGBImageDecoder(),
                ffcv.transforms.ToTensor(),
                ffcv.transforms.ToDevice(accelerator.device),
                ffcv.transforms.ToTorchImage(),
                ffcv.transforms.NormalizeImage(mean.numpy(), std.numpy(), np.float32),
            ],
            "label": [
                ffcv.fields.decoders.NDArrayDecoder(),
                ffcv.transforms.ToTensor(),
                ffcv.transforms.Convert(torch.int64),
                ffcv.transforms.ToDevice(accelerator.device),
            ],
        },
    )

My dataset is about 2.7M 192x192 images.
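For scale, a back-of-the-envelope step count per epoch (assuming the batch size of 48 is per GPU and shards evenly across the 8 GPUs, with `drop_last=False`):

```python
import math

num_images = 2_700_000   # ~2.7M images in the dataset
per_gpu_batch = 48       # batch_size passed to the Loader
num_gpus = 8

global_batch = per_gpu_batch * num_gpus
steps_per_epoch = math.ceil(num_images / global_batch)
print(global_batch, steps_per_epoch)  # 384 7032
```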

My GPU utilization fluctuates between 75% and 100%, and memory usage is only about 75% on GPUs 1-7 because it is much higher on GPU 0 (so I cannot increase the batch size).
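One common cause of lopsided GPU 0 memory in multi-process training (a hedged guess, not something confirmed by this issue) is every rank creating its CUDA context on `cuda:0`. A minimal sketch, assuming the job is launched with torchrun or `accelerate launch`, which set `LOCAL_RANK` for each process:

```python
import os

# Each process reads its local rank from the environment set by the
# launcher, then pins itself to that GPU before building the Loader.
local_rank = int(os.environ.get("LOCAL_RANK", "0"))
device_str = f"cuda:{local_rank}"
print(device_str)

# In the training script one would then call (requires torch):
#   torch.cuda.set_device(local_rank)
# and pass torch.device(device_str) to ffcv.transforms.ToDevice(...)
# if accelerator.device turns out to resolve to cuda:0 on every rank.
```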

[Screenshot, 2022-10-13: per-GPU utilization and memory usage]

Do you have any advice on how to improve performance?