sparkfish / augraphy

Augmentation pipeline for rendering synthetic paper printing, faxing, scanning and copy machine processes
https://github.com/sparkfish/augraphy
MIT License

Memory leak in AugmentationSequence #416

Closed · bnawras closed this 5 months ago

bnawras commented 9 months ago

There seems to be a memory leak in augraphy.AugmentationSequence.

[screenshot: memory usage]

My environment

Dockerfile

FROM nvidia/cuda:11.1.1-runtime-ubuntu20.04

ENV DEBIAN_FRONTEND=noninteractive
RUN apt-get update
RUN apt-get upgrade -y
RUN apt-get install -y \
        build-essential git python3 python3-pip \
        ffmpeg libsm6 libxext6 libxrender1 libglib2.0-0

WORKDIR /app
COPY requirements.txt .
RUN pip install --ignore-installed -r requirements.txt

CMD jupyter lab --ip 0.0.0.0 --port 1110 --allow-root

Requirements

torch==1.13.1
torchvision==0.14.1
albumentations==1.3.0
augraphy==8.2.4
opencv-python==4.8.1.78

Code to reproduce the leak


import augraphy as agr
import albumentations as A
from albumentations.pytorch import ToTensorV2
from torch.utils import data

num_workers = 8
n_epoch = 10
batch_size = 4

ink_phase = []
post_phase = []
paper_phase = [
    agr.AugmentationSequence(
        [
            agr.NoiseTexturize(
                sigma_range=(2, 3),
                turbulence_range=(2, 3),
                p=1
            ),
            agr.BrightnessTexturize(
                texturize_range=(0.999, 0.9999),
                deviation=0.01,
                p=1
            ),
        ],
        p=0.1),
]

pipeline = agr.AugraphyPipeline(ink_phase, paper_phase, post_phase)

def paperfy(image, **params):
    return pipeline(image)

synth_augmentations = A.Compose([
    A.RandomCrop(width=900, height=900),
    A.Lambda(image=paperfy, p=0.8),
    ToTensorV2()
])

train_set = ...  # any dataset that applies synth_augmentations
trainloader = data.DataLoader(train_set, batch_size=batch_size, num_workers=num_workers,
                              shuffle=True, pin_memory=True, persistent_workers=True)

# iterate over the data without training; memory grows steadily
for epoch in range(n_epoch):
    for i, samples in enumerate(trainloader):
        pass

:fire: Workaround: either drop AugmentationSequence, or set persistent_workers=False in the DataLoader, as in the sketch below.
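A minimal sketch of the second workaround, reusing the loader settings from the repro above (persistent_workers is the standard torch.utils.data.DataLoader flag):

# non-persistent workers are torn down after every epoch, so any
# state accumulated inside a worker process is released with it
trainloader = data.DataLoader(
    train_set,
    batch_size=batch_size,
    num_workers=num_workers,
    shuffle=True,
    pin_memory=True,
    persistent_workers=False,
)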

kwcckw commented 9 months ago

I can see a similar issue here: https://github.com/pytorch/pytorch/issues/62066

Could you cross-check again and see whether it is really related to AugmentationSequence?
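One way to cross-check (a sketch assuming the psutil package; the loop reuses the names from the repro above): log the total resident set size of the training process and its DataLoader workers after each epoch, once with AugmentationSequence in the pipeline and once without, and compare the growth.

import os
import psutil

process = psutil.Process(os.getpid())

for epoch in range(n_epoch):
    for i, samples in enumerate(trainloader):
        pass
    # sum the RSS of the main process and its DataLoader workers,
    # since with persistent workers the state lives in the children
    rss = process.memory_info().rss + sum(
        child.memory_info().rss
        for child in process.children(recursive=True)
    )
    print(f"epoch {epoch}: rss={rss / 1024 ** 2:.1f} MB")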

bnawras commented 9 months ago

@kwcckw I have not reviewed the implementation details of AugmentationSequence, but the leak is directly or indirectly related to this module. Here is what happens when we run the same code without it:

[screenshot: memory usage]
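For reference, the variant without AugmentationSequence looks like this (a sketch; carrying the sequence's p=0.1 down to each augmenter is my assumption, not necessarily the exact test above):

# same augmentations, placed in the paper phase directly
# instead of being wrapped in AugmentationSequence
paper_phase = [
    agr.NoiseTexturize(sigma_range=(2, 3), turbulence_range=(2, 3), p=0.1),
    agr.BrightnessTexturize(texturize_range=(0.999, 0.9999), deviation=0.01, p=0.1),
]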

kwcckw commented 9 months ago

Thanks, I was able to reproduce the problem and fix it with this workaround on my end:

https://github.com/sparkfish/augraphy/pull/418

Could you reinstall it from the repo and try again?
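If it helps, one common way to reinstall directly from the repository with pip (assuming the fix is on the default branch):

pip install --upgrade --force-reinstall git+https://github.com/sparkfish/augraphy.git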

bnawras commented 5 months ago

fixed!