I believe this bug https://github.com/tensorflow/tensorflow/issues/33516 is related.
In dataset_builder.py I changed
dataset.map( ... , tf.data.experimental.AUTOTUNE)
to
dataset.map( ... , num_parallel_calls)
and the memory leak seems to be fixed.
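
For reference, a minimal sketch of that change (the surrounding function and the num_parallel_calls default are my own illustration, not the exact dataset_builder.py code; in the Object Detection API this value comes from the input reader config):

    import tensorflow as tf

    def build_dataset(dataset, decode_fn, num_parallel_calls=4):
        # Before (leaked memory, see tensorflow/tensorflow#33516):
        # dataset = dataset.map(decode_fn,
        #                       num_parallel_calls=tf.data.experimental.AUTOTUNE)

        # After: a fixed, explicit degree of parallelism instead of AUTOTUNE.
        dataset = dataset.map(decode_fn, num_parallel_calls=num_parallel_calls)
        return dataset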
Out Of Memory when training on Big Images
System Information
Describe the Problem
I have successfully run the pets tutorial on this Google Compute Engine instance. When I train a faster_rcnn_resnet101 on my dataset (VOC format, 47 classes, image size: 1000/2000) with:
I get the following error at the beginning of training:
I managed to avoid the OOM on this dataset by resizing all the images and annotation files (dividing the dimensions by 4).
I didn't modify the config file (only the number of classes and the paths), so the images should already be resized to 600/1024 by the image resizer, and the bug should not occur with the big images.
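
For context, a small sketch of the keep-aspect-ratio resizing arithmetic (assuming the stock faster_rcnn image_resizer with min_dimension 600 and max_dimension 1024; the helper name is mine):

    def keep_aspect_ratio_resize(height, width, min_dim=600, max_dim=1024):
        # Scale so the short side reaches min_dim, unless the long side
        # would then exceed max_dim, in which case cap the long side.
        scale = min_dim / min(height, width)
        if scale * max(height, width) > max_dim:
            scale = max_dim / max(height, width)
        return round(height * scale), round(width * scale)

    # A 1000x2000 image from this dataset enters the network at 512x1024:
    print(keep_aspect_ratio_resize(1000, 2000))  # (512, 1024)

So the resized model input itself is not oversized, which suggests the memory pressure comes from decoding and prefetching the full-resolution images before the resizer runs.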
Is there a way to train on my images without having to shrink them? Are there any parameters I can tune to avoid this problem?