NVIDIA / DeepLearningExamples

State-of-the-Art Deep Learning scripts organized by models - easy to train and deploy with reproducible accuracy and performance on enterprise-grade infrastructure.

[BraTS 2021/PyTorch] Model not properly training #1304

Open DanielNajarian opened 1 year ago

DanielNajarian commented 1 year ago

When running the training section of the BraTS 2021 notebook (located at PyTorch/Segmentation/nnUNet/notebooks/BraTS21.ipynb), the model is not actually training even though it steps through the epochs, as seen in the image below. The Dice score is stuck at an extremely low value, and neither it nor the loss changes at all over the epochs. The warning "DALI iterator does not support resetting while epoch is not finished" also appears on every epoch, although I have not touched anything related to it.

[screenshot: training log showing Dice and loss unchanged across epochs]

To Reproduce

Steps to reproduce the behavior:

  1. Clone the DeepLearningExamples repo and install the dependencies
  2. Download the BraTS 2021 dataset
  3. Change the paths in the BraTS 2021 notebook to point to your file locations
  4. Run all of the steps up to and including the training stage

Expected behavior

I expected the model to train and reach a Dice score of at least 70 after 5 epochs.

Environment

Please provide at least:

michal2409 commented 1 year ago

Are you running the notebook inside a Docker container? This looks like a dependency issue (running the notebook with different dependency versions). Please see https://ploomber.io/blog/notebook-to-docker/ for a reference on how to run a Jupyter notebook inside a container.
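For reference, a minimal launch of the NGC container with the repo mounted might look like the sketch below. The mount path, port, and Jupyter flags are assumptions to adapt; the image tag matches the 22.11 container discussed later in this thread.

```shell
# Sketch: run the NGC PyTorch 22.11 container with the repo mounted,
# then start Jupyter inside it. Adjust the volume path to your clone.
docker run --gpus all -it --rm \
    -p 8888:8888 \
    -v "$(pwd)/DeepLearningExamples:/workspace/DeepLearningExamples" \
    nvcr.io/nvidia/pytorch:22.11-py3 \
    jupyter notebook --ip=0.0.0.0 --port=8888 --allow-root
```

Running inside the container avoids the version drift that comes from rebuilding the environment by hand.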

DanielNajarian commented 1 year ago

I'm running it through the command line and built the environment based on their requirements files.

michal2409 commented 1 year ago

What versions of PyTorch and NVIDIA DALI are you using?

DanielNajarian commented 1 year ago

I am using torch 1.13.1+cu116 and nvidia-dali-cuda110 1.26.0. Looking at it now, DALI should be the cuda116 build, correct? But there doesn't seem to be a cuda116 version of it.
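For anyone comparing their setup against the versions above, the installed distributions can be listed without importing the heavy packages themselves. This is a small sketch; the package names are the ones mentioned in this thread, so adjust them to whichever DALI build you installed.

```python
from importlib import metadata


def pkg_version(name: str):
    """Return the installed version of a distribution, or None if it is absent."""
    try:
        return metadata.version(name)
    except metadata.PackageNotFoundError:
        return None


# Package names taken from this thread; only installed ones will report a version.
for pkg in ("torch", "nvidia-dali-cuda110", "nvidia-dali-cuda116"):
    print(f"{pkg}: {pkg_version(pkg)}")
```

Querying metadata this way also works when torch fails to import (e.g. a CUDA mismatch), which is exactly the situation being debugged here.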

michal2409 commented 1 year ago

The 22.11 container ships with DALI 1.18.0 (see here). Did you manually reinstall it?

DanielNajarian commented 1 year ago

I had to manually reinstall a few packages since the torch and torchvision CUDA versions weren't aligned, and I had trouble getting CUDA 11.7 to work for both, so I went down to 11.6 and changed some things as a result.

Should I be focusing on the 22.02 container instead, since it lines up with CUDA 11.6, which matches my torch version? That would be DALI 1.10.

michal2409 commented 1 year ago

You can experiment with different versions. I would start with DALI 1.18.0 (or simply not reinstalling it inside the container).
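If reinstalling DALI outside the container anyway, its wheels come from NVIDIA's own package index rather than PyPI. A pin for the version suggested above might look like this (the cuda110 package name follows the earlier messages; verify it against your CUDA setup):

```shell
# Sketch: pin DALI 1.18.0 from NVIDIA's wheel index.
pip install --extra-index-url https://developer.download.nvidia.com/compute/redist \
    nvidia-dali-cuda110==1.18.0
```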

What error log did you get when running the container without any modifications?

Luffy03 commented 1 year ago

Hi, have you figured it out?