Reagan1311 / DABNet

Depth-wise Asymmetric Bottleneck for Real-time Semantic Segmentation (BMVC2019)
https://github.com/Reagan1311/DABNet
MIT License

Train on CamVid #39

Closed ydhongHIT closed 2 years ago

ydhongHIT commented 4 years ago

Did you test this code with CamVid? There are some bugs.

ydhongHIT commented 4 years ago

Your labels do not range from 0 to num_classes.
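This kind of problem is usually fixed by remapping the raw CamVid annotation IDs into a contiguous range `[0, num_classes)`, sending everything else to an ignore index. A minimal sketch (the `raw_to_train` table, `NUM_CLASSES`, and `IGNORE_LABEL` values here are illustrative assumptions, not the repo's actual mapping):

```python
import numpy as np

# Assumed values for illustration; the real mapping depends on the
# CamVid annotation set actually used by this repo.
NUM_CLASSES = 11
IGNORE_LABEL = 255

# Hypothetical raw-ID -> train-ID table (identity here for simplicity).
raw_to_train = {i: i for i in range(NUM_CLASSES)}

def remap_labels(label_img: np.ndarray) -> np.ndarray:
    """Map raw label IDs into [0, NUM_CLASSES); unknown IDs become IGNORE_LABEL."""
    # Build a 256-entry lookup table so the remap is a single vectorized index.
    lut = np.full(256, IGNORE_LABEL, dtype=np.uint8)
    for raw_id, train_id in raw_to_train.items():
        lut[raw_id] = train_id
    return lut[label_img]
```

With this, any stray ID (e.g. a void/unlabeled pixel stored as some large value) ends up at the ignore index instead of tripping the loss kernel's class-range assertion.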

ydhongHIT commented 4 years ago

```
/home/ydhong/.conda/envs/detectron/lib/python3.6/site-packages/torch/optim/lr_scheduler.py:100: UserWarning: Detected call of `lr_scheduler.step()` before `optimizer.step()`. In PyTorch 1.1.0 and later, you should call them in the opposite order: `optimizer.step()` before `lr_scheduler.step()`. Failure to do this will result in PyTorch skipping the first value of the learning rate schedule. See more details at https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
  "https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate", UserWarning)
/opt/conda/conda-bld/pytorch_1573049301898/work/aten/src/THCUNN/SpatialClassNLLCriterion.cu:104: void cunn_SpatialClassNLLCriterion_updateOutput_kernel(T *, T *, T *, long *, T *, int, int, int, int, int, long) [with T = float, AccumT = float]: block: [32,0,0], thread: [544,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
  File "train.py", line 317, in <module>
    train_model(args)
  File "train.py", line 220, in train_model
    lossTr, lr = train(args, trainLoader, model, criteria, optimizer, epoch)
  File "train.py", line 87, in train
    loss.backward()
  File "/home/ydhong/.conda/envs/detectron/lib/python3.6/site-packages/torch/tensor.py", line 166, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/ydhong/.conda/envs/detectron/lib/python3.6/site-packages/torch/autograd/__init__.py", line 99, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED
```
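The device-side assertion `t >= 0 && t < n_classes` in `SpatialClassNLLCriterion.cu` is the real error here; the trailing cuDNN message is just fallout from the aborted CUDA context. A cheap way to catch this on the CPU, before the loss ever runs on the GPU, is to validate the label values at load time. A hedged sketch (the `num_classes=11` and `ignore_label=255` defaults are assumptions, not values taken from this repo):

```python
import numpy as np

def check_label_range(label_img, num_classes=11, ignore_label=255):
    """Fail fast with a readable message if any label ID is outside
    [0, num_classes) and is not the ignore index."""
    vals = np.unique(np.asarray(label_img))
    bad = vals[(vals >= num_classes) & (vals != ignore_label)]
    assert bad.size == 0, f"out-of-range label IDs found: {bad.tolist()}"
```

Calling this on each ground-truth map in the dataset's `__getitem__` (or once over the whole dataset before training) turns the opaque CUDA abort into an immediate assertion naming the offending IDs.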