Closed: VeeranjaneyuluToka closed this issue 6 months ago.
Prerequisite
Task
I have modified the scripts/configs, or I'm working on my own tasks/models/datasets.
Branch
main branch https://github.com/open-mmlab/mmdetection3d
Environment
sys.platform: linux
Python: 3.7.10 (default, Feb 26 2021, 18:47:35) [GCC 7.3.0]
CUDA available: True
MUSA available: False
numpy_random_seed: 2147483648
GPU 0: NVIDIA GeForce RTX 3090 Ti
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 11.1, V11.1.105
GCC: gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
PyTorch: 1.9.0
PyTorch compiling details: PyTorch built with:
TorchVision: 0.10.0
OpenCV: 4.9.0
MMEngine: 0.10.3
MMDetection: 3.3.0
MMDetection3D: 1.4.0+
spconv2.0: False
Reproduces the problem - code sample
I have been trying to train a model on my own custom dataset by adapting it to the NuScenes dataset format; a rough sketch of the setup follows.
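The dataset is wired in through a standard NuScenes-style config. The sketch below is only illustrative; the data root, info file name, class list, and batch settings are placeholders, not my actual setup:

```python
# Placeholder sketch of the dataset wiring (paths, classes, and file names
# below are illustrative, not my real ones).
dataset_type = 'NuScenesDataset'
data_root = 'data/my_custom_set/'             # hypothetical data root
class_names = ['car', 'truck', 'pedestrian']  # hypothetical class list

train_pipeline = [
    dict(type='LoadPointsFromFile', coord_type='LIDAR', load_dim=5, use_dim=5),
    dict(type='LoadAnnotations3D', with_bbox_3d=True, with_label_3d=True),
    dict(type='Pack3DDetInputs', keys=['points', 'gt_bboxes_3d', 'gt_labels_3d'])
]

train_dataloader = dict(
    batch_size=4,
    num_workers=4,
    dataset=dict(
        type=dataset_type,
        data_root=data_root,
        ann_file='my_custom_infos_train.pkl',  # infos .pkl generated in NuScenes format
        pipeline=train_pipeline,
        metainfo=dict(classes=class_names),
        test_mode=False))
```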
Reproduces the problem - command or script
I am able to train the model, but the loss goes down until epoch 20 and then starts increasing, as shown below.
Reproduces the problem - error message
There are no error messages; the only symptom is the loss decreasing and then increasing.
Additional information
The loss should keep decreasing, or saturate at some point.

Hi All, I was able to figure out the reason for this myself: the LR and momentum schedulers need to be updated to match the new number of epochs. Thanks!
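For reference, here is a minimal sketch of the kind of change I mean, modeled on MMDetection3D's stock cyclic 20-epoch schedule (configs/_base_/schedules/cyclic-20e.py). The 40-epoch target and the concrete values below are placeholders, not my exact settings:

```python
# Minimal sketch of the fix (values are placeholders, not my exact config).
# The stock cyclic 20-epoch schedule hard-codes its phase boundaries, so when
# max_epochs changes, every begin/end/T_max must be rescaled to cover the new
# training range; otherwise the schedule stops applying at epoch 20.
max_epochs = 40  # hypothetical new epoch count
lr = 1e-4        # hypothetical base learning rate

train_cfg = dict(by_epoch=True, max_epochs=max_epochs, val_interval=2)

param_scheduler = [
    # LR phase 1: ramp the LR up over the first 40% of training.
    dict(type='CosineAnnealingLR', T_max=16, eta_min=lr * 10,
         begin=0, end=16, by_epoch=True, convert_to_iter_based=True),
    # LR phase 2: anneal the LR down over the remaining epochs.
    dict(type='CosineAnnealingLR', T_max=24, eta_min=lr * 1e-4,
         begin=16, end=max_epochs, by_epoch=True, convert_to_iter_based=True),
    # Momentum mirrors the same two phases.
    dict(type='CosineAnnealingMomentum', T_max=16, eta_min=0.85 / 0.95,
         begin=0, end=16, by_epoch=True, convert_to_iter_based=True),
    dict(type='CosineAnnealingMomentum', T_max=24, eta_min=1,
         begin=16, end=max_epochs, by_epoch=True, convert_to_iter_based=True),
]
```

The key point is that begin, end, and T_max are absolute epoch numbers, so they silently stop covering the run once max_epochs grows past the old schedule's end.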