Closed Supriob9 closed 5 months ago
Hi @Supriob9,

Sorry for the delay. A few suggestions:

- Experiment with different learning rates. Instead of setting it directly to 0.08, try a range of values and monitor the impact on loss.
- Increase the variety and intensity of data augmentation. This can help the model generalize better and reduce overfitting, which may in turn reduce the regularization loss.
- Try different base feature extractors or backbone architectures, such as variations of MobileNet or EfficientNet, to see if they improve performance.
- Adjust the feature extractor's hyperparameters, such as `depth_multiplier` and `min_depth`, to find optimal settings for your dataset.
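For reference, these knobs all live in the `pipeline.config` file. Below is a sketch of the relevant sections; the field names follow the Object Detection API protos, but the specific values shown are illustrative assumptions you would tune for your own dataset, not recommendations:

```
# Illustrative pipeline.config fragments -- values are placeholders to tune.

model {
  ssd {
    feature_extractor {
      type: "ssd_mobilenet_v2_keras"
      depth_multiplier: 1.0   # e.g. try 0.75 to shrink the backbone
      min_depth: 16
    }
  }
}

train_config {
  batch_size: 24
  optimizer {
    momentum_optimizer {
      learning_rate {
        cosine_decay_learning_rate {
          learning_rate_base: 0.01   # sweep a range instead of fixing 0.08
          total_steps: 50000
          warmup_learning_rate: 0.0013
          warmup_steps: 1000
        }
      }
    }
  }
  # Add more/stronger augmentation options to help generalization:
  data_augmentation_options { random_horizontal_flip {} }
  data_augmentation_options { ssd_random_crop {} }
  data_augmentation_options { random_adjust_brightness {} }
}
```

Changing one of these at a time (learning-rate schedule first, then augmentation, then backbone settings) makes it easier to attribute any drop in `Loss/regularization_loss` or `Loss/classification_loss` to a specific change.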
Thank you!
This issue is stale because it has been open for 7 days with no activity. It will be closed if no further activity occurs. Thank you.
This issue was closed because it has been inactive for 7 days since being marked as stale. Please reopen if you'd like to work on this further.
Issue type
Performance
Have you reproduced the bug with TensorFlow Nightly?
No
Source
source
TensorFlow version
2.10
Custom code
Yes
OS platform and distribution
Windows 10
Mobile device
No response
Python version
3.9
Bazel version
No response
GCC/compiler version
No response
CUDA/cuDNN version
11.8, 8.1
GPU model and memory
NVIDIA Quadro P3200
Current behavior?
"Loss/regularization_loss' is high during the training of a object detection model with "ssd_mobilenet_v2_320x320_coco17_tpu-8". How can I reduce the loss. Specially the regularization loss and classification_loss are high. I have tried reducing the learning rate to .08 and increasing the batch size to 24. I have added my configurations here.
Standalone code to reproduce the issue
Relevant log output