Here is the content of my config.yaml if you want to have a look:

```yaml
augment2d:
  gridmask:
    fixed_prob: true
    prob: 0.0
  resize:
```
Hello, have you solved this problem? I have encountered the same issue.
Honestly speaking, I still don't know the cause of this problem. I reinstalled the conda environment following this blog and ran the model again on nuscenes-mini. The resulting mAP was around 0.47 and stayed stable, so I think this result may be normal. You could try reinstalling your conda environment and running the model again. If possible, please let me know your new result so we can check whether our results are normal. Thanks in advance.
Problem description
I tried to train the model on the nuscenes-mini dataset and found the evaluation results quite low (mAP around 0.45). Another odd thing was that the loss declined gradually, but the object mAP stayed at roughly 0.45 from the very beginning to the end, and it even drifted downward as training went on. By the way, I have checked issue #463 and set sweeps_num to 9; the result above was obtained after that modification. Has anyone else trained on nuscenes-mini, and what was your result? And why does the mAP stay at the same level throughout training? Many thanks for your help!
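For reference, the sweeps_num change from #463 is made in the multi-sweep point-loading step of the data pipeline. Below is a minimal sketch of that fragment; the exact keys and surrounding structure are assumptions based on the standard mmdet3d-style config and may differ slightly in your config version:

```yaml
# Sketch of the pipeline entry where sweeps_num is set (keys assumed; adjust to your config).
train_pipeline:
  - type: LoadPointsFromMultiSweeps
    sweeps_num: 9      # reduced for nuscenes-mini as suggested in issue #463
    load_dim: 5
    use_dim: 5
```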
To replicate the problem
I use the nuscenes-mini dataset, set max_epochs to 15, set both samples_per_gpu and workers_per_gpu to 2, and launch training with the official detection command:

```bash
torchpack dist-run -np 2 python tools/train.py \
  configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml \
  --model.encoders.camera.backbone.init_cfg.checkpoint pretrained/swint-nuimages-pretrained.pth \
  --load_from pretrained/lidar-only-det.pth
```
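To double-check the reported mAP independently of the training log, the trained checkpoint can also be evaluated with tools/test.py. A sketch of such a command, where the checkpoint path is a placeholder for wherever your run saved it:

```bash
# Hypothetical checkpoint path: replace with the actual path from your run directory.
torchpack dist-run -np 2 python tools/test.py \
  configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml \
  path/to/your/run/latest.pth \
  --eval bbox
```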
More details
I can also attach some visualization results and the complete config.yaml here if you want to see more details. The content of config.yaml can be found in the next comment of this issue.
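If it helps, visualizations like these can typically be produced with the repo's tools/visualize.py script. A sketch of such a command, assuming these flags exist in this version of the repo (paths and the score threshold are placeholders):

```bash
# Paths and threshold are placeholders; flags assumed from the repo's visualization script.
torchpack dist-run -np 1 python tools/visualize.py \
  configs/nuscenes/det/transfusion/secfpn/camera+lidar/swint_v0p075/convfuser.yaml \
  --mode pred \
  --checkpoint path/to/your/run/latest.pth \
  --bbox-score 0.1 \
  --out-dir vis/
```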