Open dmxj opened 5 years ago
If I understood correctly, there are three places where n_classes should be changed: out_dim in CornerNet.py, categories in CornerNet.json, and self._configs["categories"] in detection.py. Have you achieved any results?
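A quick way to sanity-check those three settings is to compare them directly; a mismatch between them is a common source of shape errors at training time. This is just an illustrative helper, not code from the repo:

```python
def check_class_config(json_categories: int, model_out_dim: int, db_categories: int) -> bool:
    """Return True only if all three n_classes settings agree:
    "categories" in config/CornerNet.json, out_dim in CornerNet.py,
    and self._configs["categories"] in detection.py."""
    return json_categories == model_out_dim == db_categories

# Example: a 4-class dataset where CornerNet.py was left at the COCO
# default of 80 -- exactly the kind of mismatch to look for.
print(check_class_config(4, 80, 4))  # -> False
print(check_class_config(4, 4, 4))   # -> True
```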
Hello, have you successfully used this network to train on your own dataset? I could not get a correct result; after training, all the boxes still looked like the default boxes.
Hello @qusongyun, I have the same problem; maybe my image preprocessing is wrong. The bboxes change a little from epoch to epoch, but within each epoch they look the same for all pictures.
Maybe my training (10 epochs) is too short (I have only 1 GPU with 12 GB), so it may not have converged yet. How many epochs did you train your dataset for?
I have trained 100 epochs with batch size 2; my GPU is 12 GB too. The training loss was decreasing, but the val and test losses were not, and the boxes looked the same for every picture within an epoch. I've tried to overfit the network but haven't achieved any results.
Maybe there is some code that needs to be changed, but we cannot figure out what.
@stasysp @qusongyun, I guess the training code is fine, but the val and test code need to change. Maybe you can visualize the results using https://github.com/princeton-vl/CornerNet/pull/60/commits to find the error in your test code, then fix it.
@qusongyun @liben2018 thank you =) My training has achieved the same mAP as SSD, but val has not. Maybe there is a problem in the _decode function, because I cannot see the bboxes correctly.
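For anyone debugging _decode: the core idea it implements is pairing top-left and bottom-right corner detections whose embeddings are close and which form a geometrically valid box. Here is a toy, framework-free sketch of that pairing logic (not the repo's actual _decode; all names and thresholds here are illustrative), which can help check whether your corner heatmaps and embeddings even admit sensible pairs:

```python
def decode_corners(tl_scores, br_scores, tl_embed, br_embed,
                   top_k=2, embed_thresh=0.5):
    """Toy CornerNet-style decoding for a single class.

    tl_scores / br_scores: H x W lists of corner heatmap scores.
    tl_embed / br_embed:   H x W lists of 1-D corner embeddings.
    Returns (y1, x1, y2, x2, score) boxes where a top-left and a
    bottom-right corner have similar embeddings and form a valid box.
    """
    def topk(scores, k):
        # Flatten the map and keep the k highest-scoring cells.
        cells = [(s, y, x) for y, row in enumerate(scores)
                           for x, s in enumerate(row)]
        cells.sort(reverse=True)
        return [(y, x, s) for s, y, x in cells[:k]]

    boxes = []
    for ty, tx, ts in topk(tl_scores, top_k):
        for by, bx, bs in topk(br_scores, top_k):
            if by <= ty or bx <= tx:
                continue  # bottom-right must lie below and right of top-left
            if abs(tl_embed[ty][tx] - br_embed[by][bx]) > embed_thresh:
                continue  # embeddings must match for the corners to pair
            boxes.append((ty, tx, by, bx, (ts + bs) / 2))
    return boxes

# Synthetic check: one strong top-left peak at (0, 0), one strong
# bottom-right peak at (3, 3), identical (zero) embeddings.
tl = [[0.0] * 4 for _ in range(4)]; tl[0][0] = 0.9
br = [[0.0] * 4 for _ in range(4)]; br[3][3] = 0.8
zeros = [[0.0] * 4 for _ in range(4)]
boxes = decode_corners(tl, br, zeros, zeros, top_k=1)
print(len(boxes))  # -> 1
```

If a sketch like this produces a sensible box from your raw network outputs but the repo's _decode does not, the bug is likely in the decoding/test path rather than in training, which matches what @liben2018 suggested above.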
I trained CornerNet with 8 GPUs (12 GB each) using the source code. After 260K iterations, it only reached 24.6 mAP.
@moothes do the bboxes look OK? My bboxes look similar for all inputs.
I am trying to train on my own dataset, which has 4 categories in COCO format. I changed "db: categories" in config/CornerNet.json from 80 to 4, but it raises an error when I launch training. The whole training log is:
What should I do?
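Without the log it's hard to be sure, but one likely cause (consistent with the three places listed at the top of the thread) is loading COCO-pretrained weights whose class-dependent head tensors are sized for 80 classes into a model now configured for 4. A common workaround is to load only the checkpoint entries whose shapes still match the model. A minimal, framework-free sketch of that filtering, with illustrative key names:

```python
def loadable_keys(ckpt_shapes, model_shapes):
    """Return the checkpoint keys that are safe to load: present in the
    current model and shape-compatible. Heads sized for 80 COCO classes
    are dropped when the model now expects 4, so the backbone can still
    be initialized from the pretrained weights."""
    return {k for k, shape in ckpt_shapes.items()
            if model_shapes.get(k) == shape}

# Illustrative shapes: the backbone conv matches, the corner-heatmap
# head does not (80 output channels vs. 4).
ckpt  = {"backbone.conv1": (128, 3, 7, 7), "tl_heats.weight": (80, 256, 1, 1)}
model = {"backbone.conv1": (128, 3, 7, 7), "tl_heats.weight": (4, 256, 1, 1)}
print(loadable_keys(ckpt, model))  # -> {'backbone.conv1'}
```

With a real PyTorch checkpoint you would apply the same comparison to the two state dicts' tensor shapes before calling load_state_dict with strict=False; also double-check that out_dim in CornerNet.py was changed to 4 along with the JSON config.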