Are you sure the ground-truth points are properly created? If you look at your screenshot, the white dots in the window called "(Training) image w/ output heatmap and labeled points" are not on top of people's heads. Some of them are not even on top of people, so I'm not sure which objects you are trying to locate. Where did the green crosses in your first image come from?
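One quick way to verify this is to plot the ground-truth points directly onto a training image, independently of Visdom. Below is a minimal sketch, assuming the annotations live in a gt.csv with "filename" and "locations" columns, where "locations" is a Python-literal list of (y, x) points; the column names and format are assumptions, so adjust them to match your actual annotation file.

```python
# Minimal sanity check: overlay ground-truth points on a training image.
# Assumed layout: <dataset_dir>/gt.csv with "filename" and "locations" columns,
# where "locations" holds a Python-literal list of (y, x) points.
import ast
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib import image as mpimg

dataset_dir = "data/mall_small_dataset"          # hypothetical path
df = pd.read_csv(f"{dataset_dir}/gt.csv")

row = df.iloc[0]                                  # inspect the first image
img = mpimg.imread(f"{dataset_dir}/{row['filename']}")
points = ast.literal_eval(row["locations"])       # list of (y, x) tuples

plt.imshow(img)
ys, xs = zip(*points)
plt.scatter(xs, ys, s=10, c="lime", marker="+")   # crosses should sit on heads
plt.title(row["filename"])
plt.show()
```

If the crosses do not land on the heads here, the annotations themselves are off and the training targets will be wrong regardless of the network.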
Closing due to inactivity.
I am facing the same problem.
I checked my dataset by visualizing it:
But when I view it in Visdom, I get a similar result to this:
Do you apply any transformation to the annotations?
I have created a smaller dataset to check that the network trains. For faster training, the mall images were cropped to 1/2 of their width and height and only 500 images were fed into training (see the attached mall_small_dataset.zip, a ground-truth image, and a Visdom screenshot at 1200 epochs). The following command was used for training:
--- mall_small_dataset command for training:
python -m object-locator.train --train-dir "data/mall_small_dataset" --batch-size 16 --lr 1e-4 --val-dir "data/mall_small_dataset" --optim adam --val-freq 10 --save "data/mall_small_dataset-model.ckpt" --visdom-env mall_small_dataset_training --visdom-server http://localhost --visdom-port 8097
The training ran for 1200 epochs overnight, but the output still has not converged to the object locations. Could you give me some guidance on resolving the problem?
Thank you
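One thing worth double-checking when the images are cropped or downscaled to half size is that the ground-truth points are transformed by exactly the same operation; otherwise the points stay in the original coordinate frame and the network is trained on misplaced targets. The following is a minimal sketch for a half-size resize, again assuming the gt.csv layout with a "locations" column of (y, x) points and flat filenames (all of these are assumptions, not the repository's own tooling); for a true crop you would additionally subtract the crop offset and drop points that fall outside the window.

```python
# Minimal sketch: resize images to half size and scale the (y, x) annotations
# by the same factor. CSV layout ("filename", "locations") is assumed.
import ast
import os
import pandas as pd
from PIL import Image

src_dir, dst_dir = "data/mall_dataset", "data/mall_small_dataset"  # hypothetical paths
scale = 0.5
os.makedirs(dst_dir, exist_ok=True)

df = pd.read_csv(f"{src_dir}/gt.csv")
for i, row in df.iterrows():
    img = Image.open(f"{src_dir}/{row['filename']}")
    w, h = img.size
    img.resize((int(w * scale), int(h * scale))).save(f"{dst_dir}/{row['filename']}")

    # Scale every (y, x) point by the same factor as the image.
    points = ast.literal_eval(row["locations"])
    df.at[i, "locations"] = str([(y * scale, x * scale) for (y, x) in points])

df.to_csv(f"{dst_dir}/gt.csv", index=False)
```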
Hi, did you solve the problem?