javiribera / locating-objects-without-bboxes

PyTorch code for "Locating objects without bounding boxes" - Loss function and trained models

mall_small_dataset at 1200 epochs #19

Closed: jungscott closed this issue 4 years ago

jungscott commented 4 years ago

Attachments: mall_small_dataset.zip, seq_000001 (ground-truth image)

I have created a smaller dataset to see how the network trains. For faster training, the mall images were cropped to half their width and height, and only 500 images were used for training (see the attached dataset, a ground-truth image, and a Visdom screenshot at 1200 epochs). The following command was used for training:

```
python -m object-locator.train \
    --train-dir "data/mall_small_dataset" \
    --batch-size 16 \
    --lr 1e-4 \
    --val-dir "data/mall_small_dataset" \
    --optim adam \
    --val-freq 10 \
    --save "data/mall_small_dataset-model.ckpt" \
    --visdom-env mall_small_dataset_training \
    --visdom-server http://localhost \
    --visdom-port 8097
```
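Note that cropping the images also means cropping the ground-truth CSV: points that fall outside the crop have to be dropped. A minimal sketch of that step (it assumes the `filename,count,locations` CSV format described in this repo's README, with `locations` holding (y, x) tuples; the paths are hypothetical):

```python
import ast
import os

import pandas as pd
from PIL import Image

os.makedirs("data/mall_small_dataset", exist_ok=True)

df = pd.read_csv("data/mall_dataset/gt.csv")
rows = []
for _, row in df.head(500).iterrows():  # keep only the first 500 frames
    img = Image.open(f"data/mall_dataset/{row['filename']}")
    w, h = img.size
    # crop to the top-left quadrant: half the width, half the height
    img.crop((0, 0, w // 2, h // 2)).save(
        f"data/mall_small_dataset/{row['filename']}")
    # keep only the points that still fall inside the cropped image
    pts = [(y, x) for (y, x) in ast.literal_eval(row["locations"])
           if y < h // 2 and x < w // 2]
    rows.append({"filename": row["filename"],
                 "count": len(pts),
                 "locations": str(pts)})
pd.DataFrame(rows).to_csv("data/mall_small_dataset/gt.csv", index=False)
```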

The training ran for 1200 epochs overnight, but it still has not converged to the object locations. Could you give me some guidance on resolving this?

Screenshot from 2020-01-30 08-51-53

Thank you

javiribera commented 4 years ago

Are you sure the ground-truth points are properly created? If you look at your screenshot, the white dots in the window called "(Training) image w/ output heatmap and labeled points" are not on top of people's heads. Some of them are not even on top of people, so I'm not sure which objects you are trying to locate. Where did the green crosses in your first image come from?
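A quick way to check is to overlay the locations from the ground-truth CSV on the raw images before training. A minimal sketch (again assuming the `filename,count,locations` CSV format from the README, with (y, x) tuples; the paths are hypothetical):

```python
import ast

import matplotlib.pyplot as plt
import pandas as pd
from PIL import Image

df = pd.read_csv("data/mall_small_dataset/gt.csv")
row = df.iloc[0]                            # inspect the first image
img = Image.open(f"data/mall_small_dataset/{row['filename']}")
pts = ast.literal_eval(row["locations"])    # list of (y, x) tuples

plt.imshow(img)
# scatter() takes x before y, so the (y, x) tuples are swapped here
plt.scatter([x for y, x in pts], [y for y, x in pts], c="lime", marker="+")
plt.title(row["filename"])
plt.show()
```

If the crosses land off the heads in this plot, the problem is in the dataset, not in the training.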

javiribera commented 4 years ago

Closing due to inactivity.

Frank-Dz commented 3 years ago

> Are you sure the ground-truth points are properly created? If you look at your screenshot, the white dots in the window called "(Training) image w/ output heatmap and labeled points" are not on top of people's heads. Some of them are not even on top of people, so I'm not sure which objects you are trying to locate. Where did the green crosses in your first image come from?

I am facing the same problem.

I checked my dataset by visualizing the annotations: [image]

But when I view them in Visdom, I get a similar result to the one above: [image]

Do you apply any transformation to the annotations?
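For example, any geometric augmentation applied to an image has to be applied to its point labels as well; a horizontal flip that forgets this leaves every label mirrored. A generic sketch (not necessarily this repo's actual pipeline):

```python
from PIL import Image

def hflip_with_points(img, points):
    """Flip a PIL image left-right and mirror its (y, x) point labels."""
    w, _ = img.size
    flipped = img.transpose(Image.FLIP_LEFT_RIGHT)
    # a point at column x moves to column (w - 1 - x) after the flip
    return flipped, [(y, w - 1 - x) for y, x in points]
```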

Frank-Dz commented 3 years ago

> Attachments: mall_small_dataset.zip, seq_000001 (ground-truth image)
>
> I have created a smaller dataset to see how the network trains. For faster training, the mall images were cropped to half their width and height, and only 500 images were used for training (see the attached dataset, a ground-truth image, and a Visdom screenshot at 1200 epochs). The following command was used for training:
>
> python -m object-locator.train --train-dir "data/mall_small_dataset" --batch-size 16 --lr 1e-4 --val-dir "data/mall_small_dataset" --optim adam --val-freq 10 --save "data/mall_small_dataset-model.ckpt" --visdom-env mall_small_dataset_training --visdom-server http://localhost --visdom-port 8097
>
> The training ran for 1200 epochs overnight, but it still has not converged to the object locations. Could you give me some guidance on resolving this?
>
> Screenshot from 2020-01-30 08-51-53
>
> Thank you

Hi, did you solve this problem?