Open AndresOsp opened 3 years ago
Thank you for your answer.
Then (if I am not wrong), by running:
sh experiments/mix_dla34.sh
the images from ETH and CityPersons should be loaded. So do I have a problem running the code?
Just run sh experiments/mix_dla34.sh and you can load the images from ETH and CityPersons.
I did that as shown:
Using tensorboardX
Fix size testing.
training chunk_sizes: [6, 6]
The output will be saved to /workspace/FairMOT/src/lib/../../exp/mot/mix_dla34
Setting up data...
================================================================================
dataset summary
OrderedDict([('mot17', 1639.0), ('caltech', 1043.0), ('citypersons', 0), ('cuhksysu', 11931.0), ('prw', 933.0), ('eth', 0)])
total # identities: 15547
start index
OrderedDict([('mot17', 0), ('caltech', 1639.0), ('citypersons', 2682.0), ('cuhksysu', 2682.0), ('prw', 14613.0), ('eth', 15546.0)])
================================================================================
heads {'hm': 1, 'wh': 4, 'id': 128, 'reg': 2}
The algorithm does not use those images to train the detections.
Ah, I see. The output is the same as mine. The numbers are counts of IDs, not images, so citypersons and eth show 0. It does indeed use the images of ETH and CityPersons to train the detection branch. Do not worry about that.
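For what it's worth, the start indices in that log are consistent with the ID counts: they look like cumulative offsets of the per-dataset identity counts, which is why citypersons and cuhksysu share the value 2682. A minimal sketch (not the actual FairMOT code; the counts are copied from the log above):

```python
from collections import OrderedDict

# Per-dataset identity counts, copied from the "dataset summary" log above.
nID = OrderedDict([
    ('mot17', 1639), ('caltech', 1043), ('citypersons', 0),
    ('cuhksysu', 11931), ('prw', 933), ('eth', 0),
])

def start_indices(nids):
    """Cumulative offsets so every identity gets a globally unique index."""
    start, offset = OrderedDict(), 0
    for name, n in nids.items():
        start[name] = offset
        offset += n
    return start

print(start_indices(nID))
# → OrderedDict([('mot17', 0), ('caltech', 1639), ('citypersons', 2682),
#                ('cuhksysu', 2682), ('prw', 14613), ('eth', 15546)])
```

This reproduces the "start index" line in the log exactly, so the 0 entries only affect the ID numbering, not which images are loaded.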
Thank you for your quick answer.
Then, I trained the model using:
sh experiments/mix_dla34.sh
The trained model does not match the results of fairmot_dla34.pth. By running:
python track.py mot --load_model ../models/fairmot_dla34.pth --conf_thres 0.6
vs
python track.py mot --load_model ../exp/mot/mix_dla34/model_last.pth --conf_thres 0.6
Do you have any insight into this difference?
Regards
Hello,
Before anything thanks for sharing your amazing work.
I am recreating the training results using the "mix" dataset and the DLA34 model.
I downloaded the datasets and followed the instructions in the README by running:
sh experiments/mix_dla34.sh
However, I noticed that the dataloader does not use two of the datasets. The output in the terminal says that the code does not use citypersons and eth. Is this normal behaviour? I tried to debug and noticed that the code in
jde.py
filters out those images. This is the part that filters the images: from my analysis, it filters out the images without an ID, which is unexpected. I downloaded the datasets again, but this did not solve my issue.
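To illustrate what such a filter would do, here is a minimal sketch, not the actual jde.py code. It assumes the JDE label format, where each row is [class, identity, x, y, w, h] and identity == -1 marks a detection-only box with no ID annotation:

```python
import numpy as np

def keep_labels_with_id(labels):
    """Sketch of a filter that keeps only rows carrying an identity ID.

    Assumption: column 1 holds the identity, and -1 means "no ID".
    Datasets annotated for detection only (e.g. all IDs set to -1)
    would lose every box under such a filter.
    """
    labels = np.asarray(labels, dtype=float)
    return labels[labels[:, 1] > -1]

mixed = np.array([[0, 12, 0.5, 0.5, 0.1, 0.2],   # has an identity
                  [0, -1, 0.3, 0.4, 0.1, 0.2]])  # detection-only annotation
print(keep_labels_with_id(mixed))  # only the first row survives
```

If the filter worked this way, applying it to datasets whose labels carry no IDs would drop all of their images, which matches the behaviour described above.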
Then, I decided to train the model ignoring that part. The results are not the same as the presented ones (at epoch 30). I ran:
python track.py mot --load_model ../models/fairmot_dla34.pth --conf_thres 0.6
The results for MOT17 are:
These are different from the results that I get by testing your model:
In conclusion:
Thanks for your help.
Cordially