Closed luksow closed 5 years ago
Btw., for OF=Horizontal the error is similar:
RuntimeError: While copying the parameter named classifier.0.weight, whose dimensions in the model are torch.Size([4096, 50176]) and whose dimensions in the checkpoint are torch.Size([4096, 100352]).
@luksow you are somewhat right. In the first case you are using OF=None, so the weights for that differ from the ones you have (OF=Horizontal), and that is why you get the error.
Regarding OF=Horizontal, you need the folder for the RGB images and the respective OF images with the same names https://github.com/BCV-Uniandes/AUNets/blob/master/data_loader.py#L80-L81.
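The filename-based pairing described above can be sketched like this (a hypothetical helper, not the actual loader in data_loader.py):

```python
import os


def pair_rgb_with_of(rgb_files, of_dir):
    """Match each RGB frame with the optical-flow image of the same name.

    Assumes the OF folder mirrors the RGB folder file-for-file, which is
    what the data loader expects.
    """
    pairs = []
    for rgb_path in rgb_files:
        name = os.path.basename(rgb_path)
        of_path = os.path.join(of_dir, name)
        if not os.path.exists(of_path):
            raise FileNotFoundError(
                "Missing OF image for %s - the RGB and OF folders must "
                "contain identically named files" % name)
        pairs.append((rgb_path, of_path))
    return pairs
```

If any RGB frame has no same-named OF counterpart, this fails loudly instead of silently misaligning the two streams.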
Could you please double-check? I ran it and got no errors (a nice reminder: it only works with PyTorch 0.3):
If you still get the error, please provide me more details to reproduce it.
@affromero Thanks for your reply. Two more things:
--> 393 au_rgb_file = sorted(glob.glob(model_save_path.replace(OF_option,'None')+'/*.pth'))[-1]
in this line it's looking for a model in: ./snapshot/models/BP4D/normal/fold_0/AU02/OF_Normal/emotionnet
(as it substituted Horizontal with Normal), but this dir is empty. Please help :)
Hello,
Please tell me if everything is ok for you now.
Hi, thanks for your patch! It started producing results (which is good), but I'm afraid something is still wrong, as for nearly all AUs with your Demo files I'm getting a probability of 1.0:
ipython main.py -- --AU=23 --fold=0 --GPU=0 --OF Horizontal --DEMO=Demo --mode_data=normal --pretrained_model /home/lsowa/fold_0/OF_Horizontal/AU23.pth
Namespace(AU='23', DELETE=False, DEMO='Demo', GPU='0', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU23', mode='train', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='/home/lsowa/fold_0/OF_Horizontal/AU23.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
[!!] loaded trained model: /home/lsowa/fold_0/OF_Horizontal/AU23.pth!
AU23 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/009.jpg : 1.0
Not sure though, maybe your optical flow files are wrong or something?
For every single AU or just for AU23?
I've tried 5 or 6 AUs; all give the same result: 1.0.
I just ran it for all of them:
afromero@bcv002:~/datos2/AUNets$ for au in 01 02 04 06 07 10 12 14 15 17 23 24; do ipython main.py -- --GPU=2 --mode=test --AU=$au --pretrained_model snapshot_github/fold_0/OF_Horizontal/AU${au}.pth --OF=Horizontal --DEMO Demo; done
Namespace(AU='01', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU01/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU01', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU01/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU01.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU01.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU01.pth!
AU01 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/007.jpg : 0.9999990463256836
AU01 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/009.jpg : 1.0
Namespace(AU='02', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU02/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU02', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU02/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU02.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU02.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU02.pth!
AU02 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/009.jpg : 1.0
Namespace(AU='04', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU04/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU04', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU04/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU04.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU04.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU04.pth!
AU04 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/009.jpg : 1.0
Namespace(AU='06', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU06/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU06', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU06/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU06.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU06.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU06.pth!
AU06 - OF Horizontal | Forward | Demo/000.jpg : 0.00014769911649636924
AU06 - OF Horizontal | Forward | Demo/001.jpg : 0.00012369781325105578
AU06 - OF Horizontal | Forward | Demo/002.jpg : 0.00013797279098071158
AU06 - OF Horizontal | Forward | Demo/003.jpg : 7.314350659726188e-05
AU06 - OF Horizontal | Forward | Demo/004.jpg : 7.164275302784517e-05
AU06 - OF Horizontal | Forward | Demo/005.jpg : 6.787211168557405e-05
AU06 - OF Horizontal | Forward | Demo/006.jpg : 7.220840052468702e-05
AU06 - OF Horizontal | Forward | Demo/007.jpg : 7.675521192140877e-05
AU06 - OF Horizontal | Forward | Demo/008.jpg : 6.585798837477341e-05
AU06 - OF Horizontal | Forward | Demo/009.jpg : 7.559465302620083e-05
Namespace(AU='07', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU07/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU07', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU07/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU07.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU07.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU07.pth!
AU07 - OF Horizontal | Forward | Demo/000.jpg : 0.00019906883244402707
AU07 - OF Horizontal | Forward | Demo/001.jpg : 0.00010282346920575947
AU07 - OF Horizontal | Forward | Demo/002.jpg : 0.0001013330474961549
AU07 - OF Horizontal | Forward | Demo/003.jpg : 1.4206962077878416e-05
AU07 - OF Horizontal | Forward | Demo/004.jpg : 1.72459585883189e-05
AU07 - OF Horizontal | Forward | Demo/005.jpg : 1.7171663785120472e-05
AU07 - OF Horizontal | Forward | Demo/006.jpg : 1.4279102288128342e-05
AU07 - OF Horizontal | Forward | Demo/007.jpg : 1.4347187970997766e-05
AU07 - OF Horizontal | Forward | Demo/008.jpg : 1.3749855497735552e-05
AU07 - OF Horizontal | Forward | Demo/009.jpg : 9.883703569357749e-06
Namespace(AU='10', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU10/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU10', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU10/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU10.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU10.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU10.pth!
AU10 - OF Horizontal | Forward | Demo/000.jpg : 0.0016679934924468398
AU10 - OF Horizontal | Forward | Demo/001.jpg : 0.0017341816565021873
AU10 - OF Horizontal | Forward | Demo/002.jpg : 0.0016202481929212809
AU10 - OF Horizontal | Forward | Demo/003.jpg : 0.001320053357630968
AU10 - OF Horizontal | Forward | Demo/004.jpg : 0.0011272196425125003
AU10 - OF Horizontal | Forward | Demo/005.jpg : 0.0009411966893821955
AU10 - OF Horizontal | Forward | Demo/006.jpg : 0.0012218032497912645
AU10 - OF Horizontal | Forward | Demo/007.jpg : 0.0009297414217144251
AU10 - OF Horizontal | Forward | Demo/008.jpg : 0.0008697224548086524
AU10 - OF Horizontal | Forward | Demo/009.jpg : 0.0008779817726463079
Namespace(AU='12', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU12/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU12', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU12/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU12.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU12.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU12.pth!
AU12 - OF Horizontal | Forward | Demo/000.jpg : 4.029447154607624e-05
AU12 - OF Horizontal | Forward | Demo/001.jpg : 4.194277062197216e-05
AU12 - OF Horizontal | Forward | Demo/002.jpg : 3.7196106859482825e-05
AU12 - OF Horizontal | Forward | Demo/003.jpg : 4.142936086282134e-05
AU12 - OF Horizontal | Forward | Demo/004.jpg : 3.795105294557288e-05
AU12 - OF Horizontal | Forward | Demo/005.jpg : 3.526378350215964e-05
AU12 - OF Horizontal | Forward | Demo/006.jpg : 4.677407196140848e-05
AU12 - OF Horizontal | Forward | Demo/007.jpg : 2.8154141546110623e-05
AU12 - OF Horizontal | Forward | Demo/008.jpg : 2.8286036467761733e-05
AU12 - OF Horizontal | Forward | Demo/009.jpg : 2.830025550792925e-05
Namespace(AU='14', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU14/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU14', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU14/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU14.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU14.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU14.pth!
AU14 - OF Horizontal | Forward | Demo/000.jpg : 0.04602605849504471
AU14 - OF Horizontal | Forward | Demo/001.jpg : 0.04383467510342598
AU14 - OF Horizontal | Forward | Demo/002.jpg : 0.04087616503238678
AU14 - OF Horizontal | Forward | Demo/003.jpg : 0.03829687833786011
AU14 - OF Horizontal | Forward | Demo/004.jpg : 0.04373708739876747
AU14 - OF Horizontal | Forward | Demo/005.jpg : 0.043208882212638855
AU14 - OF Horizontal | Forward | Demo/006.jpg : 0.04182494059205055
AU14 - OF Horizontal | Forward | Demo/007.jpg : 0.04454749450087547
AU14 - OF Horizontal | Forward | Demo/008.jpg : 0.03956642001867294
AU14 - OF Horizontal | Forward | Demo/009.jpg : 0.036166053265333176
Namespace(AU='15', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU15/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU15', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU15/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU15.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU15.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU15.pth!
AU15 - OF Horizontal | Forward | Demo/000.jpg : 0.9999988079071045
AU15 - OF Horizontal | Forward | Demo/001.jpg : 0.9999991655349731
AU15 - OF Horizontal | Forward | Demo/002.jpg : 0.9999995231628418
AU15 - OF Horizontal | Forward | Demo/003.jpg : 0.9999995231628418
AU15 - OF Horizontal | Forward | Demo/004.jpg : 0.9999988079071045
AU15 - OF Horizontal | Forward | Demo/005.jpg : 0.9999991655349731
AU15 - OF Horizontal | Forward | Demo/006.jpg : 0.9999991655349731
AU15 - OF Horizontal | Forward | Demo/007.jpg : 0.9999992847442627
AU15 - OF Horizontal | Forward | Demo/008.jpg : 0.9999991655349731
AU15 - OF Horizontal | Forward | Demo/009.jpg : 0.9999996423721313
Namespace(AU='17', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU17/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU17', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU17/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU17.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU17.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU17.pth!
AU17 - OF Horizontal | Forward | Demo/000.jpg : 0.32159876823425293
AU17 - OF Horizontal | Forward | Demo/001.jpg : 0.48420536518096924
AU17 - OF Horizontal | Forward | Demo/002.jpg : 0.49231451749801636
AU17 - OF Horizontal | Forward | Demo/003.jpg : 0.5389353632926941
AU17 - OF Horizontal | Forward | Demo/004.jpg : 0.5844005942344666
AU17 - OF Horizontal | Forward | Demo/005.jpg : 0.6829580068588257
AU17 - OF Horizontal | Forward | Demo/006.jpg : 0.605492115020752
AU17 - OF Horizontal | Forward | Demo/007.jpg : 0.5915066599845886
AU17 - OF Horizontal | Forward | Demo/008.jpg : 0.6478011012077332
AU17 - OF Horizontal | Forward | Demo/009.jpg : 0.603738009929657
Namespace(AU='23', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU23', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU23.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU23.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU23.pth!
AU23 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/009.jpg : 1.0
Namespace(AU='24', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU24/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU24', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU24/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU24.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
Loading snapshot_github/fold_0/OF_Horizontal/AU24.pth
[!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU24.pth!
AU24 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/009.jpg : 1.0
I think it is ok like that. Keep in mind that our performance on the dataset is, on average, far from perfect. And all 10 demo images get roughly the same classification because they belong to 10 consecutive frames (much less than a second), so it is not unreasonable to think they have the same AUs. Does it look good to you?
Oh, you're right, I got the same results. Sorry for asking about it without having checked the other AUs. It all works now :)
Do you have any further ideas on how to improve accuracy? It seems that, e.g., FaceReader was able to get somewhat better results back in 2014: https://www.researchgate.net/publication/276393549_Automated_Facial_Coding_Validation_of_Basic_Emotions_and_FACS_AUs_in_FaceReader I'm really excited about developments in this field.
Well, I have not worked on this topic for a while now. The field has evolved quickly, and there are lots of new implementations and approaches in the literature that are certainly more efficient.
As for the paper you mention, they validate their method on different datasets (I personally did not know them; maybe they are easier than BP4D, I do not know), so the results are not comparable.
I will proceed to close the issue, as there are no further problems with it. I am glad I could solve all your problems.
Hello, I tried to change the folder and run the demo as below:
ipython main.py -- --AU=23 --fold=0 --GPU=0 --OF Horizontal --DEMO=Demo --mode_data=normal --pretrained_model /workspace/aunets/AUNets/models/fold_0/OF_Horizontal/AU23.pth
The messages on the terminal are:
Namespace(AU='23', DELETE=False, DEMO='Demo', GPU='0', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU23', mode='train', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='/workspace/aunets/AUNets/models/fold_0/OF_Horizontal/AU23.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
[!!] loaded trained model: /workspace/aunets/AUNets/models/fold_0/OF_Horizontal/AU23.pth!
AU23 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/009.jpg : 1.0
It looks like it's working, but how can I find my results? There are no output files in the results folder and no Excel files. Could you tell me what the problem is? Are the values on the terminal all I have to work with?
Also, one more question about the meaning of the output values: are those detection probability values, or are they intensity values?
Hello @kyoungchinseo,
It looks like it's working, but how can I find my results? There are no output files in the results folder and no Excel files. Could you tell me what the problem is? Are the values on the terminal all I have to work with?
Yes, there are no Excel files, because that was something I did for the entire test set to compute F1 across folds, etc. The values on the terminal are all you get for the prediction.
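For reference, the per-fold F1 mentioned above is just the standard binary F1 over presence/absence predictions; a minimal sketch (my own helper, not the repo's evaluation code):

```python
def f1_score(y_true, y_pred):
    """Plain F1 for binary AU presence (1 = AU active, 0 = inactive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:  # no true positives means precision or recall is zero
        return 0.0
    precision = tp / float(tp + fp)
    recall = tp / float(tp + fn)
    return 2 * precision * recall / (precision + recall)
```

Averaging this per AU and per fold over the full test set is the kind of aggregation the Excel file was generated for.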
Are those detection probability values? Are they intensity values?
Those are probability values for the presence of each AU, not intensity values. In this paper we only address the presence or absence of AUs.
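In other words, the terminal values are per-frame probabilities of AU presence; turning them into binary presence/absence just means thresholding them. A minimal sketch, assuming the conventional 0.5 cut-off (the repo does not prescribe a threshold in this thread):

```python
def to_presence(probabilities, threshold=0.5):
    """Map per-frame AU probabilities to binary presence labels (1/0)."""
    return [1 if p >= threshold else 0 for p in probabilities]
```

Applied to the demo output above, e.g. `to_presence([1.0, 0.00014, 0.48])` gives `[1, 0, 0]`.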
Hi,
I'd like to run a demo of AUNets. I've downloaded all the folds and put them in my home directory. I've made some changes to the source files, as it seems they were needed:
They amount to some package fixes and replacing your local path with my local path pointing to the fold_0 weights. While running the demo like this:
./main.sh -AU 12 -gpu 0 -fold 0 -OF None -DEMO /home/lsowa/demo/
(the images are raw screens from my webcam) I encounter an error:
Do you know why that is? Does it have something to do with the fact that I'm using weights for OF=Horizontal while passing in images without OF? How can I fix that?