BCV-Uniandes / AUNets

PyTorch implementation of Multi-View Dynamic Facial Action Unit Detection, Image and Vision Computing (2018)
MIT License

Issues while running demo #12

Closed luksow closed 5 years ago

luksow commented 5 years ago

Hi,

I'd like to run a demo of AUNets. I've downloaded all folds and put them in my home directory. I've made some changes to the source files that seemed necessary:

diff --git a/main.py b/main.py
index 5b83276..40b2d6d 100755
--- a/main.py
+++ b/main.py
@@ -1,4 +1,4 @@
-#!/usr/local/bin/ipython
+#!/usr/bin/ipython
 import os
 import argparse
 from data_loader import get_loader
diff --git a/models/vgg_pytorch.py b/models/vgg_pytorch.py
index 3c0c135..910eb60 100644
--- a/models/vgg_pytorch.py
+++ b/models/vgg_pytorch.py
@@ -384,7 +384,7 @@ def vgg16(pretrained='', OF_option='None', model_save_path='', **kwargs):
     sheet={k.encode("utf-8"): v for k,v in model_zoo_.iteritems()}

   elif pretrained=='emotionnet' and OF_option=='None':
-    emo_file = sorted(glob.glob('/home/afromero/datos2/EmoNet/snapshot/models/EmotionNet/normal/fold_all/Imagenet/*.pth'))[-1]
+    emo_file = sorted(glob.glob('/home/lsowa/fold_0/OF_Horizontal/*.pth'))[-1]
     model_zoo_ = torch.load(emo_file)
     # print("Finetuning from: "+emo_file)
     model_zoo_={k.replace('model.',''): v for k,v in model_zoo_.iteritems()}
@@ -484,4 +484,4 @@ def vgg16(pretrained='', OF_option='None', model_save_path='', **kwargs):
       model.load_state_dict(model_zoo_2) 
       # ipdb.set_trace()    

-  return model
\ No newline at end of file
+  return model
diff --git a/requirements.txt b/requirements.txt
index ccce7b5..16e2f75 100644
--- a/requirements.txt
+++ b/requirements.txt
@@ -11,12 +11,11 @@ tqdm==4.11.2
 scikit_image==0.10.1
 torchvision==0.2.0
 ipdb==0.10.1
-pytorchviz==0.0.1
+torchviz==0.0.1
 Wand==0.4.4
 matplotlib==1.5.1
 Pillow==5.1.0
 matlab==0.1
 ops==0.4.7
-skimage==0.0
 scikit_learn==0.19.1
 xlsxwriter==1.0.4

The changes fix a couple of packages and substitute your local path with my local path pointing to the fold_0 weights. When I run the demo like this: ./main.sh -AU 12 -gpu 0 -fold 0 -OF None -DEMO /home/lsowa/demo/ (the images are raw screenshots from my webcam), I encounter this error:

./main.py -- --AU=12 --fold=0 --GPU=0 --OF None --DEMO /home/lsowa/demo/ --batch_size=117 --finetuning=emotionnet --mode_data=normal
Namespace(AU='12', DELETE=False, DEMO='/home/lsowa/demo/', GPU='0', HYDRA=False, OF=False, OF_option='None', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=117, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU12/OF_None/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU12', mode='train', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU12/OF_None/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
/home/lsowa/AUNets/main.py in <module>()
    119 
    120   print(config)
--> 121   main(config)

/home/lsowa/AUNets/main.py in main(config)
     39   # Solver
     40   from solver import Solver
---> 41   solver = Solver(rgb_loader, config, of_loader=of_loader)
     42 
     43   if config.SHOW_MODEL:

/home/lsowa/AUNets/solver.pyc in __init__(self, rgb_loader, config, of_loader)
     93     # Build tensorboard if use
     94     if config.mode!='sample':
---> 95       self.build_model()
     96       if self.SHOW_MODEL: return
     97       if self.use_tensorboard:

/home/lsowa/AUNets/solver.pyc in build_model(self)
    151     if self.TEST_TXT: return
    152     from models.vgg16 import Classifier
--> 153     self.C = Classifier(pretrained=self.finetuning, OF_option=self.OF_option, model_save_path=self.model_save_path)
    154 
    155     trainable_params, name_params = self.get_trainable_params()

/home/lsowa/AUNets/models/vgg16.pyc in __init__(self, pretrained, OF_option, model_save_path)
     30     self.model_save_path = model_save_path
     31 
---> 32     self._initialize_weights()
     33 
     34   def _initialize_weights(self):

/home/lsowa/AUNets/models/vgg16.pyc in _initialize_weights(self)
     36     if 'emotionnet' in self.finetuning:
     37       mode='emotionnet'
---> 38       self.model = model_vgg16(pretrained=mode, OF_option=self.OF_option, model_save_path=self.model_save_path, num_classes=22)
     39       modules = self.model.modules()
     40       for m in modules:

/home/lsowa/AUNets/models/vgg_pytorch.pyc in vgg16(pretrained, OF_option, model_save_path, **kwargs)
    403     if pretrained:
    404       # ipdb.set_trace()
--> 405       model.load_state_dict(model_zoo_)
    406 
    407   #====================================================================================================#

/usr/local/lib/python2.7/dist-packages/torch/nn/modules/module.pyc in load_state_dict(self, state_dict, strict)
    517                                        'whose dimensions in the model are {} and '
    518                                        'whose dimensions in the checkpoint are {}.'
--> 519                                        .format(name, own_state[name].size(), param.size()))
    520             elif strict:
    521                 raise KeyError('unexpected key "{}" in state_dict'

RuntimeError: While copying the parameter named classifier.0.weight, whose dimensions in the model are torch.Size([4096, 25088]) and whose dimensions in the checkpoint are torch.Size([4096, 50176]).

Do you know why that is? Does it have something to do with the fact that I'm using weights for OF=Horizontal while passing in images without OF? How can I fix it?

luksow commented 5 years ago

Btw, for OF=Horizontal the error is similar: RuntimeError: While copying the parameter named classifier.0.weight, whose dimensions in the model are torch.Size([4096, 50176]) and whose dimensions in the checkpoint are torch.Size([4096, 100352]).

affromero commented 5 years ago

@luksow you are somewhat right. In the first case you are using OF=None, so the weights for that are different from the ones you have (OF=Horizontal), and that is why you get the error.
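The mismatched sizes in the traceback can be sanity-checked with some arithmetic. This sketch assumes the standard VGG-16 feature map (512 channels at 7x7 for a 224x224 input) and that the Horizontal variant concatenates RGB and OF feature maps side by side, which is consistent with the numbers in the two errors:

```python
# classifier.0 input size for OF=None: flattened VGG-16 conv features
rgb_features = 512 * 7 * 7            # 25088, "dimensions in the model" (first error)

# assumed: Horizontal concatenates RGB + OF feature maps, doubling the input
horizontal_features = 2 * rgb_features  # 50176, "dimensions in the checkpoint"

print(rgb_features, horizontal_features)  # 25088 50176
```

So loading OF_Horizontal weights (50176 inputs) into an OF=None model (25088 inputs) fails exactly as reported.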

Regarding OF=Horizontal, you need a folder with the RGB images and another with the corresponding OF images using the same filenames: https://github.com/BCV-Uniandes/AUNets/blob/master/data_loader.py#L80-L81.
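Since the data loader pairs RGB and OF frames by filename, a quick pre-flight check can catch missing counterparts before running the demo (the folder names here are just placeholders):

```python
from pathlib import Path

def unpaired_frames(rgb_dir, of_dir, pattern="*.jpg"):
    """Return names of RGB frames that have no same-named file in the OF folder."""
    rgb = {p.name for p in Path(rgb_dir).glob(pattern)}
    of = {p.name for p in Path(of_dir).glob(pattern)}
    return sorted(rgb - of)

# Example: unpaired_frames("Demo", "Demo_OF") should return [] when every
# RGB frame has a matching OF frame.
```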

Could you please double-check? I ran it and got no errors (a reminder: it only works with PyTorch 0.3):

[screenshot of the successful run]

If you still get the error, please provide more details so I can reproduce it.

luksow commented 5 years ago

@affromero Thanks for your reply. Two more things:

  1. How do I produce Horizontal OF for my images?
  2. Just to see how far it would go, I created an OF folder with exactly the same images, ran the algorithm, and this happened: [screenshot] It seems like this line, --> 393 au_rgb_file = sorted(glob.glob(model_save_path.replace(OF_option,'None')+'/*.pth'))[-1], is looking for a model in ./snapshot/models/BP4D/normal/fold_0/AU02/OF_None/emotionnet (as it substituted Horizontal with None), but this directory is empty. Please help :)
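The substitution that line performs can be reproduced in isolation; it rewrites the OF_Horizontal segment of the save path to OF_None, so the loader searches a directory that may never have been populated:

```python
# Reproducing the path rewrite from vgg_pytorch.py (around line 393):
model_save_path = './snapshot/models/BP4D/normal/fold_0/AU02/OF_Horizontal/emotionnet'
OF_option = 'Horizontal'

# str.replace swaps every 'Horizontal' for 'None', turning OF_Horizontal into OF_None
au_rgb_dir = model_save_path.replace(OF_option, 'None')
print(au_rgb_dir)  # ./snapshot/models/BP4D/normal/fold_0/AU02/OF_None/emotionnet
```

If that directory is empty, glob returns an empty list and the trailing `[-1]` raises an IndexError.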
affromero commented 5 years ago

Hello,

  1. Here you can find the file to generate OF. Please see the new folders (Demo and Demo_OF) I just uploaded to give you an idea.
  2. I just fixed that bug in this commit.

Please tell me if everything is ok for you now.

luksow commented 5 years ago

Hi, thanks for your patch! It started producing results (which is good), but I'm afraid something is still wrong, as for nearly all AUs with your Demo files I'm getting a probability of 1.0:

ipython main.py -- --AU=23 --fold=0 --GPU=0 --OF Horizontal --DEMO=Demo --mode_data=normal --pretrained_model /home/lsowa/fold_0/OF_Horizontal/AU23.pth 
Namespace(AU='23', DELETE=False, DEMO='Demo', GPU='0', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU23', mode='train', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='/home/lsowa/fold_0/OF_Horizontal/AU23.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 [!!] loaded trained model: /home/lsowa/fold_0/OF_Horizontal/AU23.pth!
AU23 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/009.jpg : 1.0

Not sure though, maybe your optical flow files are wrong or something?

affromero commented 5 years ago

For every single AU or just for AU23?

luksow commented 5 years ago

I've tried 5 or 6 - all the same results - 1.0.

affromero commented 5 years ago

I just ran it for all of them:

afromero@bcv002:~/datos2/AUNets$ for au in 01 02 04 06 07 10 12 14 15 17 23 24; do ipython main.py -- --GPU=2 --mode=test --AU=$au --pretrained_model snapshot_github/fold_0/OF_Horizontal/AU${au}.pth --OF=Horizontal --DEMO Demo; done
Namespace(AU='01', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU01/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU01', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU01/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU01.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU01.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU01.pth!
AU01 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/007.jpg : 0.9999990463256836
AU01 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU01 - OF Horizontal | Forward | Demo/009.jpg : 1.0
Namespace(AU='02', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU02/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU02', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU02/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU02.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU02.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU02.pth!
AU02 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU02 - OF Horizontal | Forward | Demo/009.jpg : 1.0
Namespace(AU='04', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU04/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU04', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU04/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU04.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU04.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU04.pth!
AU04 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU04 - OF Horizontal | Forward | Demo/009.jpg : 1.0
Namespace(AU='06', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU06/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU06', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU06/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU06.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU06.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU06.pth!
AU06 - OF Horizontal | Forward | Demo/000.jpg : 0.00014769911649636924
AU06 - OF Horizontal | Forward | Demo/001.jpg : 0.00012369781325105578
AU06 - OF Horizontal | Forward | Demo/002.jpg : 0.00013797279098071158
AU06 - OF Horizontal | Forward | Demo/003.jpg : 7.314350659726188e-05
AU06 - OF Horizontal | Forward | Demo/004.jpg : 7.164275302784517e-05
AU06 - OF Horizontal | Forward | Demo/005.jpg : 6.787211168557405e-05
AU06 - OF Horizontal | Forward | Demo/006.jpg : 7.220840052468702e-05
AU06 - OF Horizontal | Forward | Demo/007.jpg : 7.675521192140877e-05
AU06 - OF Horizontal | Forward | Demo/008.jpg : 6.585798837477341e-05
AU06 - OF Horizontal | Forward | Demo/009.jpg : 7.559465302620083e-05
Namespace(AU='07', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU07/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU07', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU07/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU07.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU07.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU07.pth!
AU07 - OF Horizontal | Forward | Demo/000.jpg : 0.00019906883244402707
AU07 - OF Horizontal | Forward | Demo/001.jpg : 0.00010282346920575947
AU07 - OF Horizontal | Forward | Demo/002.jpg : 0.0001013330474961549
AU07 - OF Horizontal | Forward | Demo/003.jpg : 1.4206962077878416e-05
AU07 - OF Horizontal | Forward | Demo/004.jpg : 1.72459585883189e-05
AU07 - OF Horizontal | Forward | Demo/005.jpg : 1.7171663785120472e-05
AU07 - OF Horizontal | Forward | Demo/006.jpg : 1.4279102288128342e-05
AU07 - OF Horizontal | Forward | Demo/007.jpg : 1.4347187970997766e-05
AU07 - OF Horizontal | Forward | Demo/008.jpg : 1.3749855497735552e-05
AU07 - OF Horizontal | Forward | Demo/009.jpg : 9.883703569357749e-06
Namespace(AU='10', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU10/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU10', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU10/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU10.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU10.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU10.pth!
AU10 - OF Horizontal | Forward | Demo/000.jpg : 0.0016679934924468398
AU10 - OF Horizontal | Forward | Demo/001.jpg : 0.0017341816565021873
AU10 - OF Horizontal | Forward | Demo/002.jpg : 0.0016202481929212809
AU10 - OF Horizontal | Forward | Demo/003.jpg : 0.001320053357630968
AU10 - OF Horizontal | Forward | Demo/004.jpg : 0.0011272196425125003
AU10 - OF Horizontal | Forward | Demo/005.jpg : 0.0009411966893821955
AU10 - OF Horizontal | Forward | Demo/006.jpg : 0.0012218032497912645
AU10 - OF Horizontal | Forward | Demo/007.jpg : 0.0009297414217144251
AU10 - OF Horizontal | Forward | Demo/008.jpg : 0.0008697224548086524
AU10 - OF Horizontal | Forward | Demo/009.jpg : 0.0008779817726463079
Namespace(AU='12', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU12/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU12', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU12/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU12.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU12.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU12.pth!
AU12 - OF Horizontal | Forward | Demo/000.jpg : 4.029447154607624e-05
AU12 - OF Horizontal | Forward | Demo/001.jpg : 4.194277062197216e-05
AU12 - OF Horizontal | Forward | Demo/002.jpg : 3.7196106859482825e-05
AU12 - OF Horizontal | Forward | Demo/003.jpg : 4.142936086282134e-05
AU12 - OF Horizontal | Forward | Demo/004.jpg : 3.795105294557288e-05
AU12 - OF Horizontal | Forward | Demo/005.jpg : 3.526378350215964e-05
AU12 - OF Horizontal | Forward | Demo/006.jpg : 4.677407196140848e-05
AU12 - OF Horizontal | Forward | Demo/007.jpg : 2.8154141546110623e-05
AU12 - OF Horizontal | Forward | Demo/008.jpg : 2.8286036467761733e-05
AU12 - OF Horizontal | Forward | Demo/009.jpg : 2.830025550792925e-05
Namespace(AU='14', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU14/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU14', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU14/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU14.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU14.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU14.pth!
AU14 - OF Horizontal | Forward | Demo/000.jpg : 0.04602605849504471
AU14 - OF Horizontal | Forward | Demo/001.jpg : 0.04383467510342598
AU14 - OF Horizontal | Forward | Demo/002.jpg : 0.04087616503238678
AU14 - OF Horizontal | Forward | Demo/003.jpg : 0.03829687833786011
AU14 - OF Horizontal | Forward | Demo/004.jpg : 0.04373708739876747
AU14 - OF Horizontal | Forward | Demo/005.jpg : 0.043208882212638855
AU14 - OF Horizontal | Forward | Demo/006.jpg : 0.04182494059205055
AU14 - OF Horizontal | Forward | Demo/007.jpg : 0.04454749450087547
AU14 - OF Horizontal | Forward | Demo/008.jpg : 0.03956642001867294
AU14 - OF Horizontal | Forward | Demo/009.jpg : 0.036166053265333176
Namespace(AU='15', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU15/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU15', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU15/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU15.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU15.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU15.pth!
AU15 - OF Horizontal | Forward | Demo/000.jpg : 0.9999988079071045
AU15 - OF Horizontal | Forward | Demo/001.jpg : 0.9999991655349731
AU15 - OF Horizontal | Forward | Demo/002.jpg : 0.9999995231628418
AU15 - OF Horizontal | Forward | Demo/003.jpg : 0.9999995231628418
AU15 - OF Horizontal | Forward | Demo/004.jpg : 0.9999988079071045
AU15 - OF Horizontal | Forward | Demo/005.jpg : 0.9999991655349731
AU15 - OF Horizontal | Forward | Demo/006.jpg : 0.9999991655349731
AU15 - OF Horizontal | Forward | Demo/007.jpg : 0.9999992847442627
AU15 - OF Horizontal | Forward | Demo/008.jpg : 0.9999991655349731
AU15 - OF Horizontal | Forward | Demo/009.jpg : 0.9999996423721313
Namespace(AU='17', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU17/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU17', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU17/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU17.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU17.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU17.pth!
AU17 - OF Horizontal | Forward | Demo/000.jpg : 0.32159876823425293
AU17 - OF Horizontal | Forward | Demo/001.jpg : 0.48420536518096924
AU17 - OF Horizontal | Forward | Demo/002.jpg : 0.49231451749801636
AU17 - OF Horizontal | Forward | Demo/003.jpg : 0.5389353632926941
AU17 - OF Horizontal | Forward | Demo/004.jpg : 0.5844005942344666
AU17 - OF Horizontal | Forward | Demo/005.jpg : 0.6829580068588257
AU17 - OF Horizontal | Forward | Demo/006.jpg : 0.605492115020752
AU17 - OF Horizontal | Forward | Demo/007.jpg : 0.5915066599845886
AU17 - OF Horizontal | Forward | Demo/008.jpg : 0.6478011012077332
AU17 - OF Horizontal | Forward | Demo/009.jpg : 0.603738009929657
Namespace(AU='23', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU23', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU23.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU23.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU23.pth!
AU23 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/009.jpg : 1.0
Namespace(AU='24', DELETE=False, DEMO='Demo', GPU='2', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU24/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU24', mode='test', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU24/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='snapshot_github/fold_0/OF_Horizontal/AU24.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 Loading snapshot_github/fold_0/OF_Horizontal/AU24.pth
 [!!] loaded trained model: snapshot_github/fold_0/OF_Horizontal/AU24.pth!
AU24 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU24 - OF Horizontal | Forward | Demo/009.jpg : 1.0

I think it is ok like that, keeping in mind that our performance on the dataset is far from perfect on average. And all 10 demo images get roughly the same classification because they belong to 10 consecutive frames (much less than a second apart), so it is not crazy to think they potentially have the same AUs. Does it look good to you?
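The wall of exact 1.0 outputs is also unsurprising numerically if the per-AU head ends in a sigmoid (an assumption consistent with the binary presence/absence setup): confident logits saturate to 1.0 within float32 precision, which is why a few frames show values like 0.9999990463256836 while the rest print as 1.0.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

print(sigmoid(10))  # ~0.9999546: still distinguishable from 1.0
print(sigmoid(20))  # ~0.999999998: rounds to exactly 1.0 when stored as float32
```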

luksow commented 5 years ago

Oh, you're right, I got the same results. Sorry for asking without checking the other AUs first. It all works now :)

Do you have any further ideas on how to improve accuracy? It seems like, for example, FaceReader was able to get somewhat better results back in 2014: https://www.researchgate.net/publication/276393549_Automated_Facial_Coding_Validation_of_Basic_Emotions_and_FACS_AUs_in_FaceReader I'm really excited about the developments in this field.

affromero commented 5 years ago

Well, I have not worked on this topic for a while now. This field has evolved quickly and there are lots of new implementations or approaches in the literature that certainly are more efficient.

In the paper you mention, they validate their method on different datasets (I am personally not familiar with them; maybe they are easier than BP4D, I do not know), so the results are not comparable.

I will close the issue since there are no remaining problems. I am glad I could help.

kyoungchinseo commented 5 years ago

Hello, I changed the folder and ran the demo as below:

ipython main.py -- --AU=23 --fold=0 --GPU=0 --OF Horizontal --DEMO=Demo --mode_data=normal --pretrained_model /workspace/aunets/AUNets/models/fold_0/OF_Horizontal/AU23.pth

The messages on the terminal are:

Namespace(AU='23', DELETE=False, DEMO='Demo', GPU='0', HYDRA=False, OF=True, OF_option='Horizontal', SHOW_MODEL=False, TEST_PTH=False, TEST_TXT=False, batch_size=118, beta1=0.5, beta2=0.999, dataset='BP4D', finetuning='emotionnet', fold='0', image_size=224, log_path='./snapshot/logs/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', log_step=2000, lr=0.0001, metadata_path='./data/BP4D/normal/fold_0/AU23', mode='train', mode_data='normal', model_save_path='./snapshot/models/BP4D/normal/fold_0/AU23/OF_Horizontal/emotionnet', num_epochs=12, num_epochs_decay=13, num_workers=4, pretrained_model='/workspace/aunets/AUNets/models/fold_0/OF_Horizontal/AU23.pth', results_path='./snapshot/results', stop_training=2, test_model='', use_tensorboard=False, xlsfile='./snapshot/results/normal/emotionnet.xlsx')
 [!!] loaded trained model: /workspace/aunets/AUNets/models/fold_0/OF_Horizontal/AU23.pth!
AU23 - OF Horizontal | Forward | Demo/000.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/001.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/002.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/003.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/004.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/005.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/006.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/007.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/008.jpg : 1.0
AU23 - OF Horizontal | Forward | Demo/009.jpg : 1.0

It looks like it is working, but how can I find my results? There are no output files in the results folder and no Excel files. Could you tell me what the problem is? Are the values on the terminal all I have to work with?

Also, one more question about the meaning of the output values: are those detection probability values, or are they intensity values?

affromero commented 5 years ago

Hello @kyoungchinseo,

It looks like it is working, but how can I find my results? There are no output files in the results folder and no Excel files. Could you tell me what the problem is? Are the values on the terminal all I have to work with?

Yes, there are no Excel files because that was something I did for the entire test set to compute F1 across folds, etc. The values on the terminal are all you have for the prediction.
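If a file on disk is needed, the terminal output can be captured (e.g. with `ipython main.py ... | tee demo.log`) and parsed; the lines have a regular shape like "AU23 - OF Horizontal | Forward | Demo/000.jpg : 1.0". This parser is my own sketch, not part of the repo:

```python
import re

# Matches "AU23 - OF Horizontal | Forward | Demo/000.jpg : 1.0"
LINE = re.compile(r'^(AU\d+) .*\|\s*(\S+)\s*:\s*([0-9.e-]+)\s*$')

def log_to_rows(lines):
    """Extract (AU, image, probability) records from captured demo output."""
    rows = []
    for line in lines:
        m = LINE.match(line.strip())
        if m:
            rows.append({'au': m.group(1),
                         'image': m.group(2),
                         'prob': float(m.group(3))})
    return rows

# The rows can then be written out with csv.DictWriter if a spreadsheet is wanted.
```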


Are those detection probability values, or are they intensity values?

Those are probability values for the presence of each AU, not intensity values. In this paper we only address the presence or absence of AUs.
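Since the outputs are per-AU presence probabilities, turning them into binary detections is a simple thresholding step; note the 0.5 cutoff below is an assumed default, not necessarily the operating point used in the paper:

```python
def detected_aus(probs, threshold=0.5):
    """Binarize per-AU presence probabilities at an assumed 0.5 cutoff."""
    return sorted(au for au, p in probs.items() if p >= threshold)

# Hypothetical per-frame probabilities shaped like the demo output
demo_frame = {'AU06': 7.2e-05, 'AU15': 0.99999, 'AU17': 0.60, 'AU23': 1.0}
print(detected_aus(demo_frame))  # ['AU15', 'AU17', 'AU23']
```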