This project uses the FERG, MUG, and RAF-DB datasets, all of which are publicly available. Please obtain them from their official websites.
Thank you for your reply. Yes, I know that three datasets are used: FERG, MUG, and RAF-DB, and I have downloaded the RAF-DB dataset. How should I process it into this project's data structure? Do I need to download all three datasets and process them in order to match the CSV file?
Please do not combine the datasets. The main README.md shows the structure of each preprocessed dataset. Create the `csv` file according to the labels and image file names (each file's base name, excluding the path). The structure of each line of the `csv` file is also in the README.md. An example `csv` file can be found in this repo.
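For anyone following along, here is a minimal sketch of what building such a `csv` index could look like. The column order, label encoding, paths, and helper name below are assumptions for illustration only; the authoritative line format is the one described in the README.md and the example `csv` in this repo.

```python
import csv
from pathlib import Path

# Hypothetical sketch only: one row per image with (base file name, expression label,
# identity label). The real column order and label encoding are defined in the
# README.md and the example csv of this repo and may differ from this guess.
def build_index_csv(image_dir: str, labels: dict, out_csv: str) -> None:
    """Write one csv row per image: the file's base name (no path) plus its labels."""
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        for img_path in sorted(Path(image_dir).glob("*.jpg")):
            base_name = img_path.name  # base name excluding the path, as described above
            expression_label, identity_label = labels[base_name]
            writer.writerow([base_name, expression_label, identity_label])

# Usage with made-up file names and labels:
# build_index_csv("./dataset/RAF-DB/imgs",
#                 {"train_00001.jpg": (3, 17)},
#                 "./dataset/RAF-DB/train_16.csv")
```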
Thank you very much. I followed your suggestion to downsample and successfully ran the example, but it seems this example is not yet the full objective of the paper; rather, it is one of the ablation experiments. So I read your code carefully and benefited a lot from it. However, so far I have trouble understanding the role of `no_C_adv`. In the code it seems to control whether the classifier is recreated as the adversary, but this is not mentioned in the paper. I don't know whether it is meant for an ablation experiment, or whether the intended setup is to recreate the classifier as the adversary. Also, `L_lir` is commented out in the code. Is it ineffective, or was that accidental? Thanks again to the author for the excellent work.
We conducted more ablation studies than those reported in the paper. Some experiments are not reported because they did not contribute to the final model. Using new classifiers for adversarial training, rather than the ones already used in cooperative training, degraded performance. You can recover the experimental conditions in the paper by enabling/disabling the losses in the code.
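For readers trying to recover a specific experimental condition, here is an illustrative sketch of the general idea (not this repo's actual training loop): boolean switches such as `train_adv` / `train_Gu_LIR` / `train_Cross` and the `L_*` weights visible in the options dump of the next comment gate and scale the individual loss terms. Which weight pairs with which switch below is a guess, and `no_C_adv` is different in kind: per the reply above it selects which classifiers act as the adversary rather than toggling a loss.

```python
from types import SimpleNamespace

# Illustrative sketch only (not the repo's code): composing a total loss from
# already-computed scalar terms, gated by boolean switches and scaled by L_* weights.
# The pairing of switches and weights here is an assumption.
def compose_total_loss(terms: dict, opt) -> float:
    total = terms["cls"]                          # cooperative classification loss, always on
    if opt.train_adv:
        total += opt.L_adv * terms["adv"]         # adversarial term
    if opt.train_Gu_LIR:
        total += opt.L_lir * terms["lir"]         # low-resolution identity-removal term
    if opt.train_Cross:
        total += opt.L_cross * terms["cross"]     # cross term
    if opt.train_Rec:
        total += opt.L_cyc * terms["cyc"]         # reconstruction / cycle term
    return total

# Example with made-up loss values and the weights from the options dump:
opt = SimpleNamespace(train_adv=True, train_Gu_LIR=True, train_Cross=True, train_Rec=True,
                      L_adv=0.001, L_lir=0.1, L_cross=0.001, L_cyc=5)
print(compose_total_loss({"cls": 1.0, "adv": 0.5, "lir": 0.2, "cross": 0.3, "cyc": 0.1}, opt))
```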
------------ Options -------------
C_adam_b1: 0.5
C_adam_b2: 0.999
De_adam_b1: 0.5
De_adam_b2: 0.999
En_adam_b1: 0.5
En_adam_b2: 0.999
HR_image_size: 256
L_adv: 0.001
L_cls_sim: 0.0001
L_cons_sim: 8
L_cross: 0.001
L_cyc: 5
L_lir: 0.1
batch_size: 16
checkpoints_dir: ./checkpoints
data_dir: ./dataset/FERG
display_freq_s: 600
expr_dir: ./checkpoints/FERG_res_Gu
expression_type: 7
gpu_ids: ['2', '5']
ids_file_suffix: _16.csv
images_folder: imgs
init: kaiming_normal
is_train: True
load_epoch: 0
lr_C: 0.0005
lr_De: 0.0005
lr_En: 0.0005
lr_change: 0.95
lr_decay_iters: 5
lr_gamma: 0.9
lr_policy: lambda
model: LRPPN
n_threads_test: 2
n_threads_train: 4
name: FERG_res_Gu
nepochs_decay: 6
nepochs_no_decay: 3
no_C_adv: True
no_RecCycle: False
num_iters_validate: 1
pretrain: False
pretrain_nepochs: 10
print_freq_s: 60
resnet: True
save_fake_dir: ./checkpoints/FERG_res_Gu/imgs
save_features: 1
save_img: True
save_latest_freq_s: 300
save_model: True
save_model_freq: 1
save_results_file: results.csv
show_time: False
subject_type: 6
train_Cross: True
train_CrossOnly: False
train_Gu: True
train_Gu_LIR: True
train_Gu_RSC: True
train_Gu_SC: True
train_Rec: True
train_adv: True
use_scheduler: True
-------------- End ----------------

exp_exp is the result of the expression classifier on the expression representation;
exp_id is the result of the identity classifier on the expression representation;
id_exp is the result of the expression classifier on the identity representation;
id_id is the result of the identity classifier on the identity representation.

End of epoch 1, the HR acc is exp_exp:0.9998, exp_id:0.5533, id_exp:0.0475, id_id:1.0000, LR acc is exp:0.9995, id:0.2188, exp*(1-id)=0.7809
End of epoch 2, the HR acc is exp_exp:0.9999, exp_id:1.0000, id_exp:0.1039, id_id:1.0000, LR acc is exp:0.9920, id:0.2062, exp*(1-id)=0.7874
End of epoch 3, the HR acc is exp_exp:0.9999, exp_id:1.0000, id_exp:1.0000, id_id:1.0000, LR acc is exp:0.9963, id:0.1959, exp*(1-id)=0.8011
End of epoch 4, the HR acc is exp_exp:0.9999, exp_id:1.0000, id_exp:0.9999, id_id:1.0000, LR acc is exp:0.9999, id:0.3132, exp*(1-id)=0.6867
End of epoch 5, the HR acc is exp_exp:0.9999, exp_id:1.0000, id_exp:0.9999, id_id:1.0000, LR acc is exp:0.9999, id:0.2806, exp*(1-id)=0.7193
End of epoch 6, the HR acc is exp_exp:1.0000, exp_id:1.0000, id_exp:1.0000, id_id:1.0000, LR acc is exp:1.0000, id:0.3531, exp*(1-id)=0.6469
End of epoch 7, the HR acc is exp_exp:0.9999, exp_id:1.0000, id_exp:1.0000, id_id:1.0000, LR acc is exp:0.9999, id:0.4302, exp*(1-id)=0.5697

Why, after cross adversarial training, are the identity recognition rate on the expression representation and the expression recognition rate on the identity representation so high? I believe this is not the original intention of cross adversarial training. Is there anything I did wrong? Thank you very much for your generous guidance.

Edit: At present, the reconstructed images from my training are not good at all; I cannot make out any information from the original images, only pixel colors. The reconstruction results on RAF-DB shown in the paper look excellent, and I don't know how to achieve that.

Edit: The website provided in the MUG paper, http://mug.ee.auth.gr/fed/, is no longer accessible, and I cannot find any other way to obtain the dataset through a browser. Could the author share your copy of the dataset for experimentation? Thank you very much!!!
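A quick arithmetic check on the LR summary metric in the log above: it reads as exp × (1 − id); for example, epoch 1 gives 0.9995 × (1 − 0.2188) ≈ 0.7809 and epoch 7 gives 0.9999 × (1 − 0.4302) ≈ 0.5697, matching the printed values.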
Hint: the reconstructed images start from random pixels and gradually become clear; the above samples are from around epoch 10, if I remember correctly.
- The setting of adversarial training can cause diverse phenomena during training. I'm very sorry, but I am quite occupied with preparing a new paper and cannot help you debug at the moment. Besides, the results on the LR images (features) are more important in this work.
- Reconstructed images should have quality similar to these:
- Sharing the dataset would violate the dataset's license.

Thank you for your reply. Your reconstructed images are excellent and remain recognizable to human perception. However, after running 30 epochs, I still cannot get a normal reconstructed image, and I don't know where my settings are wrong.
Hello, first of all, I greatly admire the author's work. How can the datasets used in this project be obtained? Thank you!!!