Mahdidrm opened this issue 3 years ago

Hello, thanks a lot for sharing this code.
Could you please tell me how we can add the name of each Action Unit above each plot?
Thanks
I think that you can modify the demo file to add the titles to the plots according to the corresponding AUs.
Sure, just a question: in which variable are the AU names saved?
They are AU6, AU10, AU12, AU14, AU17 in that order.
OK, thanks. So I must add them myself; there is no way to insert them automatically.
You can modify the demo code (it is simple Python) to suit your needs :)
Yeah, sure. Many thanks for your guidance.
Hi. About adding the AU names:
```python
# Titles for the five AUs, in the order the model outputs them
title = ['6', '10', '12', '14', '17']
resized_map1 = 0  # accumulator for the combined heatmap
for i in range(0, 1):
    for j in range(0, 5):
        # Upsample the j-th predicted heatmap to the image resolution
        resized_map = dlib.resize_image(
            map[j, :, :].cpu().data.numpy(), rows=256, cols=256)
        resized_map1 = resized_map1 + resized_map
        # Right column: the heatmap, titled with its AU number
        ax = axs[j, i + 1]
        ax.set_title('AU(' + title[j] + ')')
        pcm = ax.pcolormesh(resized_map)
        fig.colorbar(pcm, ax=ax)
        # Left column: the input image
        ax = axs[j, i]
        ax.imshow(img)
```
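For context, this snippet assumes the demo has already produced the heatmap tensor `map` and set up a 5 x 2 grid of axes, for example with something like:

```python
import matplotlib.pyplot as plt

# Hypothetical setup: left column for the image, right column for the heatmaps
fig, axs = plt.subplots(5, 2, figsize=(8, 20))
```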
I have another question. In this code we have just 5 AUs, but for emotion recognition we need more of them. Can you please tell me how I can add more AUs to the code? I know that I do not have access to the main model, and I understand the copyright reasons, but I hope you accept that this code is not complete for other uses.
Please guide me. Thanks.
You have to train the model with the AUs you want to use. You can follow the paper and train a model on DISFA for example.
Please note that the code in this repo was released for research purposes only to reproduce the results of the BMVC2018 paper listed in the main README file.
The training uses standard heatmap regression, so you should be able to train it with whichever AUs suit you.
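For illustration, a minimal sketch of what one epoch of such heatmap-regression training could look like in PyTorch; `model` and the dataloader contents are placeholders, not code from this repo:

```python
import torch.nn as nn

def train_one_epoch(model, loader, optimizer):
    """One epoch of pixel-wise heatmap regression."""
    criterion = nn.MSELoss()
    for images, target_heatmaps in loader:
        optimizer.zero_grad()
        # model: B x 3 x 256 x 256 images -> B x n_aus x H x W heatmaps
        pred = model(images)
        loss = criterion(pred, target_heatmaps)
        loss.backward()
        optimizer.step()
```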
Thank you for the reply. About the purpose of the code and paper: I am using them in my PhD thesis, not for commercial purposes, and I will of course cite your paper as one of my references (it is my pleasure). But about the model: you used an Hourglass Network for AU extraction, yet in the middle of this network in your schema there is a big layer. Could you please tell me what that is exactly?
Thanks
I didn't mean you were not going to use it adequately! I meant the code is a proof-of-concept and therefore is limited in some sense :-)
There are no extra layers in the network. The model is just an Hourglass that takes the registered image and produces a set of heatmaps corresponding to the different AUs. Nothing special appears in the Hourglass; it was actually replicated from the Face Alignment Network (FAN) of Bulat and Tzimiropoulos, ICCV 2017.
Training the network should be straightforward, provided you use good augmentation and place the heatmaps correctly (for an update on where to place the extra heatmaps corresponding to other AUs, please check https://arxiv.org/abs/2004.06657). In principle, no extra steps or hyperparameter tuning should be needed.
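As a rough sketch of what "placing the heatmaps" means here: a target map is typically a small Gaussian centred at the AU location, with its peak scaled by the intensity label. The sigma value and the scaling below are assumptions for illustration, not values taken from the papers:

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, intensity, sigma=2.0):
    """2-D Gaussian centred at (cx, cy), with its peak scaled by AU intensity."""
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return intensity * g  # peak value encodes the AU intensity label

# e.g. a 64x64 target map for an AU of intensity 3 centred at (20, 32)
target = gaussian_heatmap(64, 64, cx=20, cy=32, intensity=3.0)
```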
Oh sorry, I thought it was something else.
Thanks a lot for the information; I am going to read the paper in the link. About the AU-net mentioned in the paper: I lost a month of my time trying to run it because the Caffe installation was a little complicated, and in the end I did not get any results :))
In any case, thanks a lot for your guidance. I will try to implement the model.
You can use the network structure and the processing from this code to develop a proper dataloader and train in PyTorch. The experiments in that paper were also done in PyTorch.
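A skeleton of what such a dataloader could look like, reusing the hypothetical `gaussian_heatmap` helper sketched above; the tensor layouts are assumptions, not this repo's format:

```python
import torch
from torch.utils.data import Dataset

class AUHeatmapDataset(Dataset):
    """Hypothetical dataset pairing images with target AU heatmaps."""
    def __init__(self, images, au_locations, au_intensities):
        self.images = images                  # N x 3 x 256 x 256 float tensor
        self.au_locations = au_locations      # N x n_aus x 2 numpy array (x, y)
        self.au_intensities = au_intensities  # N x n_aus numpy array of labels

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        n_aus = self.au_locations.shape[1]
        maps = torch.stack([
            torch.from_numpy(gaussian_heatmap(
                64, 64,
                cx=float(self.au_locations[idx, k, 0]),
                cy=float(self.au_locations[idx, k, 1]),
                intensity=float(self.au_intensities[idx, k]))).float()
            for k in range(n_aus)])
        return self.images[idx], maps
```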
Thanks a lot for your reply. I am going to try to solve it.
Sorry for the delay in replying, and thanks again.
Hello dear Lozano, I am working on the training code, but I have several issues training the model. When I call FullNetwork as a feature extractor, it needs some parameters, which in the AUmaps file you set to 1.
I would like to ask: would it be possible to have your training code? I know you have copyright limitations, but I am not sure whether they also apply to the training code. Right now I am on a deadline (by February 1 I should submit my article) and the heatmap part is not finished. So please, if you can, help me with this training step.
Thank you
I didn't understand what you meant by "when I call FullNetwork as a feature extractor it needs ...".
You just need the network to return as many heatmaps as there are AUs (see the papers), and then define as targets the heatmaps with the target intensity and location as described in the papers. Everything else is just your dataloader and the MSE or Huber loss.
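For reference, both losses are one-liners in PyTorch (nn.SmoothL1Loss is the Huber-style variant):

```python
import torch.nn as nn

mse = nn.MSELoss()         # the standard choice for heatmap regression
huber = nn.SmoothL1Loss()  # Huber-style loss, less sensitive to outliers
```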
I hope this helps.
Sorry, my question was not clear.
If I'm not mistaken, you are saying I should just change the number of AUs in AUMaps.py from 5 to 12. Is that true? If so, what about the pre-trained model we load? It is trained for 5 AUs, not 12, so I would have to train the model again.
In the code we have an hourglass network which is pre-trained, and we just load the AUdetector.pth.tar file, which detects 5 AUs; now I want to have 12 AUs. For this: should the Hourglass be re-trained? If yes, I should use its architecture, shouldn't I? For that I would have to call the FullNetwork class in the training code which, I think, I should write.
No, you should re-train with all the AUs you want to use. You can initialize the network from the pre-trained weights and add the necessary extra heatmaps, but you need to train it end-to-end.
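A hedged sketch of that kind of initialization: copy every pre-trained tensor whose name and shape still match, and let the enlarged output layer (5 -> 12 heatmap channels) keep its fresh initialization. The checkpoint name comes from this thread; the shape-matching logic and the `FullNetwork(num_maps=12)` constructor are assumptions for illustration, not this repo's code:

```python
import torch

def load_matching_weights(model, checkpoint_path):
    """Copy pre-trained tensors whose names and shapes match the new model."""
    state = torch.load(checkpoint_path, map_location='cpu')
    if 'state_dict' in state:   # checkpoints are often wrapped this way
        state = state['state_dict']
    own = model.state_dict()
    kept = {k: v for k, v in state.items()
            if k in own and v.shape == own[k].shape}
    own.update(kept)            # mismatched layers keep their random init
    model.load_state_dict(own)
    return model

# Hypothetical usage with a network built for 12 AUs:
# model = FullNetwork(num_maps=12)
# load_matching_weights(model, 'AUdetector.pth.tar')
```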
OK perfect. Thanks again
@Mahdidrm Hi, have you finished the training? I want to predict all the AUs. Could you please give me some help? @ESanchezLozano
Hi, yes, I did, about a year and a few months ago. The problem is that this code detects just 5 AUs, and that was not enough: I wanted to estimate the positions of at least 12 AUs for facial expression recognition.
I am writing an article right now in which I detect 13 AUs. When it is published, I will send you the code! Just email me in three months, please: mehdidrm@gmail.com
@Mahdidrm we are planning to release, when time provides, an extension corresponding to the TAFFC paper we published last year:
A transfer learning approach to heatmap regression for action unit intensity estimation. I. Ntinou, E. Sanchez, A. Bulat, M. Valstar, G. Tzimiropoulos. IEEE Transactions on Affective Computing, 2021.
The main author will release that code at some point; she is just very busy at the moment! Thanks for your patience.
Perfect! In our current work we have used the CK+ dataset, but we also need to add other datasets such as DISFA and FER2017. I will also cite your work. Thanks
@Mahdidrm Hi, I have emailed you. And I would like to ask you a question: I found that a slight change in the cropping position of the same picture changes the AU intensity that the detection algorithm reports. Do you have a solution? @ESanchezLozano
Hi, thanks for the question. If I'm not mistaken, when you work with landmark coordinates, the crop affects the positions of the intensities: if you crop an image from a dataset but still use the coordinates provided with that dataset, you will get these errors. Position-free algorithms do not have the same problem.
I read your email and wanted to respond, but I will say it here: our work is not published yet, so it is not possible to send the code at the moment; it will be released after October. Sorry.
OK. Thank you for your help. Could you please point me to some position-free algorithms so I can learn? I have only studied micro-expressions for a week; I am very new. @Mahdidrm