Closed: nandhini30 closed this 1 year ago
I am reviewing the document, and I found 1 major issue, so I am writing ASAP. We are missing the "robustness" experiment, which is even in the title.
For robustness, the experiment to do is adding noise to the labels, like the one done in my paper. So basically: add different levels of noise to the labels and retrain the models with the 4 loss functions, or at least the Gaussian and Cauchy ones. Do it for only 1 dataset, whichever trains faster.
I know this doesn't fit into your plans, but we were discussing this before the break and lost track of it after the break. But think about it: it's in the title, and we started out to check the robustness.
I would say try these experiments until the 15th and submit whatever is possible. I will send the other review ASAP.
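If it helps, here is a minimal sketch of the label-noise setup in pure Python; the function name `add_label_noise` and the noise level are only illustrative, not from the report or my paper:

```python
import random

def add_label_noise(labels, noise_std, noise_frac=1.0, seed=0):
    """Return a copy of the keypoint labels with Gaussian noise added
    to a fraction `noise_frac` of the samples (illustrative helper)."""
    rng = random.Random(seed)
    noisy = []
    for sample in labels:
        if rng.random() < noise_frac:
            noisy.append([v + rng.gauss(0.0, noise_std) for v in sample])
        else:
            noisy.append(list(sample))
    return noisy

# Example: corrupt half of the labels with sigma = 2 pixels; the models
# would then be retrained on `noisy_labels` with each loss function.
clean_labels = [[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]]
noisy_labels = add_label_noise(clean_labels, noise_std=2.0, noise_frac=0.5)
```

Sweeping `noise_std` (or `noise_frac`) then gives one robustness curve per loss function.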
There is another option which might take less time than the previous approach: checking the robustness at inference instead of during training.
Here you can add noise to the images using different augmentation techniques (https://pytorch.org/vision/stable/transforms.html), so you don't need to retrain the model; just change the transform for the test dataloader and check the outputs.
By doing this you can at least tick the box that you have looked into the robustness of the model.
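As a sketch of this inference-time option: wrap the existing test transform so each image is corrupted before prediction. This pure-Python version adds pixel noise to a nested-list "image"; in the actual pipeline the equivalent would be composing a torchvision corruption transform (e.g. GaussianBlur) into the test dataloader's transform. All names here are illustrative:

```python
import random

def with_pixel_noise(transform, noise_std, seed=0):
    """Wrap an existing test-time transform so Gaussian pixel noise is
    applied first (stand-in for a torchvision corruption transform)."""
    rng = random.Random(seed)
    def corrupted(image):
        noisy = [[px + rng.gauss(0.0, noise_std) for px in row] for row in image]
        return transform(noisy)
    return corrupted

# Original test transform: identity here; in practice this would be the
# normalization already used by the test dataloader.
identity = lambda img: img

noisy_transform = with_pixel_noise(identity, noise_std=0.1)
image = [[0.0, 1.0], [1.0, 0.0]]
corrupted_image = noisy_transform(image)
```

Evaluating the trained models on the corrupted test set at a few noise levels is the whole experiment.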
@nandhini30
(12/22/2022, 8:49:31 PM) -> the pdf is attached below
[ ] “.” ([Mathivanan, p. 1]() page=13&annotation=CMXDP9RV)) check spaces
[ ] “.M” ([Mathivanan, p. 1]() page=13&annotation=TSY8JNTP))
[ ] “Motivation In this research, th” ([Mathivanan, p. 1]() page=13&annotation=QRP47A9N)) What you have written below this needs to be the last section of Chapter 1.
Basically, this is not the motivation.
Motivation:
Why does one need uncertainty estimation? Why does anyone need to check the robustness of the uncertainty estimation?
There is very little on "why keypoint regression?"
Try to answer these 3 questions, and give multiple answers to each.
Don't dump this section; move it to the last part of Chapter 1.
[ ] “robustness capacity” ([Mathivanan, p. 1]() page=13&annotation=EG78NQEF)) There is no experiment for the robustness capacity, as mentioned in the issue.
The robustness experiment is missing.
So there is no case of improvement.
Add this point after you do 1 of the robustness experiments.
[ ] “is improved by” ([Mathivanan, p. 1]() page=13&annotation=NCEZGGTK))
[ ] ([Mathivanan, p. 1]() Add
A new loss function, the Cauchy loss function [cite the blog], was compared on the keypoint estimation task.
[ ] “robustness capacity” ([Mathivanan, p. 1]() page=13&annotation=H88GHUSA)) The uncertainty estimation capability of the different loss functions is evaluated on a complex regression task: keypoint detection.
[ ] “Problem Statement” ([Mathivanan, p. 2]() page=14&annotation=GGFWNMWQ)) Why is this in bullet points?
Make it a paragraph.
[ ] ([Mathivanan, p. 2]() This is the problem statement section:
So the final paragraph should start with:
In this RnD we look into the problem of ..........................
NOTE: It can also be a mathematical formulation of the problem. Just copy the equations from my paper's problem statement (maybe change the symbols if possible). Make sure at the beginning of the paragraph you mention that the paragraphs below are taken from the paper.
Don't copy the statements, only the equations, and cite.
[ ] “The challenge of estimating uncertainty increases when there are noisy labels or outliers in the training data. The ability of learning algorithms to properly learn uncertainty by ignoring the outliers is known as robust uncertainty estimation.” ([Mathivanan, p. 2]() page=14&annotation=93ZS5M6H)) So you know the definition.
But the experiments are missing.
So either do the experiments or remove this line.
[ ] ([Mathivanan, p. 3]() Move the contributions section here.
[ ] ([Mathivanan, p. 8]() 3 pages and a little less. Please add more here.
In any case, write a conclusion at least on which datasets were selected and which uncertainty estimation method was selected and WHY. The WHY is important.
[ ] “et.” ([Mathivanan, p. 10]() page=22&annotation=7ZUZXY7K)) citation
[ ] “s.” ([Mathivanan, p. 10]() page=22&annotation=AB2Q63DF)) citation
[ ] “es.” ([Mathivanan, p. 11]() page=23&annotation=SU373NDS)) citation
[ ] “l.” ([Mathivanan, p. 11]() page=23&annotation=9ESTC2ZS)) citation
[ ] “d” ([Mathivanan, p. 12]() page=24&annotation=M6HS7NN4)) citation
[ ] ([Mathivanan, p. 12]() Copy the derivation of the loss function from the paper.
Put the PyTorch code side by side. Same comment for all the losses below.
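For the Gaussian case, a pure-Python sanity sketch of the per-sample negative log likelihood is below; this matches the quantity computed by PyTorch's built-in `torch.nn.GaussianNLLLoss` (its `eps` clamping of the variance is omitted here), so you can put the two side by side:

```python
import math

def gaussian_nll(mu, var, target, full=False):
    """Per-sample Gaussian negative log likelihood, matching the form
    computed by torch.nn.GaussianNLLLoss (eps clamping omitted)."""
    loss = 0.5 * (math.log(var) + (target - mu) ** 2 / var)
    if full:  # add the constant term 0.5 * log(2*pi)
        loss += 0.5 * math.log(2 * math.pi)
    return loss

# At the target with unit variance the loss reduces to 0.5 * log(var) = 0.
val = gaussian_nll(mu=0.0, var=1.0, target=0.0)
```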
[ ] “Laplace negative log likelihood” ([Mathivanan, p. 12]() page=24&annotation=SQGCXBNN)) citation
[ ] ([Mathivanan, p. 12]() Copy the derivation of the loss function from the paper.
[ ] “Cauchy negative log likelihood” ([Mathivanan, p. 12]() page=24&annotation=UI53QL3G)) citation
[ ] ([Mathivanan, p. 12]() Copy the derivation of the loss function from the blog.
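Since the Cauchy loss has no PyTorch built-in, a small reference sketch next to the derivation may help; this is the standard per-sample negative log likelihood of a Cauchy(x0, gamma) density (the constant log(pi) can be dropped for training), with illustrative names:

```python
import math

def cauchy_nll(x0, gamma, target):
    """Per-sample negative log likelihood of a Cauchy(x0, gamma)
    distribution; its heavy tails make it robust to label outliers."""
    z = (target - x0) / gamma
    return math.log(math.pi * gamma) + math.log(1.0 + z * z)

# A far-off outlier is penalized only logarithmically, unlike the
# quadratic penalty of the Gaussian NLL:
near = cauchy_nll(x0=0.0, gamma=1.0, target=1.0)
far = cauchy_nll(x0=0.0, gamma=1.0, target=10.0)
```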
[ ] “Evidential loss” ([Mathivanan, p. 13]() page=25&annotation=5U29K4T6)) citation
[ ] “β(1+” ([Mathivanan, p. 13]() page=25&annotation=3NWWKTTA))
[ ] ([Mathivanan, p. 14]() Please paraphrase my blog on how to get the upper and lower bounds here.
Just on top of the section, write that the section below comes from the blog.
[ ] ([Mathivanan, p. 16]() Don't do this:
1 page, 3 images.
Just put the 3 images side by side.
Professors get really angry when they see that images are being posted just to increase the page count.
[ ] “Metrices” ([Mathivanan, p. 19]() page=31&annotation=W67QX9KU)) metric.
Which metric?
Why is it not on the graph? What is the value? Just put the metric name.
[ ] ([Mathivanan, p. 19]() Plot it straight please;
it easily fits horizontally.
[ ] ([Mathivanan, p. 20]() Which value?
Make the plot smaller; half a sheet is enough. What is there in this graph that a whole page has to be taken?
[ ] “metrices” ([Mathivanan, p. 20]() page=32&annotation=9EJEBU9T)) metrics.
Which metric?
Why is it not on the graph? What is the value? Just put the metric name.
[ ] “Metrices” ([Mathivanan, p. 21]() page=33&annotation=QVTTU98L)) metrics.
Which metric?
Why is it not on the graph? What is the value? Just put the metric name.
[ ] ([Mathivanan, p. 21]() Make it straight.
What is the value?
[ ] “metrices” ([Mathivanan, p. 22]() metrics.
Which metric?
Why is it not on the graph? What is the value? Just put the metric name.
[ ] ([Mathivanan, p. 23]() Make the lowest metric value bold.
Add a caption and explain what the table shows. State whether lower or higher is better for the metric, and also conclude:
Cauchy works better, etc.
Make 2 tables:
1 for BIWI and 1 for Face.
There is nothing we can compare between BIWI and Face, so they should be separate.
[ ] ([Mathivanan, p. 24]() How is this the most confident?
All the blue points mismatch the yellow ones.
When you don't see the yellow it's a match; here all mismatch.
Maybe you are plotting it wrong.
[ ] ([Mathivanan, p. 24]() For a single dataset,
all the most confident and all the least confident images from all 4 loss functions should fit on a SINGLE page,
in the following order:
(Page 1)
# Most confident
3 x gaussian x face
3 x laplace x face
3 x evidential x face
3 x cauchy x face
# Least confident
3 x gaussian x face
3 x laplace x face
3 x evidential x face
3 x cauchy x face
A big caption explaining each row
(Page 2)
# Most confident
3 x gaussian x BIWI
3 x laplace x BIWI
3 x evidential x BIWI
3 x cauchy x BIWI
# Least confident
3 x gaussian x BIWI
3 x laplace x BIWI
3 x evidential x BIWI
3 x cauchy x BIWI
Make 2 similar pages for most error and least error.
Order the images vertically for each loss function:
gaussian laplace evidential cauchy
gaussian laplace evidential cauchy
gaussian laplace evidential cauchy
[ ] ([Mathivanan, p. 24]() How is this the most confident?
This is the image with the most error; it selected the wrong face entirely.
[ ] ([Mathivanan, p. 25]() How is this the least error?
This is the maximum error.
[ ] ([Mathivanan, p. 25]() Again, here (a) looks like all match,
but (b) and (c) have many mismatches.
[ ] ([Mathivanan, p. 33]() A report should read like a story.
This section doesn't fit anywhere.
My suggestion: drop RQ2 and just move this section into the state of the art.
And every section should conclude something.
So state what the goal of this section was, and write a conclusion paragraph at the end on what you conclude from the state of the art.
Summary:
Don't get demotivated by the comments; you have done a good amount of work. The report holds a major percentage of the grade, so it's a big deal.
Robustness_Results.pdf
I have uploaded the results of the data augmentation techniques (Gaussian blur, random rotation and random invert). The entropy plots were plotted and I have interval score values too. Is this enough for the robustness study?
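For reference, the interval score mentioned here can be sketched as below, following the standard proper-scoring definition for a central (1 - alpha) prediction interval (lower is better); the function name is illustrative:

```python
def interval_score(lower, upper, target, alpha):
    """Interval score for a central (1 - alpha) prediction interval:
    interval width plus a penalty of 2/alpha per unit the target
    falls outside the interval. Lower is better."""
    score = upper - lower
    if target < lower:
        score += (2.0 / alpha) * (lower - target)
    elif target > upper:
        score += (2.0 / alpha) * (target - upper)
    return score

# Covered target: the score is just the interval width.
inside = interval_score(lower=0.0, upper=2.0, target=1.0, alpha=0.1)
# Missed target: width plus 20x the miss distance (alpha = 0.1).
outside = interval_score(lower=0.0, upper=2.0, target=3.0, alpha=0.1)
```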
rough_draft.pdf