Closed: richelbilderbeek closed this issue 4 years ago
Sent an email to @Giappo asking if he'd be up to processing the last feedback.
From @Giappo:
1) I would leave the nomenclature problem to Rampal. I reported my preference in the letter but I think the final decision should be on you/him.
2) Figure 3 now looks much worse than the previous version. If you are ok with it I will roll back to the old version.
3) Fig 5 and 6 look like the same thing. If you confirm, I will remove fig 6.
4) In section 5.12 (and some others in the supplementary), I would like to add another figure of errors vs number of taxa (or error vs other metric in some other case). Today I will download all the data from your website and try to make them.
I predict, for one beer, that this will be harder than you think. I used the pir_plot_from_files.R script to get all the errors. Maybe that helps.
5) Today I will give everything another general look, just to be sure.
I will fix my things today before 15:00.
I did my things, which were #106 and #108 :+1:
This is great work by @Giappo!
Cool again:
Sent manuscript to Rampal for feedback
Hmmm, Rampal is out of office until after the journal's deadline. In the worst case, we can send the manuscript to the journal way before the deadline :+1:
Feedback Rampal:
I noted a few changes in Figure 3:
For the generative model, the Yule model is used for generating the twin tree. That would be my preference as indicated earlier, but the text is now stating that BD was used. Which one is correct?
If the Figure legend is correct, then does this also apply to all experiments in the supplementary material (which do not state the models in the legend)? That is, did you also use Yule to generate the twin tree for the supplementary material?
The best candidate model is now TN, Strict and BD. So our comments on the RLN being the best fit are no longer valid.
I can make the textual changes once I know what is correct.
Furthermore,
Tables 12 and 13 seem to be identical, even the caption.
Tables 8 and 9 seem to be identical, even the caption.
Tables 10 and 11 seem to be identical, even the caption.
Tables 4 and 5 seem to be identical. It is also unclear what these are ESSes for, because the caption is quite unclear.
Tables 14 and 15 seem to be identical, even the caption.
Please check and update the captions.
Fixed the identical tables: somehow, the LaTeX file's content got duplicated.
- For the generative model, the Yule model is used for generating the twin tree. That would be my preference as indicated earlier, but the text is now stating that BD was used. Which one is correct?
This is a common point of confusion for all of us. Any twin tree is generated using a BD model by default, as is the case in our example. The generative inference model, however, uses Yule by default.
That is, the text is correct :-)
This feedback encouraged me to address every point of confusion in the 'Main example' text. I think it removes all points of confusion now. If not, let me know :-)
- If the Figure legend is correct, then does this also apply to all experiments in the supplementary material (which do not state the models in the legend)? That is, did you also use Yule to generate the twin tree for the supplementary material?
The legend is correct. The inference model used is only shown when the plot is generated from a single run. It would take too much time to add this ('tiebeaur' was an attempt to help do so), especially when showing the best candidate model, which may differ per replicate.
- The best candidate model is now TN, Strict and BD. So our comments on the RLN being the best fit are no longer valid.
Correct. I have rewritten this.
Furthermore,
- Tables 12 and 13 seem to be identical, even the caption.
- Tables 8 and 9 seem to be identical, even the caption.
- Tables 10 and 11 seem to be identical, even the caption.
- Tables 4 and 5 seem to be identical. It is also unclear what these are ESSes for, because the caption is quite unclear.
- Tables 14 and 15 seem to be identical, even the caption.
Indeed, a script caused the tables to be mysteriously replicated. It's fixed.
Tables 4 and 5 have a caption that is quite unclear.
The captions are generated automagically, which I chose to keep as-is. To compensate, however, I've surrounded the tables with more explanatory text.
From @Giappo: