marybarker opened this issue 3 weeks ago
Sorry for the delay. Thank you for the report. Can you please send me a couple of small datasets with outfiles that ran quickly (so that I can debug) that show the most discrepancies between iqtree and consel? For now, I'd suggest that you use the results from consel.
Thank you so much! Here are 2 example datasets. For each I'm attaching:

- input_alignment*.txt (fasta format)
- iqtree-model*.txt (a text file with a single line containing the model name)
- topologies*.txt (a text file containing the two topologies in newick format)

I generated the sitewise likelihood files using the -wsl option with iqtree. I can add my output files for iqtree and consel as well if needed, but I'll keep the file uploads to a minimum till they're useful.
input_fasta1.txt input_fasta2.txt iqtree-model1.txt iqtree-model2.txt topologies1.txt topologies2.txt
I seem to be getting different p-values for the AU test when I use IQTree than when I use CONSEL. I ran the IQTree AU test using the command:
iqtree -s input_fasta -z input_topologies -m model_file -n 0 -zb 10000 -zw -au
on a set of around 67,700 separate datasets. Each dataset was in a separate folder containing:

- input_fasta
- input_topologies (containing two tree topologies written as newick strings)

and I stored the p-values from the AU test results in a dataframe.
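For reference, the collection step above can be sketched roughly as follows. This is only a hedged illustration: the filename au_summary.txt and the line format the regex matches are hypothetical stand-ins, since the exact layout of the IQ-TREE report table varies by version, so the pattern would need adapting to the real output files.

```python
# Sketch: collect AU-test p-values from many per-dataset result files into a
# pandas DataFrame. The parsing assumes a hypothetical summary file in which
# an AU-test line looks roughly like "Tree 1: ... p-AU = 0.0312"; the real
# IQ-TREE report differs by version, so adjust the regex and filename.
import re
from pathlib import Path

import pandas as pd

AU_LINE = re.compile(r"Tree\s+(\d+).*?p-AU\s*=\s*([0-9.eE+-]+)")

def collect_p_au(root: str) -> pd.DataFrame:
    """Walk dataset folders under `root` and pull (dataset, tree, p_au) rows."""
    rows = []
    # `au_summary.txt` is a hypothetical per-dataset summary filename.
    for report in Path(root).glob("*/au_summary.txt"):
        for line in report.read_text().splitlines():
            m = AU_LINE.search(line)
            if m:
                rows.append({"dataset": report.parent.name,
                             "tree": int(m.group(1)),
                             "p_au": float(m.group(2))})
    return pd.DataFrame(rows)
```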
Using the same fasta/newick files, I ran CONSEL on the same datasets to compute p-values for the AU test to double-check those values, and I got a very different set of p-values. I am attaching a histogram of the values found using IQTree (called p-AU in the plot) and those computed using CONSEL (called consel-p-AU in the plot).
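Beyond eyeballing the histograms, the discrepancy can be quantified directly. A minimal sketch, assuming a DataFrame with one row per (dataset, tree) pair; the column names p_au_iqtree and p_au_consel are hypothetical and would need renaming to match the actual dataframe:

```python
# Sketch: summarise how far the two tools' AU-test p-values diverge, and how
# often they lead to opposite accept/reject decisions at a given alpha.
# Column names `p_au_iqtree` / `p_au_consel` are assumptions, not from the report.
import pandas as pd

def disagreement_summary(df: pd.DataFrame, alpha: float = 0.05) -> dict:
    """Compare the two p-value columns pairwise."""
    diff = (df["p_au_iqtree"] - df["p_au_consel"]).abs()
    # A pair "flips" when one tool rejects at `alpha` and the other does not.
    flips = (df["p_au_iqtree"] < alpha) != (df["p_au_consel"] < alpha)
    return {
        "n": len(df),
        "mean_abs_diff": float(diff.mean()),
        "max_abs_diff": float(diff.max()),
        "n_flipped_at_alpha": int(flips.sum()),
    }
```

The flip count is usually the more actionable number here: small numeric differences in p-values are expected between implementations, but pairs that change significance status at the chosen alpha are the ones worth sending along as debugging cases.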