MesserLab / SLiM

SLiM is a genetically explicit forward simulation software package for population genetics and evolutionary biology. It is highly flexible, with a built-in scripting language, and has a cross-platform graphical modeling environment called SLiMgui.
https://messerlab.org/slim/
GNU General Public License v3.0

Neutral-like behavior due to floating point error #37

Closed mufernando closed 5 years ago

mufernando commented 5 years ago

Hi

We just ran into a problem with very large fitness values and started to get inconsistent results due to floating-point error.

The problem appears to be that when we have two or more populations without migration, beneficial mutations that fix in one of them are not converted to substitutions, and thus always count toward fitness. With many such mutations, fitness reaches very large values, and new mutations then start behaving as if they were neutral because of floating-point error.

We think SLiM should at least throw an error when this happens.

Here is a simple script that reproduces this behavior:

// set up a simple simulation with strongly beneficial mutations
initialize() {
    initializeMutationRate(1e-7);

    // m1 mutation type: beneficial
    initializeMutationType("m1", 0.5, "f", 1.0);

    // g1 genomic element type: uses m1 for all mutations
    initializeGenomicElementType("g1", m1, 1.0);

    // uniform chromosome of length 100 kb with uniform recombination
    initializeGenomicElement(g1, 0, 99999);
    initializeRecombinationRate(1e-8);
}

// create a population of 1000 individuals
1 {
    sim.addSubpop("p1", 1000);
}

// at generation 100, split p2 off from p1; there is no migration between them
100 {
    sim.addSubpopSplit(2, 1000, 1);
}

// print cached fitness
2000 { print(p1.cachedFitness(NULL)); }
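
For reference, a small diagnostic along these lines (a sketch assuming SLiM 3's mutationFrequencies() and catn(); it is not part of the original report) shows beneficial mutations piling up at frequency 1.0 within p1 without ever converting to substitutions:

// diagnostic sketch: count mutations fixed within p1 (frequency 1.0 there)
// that stay active because they are not fixed in the species as a whole
2000 {
    freqs = sim.mutationFrequencies(p1);
    catn("mutations at frequency 1.0 in p1: " + sum(freqs == 1.0));
    catn("substitutions: " + sim.substitutions.size());
}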
bhaller commented 5 years ago

Hmm. What makes you think SLiM is not handling these large fitness values correctly? Yes, the values printed in generation 2000 are very large. But you say that the mutations "start behaving like neutral due to floating point error", and I don't see any evidence for that. It looks to me like the model is working fine.

There is no need to have two subpopulations to see this sort of behavior, by the way. This model reproduces the same behavior:

initialize() {
    initializeMutationRate(1e-7);
    initializeMutationType("m1", 0.5, "f", 1.0);
    m1.convertToSubstitution = F;
    initializeGenomicElementType("g1", m1, 1.0);
    initializeGenomicElement(g1, 0, 99999);
    initializeRecombinationRate(1e-8);
}

// create a population of 1000 individuals
1 {
    sim.addSubpop("p1", 1000);
}

// print cached fitness
2000 { print(p1.cachedFitness(NULL)); }

But again, I don't see any evidence of a bug; it looks like SLiM is doing exactly what you asked it to do, and the behavior of the model, while unusual (because it is being driven very strongly by dominance effects and competing haplotypes), does not look neutral at all to me.

bhaller commented 5 years ago

If you run it for a while longer, though, then the fitness values go from merely being very large to actually being +INF, and that probably ought to trigger an error. OK, reopening.
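
As a rough back-of-envelope sketch (not stated in the thread): with s = 1.0 and h = 0.5, each mutation fixed within a subpopulation multiplies a homozygote's fitness by 1 + s = 2, so fitness overflows the double-precision range (about 1.8e308) after roughly 1024 such fixations.

// doubles overflow just past 2^1023 (about 9.0e307)
x = 2.0 ^ 1023;
print(isFinite(x));        // T
print(isFinite(x * 2.0));  // F: 2^1024 overflows to +INF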

petrelharp commented 5 years ago

We did see other evidence of this being a bug, and can pull that together if you need.

bhaller commented 5 years ago

Well, what exactly was the bug, before fitness values get to INF? I can perhaps put in a check for fitness values of INF, since that should probably not be allowed. (Although I think I have recommended to someone at least once that they should set a fitnessScaling of INF for an individual to force all offspring in the next generation to come from that particular parent, so even just making INF illegal is not clearly the right thing to do; a fitness of INF has utility in SLiM.)

But if you're saying that some threshold below INF ought to be drawn, where should that threshold be? There is no clear point, below INF, where floating point stops working; it just loses precision and resolution progressively as numbers get larger, so there is no obvious place to draw a line. I would hesitate to make some arbitrary rule that fitness values can't be, say, over 1 million, because as soon as such a line gets drawn there will be people who complain because they want to exceed that threshold. (And as I said, even a value of INF has utility.)

So... what exactly is the bug that you want to see fixed?

mufernando commented 5 years ago

The problem is that when most individuals reach INF fitness, new beneficial mutations start behaving as if they were neutral and are lost with a higher probability than they should be. I built this example, which plots the total number of mutations over time; you can see that when the fitness of individuals gets to INF, the number of mutations plateaus.

[Figure: total number of mutations over time; the count plateaus once individual fitness values reach +INF.]

// set up a simple simulation with strongly beneficial mutations
initialize() {
    initializeMutationRate(1e-8);

    // m1 mutation type: beneficial
    initializeMutationType("m1", 0.5, "f", 1.0);
    m1.convertToSubstitution = F;
    // g1 genomic element type: uses m1 for all mutations
    initializeGenomicElementType("g1", m1, 1.0);

    // uniform chromosome of length 100 kb with uniform recombination
    initializeGenomicElement(g1, 0, 99999);
    initializeRecombinationRate(1e-8);
}

// create a population of 1000 individuals
1 {
    sim.addSubpop("p1", 1000);
    sim.setValue("nmutations", NULL);
    defineConstant("pdfPath", writeTempFile("plot_", ".pdf", ""));
    // If we're running in SLiMgui, open a plot window
    if (sim.inSLiMgui) {
        // find the path of the running SLiMgui app, then open the plot PDF via it
        appPath = system('ps -x | grep "SLiMgui" | grep -v grep | awk \'{ print $4 }\'');
        system("open", args=c("-a", appPath, pdfPath));
    }
}

1: {
    if (sim.generation % 10 == 0)
    {
        count = sim.mutations.size();
        sim.setValue("nmutations", c(sim.getValue("nmutations"), count));
    }
    if (sim.generation % 1000 != 0)
        return;
    print(p1.cachedFitness(NULL));
    y = sim.getValue("nmutations");
    rstr = paste(c('{',
        'x <- (1:' + size(y) + ') * 10',
        'y <- c(' + paste(y, sep=", ") + ')',
        'quartz(width=4, height=4, type="pdf", file="' + pdfPath + '")',
        'par(mar=c(4.0, 4.0, 1.5, 1.5))',
        'plot(x=x, y=y, xlim=c(0, 50000), ylim=c(0, 1500), type="l",',
        'xlab="Generation", ylab="Total number of mutations", cex.axis=0.95,',
        'cex.lab=1.2, mgp=c(2.5, 0.7, 0), col="red", lwd=2,',
        'xaxp=c(0, 50000, 2))',
        'box()',
        'dev.off()',
        '}'), sep="\n");
    scriptPath = writeTempFile("plot_", ".R", rstr);
    system("/usr/local/bin/Rscript", args=scriptPath);
}

// print cached fitness
50000 { print(p1.cachedFitness(NULL)); }
bhaller commented 5 years ago

Aha, yes, that's good evidence. So it sounds like INF itself should cause an error, but any float value smaller than INF should be considered OK, yes? Probably any simulation whose fitness values get high enough for numerical error to matter will hit INF soon afterwards anyway. :->
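
Until such a check exists, a model-level guard is easy to sketch by hand (an illustrative workaround, not a built-in SLiM feature): an event that halts the run as soon as any cached fitness value has overflowed.

// halt the run once any individual's cached fitness has overflowed to +INF
2: {
    if (any(isInfinite(p1.cachedFitness(NULL))))
        stop("fitness reached +INF in generation " + sim.generation);
}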

petrelharp commented 5 years ago

Yes, since you're doing multiplicative fitness, it should be safe from floating-point error until it actually hits INF.
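
As an illustration of why (not from the thread): below overflow, doubles keep roughly 15-16 significant digits at any magnitude, so the relative fitnesses that drive selection under multiplicative fitness are preserved; once values hit +INF, those ratios are lost.

// relative fitness survives huge magnitudes, but not overflow
a = 3.0 * 2.0 ^ 999;   // about 1.6e301
b = 2.0 ^ 1000;        // about 1.07e301
print(a / b);          // 1.5 exactly: ratios are preserved at large scale
print(INF / INF);      // NAN: once fitness overflows, relative fitness is gone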

bhaller commented 5 years ago

OK. I think this applies to WF models only; in nonWF models there is no problem with infinite fitness values (they merely mean that the individual with infinite fitness has a probability of death of zero). I have just fixed this issue on that assumption. Thanks for the bug report!