Have you tried it using another modeling algorithm, like glm? This could help me determine whether it's ENMTools or Maxent that's responsible.
Thanks Dan, I will try another algorithm and let you know. These differing empirical overlaps are for the "env" indices (env.D, env.I, and env.rank). Could it be due to the random process that Latin hypercube sampling uses?
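For context, these environment-space indices come from ENMTools' `env.overlap()`, which estimates overlap by sampling points in environment space rather than geographic space. Below is a minimal sketch of calling it directly on two fitted models; the `tolerance` and `max.reps` defaults shown are assumptions, so check `?env.overlap` for the current signature:

```r
library(ENMTools)

# Assumes mod1 and mod2 are fitted enmtools.model objects (e.g., from
# enmtools.maxent or enmtools.glm) and env is a multilayer environmental raster.
# env.overlap samples environment space until the estimates stabilize,
# so results vary slightly between runs.
overlap <- env.overlap(mod1, mod2, env,
                       tolerance = 0.001,   # convergence threshold (assumed default)
                       max.reps  = 10000)   # cap on sampling iterations (assumed)

overlap$env.D    # Schoener's D in environment space
overlap$env.I    # Hellinger-based I in environment space
overlap$env.cor  # rank correlation ("env.rank" in the test output)
```

Because the sampling is stochastic, some run-to-run variation is expected, but large discrepancies usually point to something else.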
Ah, that could be it! You'd still expect them to be very close together, though, so perhaps the Monte Carlo process isn't running long enough to converge. Try passing in a new argument, "tolerance = 0.000001", and see if that helps. It will likely take longer to execute, but the output should be more stable if everything is working right.
Hi Dan, I hope you had a great holiday. Thanks for your response. I added this argument to the code and also called the set.seed() function before running both the identity and background tests. However, the issue is still there: the empirical overlaps for the identity and background tests are still very different (in environmental space)!
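To make the setup being described concrete, here is a hedged sketch of the calls; the `nreps` values are placeholders, and the assumption that `tolerance` is forwarded through to the environment-space overlap calculation is mine, not confirmed in this thread:

```r
library(ENMTools)

set.seed(42)  # fix the RNG so the stochastic overlap sampling is repeatable

# Identity (niche equivalency) test between the two populations,
# passing a tight tolerance for the env-space overlap estimate.
id <- identity.test(species.1 = pop1, species.2 = pop2, env = env,
                    type = "mx", nreps = 99, tolerance = 0.000001)

set.seed(42)

# Background (niche similarity) test with the same settings.
bg <- background.test(species.1 = pop1, species.2 = pop2, env = env,
                      type = "mx", nreps = 99, test.type = "asymmetric",
                      tolerance = 0.000001)
```

If the stochastic sampling were the only source of variation, fixing the seed and tightening the tolerance like this should make the two empirical overlaps agree closely.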
Okay, I'm messing with this now and I can definitely replicate the issue. I'm thinking we messed something up in the move to terra, but I can't find where it is yet!
Okay, I found it! Turns out the identity test wasn't clamping correctly when clamp was set to TRUE. It's fixed now on the develop branch, and I'm pushing it to main as we speak. Thank you very much for bringing this to our attention!
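For anyone landing here before the fix reaches CRAN, a quick way to pick it up is to install straight from GitHub (adjust the `ref` if the fix has already been merged to main):

```r
# install.packages("remotes")  # if not already installed
remotes::install_github("danlwarren/ENMTools", ref = "develop")
```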
Thanks, Dan! It works great now! I also have another difficulty with the identity and background tests, but if it's OK I will send my question by email first, because: 1) I suspect the problem lies in interpreting the results, not in the code, and 2) it concerns my research results.
Hello everyone, I've run the identity and background tests for two distinct populations of a species. In the enmtools.species object for each population I have: 1) presence points, 2) background points, 3) a species range raster, and 4) the species name.
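As a point of reference, this is roughly how such objects are put together (a minimal sketch; the file path, variable names, and point formats are placeholders, and recent ENMTools versions expect terra/sf inputs):

```r
library(ENMTools)
library(terra)

env <- rast("env_layers.tif")  # hypothetical multilayer environmental raster

pop1 <- enmtools.species(
  species.name      = "population_1",
  presence.points   = pop1.points,   # occurrence coordinates (e.g., an sf object)
  background.points = pop1.bg,       # background coordinates
  range             = pop1.range     # raster delimiting the population's range
)
pop1 <- check.species(pop1)  # validates and standardizes the object

pop2 <- enmtools.species(
  species.name      = "population_2",
  presence.points   = pop2.points,
  background.points = pop2.bg,
  range             = pop2.range
)
pop2 <- check.species(pop2)
```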
Based on the background.test and identity.test code, it seems that both tests calculate their empirical overlap values by fitting a model for each species using the combined background points. If that is the case and I use the same algorithm (e.g. "mx") for both tests, then I would expect identical empirical overlap values from the background and identity tests. However, the calculated empirical overlaps are very different between the two tests. Do you know why this happens?
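To illustrate the comparison being described, a hedged sketch follows; the `reps.overlap` slot name, its "empirical" row, and the column names reflect my recollection of how the test output is structured, so inspect the returned objects to confirm:

```r
# Run both tests with the same algorithm and settings.
id <- identity.test(pop1, pop2, env, type = "mx", nreps = 99)
bg <- background.test(pop1, pop2, env, type = "mx", nreps = 99,
                      test.type = "asymmetric")

# The first row of reps.overlap holds the empirical (observed) overlaps;
# the remaining rows are the permutation replicates.
id$reps.overlap["empirical", c("env.D", "env.I", "env.cor")]
bg$reps.overlap["empirical", c("env.D", "env.I", "env.cor")]
```

If both tests really fit the empirical models the same way, these two rows should match up to Monte Carlo noise, which is exactly the expectation laid out above.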