Closed: colplainwil closed this issue 3 months ago
It is still good practice to run multiple iterations of an optimization to assess the consistency of the estimated values. You are correct, however, that ResistanceGA now scales results to have a minimum of 1 and a maximum equal to the value specified. If you are trying to optimize a binary surface and are getting inconsistent results, this suggests to me that there isn't a clear signal in this variable and it may not be relevant.
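To make the scaling behavior concrete: when values are linearly rescaled to span [1, max], the largest optimized value will always sit exactly at the specified maximum, regardless of how large that maximum is. The sketch below is not ResistanceGA's source code, just a hypothetical illustration of this kind of min-max rescaling (the function name and example values are invented):

```python
def rescale_resistance(values, max_value=500.0):
    """Linearly rescale a set of raw resistance values to span [1, max_value]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        # Flat surface: every category collapses to the minimum resistance.
        return [1.0 for _ in values]
    return [1.0 + (v - lo) * (max_value - 1.0) / (hi - lo) for v in values]

# Invented raw values for four categories of a categorical surface.
raw = [3.2, 47.0, 150.9, 9.4]
scaled = rescale_resistance(raw, max_value=500.0)
# The smallest raw value maps to 1 and the largest maps to max_value,
# so *some* category always receives the ceiling after rescaling.
```

Under this reading, one category landing on the maximum is a guaranteed artifact of the rescaling rather than evidence that the ceiling is too low.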
Thank you for your answer. The surface I am optimizing is not binary but categorical, with many categories. Most of these categories receive similar resistance values between iterations, and only two of them are problematic. However, I suppose the issue remains the same: the signal is not clear enough to distinguish which of these two categories has the high resistance value and which has the low one.
If a land cover class covers little of the landscape, or does not occur between sample locations, the value assigned to it can vary wildly. In these instances the assigned value makes no difference to connectivity, hence the varying estimates.
Okay, thank you, I will keep that in mind.
Hello,
I am using the function `all_combs` to optimize my resistance surfaces and determine which surfaces should be included. In the second appendix of your paper, it says about multiple-surface optimization:
"If the optimized resistance values are near the maximum value specified in GA.prep, it is recommended that you increase the maximum value and rerun the optimization."
Since at least one category of at least one surface always received the maximum possible value, I increased this value first from 500 to 1000 and then to 2000. However, one category still receives the highest value. In another issue on GitHub, I read a response from you saying that surfaces are scaled to have a minimum of 1 and a maximum of whatever value was chosen. Does this mean that the remark in the supplementary material no longer applies, or did I misunderstand it?
The reason I wanted to adjust these parameters is that for two categories, the variance in optimized values between runs is much larger than for the other categories. Specifically, if category A receives a very low optimized value, then category B gets a very high value, but this can be the other way around in the next run. Is there a way to handle this problem, or at least to decide which of the two categories is more likely to actually have a low resistance value?
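One way to make this instability concrete is to tabulate the optimized value of each category across independent runs and flag categories whose values vary strongly, e.g. with a coefficient of variation. A minimal sketch of that bookkeeping, with invented category names and run values (this is not ResistanceGA output):

```python
from statistics import mean, stdev

# Hypothetical optimized values for each category across four independent runs.
runs = {
    "forest": [1.2, 1.5, 1.3, 1.4],        # stable across runs
    "urban":  [480.0, 455.0, 470.0, 462.0],  # stable across runs
    "cat_A":  [5.0, 440.0, 12.0, 455.0],   # flips between low and high
    "cat_B":  [450.0, 8.0, 430.0, 6.0],    # mirrors cat_A
}

for cat, vals in runs.items():
    cv = stdev(vals) / mean(vals)  # coefficient of variation across runs
    print(f"{cat}: mean={mean(vals):.1f}, CV={cv:.2f}")
```

Categories like the invented `cat_A`/`cat_B` pair, which trade high and low values, will show both a large CV and a strong negative correlation with each other across runs, which is a useful signature of the identifiability problem described above.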
Thank you very much in advance.