I have a question regarding the metrics used for evaluating our model runs. I have an observed value x, and my model produces a global value y, which should be compared with x to decide whether the parameter values produced output similar to the observed data. In this case the metric would look like:

metrics = c("(1 / abs( x - y ))"),

What happens if x == y? Then 1 is divided by 0, which causes an error, right? How should the metric be coded to make sure that when x == y, 1 is divided by 1, so that such cases are evaluated by the GA as a perfect match to the observed data?
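As a side note, in R dividing by zero does not raise an error: `1 / 0` evaluates to `Inf`. An infinite fitness value can still distort the GA's selection, though, so it is worth avoiding. A common workaround (a sketch, assuming the metric string is evaluated as an ordinary R expression) is to add 1 to the denominator, so a perfect match yields exactly 1 and larger deviations smoothly approach 0:

```r
# Division-safe similarity metric:
#   equals 1 when x == y (perfect match),
#   decreases toward 0 as |x - y| grows,
#   and never divides by zero.
safe_metric <- function(x, y) {
  1 / (1 + abs(x - y))
}

safe_metric(5, 5)  # perfect match: 1 / (1 + 0) = 1
safe_metric(5, 7)  # 1 / (1 + 2) = 1/3
```

In the metrics argument this would read `metrics = c("(1 / (1 + abs(x - y)))")`, keeping the same maximization behavior while bounding the fitness in (0, 1].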
Thank you in advance for clarifications!