Closed: Cervangirard closed this issue 1 year ago.
We suggest looking only at the spatial predictions (S_x) and the total biomass (total_abundance). You can get them via fm_model_results$report$S_p and fm_model_results$report$total_abundance.
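A minimal sketch of this extraction, assuming fm_model_results is the object returned by fm_fit_model (names taken from the comment above):

```r
# Pull out only the two outputs suggested for validation,
# rather than comparing the entire model results object.
S_p             <- fm_model_results$report$S_p
total_abundance <- fm_model_results$report$total_abundance
```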
Thanks. We are closing this ticket because the tests pass for a model with a small k and a short period. We will revisit this issue later with a more consistent model.
Add lambda_p and par_b
As a client, I'd like the outputs of the
fm_fit_model
function to behave stably, so that I am warned if outputs differ after changing other functions or updating package dependencies.
Client - Validation
After this is unblocked we plan to validate these points:
time.step_df_output
loc_x
report_output
(looking only at the spatial predictions (S_x) and total biomass (total_abundance))
samp_process_output
Dev - Tech
[ ] Save only what is needed for the outputs
[ ] Unit tests for each result
[ ] => We suggest looking only at the spatial predictions (S_x) and the total biomass (total_abundance). You can get them via fm_model_results$report$S_p and fm_model_results$report$total_abundance.
Problem
We want to write stable unit tests that can, over time, identify real differences in the model's output. Checking the entire set of model results can make the test fail on very small numerical differences. This raises the question: how can we get results from
fm_fit_model
that are stable over time if the parameters are unchanged?
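One way to keep such a test stable, following the suggestion above, is to compare only S_p and total_abundance against stored reference values with a numeric tolerance, so tiny floating-point differences do not trigger failures. A minimal sketch using testthat (the fixture file names, the tolerance, and the arguments to fm_fit_model are placeholders, not the package's actual test setup):

```r
test_that("fm_fit_model spatial predictions and total biomass are stable", {
  # Fit with fixed, small test parameters (small k, short period),
  # as mentioned when closing this ticket.
  fm_model_results <- fm_fit_model(...)

  # Compare only the two stable outputs against references saved
  # from a previously validated run.
  expect_equal(
    fm_model_results$report$S_p,
    readRDS(testthat::test_path("fixtures", "S_p_reference.rds")),
    tolerance = 1e-6
  )
  expect_equal(
    fm_model_results$report$total_abundance,
    readRDS(testthat::test_path("fixtures", "total_abundance_reference.rds")),
    tolerance = 1e-6
  )
})
```

The tolerance value is the key design choice: it should be large enough to absorb platform and dependency-update noise, but small enough that a real change in the model's predictions still fails the test.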