Closed: oyvindeide closed this issue 1 month ago
Is the example

```
npv: 123456.789
Total normalized objective: 123.456
```

from an actual case? I could not reproduce the issue as described, but I could easily reproduce it if I change the normalization factor in the objective_functions section, for example:
```yaml
objective_functions:
  - name: npv
    normalization: 0.001
```
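For reference, with that normalization the numbers in the example above are consistent with the objective simply being multiplied by the normalization factor; a quick check in Python (plain multiplication is my reading here, not a formula confirmed from the Everest source):

```python
# Assumption: Total normalized objective = npv * normalization.
npv = 123456.789
normalization = 0.001
print(npv * normalization)  # 123.456789, displayed as 123.456
```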
The behaviour of the objective function npv and the reported Total normalized objective is likely correct. It was understood that they would report the results differently if there were failed simulations.
The intention of the issue is to draw attention to cases where perturbations or simulations fail, which might lead to different values for the objective function being reported in the console.
I just created an example that replicates the behavior.
```
===================== Running forward models (Batch #0) ======================
Waiting: 0 | Pending: 0 | Running: 0 | Complete: 3 | FAILED: 0
well_constraints: 0/15/0 | Success: 0-14
add_templates: 0/15/0 | Success: 0-14
schmerge: 0/15/0 | Success: 0-14
eclipse100: 12/ 3/0 | Success: 2, 9-10
strip_dates: 0/ 3/0 | Success: 2, 9-10
npv: 0/ 3/0 | Success: 2, 9-10
```
```
====================== Optimization progress (Batch #0) ======================
well_rate_A5-1: 0
well_rate_A5-2: 0
well_rate_A5-3: 0
well_rate_A6-1: 0
well_rate_A6-2: 0
well_rate_A6-3: 0
npv: 2.3461e+10
Total normalized objective: 5.8175
```
The output of the run can be found here: /project/everest/users/tup/drogon_source_10real/everest/output/DROGON_WR_BREAK2/, and the simulation output here: /scratch/everest/users/tup/DROGON_WR_BREAK2/.
Unfortunately, I limited the max runtime too strictly, so it only ran for 2 batches.
I don't have access to /project/everest/users/tup/drogon_source_10real/everest/output/DROGON_WR_BREAK2/
or /scratch/everest/users/tup/DROGON_WR_BREAK2/
Hi @DanSava, you do now 👍.
After a chat with @tup1985 about the issue, we identified that this happens when an Everest forward model run hits the max_runtime limit for the simulation and the simulation_folder is populated with data from a previous run.
I will create one issue for adding a warning when the simulation folder used for the optimization is not empty, and another for better error reporting when the max_runtime limit is reached. The simulation output folder does not contain any ERROR file when that happens, and it is not easy to find in the logs what exactly happened. A sketch of the warning check is shown below.
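A minimal sketch of what the non-empty-folder warning could look like (the function name and warning text are hypothetical, not Everest's actual API):

```python
import warnings
from pathlib import Path


def warn_if_simulation_folder_not_empty(simulation_folder: str) -> None:
    """Hypothetical helper: warn when the optimization's simulation
    folder already holds data, since stale files from a previous run
    can be picked up if a forward model is killed by max_runtime
    before it writes fresh output."""
    folder = Path(simulation_folder)
    if folder.is_dir() and any(folder.iterdir()):
        warnings.warn(
            f"Simulation folder '{folder}' is not empty; "
            "results may be read from a previous run."
        )
```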
This issue should be resolved by: https://github.com/equinor/ert/issues/8759
Will close this then, as it is a combination of: https://github.com/equinor/ert/issues/8759 and https://github.com/equinor/ert/issues/8760
Issue
The values displayed in the console after a batch is completed and the objectives are computed are confusing. When the batch has no failed realizations, the two values coincide; otherwise, the objective is adjusted for the fraction of successful realizations.
No failed jobs are reported, no ERROR files are created in the runpath, and the optimization moves on to the next iterations. This issue occurs consistently and has also been reported by other users.
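To illustrate the suspected adjustment, here is a minimal sketch assuming the reported total is the sum of per-realization normalized objectives divided by the total number of realizations, so failed realizations effectively contribute zero (this formula is my assumption, not confirmed from the Everest source):

```python
def total_normalized_objective(successful_values, n_total):
    # Average over all realizations; failed ones contribute nothing,
    # so the reported total shrinks with the failure rate.
    return sum(successful_values) / n_total


# With no failures the two reported numbers coincide:
print(total_normalized_objective([123.456] * 15, 15))  # 123.456
# With 3 of 15 realizations successful, the total is scaled by 3/15:
print(total_normalized_objective([123.456] * 3, 15))   # 24.6912
```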
One proposal is to display the number of successful realizations near the summary of the reported functions:
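For example, the Batch #0 summary above could read along these lines (the format is illustrative only, using the 3-of-15 successful realizations from that run):

```
npv: 2.3461e+10 (3/15 realizations successful)
Total normalized objective: 5.8175
```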