IRT-SystemX / ml4physim_startingkit

Evaluation error #7

Closed: Mleyliabadi closed this issue 10 months ago

Mleyliabadi commented 10 months ago

In notebook 4_how_to_contribute, when executing (@daviddanan):

from lips.evaluation.airfrans_evaluation import AirfRANSEvaluation

evaluator = AirfRANSEvaluation(config_path = BENCH_CONFIG_PATH,
                               scenario = BENCHMARK_NAME,
                               data_path = DIRECTORY_NAME,
                               log_path = LOG_PATH)

# benchmark, observations and predictions are defined in earlier cells of the notebook
observation_metadata = benchmark.train_dataset.extra_data
metrics = evaluator.evaluate(observations=observations,
                             predictions=predictions,
                             observation_metadata=observation_metadata)
print(metrics)

I get the following error:

IndexError                                Traceback (most recent call last)
/home/ubuntu/SYSTEMX/milad/packages/ml4physim_startingkit/4_How_to_contribute.ipynb Cell 56 line 9
      3 evaluator = AirfRANSEvaluation(config_path = BENCH_CONFIG_PATH,
      4                                scenario = BENCHMARK_NAME,
      5                                data_path = DIRECTORY_NAME,
      6                                log_path = LOG_PATH)
      8 observation_metadata = benchmark.train_dataset.extra_data
----> 9 metrics = evaluator.evaluate(observations=observations,
     10                              predictions=predictions,
     11                              observation_metadata=observation_metadata)
     12 print(metrics)

File ~/SYSTEMX/milad/venv/ml4phy/lib/python3.8/site-packages/lips/evaluation/airfrans_evaluation.py:82, in AirfRANSEvaluation.evaluate(self, observations, predictions, observation_metadata, save_path)
     79 self.observation_metadata = observation_metadata
     81 for cat in self.eval_dict.keys():
---> 82     self._dispatch_evaluation(cat)
     84 return self.metrics

File ~/SYSTEMX/milad/venv/ml4phy/lib/python3.8/site-packages/lips/evaluation/airfrans_evaluation.py:100, in AirfRANSEvaluation._dispatch_evaluation(self, category)
     98 if category == self.MACHINE_LEARNING:
     99     if self.eval_dict[category]:
--> 100         self.evaluate_ml()
    101 if category == self.PHYSICS_COMPLIANCES:
    102     if self.eval_dict[category]:

File ~/SYSTEMX/milad/venv/ml4phy/lib/python3.8/site-packages/lips/evaluation/airfrans_evaluation.py:134, in AirfRANSEvaluation.evaluate_ml(self)
    132 pred_pressure = self.predictions["pressure"]
    133 surface_data=self.observation_metadata["surface"]
--> 134 tmp_surface = metric_fun(true_pressure[surface_data.astype(bool)], pred_pressure[surface_data.astype(bool)])
    135 self.metrics[self.MACHINE_LEARNING][metric_name+"_surfacic"]={"pressure": float(tmp)}
    136 self.logger.info("%s surfacic for %s: %s", metric_name, "pressure", tmp_surface)

IndexError: boolean index did not match indexed array along dimension 0; dimension is 35849332 but corresponding boolean dimension is 18515415
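
For context, this is standard NumPy behaviour when a boolean mask and the indexed array have different lengths. A minimal sketch of the failure mode (with made-up sizes, unrelated to the actual data):

import numpy as np

# Boolean masking requires the mask to match the length of the indexed axis.
# The mask here stands in for the "surface" metadata taken from a different dataset.
pressure = np.zeros(10)                 # e.g. pressure field of the evaluated dataset
surface_mask = np.ones(6, dtype=bool)   # mask built from a dataset of a different size

try:
    pressure[surface_mask]
except IndexError as err:
    print(err)  # boolean index did not match indexed array along dimension 0 ...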
daviddanan commented 10 months ago

Oh, indeed, the problem is quite explicit. I am working on it. I suspect it is caused by the observation_metadata passed to the evaluation, which appears to be wrong.

There is an illustration of that here. Normally, I would expect the observation_metadata to be associated with the observation dataset, but that is not the case in the notebook. I expect the same issue to occur in the next cell, where we evaluate the simulator's performance on the OOD dataset. The correction is done; the run is in progress.
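
In substance, the fix is to take extra_data from the same dataset the observations come from (and, for the OOD cell, from the OOD dataset). A minimal sketch of the idea; _test_dataset is only an assumed attribute name here, the exact one exposed by the benchmark may differ:

# Sketch only: use the metadata of the dataset that produced the observations,
# not the training set. The attribute name below is an assumption.
observation_metadata = benchmark._test_dataset.extra_data
metrics = evaluator.evaluate(observations=observations,
                             predictions=predictions,
                             observation_metadata=observation_metadata)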

daviddanan commented 10 months ago

I pushed a correction; can you tell me whether the problem is also solved on your side?

daviddanan commented 10 months ago

Problem solved; closing the issue.

Mleyliabadi commented 10 months ago

I confirm that the problem is solved.