GemmaTuron opened 7 months ago
Quick thought based on internal discussion today (13 Feb 2024).
We should add a test step where predictions are done (a) individually and (b) in batch. We should then check that results are consistently the same (if the model is deterministic).
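A minimal sketch of such a check, assuming a hypothetical `predict_fn` interface that takes a list of inputs and returns a list of numeric outputs (the actual model invocation in Ersilia will differ):

```python
import numpy as np

def check_batch_consistency(predict_fn, inputs, atol=1e-8):
    """Compare one-at-a-time predictions against a single batch call.

    `predict_fn` is a hypothetical stand-in for however the served
    model is actually invoked; it takes a list of inputs and returns
    a list of numeric outputs.
    """
    one_by_one = np.array([predict_fn([x])[0] for x in inputs])
    batched = np.array(predict_fn(inputs))
    return bool(np.allclose(one_by_one, batched, atol=atol))
```

For a deterministic model the two arrays should agree within tolerance; a mismatch hints at state leaking between calls, for example a scaler being refit on the incoming batch.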
This will allow us to capture errors related to, for example, `fit_transform` of a scaler being applied at inference time instead of at training time.
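To illustrate the kind of bug this would catch, here is a toy sketch in plain NumPy that mimics the fit/transform vs `fit_transform` distinction (the values are illustrative only):

```python
import numpy as np

X_train = np.array([0.0, 10.0])
mu, sigma = X_train.mean(), X_train.std()  # statistics fixed at training time

x = np.array([2.0])
correct = (x - mu) / sigma  # transform with training stats -> -0.6

# Bug: recomputing the statistics on the incoming sample (which is
# what calling fit_transform at inference time does) centres the
# single sample onto itself, collapsing the feature to zero.
buggy = (x - x.mean()) / (x.std() or 1.0)  # -> 0.0
```

The buggy path gives a different answer for a single input than it would inside a batch, which is exactly what the individual-vs-batch comparison would flag.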
Hi @DhanshreeA, if I am right, work has been done on the test command. Can you update the issue and point to the commits / issues linked to this one?
Describe the bug.
The test module is currently failing because it is unable to parse the output appropriately. The issue is with the `read_csv` function in `ersilia/publish/test.py`.
I suggest refactoring the test module entirely, starting by reading the output from JSON files rather than dataframes.
Describe the steps to reproduce the behavior
`ersilia test eosid`
Expected behavior.
The test completes without issues.
Screenshots.
No response
Operating environment
Ubuntu 20.04 LTS
Additional context
No response