YerevaNN / mimic3-benchmarks

Python suite to construct benchmark machine learning datasets from the MIMIC-III 💊 clinical database.
https://arxiv.org/abs/1703.07771
MIT License
796 stars 328 forks

Evaluation error #72

Closed etetteh closed 5 years ago

etetteh commented 5 years ago

Hi Hrayr, it's been a while; I hope you are doing well. This is the command I ran to get the predictions with confidence intervals, but it is not working. Please, I need your help:

python -m mimic3benchmark.evaluation.evaluate_decomp -h --test_listfile data/decompensation/train/listfile.csv --n_iters 15 --save_file decomp_results prediction mimic3newmodels/decompensation/autopytorch/predictions/all.all.csv

I get this error message:

usage: evaluate_decomp.py [-h] [--test_listfile TEST_LISTFILE] [--n_iters N_ITERS] [--save_file SAVE_FILE] prediction

positional arguments:
  prediction

Sincerely,

sparik commented 5 years ago

It's not an error message; it's the help message, printed because of the '-h' option (short for '--help'). Remove '-h' and you should be fine.
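For context, this is standard argparse behavior. A minimal sketch of the parser, with argument names taken from the usage message above (the actual script may define them differently): whenever '-h' appears anywhere on the command line, argparse prints the help text and exits before any evaluation code runs.

```python
import argparse

# Sketch of a parser matching the usage message; defaults are assumptions.
parser = argparse.ArgumentParser(prog="evaluate_decomp.py")
parser.add_argument("--test_listfile", default=None)
parser.add_argument("--n_iters", type=int, default=10000)
parser.add_argument("--save_file", default=None)
parser.add_argument("prediction")

# Passing '-h' in this list would print the help text and call sys.exit(0).
# Without it, parsing proceeds normally (the path here is hypothetical).
args = parser.parse_args(["my_predictions.csv"])
print(args.prediction)  # → my_predictions.csv
```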

etetteh commented 5 years ago

Thank you @sparik. Now I am getting an error for passing the path to the prediction CSV file:

python -m mimic3benchmark.evaluation.evaluate_decomp --test_listfile data/decompensation/train/listfile.csv --save_file decomp_results prediction mimic3newmodels/decompensation/autopytorch/predictions/all.all.csv

usage: evaluate_decomp.py [-h] [--test_listfile TEST_LISTFILE] [--n_iters N_ITERS] [--save_file SAVE_FILE] prediction
evaluate_decomp.py: error: unrecognized arguments: mimic3newmodels/decompensation/autopytorch/predictions/all.all.csv

etetteh commented 5 years ago

This is what happens when I remove the word "prediction":

python -m mimic3benchmark.evaluation.evaluate_decomp --test_listfile data/decompensation/train/listfile.csv --save_file decomp_results mimic3newmodels/decompensation/autopytorch/predictions/all.all.csv

Traceback (most recent call last):
  File "/home/enock/miniconda3/envs/google-crash/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/enock/miniconda3/envs/google-crash/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/enock/final_project/mimic3-benchmarks/mimic3benchmark/evaluation/evaluate_decomp.py", line 69, in <module>
    main()
  File "/home/enock/final_project/mimic3-benchmarks/mimic3benchmark/evaluation/evaluate_decomp.py", line 27, in main
    assert (df['prediction'].isnull().sum() == 0)
AssertionError
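The assertion at line 27 fails when the 'prediction' column contains missing values, which typically happens when the rows of the listfile and the prediction file don't match up. A quick hedged check you can run on your own file before evaluating (the toy DataFrame below stands in for `pd.read_csv(your_prediction_file)`; only the 'prediction' column name is taken from the script):

```python
import pandas as pd

# Toy stand-in for a prediction file with one missing value.
df = pd.DataFrame({"prediction": [0.1, None, 0.7]})

# Count nulls the same way the script's assertion does; a nonzero
# count here explains the AssertionError.
n_null = df["prediction"].isnull().sum()
print(n_null)  # → 1
```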

hrayrhar commented 5 years ago

Note that the word "prediction" was a placeholder for the test prediction CSV file. The command you need to run should look like this:

python -m mimic3benchmark.evaluation.evaluate_decomp --test_listfile data/decompensation/train/listfile.csv --save_file decomp_results.json {PREDICTION-FILE}

The "{PREDICTION-FILE}" should be replaced with the path to prediction file outputted by one of the baselines.

etetteh commented 5 years ago

python -m mimic3benchmark.evaluation.evaluate_decomp --test_listfile data/decompensation/train/listfile.csv --save_file decomp_results.json mimic3newmodels/decompensation/autopytorch/predictions/all.all.csv

I just entered the command above, but it's still not working:

Traceback (most recent call last):
  File "/home/enock/miniconda3/envs/google-crash/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/enock/miniconda3/envs/google-crash/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/enock/final_project/mimic3-benchmarks/mimic3benchmark/evaluation/evaluate_decomp.py", line 69, in <module>
    main()
  File "/home/enock/final_project/mimic3-benchmarks/mimic3benchmark/evaluation/evaluate_decomp.py", line 27, in main
    assert (df['prediction'].isnull().sum() == 0)
AssertionError

hrayrhar commented 5 years ago

I see that you used the train/listfile.csv. Are the predictions done on the train set? If they are done on the test set, you should use the test/listfile.csv file.

etetteh commented 5 years ago

Sure, thanks. I changed train to test, but I'm still getting an error. The most surprising thing is that I am using your evaluation script and still getting an error. I don't know whether it might be a package version conflict. Sorry if I am asking too many questions.

python -m mimic3benchmark.evaluation.evaluate_decomp --test_listfile data/decompensation/test/listfile.csv --save_file decomp_results.json mimic3newmodels/decompensation/autopytorch/predictions/all.all.csv

Traceback (most recent call last):
  File "/home/enock/miniconda3/envs/google-crash/lib/python3.6/runpy.py", line 193, in _run_module_as_main
    "__main__", mod_spec)
  File "/home/enock/miniconda3/envs/google-crash/lib/python3.6/runpy.py", line 85, in _run_code
    exec(code, run_globals)
  File "/home/enock/final_project/mimic3-benchmarks/mimic3benchmark/evaluation/evaluate_decomp.py", line 69, in <module>
    main()
  File "/home/enock/final_project/mimic3-benchmarks/mimic3benchmark/evaluation/evaluate_decomp.py", line 35, in main
    data[:, 0] = np.array(df['prediction'])
ValueError: could not convert string to float: '[ 0.]'
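This ValueError suggests the 'prediction' column holds strings like '[ 0.]' (a printed one-element NumPy array) rather than plain floats, so the cast at line 35 fails. A hedged sketch of one way to clean such a column before rerunning the evaluation (the toy DataFrame stands in for `pd.read_csv(your_prediction_file)`):

```python
import pandas as pd

# Toy example of predictions saved as stringified one-element arrays,
# matching the string in the error message ('[ 0.]').
df = pd.DataFrame({"prediction": ["[ 0.]", "[ 0.83]", "[ 1.]"]})

# Strip the surrounding brackets/spaces and cast to float, so that
# numeric values reach `data[:, 0] = np.array(df['prediction'])`.
df["prediction"] = df["prediction"].str.strip("[] ").astype(float)

print(df["prediction"].tolist())  # → [0.0, 0.83, 1.0]
```

A more robust fix is to save the predictions as scalars in the first place (e.g. `float(pred[0])` per row) when writing the CSV.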