codeneuro / neurofinder-python

python module for testing neuron finding algorithms
MIT License

add method for comparison to remote datasets #2

Open freeman-lab opened 8 years ago

freeman-lab commented 8 years ago

Currently the evaluate method compares two local results to each other, which is useful. But as suggested by @marius10p, sometimes we want the evaluation to incorporate metadata from the "standard" ground truth datasets.

So one idea is to add an extra method, maybe called benchmark or evaluate-remote, that takes as input just one set of results and the name of a ground truth dataset, fetches both the remote regions and the metadata, and returns the scores.

In other words, we'll have both

neurofinder evaluate a.json b.json

and

neurofinder benchmark 01.00 a.json
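A rough sketch of what the benchmark path could look like, reusing the same comparison metrics the local evaluate path reports. The helper names (load, centers, shapes) are assumptions based on the current README, and the remote URL here is a placeholder, not a real endpoint:

```python
import tempfile
import urllib.request

import neurofinder

# Placeholder for wherever the training ground-truth regions are hosted;
# the real URL scheme would need to be filled in.
REMOTE_BASE = 'https://example.org/neurofinder/training'


def benchmark(dataset, results_file):
    """Score one local results file (e.g. 'a.json') against the remote
    ground truth for a named training dataset (e.g. '01.00')."""
    url = '%s/%s/regions.json' % (REMOTE_BASE, dataset)
    with urllib.request.urlopen(url) as response:
        regions = response.read()

    # stash the fetched regions in a temporary file so the existing
    # loader can be reused unchanged
    with tempfile.NamedTemporaryFile(suffix='.json', delete=False) as tmp:
        tmp.write(regions)
        truth_path = tmp.name

    truth = neurofinder.load(truth_path)
    results = neurofinder.load(results_file)

    # same metrics the local evaluate command reports
    recall, precision = neurofinder.centers(truth, results)
    inclusion, exclusion = neurofinder.shapes(truth, results)

    return {'recall': recall, 'precision': precision,
            'inclusion': inclusion, 'exclusion': exclusion}
```

The CLI entry point would then just parse `benchmark <dataset> <results>` and print this dict, mirroring what evaluate does today.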

Thoughts?

cc @syncrostone

marius10p commented 8 years ago

Sounds good, but maybe do not make it possible to obtain results on the test datasets; otherwise people can easily overfit to them (and we won't know).

I have been using "neurofinder evaluate a.json b.json" on the training datasets, just to get an overall idea of how many ROIs to output.

freeman-lab commented 8 years ago

Yes, oops, I definitely meant only having this for the training data 😄
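To keep the test ground truth off-limits, the benchmark path could simply refuse anything that is not a known training dataset. A minimal sketch; the set of names below is illustrative, not the actual catalogue:

```python
# Illustrative subset of training dataset names; the real list would
# come from wherever the challenge datasets are catalogued.
TRAINING_DATASETS = {'00.00', '00.01', '01.00', '01.01', '02.00', '02.01'}


def check_dataset(dataset):
    """Refuse to benchmark against datasets whose ground truth is withheld."""
    if dataset not in TRAINING_DATASETS:
        raise ValueError(
            'ground truth for %r is not public; benchmark only '
            'supports the training datasets' % dataset)
```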