brain-score / model-tools

Helper functions to extract model activations and translate from Machine Learning to Neuroscience
MIT License

Error encountered while testing a base model. #57

Open efirdc opened 2 years ago

efirdc commented 2 years ago

Hello,

I have encountered an error while testing a base_models.py implementation. Here is the log and the line where it breaks. There is a comment there that may be describing the issue, but I'm not sure how to interpret it.

It looks like this traces back to the behavioral benchmark, specifically here. Is this check assuming the model has a layer named 'logits'? The model I am testing does not have one.

Any advice on how to debug this would be very appreciated.

Thank you, Cory

mschrimpf commented 2 years ago

Hi Cory,

Thanks for pointing out this issue. The check does indeed assume that the model has a layer named 'logits', which causes it to fail.

If you are submitting your model to be tested on the brain benchmarks, you can safely ignore this; the submission should still work. It just won't run on the "engineering"/ML benchmarks.

(Background: we generally test models on an ImageNet benchmark as well to get some sense of their ground-truth performance. As a shortcut, and because the majority of models were trained on ImageNet, we simply query for a logits layer and assume it outputs the ImageNet classes. We are working on automatically training a readout from the penultimate layer so that models not trained on ImageNet can be tested too. Either way, this has no impact on any of the brain benchmarks.)
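To illustrate the behavior described above, here is a minimal sketch of such a check. This is not the actual brain-score code, and all layer names are hypothetical; it only demonstrates why a model without a layer named 'logits' fails the engineering check while the rest of the pipeline is unaffected:

```python
def find_logits_layer(layer_names):
    """Return 'logits' if the model exposes a layer by that name, else None.

    Illustrative only: mimics a check that queries for an ImageNet
    classification head by the conventional name 'logits'.
    """
    return 'logits' if 'logits' in layer_names else None


# Hypothetical layer lists for two models:
imagenet_model_layers = ['conv1', 'block1', 'block2', 'avgpool', 'logits']
custom_model_layers = ['conv1', 'block1', 'block2', 'avgpool', 'fc_head']

print(find_logits_layer(imagenet_model_layers))  # 'logits' -> check passes
print(find_logits_layer(custom_model_layers))    # None -> check fails
```

A model like Cory's, whose classification head has a different name (or no ImageNet head at all), hits the `None` branch, which is why the engineering benchmark errors out while the brain benchmarks still run.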

Please let us know of any further issues with the submission!

Martin