dilyabareeva/quanda: A toolkit for quantitative evaluation of data attribution methods. https://quanda.readthedocs.io (MIT License)
Debugging Metrics & Benchmarks #158
Closed by dilyabareeva 2 months ago

dilyabareeva commented 2 months ago
- Closes #116 (set filters where necessary; set the defaults according to the papers)
- Remove `model_id` and `cache_dir` from the base explainer signature
- Make the benchmark `evaluate` method take the args `explainer`, `expl_kwargs`, and `batch_size`
- Some bug fixes in the metrics
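The signature changes above can be sketched as follows. This is a minimal, self-contained illustration of the described `evaluate(explainer, expl_kwargs, batch_size)` shape, not quanda's actual implementation: the `Explainer` and `Benchmark` classes, their internals, and the dummy data are all hypothetical stand-ins.

```python
class Explainer:
    """Stand-in explainer. Per this PR, `model_id` and `cache_dir` are
    no longer part of the base explainer signature."""

    def __init__(self, model, **kwargs):
        self.model = model
        self.kwargs = kwargs

    def explain(self, batch):
        # Dummy attribution: one score per item in the batch.
        return [0.0 for _ in batch]


class Benchmark:
    """Stand-in benchmark whose `evaluate` takes the explainer class,
    its keyword arguments, and a batch size, as described in the PR."""

    def evaluate(self, explainer, expl_kwargs=None, batch_size=8):
        expl_kwargs = expl_kwargs or {}
        # The benchmark instantiates the explainer itself; callers only
        # pass the class and its kwargs.
        expl = explainer(model=None, **expl_kwargs)
        data = list(range(20))  # dummy evaluation data
        scores = []
        for i in range(0, len(data), batch_size):
            scores.extend(expl.explain(data[i:i + batch_size]))
        return sum(scores) / len(scores)


bench = Benchmark()
score = bench.evaluate(Explainer, expl_kwargs={"temperature": 1.0}, batch_size=4)
print(score)  # 0.0 for the dummy explainer
```

The design point: moving `model_id`/`cache_dir` out of the explainer constructor keeps the explainer signature uniform, so a benchmark can instantiate any explainer from just the class and `expl_kwargs`.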