There are two tables in the Benchmarking section of the README. Benchmarking is also mentioned in the JOSS paper draft, so maybe the tables could be added to the paper as well.
Ideally, there should be a page in the documentation explaining how to reproduce these tables. If that is impossible or impractical, a page with some instructions on evaluating WSKNN response time and memory footprint (probably by using dev/profile_fn.py) would be enough; see the sketch below for the kind of snippet I have in mind.
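Such a page would not need much. Something along the lines of the following standard-library sketch would already be useful. To be clear, this is only an illustration from my side: the `model.recommend()` call and the shape of `test_sessions` are assumptions, and should be replaced with whatever dev/profile_fn.py actually invokes.

```python
import time
import tracemalloc


def profile_recommendations(model, test_sessions):
    """Measure per-query response time and peak memory of recommendation calls.

    `model` is a fitted WSKNN model and `test_sessions` an iterable of
    session inputs; both the method name `recommend` and the input format
    are assumptions here, not the confirmed WSKNN API.
    """
    timings = []
    tracemalloc.start()
    for session in test_sessions:
        start = time.perf_counter()
        model.recommend(session)  # assumed method name; adjust to the real API
        timings.append(time.perf_counter() - start)
    _, peak_bytes = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return {
        "mean_response_time_s": sum(timings) / len(timings),
        "max_response_time_s": max(timings),
        "peak_memory_mb": peak_bytes / 1024 ** 2,
    }
```

Reporting the returned dictionary for each dataset used in the README tables would make the benchmark reproducible by readers.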
The README claims: "Its performance was always very close to the level of fine-tuned neural networks, but it was much easier and faster to train." Is "performance" used here in the same sense as in the Benchmarking tables? Please elaborate and add a citation (I guess one of the Twardowski et al. papers mentioned in the README).
OK, I've done this part. There are some updates in the paper and in the README, and I've created an additional notebook with computational performance measurements.