Open alecandido opened 1 year ago
Ok. Maybe we can take care of the rearrangement for asv first, then address issue #41.
It is just a proposal and an improvement; I would not consider it blocking for @liweintu's #41. But if you want to keep going with this one first, that is perfectly feasible (and I'd also personally like to see the report :D)
I see. How about we address issue #41 first to add our first TNet backend quimb into the benchmarking, then take care of this issue for asv in a new PR.
Perfectly fine as well :)
This is the typical setup that works well inside asv. We can rearrange benchmarks in the asv layout, run them in the CI, and publish the report on GitHub Pages, similar to what NumPy is doing: https://pv.github.io/numpy-bench/
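For reference, the asv layout mentioned above means a `benchmarks/` directory of Python files where asv discovers classes and methods whose names start with `time_`. A minimal sketch (the file name, class name, and benchmarked operation here are illustrative, not taken from this repository):

```python
# benchmarks/bench_example.py -- hypothetical file, following asv's
# naming convention: methods prefixed with time_ are timed by asv.

class TimeSuite:
    """A minimal asv benchmark suite."""

    def setup(self):
        # setup() runs before each timing repeat and is excluded
        # from the measurement itself.
        self.data = list(range(10_000))

    def time_sum(self):
        # The body of a time_* method is what asv measures.
        sum(self.data)
```

Running `asv run` would then collect this suite, and `asv publish` generates the static HTML report that could be pushed to GitHub Pages from CI.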