chaosink opened this issue 4 years ago
I did something like this a couple of months ago, here are the results I found back then (with the caveat that I may be using some tools in a dumb way): https://github.com/pettni/autodiff/blob/master/benchmarks/visualization/visualization.ipynb
The forward mode is overall very competitive. Let me know if this is something you would be interested in including in the repository. One headache is installing the other tools for benchmarking; I handled that in the CMake file: https://github.com/pettni/autodiff/blob/master/benchmarks/CMakeLists.txt
Hi Petter,
Wow, this is amazing! Sorry for getting back to you only now (I did not see this over the weekend).
I'm happy to see that the forward mode differentiation algorithm in `autodiff` is as fast as the other libraries and, in some scenarios, even a bit faster.
The reverse mode is not the forte of this library, which explains its subpar performance. My research and development only require forward mode, and thus I have focused a lot more on the types `autodiff::dual` and `autodiff::real` than on `autodiff::var`.
There is still room for improvement/optimization in `autodiff`, and this is happening gradually. Please let me know if you would be interested in publishing your findings on the website (https://autodiff.github.io) (or perhaps in a journal publication!).
Best regards, Allan
@pettni: How about a docker container? That's ideally suited for this sort of reproducibility problem. I have some experience there that I can share.
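A container for this could be fairly small. A hypothetical sketch (the base image, package list, and build layout are all assumptions, not a tested setup):

```dockerfile
# Hypothetical sketch of a reproducible benchmark environment.
# Base image and package choices are assumptions, not a tested setup.
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential cmake git libeigen3-dev \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /work
COPY . /work
RUN cmake -S . -B build -DCMAKE_BUILD_TYPE=Release && cmake --build build
CMD ["./build/benchmarks"]
```

Pinning the base image (and ideally each benchmarked library's version) is what makes the results reproducible later.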
Yeah, between the great syntax and competitive speed I think `autodiff` forward is the best option out there :+1:.
I think it would be good to somehow add benchmarking to `autodiff`, so that future improvements can be tested in a structured way and help it become even better. If you are interested, it would be great if you could outline how you'd like to do that.
Either in this repo (the `benchmark` folder I worked in has been archived), or maybe a separate repo? Changes:
- add benchmarks for the `real` type

I don't have a ton of time right now, but I'd be happy to use some of it to work on this.
I think a new repo, perhaps `autodiff/bench`, would make sense; it can then be pinned to particular versions of `autodiff`, so you can do performance regression by bisection.
@pettni Are all the tools you benchmarked open source and available on linux? If so, we could turn this into a runnable example in a binder environment with some additional effort I think.
Yes, they're all open source and I ran them on Ubuntu 20. I don't have experience with binder; can it be used with a Dockerfile?
Yes and no with binder. In theory you can provide a dockerfile, but it is better if you can install things with apt and friends. Here is a small example of the setup needed: https://github.com/ianhbell/multicomplex/tree/master/binder
I think binder could indeed be the way to go. `autodiff` is available via conda, so we can add it in the `environment.yml` file.
I'll create an `autodiff/bench` repository, which will be used by binder, and give you access.
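As a starting point, the `environment.yml` might look something like the following. The `autodiff` package is the one mentioned above as available via conda; every other entry is an assumption about what the benchmarks would need and should be checked against conda-forge:

```yaml
# Hypothetical binder environment.yml; package names other than
# autodiff are assumptions.
name: autodiff-benchmark
channels:
  - conda-forge
dependencies:
  - autodiff
  - eigen
  - cmake
  - cxx-compiler
  - jupyterlab
```

binder reads this file from the repository root (or a `binder/` folder) and builds the environment automatically, so no Dockerfile is needed in the simple case.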
OK, just created https://github.com/autodiff/benchmark and invited you both.
This would be an interesting project. Unfortunately, I won't have time for this investigation in the coming months. We would need a volunteer.