Closed alvin-hsu closed 1 year ago
Hi @alvin-hsu! To check, what version of functorch are you on? This specific batch rule should have been added in https://github.com/pytorch/pytorch/pull/82176
Ooh, exciting! I am on functorch 0.2.1 and pytorch 1.12.1. Is the commit you're referencing a nightly build?
If you build both PyTorch and functorch from source, it should work. Sadly, installing pytorch-nightly is not enough, since functorch doesn't have a corresponding nightly. (If you install pytorch-nightly, you can check out the functorch commit corresponding to that build and build functorch from it, but doing so is a little difficult. We can send you some code pointers if you want to try.)
If neither of those is possible for you, we're looking at merging the pytorch and functorch builds within the next couple of weeks; that should make it possible to get a functorch nightly alongside a pytorch nightly build. We can let you know when that's available.
No worries, I think I can manage that. Thank you so much!
I would really appreciate it if someone implemented batched matrix exponentials! This is not currently a bottleneck for me (it would purely be a convenience), but I'd like to file the issue here so that the batching rule can be prioritized, per the following warning:
/Users/ahsu/miniconda3/envs/testenv/lib/python3.9/site-packages/functorch/_src/vmap.py:365: UserWarning: There is a performance drop because we have not yet implemented the batching rule for aten::linalg_matrix_exp. Please file us an issue on GitHub so that we can prioritize its implementation. (Triggered internally at /Users/runner/work/functorch/functorch/functorch/csrc/BatchedFallback.cpp:85.)
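For anyone hitting the same warning, a possible workaround (a sketch, assuming a recent PyTorch where `vmap` lives in `torch.func`) is to skip `vmap` for this op entirely: `torch.linalg.matrix_exp` already accepts batched input natively, so calling it directly on the stacked tensor avoids the slow per-sample fallback.

```python
import torch
from torch.func import vmap  # on older versions: from functorch import vmap

# A batch of 8 random 4x4 square matrices.
A = torch.randn(8, 4, 4)

# Direct call: matrix_exp natively supports leading batch dimensions,
# so no vmap (and hence no batching-rule fallback) is needed.
direct = torch.linalg.matrix_exp(A)

# vmap path: on functorch builds without the batch rule, this emits the
# UserWarning above and falls back to a slow per-example loop, but the
# numerical result is the same.
via_vmap = vmap(torch.linalg.matrix_exp)(A)

print(torch.allclose(direct, via_vmap, atol=1e-6))
```

The two paths should agree numerically; the direct call is simply the faster route until the batching rule lands in a release.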
Thanks!