-
Picks up from discussion in https://github.com/pytorch/vision/pull/4293#discussion_r696471536
## 🚀 Feature
An API for commonly used layers when building models.
## Motivation
A huge code duplic…
-
## 🚀 Feature
Support `torch.linalg.einsum`.
cc @jianyuh @nikitaved @pearu @mruberry @heitorschueroff @walterddr @IvanYashchuk @xwang233 @Lezcano @rgommers @pmeier @asmeurer @leofang @AnirudhDagar …
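For context on the requested semantics: `torch.einsum` already exists, and the request is to expose it under the `torch.linalg` namespace. A minimal sketch of the subscript notation involved, using `numpy.einsum` (which shares the same subscript syntax) so the example is self-contained:

```python
import numpy as np

# einsum subscript notation: "ij,jk->ik" contracts the shared index j,
# i.e. an ordinary matrix multiplication.
a = np.arange(6).reshape(2, 3)
b = np.arange(12).reshape(3, 4)

mm = np.einsum("ij,jk->ik", a, b)   # matrix multiply via index contraction
assert np.array_equal(mm, a @ b)

tr = np.einsum("ii->", np.eye(3))   # repeated index on one operand: trace
assert tr == 3.0
```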
-
## 🐛 Bug
I have benchmarked ECC vs the https://github.com/mys007/ecc implementation.
The code is 3-5 times slower in pytorch_geometric.
I think it is due to the entire pseudo tensor which is give…
-
https://developer.nvidia.com/tensorrt should be able to give significant performance gains when doing inference.
-
1. [Cheat sheet about DL/ML architectures](http://www.asimovinstitute.org/neural-network-zoo/)
2. http://deeplearninggallery.com/ - Deep Learning Gallery - a curated list of awesome deep l…
-
Our current split-K kernels are quite slow. For example, these are the two split-K problems I encountered with a bfloat16 fwd+bwd nanogpt run (both are TST, NN, no epilogue), measured on an A100:
![i…
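For readers unfamiliar with the term: a split-K kernel partitions the reduction (K) dimension of a GEMM across blocks and then sums the partial products, trading an extra reduction pass for more parallelism when M×N is small relative to K. A minimal numpy sketch of that decomposition (illustrative only, not the CUDA kernel discussed above):

```python
import numpy as np

def splitk_matmul(a, b, splits=4):
    """Compute a @ b by splitting the shared K dimension into `splits`
    chunks, doing one partial GEMM per chunk, then reducing the partials —
    the same decomposition a split-K kernel uses on the GPU."""
    k = a.shape[1]
    bounds = np.linspace(0, k, splits + 1, dtype=int)  # chunk boundaries along K
    partials = [a[:, s:e] @ b[s:e, :] for s, e in zip(bounds[:-1], bounds[1:])]
    return np.sum(partials, axis=0)  # final cross-chunk reduction

# Tall-skinny shape: small M, N and large K, where split-K pays off.
a = np.random.rand(8, 1024)
b = np.random.rand(1024, 8)
assert np.allclose(splitk_matmul(a, b), a @ b)
```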
-
I tried to mimic this example https://github.com/danielegrattarola/spektral/blob/master/examples/node_prediction/citation_gcn.py
with custom data (multiple graphs and a BatchLoader) and a regression task,…
-
At this moment, we only support `Dense` layers. This is because `SparseArrays.jl` currently supports only 1D vectors or 2D matrices, so we could try transforming `Conv` layers into dense matrix multip…
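Lowering a `Conv` layer to a dense matrix multiply is the classic im2col trick: stack the sliding windows of the input as rows, so the convolution becomes one matvec against the flattened kernel. A minimal 1-D sketch in Python/numpy (`im2col_1d` is a hypothetical helper name, illustrative only):

```python
import numpy as np

def im2col_1d(x, k):
    """Stack the length-k sliding windows of x as rows, so that
    convolving x with a length-k kernel becomes a dense matvec."""
    n = len(x) - k + 1
    return np.stack([x[i:i + k] for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
w = np.array([1.0, 0.0, -1.0])                  # length-3 kernel

dense = im2col_1d(x, len(w)) @ w                # conv expressed as a matmul
direct = np.convolve(x, w[::-1], mode="valid")  # same op via np.convolve
assert np.allclose(dense, direct)               # both give [-2., -2., -2.]
```

The same idea generalizes to 2-D `Conv` layers, at the cost of materializing the (dense) window matrix.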
-
expect
-
Looking at https://docs.google.com/spreadsheets/d/1lGFf6PLGmBUSMan-YP7Vul4DpRNfn6K8oeCjBILe6uA/edit#gid=857482380, it seems that cuDNN instead of default CUDA can boost lczero performance. I tried to …