-
The NN with an erf output activation can occasionally output values well beyond the interval [-1, 1]:
```
from jax import random
from neural_tangents import stax
import neural_tangents as nt
impo…
```
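For context, a minimal sketch of this kind of setup (the architecture, data, and use of `nt.predict` are my assumptions, not the reporter's exact code). Note that erf itself maps into (-1, 1), but the infinite-width posterior mean computed by `nt.predict` is a linear combination of kernel entries and is not constrained to that interval:
```
# Sketch only: architecture and data are illustrative assumptions.
import jax.numpy as jnp
from jax import random
from neural_tangents import stax
import neural_tangents as nt

# Network whose final activation is Erf.
_, _, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(1), stax.Erf(),
)

key1, key2 = random.split(random.PRNGKey(0))
x_train = random.normal(key1, (16, 8))
y_train = jnp.sign(random.normal(key1, (16, 1)))  # targets in {-1, +1}
x_test = random.normal(key2, (4, 8))

# Infinite-width NTK prediction at t = infinity (t=None).
predict_fn = nt.predict.gradient_descent_mse_ensemble(
    kernel_fn, x_train, y_train, diag_reg=1e-4)
mean = predict_fn(x_test=x_test, get='ntk')
print(mean.min(), mean.max())  # the posterior mean can fall outside (-1, 1)
```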
-
I have an empirical kernel function that works on a small batch of inputs.
I then decorated the kernel function with the `batch` decorator as below, but it raises an out-of-memory error. I would like to use…
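For reference, a minimal sketch of wrapping an empirical kernel with `nt.batch` (the model, `batch_size`, and data shapes are illustrative assumptions). `store_on_device=False` assembles the full kernel in host memory, which often helps when the batched computation still runs out of device memory:
```
from jax import random
from neural_tangents import stax
import neural_tangents as nt

init_fn, apply_fn, _ = stax.serial(
    stax.Dense(512), stax.Relu(), stax.Dense(1))

# Empirical NTK computed on small tiles of the full dataset.
kernel_fn = nt.empirical_ntk_fn(apply_fn)
batched_kernel_fn = nt.batch(kernel_fn, batch_size=8, store_on_device=False)

key = random.PRNGKey(0)
x = random.normal(key, (64, 32))
_, params = init_fn(key, x.shape)
k = batched_kernel_fn(x, None, params)  # (64, 64) NTK assembled tile by tile
```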
-
Hey!
I was curious about why you chose this as the batch size:
https://github.com/google/neural-tangents/blob/9f2ebc88905c46d60b7c4a9da25636924acc9d45/neural_tangents/utils/batch.py#L611
I'm n…
-
The LLVM compiler pass uses excessive amounts of memory for deep networks constructed like this:
```
stax.serial(*[my_layer] * depth)
```
In fact, the compilation may eventually OOM.
The …
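For context, a minimal sketch that reproduces this kind of deep construction (the layer and depth are illustrative assumptions). Each repeated layer is traced separately, so compile-time memory grows with depth once the kernel computation is jitted:
```
from jax import jit, random
from neural_tangents import stax

depth = 200  # deep enough for compilation cost to dominate
my_layer = stax.serial(stax.Dense(512), stax.Relu())

_, _, kernel_fn = stax.serial(*[my_layer] * depth, stax.Dense(1))

x = random.normal(random.PRNGKey(0), (4, 8))
# XLA unrolls all `depth` layers at compile time.
k = jit(kernel_fn, static_argnums=2)(x, x, 'ntk')
```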
-
In the notebook "Neural Tangents Cookbook", there is a section that draws sample functions from the prior.
Is there a way to draw sample functions from the posterior (conditioned on the training…
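A minimal sketch of one way to do this (the 1-D toy data and architecture are assumptions): `nt.predict.gradient_descent_mse_ensemble` returns the posterior mean and covariance over the test points, and posterior function draws are samples from that Gaussian:
```
import jax.numpy as jnp
from jax import random
from neural_tangents import stax
import neural_tangents as nt

_, _, kernel_fn = stax.serial(
    stax.Dense(512), stax.Erf(), stax.Dense(1))

x_train = jnp.linspace(-3., 3., 10)[:, None]
y_train = jnp.sin(x_train)
x_test = jnp.linspace(-4., 4., 100)[:, None]

# Posterior (conditioned on the training data) at t = infinity.
predict_fn = nt.predict.gradient_descent_mse_ensemble(
    kernel_fn, x_train, y_train, diag_reg=1e-4)
mean, cov = predict_fn(x_test=x_test, get='nngp', compute_cov=True)

# 5 sample functions drawn from the posterior Gaussian.
samples = random.multivariate_normal(
    random.PRNGKey(0), mean[:, 0], cov, shape=(5,))
```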
-
Hello!
I want to implement a custom layer: a Dense layer without a bias term (y = Wx) and with custom weight initialization.
How can I do this? I've looked at https://github.com/google/neural-tang…
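For the no-bias part, a minimal sketch (treating "custom initialization" as a rescaled Gaussian, which is my assumption): `stax.Dense` takes `b_std=None` to omit the bias entirely and `W_std` to scale the weights; anything beyond rescaling would need a new layer written against the stax layer interface:
```
from neural_tangents import stax

# Dense layers with no bias (y = Wx) and rescaled Gaussian weights.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512, W_std=2.0, b_std=None),
    stax.Relu(),
    stax.Dense(1, W_std=1.0, b_std=None),
)
```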
-
For a classification model with `k` classes and `n` datapoints, the NTK should be of size `nk` × `nk`. How would we get that with neural-tangents?
Currently, I'm able to get a `…
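A minimal sketch of one way to get the full output-by-output kernel (the model and shapes are illustrative assumptions): passing `trace_axes=()` to `nt.empirical_ntk_fn` keeps the output axes, giving an `(n, n, k, k)` array that can be reshaped into the `nk` × `nk` matrix:
```
import jax.numpy as jnp
from jax import random
from neural_tangents import stax
import neural_tangents as nt

n, d, k = 6, 10, 3
init_fn, apply_fn, _ = stax.serial(
    stax.Dense(256), stax.Relu(), stax.Dense(k))

key = random.PRNGKey(0)
x = random.normal(key, (n, d))
_, params = init_fn(key, x.shape)

# trace_axes=() keeps the output axes instead of tracing over them.
ntk_fn = nt.empirical_ntk_fn(apply_fn, trace_axes=())
k_full = ntk_fn(x, None, params)  # shape (n, n, k, k)
k_matrix = jnp.transpose(k_full, (0, 2, 1, 3)).reshape(n * k, n * k)
```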
-
Hello,
When I try to compute the NTK of a model with an embedding layer, I get the following warning:
```
/usr/local/lib/python3.10/dist-packages/neural_tangents/_src/empirical.py:2215: UserWarni…
```
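Since the warning text is cut off above, here is only a minimal reproduction sketch of the setup (the plain-JAX embedding model and shapes are my assumptions). The empirical NTK is differentiated with respect to the parameters, while the integer token inputs themselves are not differentiable:
```
from jax import random
import neural_tangents as nt

vocab, dim, n, seq = 100, 16, 4, 8

def apply_fn(params, tokens):
    emb, w = params
    h = emb[tokens].mean(axis=1)  # (n, dim): embed and mean-pool
    return h @ w                  # (n, 1)

k1, k2, k3 = random.split(random.PRNGKey(0), 3)
params = (random.normal(k1, (vocab, dim)), random.normal(k2, (dim, 1)))
tokens = random.randint(k3, (n, seq), 0, vocab)

ntk_fn = nt.empirical_ntk_fn(apply_fn)
ntk = ntk_fn(tokens, None, params)  # (n, n) empirical NTK
```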
-
## In a nutshell
A paper showing that training an infinite-width DNN by gradient descent is equivalent to a linear transformation (i.e., the trained result can be replaced by a first-order Taylor expansion). When the loss function is quadratic, the outputs remain Gaussian throughout training, so the training process can be regarded as a Gaussian process; the paper notes that this is where it differs from Bayesian nets.
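The linearization referred to here, in the usual NTK notation (my reconstruction, not a quote from the paper):
```
f(x; \theta_t) \approx f^{\mathrm{lin}}(x; \theta_t)
  = f(x; \theta_0) + \nabla_\theta f(x; \theta_0) \, (\theta_t - \theta_0)
```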
### Paper link
https://arxiv…
-
We want to have some more test cases. Here are some cool things that work with JAX that would be interesting to port over to functorch.
- [ ] [Neural tangents](https://github.com/google/neural-tange…