-
I didn't do that as part of the gradient work in https://github.com/terrastruct/d2/pull/2120.
Gauging demand before implementing.
-
Thanks for making Nx!
I tried to use `value_and_grad` on a function that takes two inputs: a vectorized tensor and a non-vectorized tensor.
```elixir
defmodule Foo do
  import Nx.Defn

  defn…
```
-
I use OM when riding a bicycle.
I would like to see a feature added to display gradients when giving bicycle route directions.
The current version of OM only shows the gradient when navigating a bic…
-
Hey Team,
Not really an issue.
I'm trying to convert old aesthetic gradients created for Auto1111 into something that Comfy would understand.
What would the community recommend as best practices…
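For context, a minimal inspection sketch, assuming the Auto1111 aesthetic gradients are ordinary `torch.save` artifacts (the path and key handling here are guesses, not Comfy's API):

```python
import torch

# Hypothetical path to an Auto1111 aesthetic-gradient file; these are assumed
# to be plain torch.save checkpoints, so loading on CPU should reveal the layout.
path = "aesthetic_embeddings/my_style.pt"

obj = torch.load(path, map_location="cpu")

# The file may hold a bare embedding tensor or a dict wrapping one; print
# whatever is there before deciding how to map it into a Comfy workflow.
if isinstance(obj, dict):
    for key, value in obj.items():
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value)
        print(key, shape)
else:
    print(type(obj), getattr(obj, "shape", None))
```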
-
Materials that render with the toon deferred shader use color ramps in monolib/shader. This can probably be represented as a color ramp in the material with values based on the selected shader databas…
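To pin down what "a color ramp in the material" could mean in practice, here is a generic sketch (not this project's actual data layout): a ramp stored as sorted (position, color) stops and sampled by a shading value with piecewise-linear interpolation.

```python
# Generic color-ramp evaluation, independent of monolib/shader's real format.
def sample_ramp(stops, t):
    """stops: sorted list of (position, (r, g, b)); t: shading value in [0, 1]."""
    if t <= stops[0][0]:
        return stops[0][1]
    for (p0, c0), (p1, c1) in zip(stops, stops[1:]):
        if t <= p1:
            w = (t - p0) / (p1 - p0)
            return tuple(a + w * (b - a) for a, b in zip(c0, c1))
    return stops[-1][1]

# Example: a toon-style ramp with a hard step from shadow color to lit color.
ramp = [(0.0, (0.2, 0.2, 0.3)), (0.5, (0.2, 0.2, 0.3)), (0.55, (1.0, 0.95, 0.9))]
print(sample_ramp(ramp, 0.6))
```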
-
Hi there!
I am trying to train a DP diffusion model with an embedding layer defined as `nn.Embedding(10, 256)` using the default MixOpt mode. However, the backward pass fails when the activation s…
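In case it helps narrow things down, here is a standalone per-sample-gradient check for the embedding layer using plain `torch.func` (just a sketch of the underlying operation, not the MixOpt code path; only the `nn.Embedding(10, 256)` shape comes from the report above, the batch/sequence sizes are made up):

```python
import torch
import torch.nn as nn
from torch.func import functional_call, grad, vmap

# Per-sample gradients for an embedding layer via vmap(grad(...)).
emb = nn.Embedding(10, 256)
params = dict(emb.named_parameters())

def loss_fn(params, idx):
    out = functional_call(emb, params, (idx,))
    return out.sum()

batch_idx = torch.randint(0, 10, (4, 7))  # 4 samples, 7 token ids each
per_sample_grads = vmap(grad(loss_fn), in_dims=(None, 0))(params, batch_idx)
print(per_sample_grads["weight"].shape)  # expected: (4, 10, 256)
```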
-
Hi, thanks for your great work. I set up the training with vmamba2, but after a period of time the gradients turned into NaN. Even with gradient clipping, NaN still occurs. Have you encountered t…
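Not VMamba-specific, but a small debugging sketch that usually helps localize this kind of failure (names are placeholders):

```python
import torch

# Point autograd at the op that first produces a non-finite value; this is
# slow, so enable it only while reproducing the failure.
torch.autograd.set_detect_anomaly(True)

def report_bad_grads(model, step):
    """Call after loss.backward() and before clipping / optimizer.step()."""
    for name, param in model.named_parameters():
        if param.grad is not None and not torch.isfinite(param.grad).all():
            print(f"step {step}: non-finite gradient in {name}")
```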
-
I think that when `packed=True` is passed to the trainer, the gradient is wrong immediately after densification. The densification duplicates gaussians and prunes them, leading to a change in the order…
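To make the ordering concern concrete, a toy illustration (not gsplat's actual code): any per-gaussian buffer that is not remapped with the same duplicate/prune indices ends up describing the wrong gaussians.

```python
import torch

# Stand-ins for per-gaussian parameters and a per-gaussian accumulator that
# densification should keep aligned row-for-row.
means = torch.arange(5.0)
grad_accum = torch.arange(5.0) * 10

keep = torch.tensor([True, False, True, True, True])  # prune gaussian 1
dup = torch.tensor([3])                               # duplicate gaussian 3

means = torch.cat([means[keep], means[dup]])
# If the accumulator is left untouched, row i of grad_accum no longer refers
# to the same gaussian as row i of means after the reorder.
print(means)       # tensor([0., 2., 3., 4., 3.])
print(grad_accum)  # tensor([ 0., 10., 20., 30., 40.])  <- stale ordering
```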
-
## Issue description
Trying to compute the Jacobian of a Keras model returns zeros.
## Code example
```python
import keras
import torch

model = keras.Sequential(
    [keras.layers.De…
```
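For comparison, a self-contained sketch of one way to take the Jacobian through a Keras 3 model on the torch backend (assumes `KERAS_BACKEND=torch`; the layer sizes are arbitrary and unrelated to the snippet above):

```python
import keras
import torch

# Small model built up front so its weights exist before differentiation.
model = keras.Sequential([
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(3),
])
model.build((None, 4))

x = torch.randn(1, 4)
jac = torch.autograd.functional.jacobian(lambda inp: model(inp), x)
print(jac.shape)  # (1, 3, 1, 4); an all-zero result would point at a detached graph
```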
-
The default algorithm of the Optim minimizer is `NelderMead()`, which is a gradient-free method. I'd like to explore gradient-based algorithms, such as `GradientDescent()` or `LBFGS()`.
Motivation:…