-
Hello,
I am getting the following exception when I try to wrap a schedule-free optimizer with `optax.MultiSteps`. Can you help me?
```python
learning_rate_fn = optax.schedules.warmup_constant_sc…
```
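For reference, a minimal sketch of the combination being attempted, with everything past the truncation assumed (the schedule values, the `optax.contrib.schedule_free` wrapper, and the `every_k_schedule` are placeholders, not the original code):

```python
import jax
import jax.numpy as jnp
import optax

# Assumed reconstruction: warmup-constant schedule -> schedule-free
# optimizer -> MultiSteps for gradient accumulation.
learning_rate_fn = optax.schedules.warmup_constant_schedule(
    init_value=0.0, peak_value=1e-3, warmup_steps=100
)
# schedule-free replaces the base optimizer's momentum (assumed setup).
base = optax.adamw(learning_rate=learning_rate_fn, b1=0.0)
opt = optax.MultiSteps(
    optax.contrib.schedule_free(base, learning_rate=1e-3),
    every_k_schedule=4,
)

params = {"w": jnp.zeros(3)}
state = opt.init(params)
grads = jax.tree_util.tree_map(jnp.ones_like, params)
updates, state = opt.update(grads, state, params)
params = optax.apply_updates(params, updates)
```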
-
[jaxopt is being merged into optax](https://github.com/google/jaxopt?tab=readme-ov-file#jaxopt). All [the features we use](https://github.com/talmolab/stac-mjx/blob/main/stac_mjx/stac_base.py#L231-L23…
-
Optax has a function [`multi_transform`](https://optax.readthedocs.io/en/latest/api/combining_optimizers.html#optax.multi_transform), which is nice for using multiple optimizers. [See here](https://co…
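For context, a minimal self-contained sketch of how `multi_transform` routes parameter groups to different optimizers (the parameter names and optimizer choices below are illustrative, not taken from the linked colab):

```python
import jax
import jax.numpy as jnp
import optax

params = {"encoder": jnp.zeros(3), "head": jnp.zeros(2)}

# param_labels mirrors the params pytree and names the transform per leaf.
opt = optax.multi_transform(
    {"slow": optax.sgd(1e-3), "fast": optax.adam(1e-1)},
    param_labels={"encoder": "slow", "head": "fast"},
)

state = opt.init(params)
grads = jax.tree_util.tree_map(jnp.ones_like, params)
updates, state = opt.update(grads, state, params)
params = optax.apply_updates(params, updates)
```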
-
I would like to know the best way to freeze parameters in a model using `nnx` and `optax` (https://flax.readthedocs.io/en/latest/guides/training_techniques/transfer_learning.html#optax-multi-t…
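One pattern from that guide, sketched here over a plain pytree (the tree layout and label function are hypothetical, and with `nnx` you would first pull out the parameter state, e.g. via `nnx.split`): route frozen parameters to `optax.set_to_zero` inside `multi_transform`.

```python
import jax
import jax.numpy as jnp
import optax

params = {"backbone": {"w": jnp.ones((3, 3))}, "head": {"w": jnp.ones((3, 1))}}

# Freeze the backbone by routing it to set_to_zero, which emits zero updates.
def label_fn(params):
    return jax.tree_util.tree_map_with_path(
        lambda path, _: "frozen" if path[0].key == "backbone" else "trainable",
        params,
    )

opt = optax.multi_transform(
    {"trainable": optax.adamw(1e-3), "frozen": optax.set_to_zero()},
    label_fn,
)
state = opt.init(params)
grads = jax.tree_util.tree_map(jnp.ones_like, params)
updates, state = opt.update(grads, state, params)
# Backbone updates are all zeros; the head gets real adamw updates.
params = optax.apply_updates(params, updates)
```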
-
https://github.com/google-deepmind/optax/blob/main/examples/lbfgs.ipynb
-
Hi,
I am having trouble implementing optax's L-BFGS with equinox types. I am trying to run a linear regression model using this notebook: https://github.com/ubcecon/ECON622/blob/master/lectures/lectures/…
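For context, a sketch of how the optax L-BFGS loop can be wired to an equinox module (the toy `Linear` module and data are assumptions, not from the notebook): partition the module with `eqx.partition` so the objective is a function of the array leaves only, then follow the `value_fn`/`value_and_grad_from_state` pattern from the optax docs.

```python
import equinox as eqx
import jax
import jax.numpy as jnp
import optax

class Linear(eqx.Module):
    w: jax.Array
    b: jax.Array

    def __call__(self, x):
        return x @ self.w + self.b

model = Linear(w=jnp.zeros(2), b=jnp.zeros(()))
x, y = jnp.ones((8, 2)), jnp.ones((8,))

# The linesearch re-evaluates the objective, so express the loss as a
# function of the array-only part of the module.
params, static = eqx.partition(model, eqx.is_array)

def loss_fn(p):
    m = eqx.combine(p, static)
    return jnp.mean((jax.vmap(m)(x) - y) ** 2)

opt = optax.lbfgs()
state = opt.init(params)
value_and_grad = optax.value_and_grad_from_state(loss_fn)
for _ in range(20):
    value, grad = value_and_grad(params, state=state)
    updates, state = opt.update(
        grad, state, params, value=value, grad=grad, value_fn=loss_fn
    )
    params = optax.apply_updates(params, updates)
```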
-
I am trying to use L-BFGS and related optimizers with nnx + optax, but am running into trouble. It might be that `optax` has a slightly different optimization interface in those cases: https://optax.rea…
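The interface is indeed different for L-BFGS: `optax.lbfgs()` is a `GradientTransformationExtraArgs`, so `update` additionally takes `value`, `grad`, and `value_fn` for its linesearch, and `optax.value_and_grad_from_state` reuses evaluations cached in the optimizer state. A sketch with a toy `nnx.Linear` (the model, data, and iteration count are assumptions):

```python
from flax import nnx
import jax.numpy as jnp
import optax

model = nnx.Linear(2, 1, rngs=nnx.Rngs(0))
x, y = jnp.ones((8, 2)), jnp.ones((8, 1))

# optax.lbfgs wants a pure function of a params pytree, so split the
# module into its static graphdef and its array state first.
graphdef, params = nnx.split(model)

def loss_fn(params):
    m = nnx.merge(graphdef, params)
    return jnp.mean((m(x) - y) ** 2)

opt = optax.lbfgs()
state = opt.init(params)
value_and_grad = optax.value_and_grad_from_state(loss_fn)
for _ in range(20):
    value, grad = value_and_grad(params, state=state)
    # Unlike first-order optimizers, update also needs value, grad and
    # the objective itself so the linesearch can re-evaluate it.
    updates, state = opt.update(
        grad, state, params, value=value, grad=grad, value_fn=loss_fn
    )
    params = optax.apply_updates(params, updates)
```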
-
It seems like the first call to `step` of the `GradientDescent` optimizer doesn't perform the step operation. I didn't check if this occurs for other optimizers or do other digging, but can do so if t…
-
Hi,
I am trying to optimize a couple of parameters with different sizes and associated optimizers in a `jax.lax.while_loop`.
To achieve that, I used `multi_transform` and `set_to_zero…
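For context, a minimal sketch of two differently sized parameter groups, each with its own optimizer, stepped inside `jax.lax.while_loop` (shapes, optimizers, and the loss are made up; `optax.set_to_zero` would slot in as one of the labelled transforms to freeze a group):

```python
import jax
import jax.numpy as jnp
import optax

params = {"a": jnp.zeros(3), "b": jnp.zeros(5)}
opt = optax.multi_transform(
    {"a": optax.adam(1e-2), "b": optax.sgd(1e-1)},
    param_labels={"a": "a", "b": "b"},
)

def loss_fn(p):
    return jnp.sum(p["a"] ** 2) + jnp.sum((p["b"] - 1.0) ** 2)

def cond_fn(carry):
    step, _, _ = carry
    return step < 100

def body_fn(carry):
    step, params, state = carry
    grads = jax.grad(loss_fn)(params)
    updates, state = opt.update(grads, state, params)
    return step + 1, optax.apply_updates(params, updates), state

# The optimizer state rides along in the loop carry.
_, params, _ = jax.lax.while_loop(cond_fn, body_fn, (0, params, opt.init(params)))
```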
-
Dropout really seems to be the bane of equinox. Loose follow-up of #681 - effectively, I'm trying to fix this problem that cropped up a while ago when using `optax.MultiSteps` for gradient accumulatio…
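For context, a sketch of the filtered setup such reports usually start from (the module and hyperparameters below are made up): keep `optax.MultiSteps`' state array-only by initializing and updating it on `eqx.filter(model, eqx.is_inexact_array)`.

```python
import equinox as eqx
import jax
import jax.numpy as jnp
import optax

class Net(eqx.Module):
    linear: eqx.nn.Linear
    dropout: eqx.nn.Dropout

    def __call__(self, x, key):
        return self.dropout(self.linear(x), key=key)

model = Net(
    linear=eqx.nn.Linear(4, 4, key=jax.random.PRNGKey(0)),
    dropout=eqx.nn.Dropout(0.5),
)

# MultiSteps stores a copy of the updates pytree in its state, so give it
# only the inexact-array leaves of the module.
opt = optax.MultiSteps(optax.adamw(1e-3), every_k_schedule=4)
opt_state = opt.init(eqx.filter(model, eqx.is_inexact_array))

@eqx.filter_value_and_grad
def loss_fn(model, x, keys):
    return jnp.mean(jax.vmap(model)(x, keys) ** 2)

x = jnp.ones((8, 4))
keys = jax.random.split(jax.random.PRNGKey(1), 8)
loss, grads = loss_fn(model, x, keys)
updates, opt_state = opt.update(
    grads, opt_state, eqx.filter(model, eqx.is_inexact_array)
)
model = eqx.apply_updates(model, updates)
```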