-
See https://app.honeybadger.io/projects/50046/faults/84341000
A publication has a field whose size is too large for the ORCID API. Investigate by looking at the publication and seeing which field it …
-
It would be good to have a mechanism for enforcing penalties on the leaf tensors; these penalty terms would be summed together with the loss when optimizing a tensor expression. A great example use case is a penalty on the u…
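As a rough sketch of the idea (plain NumPy, not this library's API; the factorization, penalty weight `lam`, and learning rate `lr` are all hypothetical), a leaf-tensor penalty would simply be added to the reconstruction loss, so its gradient shows up in that leaf's update:

```python
import numpy as np

def factorize(T, rank=2, lam=0.1, lr=0.01, steps=500, seed=0):
    """Toy sketch: factorize T ≈ A @ B by gradient descent, adding an
    L2 penalty on the leaf tensor A to the loss. `lam` is a hypothetical
    penalty weight; none of this mirrors the library's actual interface."""
    rng = np.random.default_rng(seed)
    A = rng.normal(size=(T.shape[0], rank))
    B = rng.normal(size=(rank, T.shape[1]))
    losses = []
    for _ in range(steps):
        R = A @ B - T                            # reconstruction residual
        losses.append((R ** 2).sum() + lam * (A ** 2).sum())
        A -= lr * (2 * R @ B.T + 2 * lam * A)    # gradient includes the penalty
        B -= lr * (2 * A.T @ R)                  # B carries no penalty here
    return A, B, losses

T = np.random.default_rng(1).normal(size=(5, 4))
A, B, losses = factorize(T)
```

The point of the mechanism would be that the user attaches the penalty to a specific leaf, and only that leaf's gradient picks up the extra term.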
-
Example: https://mila.quebec/en/publications/
It would be nice to reuse the same code as on the Mila website. Not sure whether that's 'easily' possible via RTD.
-
**Describe the bug**
Accessing elements or slices of a vectorized factorization model fails.
**To Reproduce**
```python
A = ff.tensor('A', 5, 2)
B = ff.tensor('B', 2, 4)
i, j, k = ff.indices…
```
-
Currently, all `tensors` in a tensor expression are updated in the gradient-descent step of a factorization model. We should implement a feature so that the user can decide which of the `tensors` (an…
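A minimal sketch of the desired behavior (plain NumPy with a hypothetical `trainable` argument, not the library's API): only the factors the user names get updated, the rest stay frozen at their initial values.

```python
import numpy as np

def fit(T, A, B, trainable=('A', 'B'), lr=0.01, steps=300):
    """Sketch: gradient descent on T ≈ A @ B, updating only the factors
    named in `trainable`. The names 'A'/'B' and this signature are
    illustrative, not an existing interface."""
    A, B = A.copy(), B.copy()
    for _ in range(steps):
        R = A @ B - T              # reconstruction residual
        if 'A' in trainable:
            A -= lr * 2 * R @ B.T  # skipped entirely when A is frozen
        if 'B' in trainable:
            B -= lr * 2 * A.T @ R
    return A, B

rng = np.random.default_rng(0)
T = rng.normal(size=(5, 4))
A0 = rng.normal(size=(5, 2))
B0 = rng.normal(size=(2, 4))
A1, B1 = fit(T, A0, B0, trainable=('B',))  # A stays fixed, only B moves
```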
-
**Describe the bug**
Our current `einop` implementation depends on Fortran-order reshaping (`order='F'` in `numpy.reshape` / `jax.numpy.reshape`), which PyTorch does not implement. Simply removing the order specifier lea…
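One possible workaround (shown here in NumPy for verifiability; whether it is the right fix for `einop` is an open question) is that an `order='F'` reshape can be emulated with only C-order primitives, which PyTorch does provide: reverse all axes, do a C-order reshape to the reversed target shape, then reverse the axes again.

```python
import numpy as np

def reshape_f(a, shape):
    # Emulate np.reshape(a, shape, order='F') using only C-order operations,
    # i.e. the primitives available in PyTorch: reverse all axes, C-order
    # reshape to the reversed target shape, reverse the axes again.
    return a.transpose().reshape(tuple(shape)[::-1]).transpose()

a = np.arange(24).reshape(2, 3, 4)
assert np.array_equal(reshape_f(a, (4, 6)),
                      np.reshape(a, (4, 6), order='F'))
```

In PyTorch the same three steps would be a `permute`, a `reshape`, and another `permute`, at the cost of an extra copy when the input is not contiguous.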
-
Having an infix matrix `dot` operator would make writing Nx code much nicer. It's one of the most common operations and not having an infix operation makes many formula much more difficult to read (or…
-
## 🐛 Bug
`torch.lu` returns a 1-indexed pivot vector, which is inconsistent with, for example, `scipy.linalg`. This behavior is correctly documented, though.
## To Reproduce
Steps to reprodu…
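To make the off-by-one concrete without needing torch installed, here is a plain-NumPy helper (hypothetical, part of neither library) that applies a LAPACK-style row-swap pivot vector; the same factorization reported 1-indexed (torch's convention) and 0-indexed (scipy's) yields the same permutation once the shift is accounted for:

```python
import numpy as np

def pivots_to_perm(piv, one_indexed=False):
    """Convert a LAPACK-style row-swap pivot vector to a permutation.

    piv[i] records that row i was swapped with row piv[i] during the
    factorization. torch.lu reports these 1-indexed (LAPACK convention);
    scipy.linalg.lu_factor reports them 0-indexed."""
    piv = np.asarray(piv)
    if one_indexed:
        piv = piv - 1  # shift to 0-indexed before applying the swaps
    perm = np.arange(len(piv))
    for i, p in enumerate(piv):
        perm[[i, p]] = perm[[p, i]]
    return perm

# Same swaps, two conventions: both give the permutation [1, 0, 2].
assert np.array_equal(pivots_to_perm([2, 2, 3], one_indexed=True),
                      pivots_to_perm([1, 1, 2]))
```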
-
I guess the title says everything.
-
Hi,
The gradient operation in miniapps/navier could be made significantly more efficient. When using quadrilateral elements, `grad_hat` was calculated using `MultAtB(loc_data_mat, dshape, grad…