-
**Describe the bug**
Using the "pause-on-active" config option it says its paused but continues to say "new job" as if its requesting work but not executing it.
**To Reproduce**
"pause-on-active"…
-
I have created an optimisation problem using PEtab.jl. This gives me a gradient, which I have tried to adapt to LikelihoodProfiler's format using
```julia
function loss_grad(p)
    grad = zeros(9)
…
-
The following code yields the identity, as it should:
```julia
function f1(dx, x)
    for i in eachindex(x)
        dx[i] = x[i]^2
    end
end
input = rand(10)
output = similar(input)
sparsity_patte…
-
I'm getting underflow during the reverse pass of an RNN gradient computation. I have configured NumPy to raise an exception when underflow is detected. However, it's very difficult (or maybe impossible) to t…
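For context, the underflow trap described above can be set up with NumPy's real `np.seterr` API; a minimal self-contained sketch, with the RNN itself omitted:

```python
import numpy as np

# Make NumPy raise FloatingPointError on underflow instead of
# silently flushing the result to zero.
old_settings = np.seterr(under="raise")
try:
    tiny = np.array([1e-300])
    tiny * tiny  # 1e-600 underflows double precision
except FloatingPointError as exc:
    print("caught underflow:", exc)
finally:
    np.seterr(**old_settings)  # restore previous error handling
```

The context manager `np.errstate(under="raise")` scopes the same setting to a single block, which can help narrow down which operation in the backward pass underflows.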
-
### Feature description
I have an optimization that is very sensitive to initialization. No idea why. Instead of getting it right with elegant math, I have found I can just try over and over un…
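For reference, the try-over-and-over idea is usually called multi-start (random-restart) optimization; a minimal sketch assuming a SciPy-style local optimizer, where the helper name and the toy loss are illustrative and not part of this project:

```python
import numpy as np
from scipy.optimize import minimize

def multistart_minimize(loss, dim, n_starts=20, low=-5.0, high=5.0, seed=0):
    """Run a local optimizer from several random initializations and
    keep the best result (hypothetical helper, not library code)."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(low, high, size=dim)
        res = minimize(loss, x0)
        if best is None or res.fun < best.fun:
            best = res
    return best

# Toy multimodal loss: global minima wherever every coordinate is +/-1.
best = multistart_minimize(lambda x: np.sum((x**2 - 1.0) ** 2), dim=2)
print(best.fun)
```

Keeping the seed explicit makes the restarts reproducible, which matters when the whole point is that the result depends on initialization.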
-
/cc @sherm1
We would like to capture some sort of baseline data from the cassie benchmark program to help guide development of performance optimizations. I have argued it should be stored out-of-tr…
-
### Description
I'm experiencing very large memory usage in forward mode.
I'm studying AD and wrote a small script to compare memory usage in forward mode vs reverse mode.
I was expecting less u…
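The usual expectation behind this comparison can be sketched with a toy forward mode built on dual numbers (illustrative only, not the script from the report): forward mode carries a (value, derivative) pair through the computation and keeps no tape, whereas reverse mode must record every intermediate for the backward sweep.

```python
class Dual:
    """Toy forward-mode AD value: a (primal, tangent) pair, no tape."""
    def __init__(self, v, d=0.0):
        self.v, self.d = v, d
    def __add__(self, o):
        return Dual(self.v + o.v, self.d + o.d)
    def __mul__(self, o):
        # product rule for the tangent
        return Dual(self.v * o.v, self.d * o.v + self.v * o.d)

def f(x):
    y = x
    for _ in range(3):
        y = y * y  # overall y = x^8
    return y

out = f(Dual(2.0, 1.0))  # seed dx/dx = 1
print(out.v, out.d)      # 2^8 = 256.0 and 8 * 2^7 = 1024.0
```

Each step overwrites `y`, so memory stays constant regardless of loop length; a reverse-mode tape for the same loop grows linearly with the number of iterations.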
-
Currently, our imports are a bit of a mess: all of `tools` is available at the top level, and both `synth`/`synthesis` and `simul`/`simulate` are exposed. We should clean this up, which…
-
Some HPC + AI applications need to differentiate through LU factorization (factor and apply). The gradient of such linear algebra operations [can be derived analytically](https://arxiv.org/abs/1710.08…
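As a concrete illustration, the reverse-mode rule for the apply step x = A^{-1} b (one building block of a factor-and-apply pipeline) can be checked numerically; a hedged sketch in plain NumPy using the standard adjoint identities dL/db = A^{-T} g and dL/dA = -(A^{-T} g) x^T for an incoming cotangent g = dL/dx (variable names here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.normal(size=(n, n)) + n * np.eye(n)   # keep A well-conditioned
b = rng.normal(size=n)
g = rng.normal(size=n)                        # incoming cotangent dL/dx

x = np.linalg.solve(A, b)
lam = np.linalg.solve(A.T, g)                 # lam = A^{-T} g
dL_db = lam
dL_dA = -np.outer(lam, x)

# Finite-difference check of a single entry of dL/dA for L = g . x
eps = 1e-6
E = np.zeros_like(A)
E[1, 2] = eps
x_pert = np.linalg.solve(A + E, b)
fd = (g @ x_pert - g @ x) / eps
print(abs(fd - dL_dA[1, 2]))                  # should be tiny
```

In an LU setting the transposed solve reuses the same factors, since A^{-T} = U^{-T} L^{-T}, so the backward pass costs only extra triangular solves, not a new factorization.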
-
What should we do with domain errors? The easiest solution would be to use std::nan() for the values and gradients, but then errors could not be traced back to their source. Alternatively, we can th…