-
For now, we're doing a full gradient descent pass over the whole dataset (each thread handles one user). That takes a long time and slows convergence. I think we can easily change this to…
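A minimal sketch of the kind of change being suggested, switching full-batch updates to mini-batch SGD (the function and its parameters are illustrative, not this project's actual code):

```python
import numpy as np

def minibatch_sgd(X, y, lr=0.1, batch_size=16, epochs=50, seed=0):
    # Mini-batch SGD for least squares: one cheap update per batch
    # instead of one expensive update per full pass over the dataset.
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    n = len(X)
    for _ in range(epochs):
        order = rng.permutation(n)              # reshuffle every epoch
        for start in range(0, n, batch_size):
            idx = order[start:start + batch_size]
            Xb, yb = X[idx], y[idx]
            grad = 2.0 * Xb.T @ (Xb @ w - yb) / len(idx)
            w -= lr * grad
    return w
```

Each thread would then do many small updates per pass instead of a single full-dataset one, which typically reaches a usable solution far sooner.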
-
I had an issue when trying to perform a training run on the GPU, which appeared to be caused by the reference and predicted data being stored on different devices, leading to errors like `RuntimeError: ind…
-
-
Implement recursive descent disassembly instead of a flat sweep, to improve the accuracy and brevity of the disassembled output.
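The core of the recursive descent approach can be sketched as follows (the toy `Insn` record and `decode` callback are hypothetical, not any real disassembler's API): start from known entry points and decode only along reachable control flow, queueing branch targets as they are discovered, rather than decoding every byte in order.

```python
from collections import namedtuple

# Hypothetical instruction record: length in bytes, optional branch target,
# and whether execution falls through to the next instruction.
Insn = namedtuple("Insn", "mnemonic length target falls_through")

def recursive_descent(decode, entry_points):
    seen, work, insns = set(), list(entry_points), {}
    while work:
        addr = work.pop()
        while addr not in seen:
            seen.add(addr)
            insn = decode(addr)            # returns None for undecodable bytes
            if insn is None:
                break                      # stop this path; don't mark as code
            insns[addr] = insn
            if insn.target is not None:
                work.append(insn.target)   # follow the branch later
            if not insn.falls_through:     # e.g. ret or an unconditional jmp
                break
            addr += insn.length
    return insns
```

A flat sweep over the same buffer would mis-decode data bytes sitting between functions; with recursive descent those bytes are simply never visited.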
-
## 🚀 Feature
I am interested in contributing an [exponentiated gradient descent optimizer](https://ttic.uchicago.edu/~tewari/lectures/lecture4.pdf) to pytorch.
## Motivation
Optimizing variabl…
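The update rule from the linked lecture notes is multiplicative, w_i ← w_i · exp(−η g_i) / Σ_j w_j · exp(−η g_j), which keeps the weights positive and on the probability simplex. A minimal NumPy sketch of one step (not the proposed PyTorch implementation):

```python
import numpy as np

def eg_step(w, grad, eta=0.1):
    # One exponentiated gradient update: multiplicative rather than additive,
    # so w stays positive; renormalizing keeps it on the probability simplex.
    w = w * np.exp(-eta * grad)
    return w / w.sum()
```

Mass shifts toward coordinates with smaller gradient, without ever leaving the simplex, which is what makes this attractive for simplex-constrained variables.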
-
Hello,
I would like to run simple unconstrained algorithms (like gradient descent, trust regions, etc.) and have the solver return the following quantities:
- the function value at each iterate v…
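The kind of per-iterate logging being requested can be sketched with a hand-rolled gradient descent (everything here is illustrative; the actual solver may expose these quantities through its own callback or logging interface):

```python
import numpy as np

def gradient_descent(f, grad, x0, lr=0.1, iters=100):
    # Records, at every iterate, the point, the function value, and the
    # gradient norm -- the quantities one would want the solver to return.
    x = np.asarray(x0, dtype=float)
    history = []
    for k in range(iters):
        g = grad(x)
        history.append({"iter": k, "x": x.copy(),
                        "fval": f(x), "gradnorm": float(np.linalg.norm(g))})
        x = x - lr * g
    return x, history
```

Returning the full `history` alongside the final point lets the caller plot convergence curves without re-evaluating the objective.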
-
![Screenshot_1](https://user-images.githubusercontent.com/25990177/68534491-3d581c00-0346-11ea-87f3-ecf7a26f0813.jpg)
-
Hello. Interesting work, but I am having trouble reproducing your results.
The code from example notebook:
```
_markov_chain = MarkovChain(
    [[0.3, 0.5, 0.2],
     [0.1, 0.8, 0.1], …
-
Hi!
Love your module!
I've been looking for something like this for Imperial Assault, and since the module already has Descent dice, it should be quite easy to add those. Could you please integrate those di…
-
Consider `-himem` for example. [`-himem` doesn’t do anything unless your system has less than 62 MiB of memory.](https://github.com/DescentDevelopers/Descent3/blob/496b2ed7c94a1fc08c6ed2831fc64abf486c…