-
I've read your paper on deep neuroevolution and found it very interesting.
A few basic questions:
- When can we expect a code release?
- Do you use a GPU to evaluate neural networks, or is everyt…
-
Background: Salimans et al. (2017) suggest that reduced floating-point precision increases processing speed while not significantly reducing the performance of the neural network. This …
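For reference, a minimal sketch of what evaluating in reduced precision could look like with PyTorch (the network and the input/output sizes here are made up, and half precision is assumed to run on the GPU):

```python
import torch
import torch.nn as nn

# Hypothetical policy network, purely for illustration.
policy = nn.Sequential(nn.Linear(24, 64), nn.Tanh(), nn.Linear(64, 4))

# Half precision is typically used on the GPU; fall back to float32 on CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32
policy = policy.to(device=device, dtype=dtype)

obs = torch.randn(1, 24, device=device, dtype=dtype)
with torch.no_grad():            # evaluation only, no gradients needed
    action = policy(obs)
print(action.dtype, action.device)
```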
-
Currently the reading of the configurations is very ugly. Everything happens in [a helper function with lots of exceptions for different cases](https://github.com/neuroevolution-ai/NeuroEvolution-CTRNN_n…
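One possible direction, just as a sketch: split the config into small typed dataclasses so each section carries its own defaults and validation instead of one big helper. All class and field names below are made up for illustration, not taken from the repo:

```python
from dataclasses import dataclass
from typing import Any, Dict

# Hypothetical config sections -- names are illustrative only.
@dataclass
class BrainCfg:
    type: str = "CTRNN"
    number_neurons: int = 30

@dataclass
class OptimizerCfg:
    type: str = "CMA-ES"
    population_size: int = 200

@dataclass
class ExperimentCfg:
    environment: str = "BipedalWalker-v3"
    brain: BrainCfg = None
    optimizer: OptimizerCfg = None

def load_config(raw: Dict[str, Any]) -> ExperimentCfg:
    """Build a typed config from a plain dict (e.g. parsed JSON)."""
    return ExperimentCfg(
        environment=raw.get("environment", "BipedalWalker-v3"),
        brain=BrainCfg(**raw.get("brain", {})),
        optimizer=OptimizerCfg(**raw.get("optimizer", {})),
    )

cfg = load_config({"brain": {"number_neurons": 50}})
print(cfg.brain.number_neurons)   # 50, everything else falls back to defaults
```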
-
-
The examples should be removed and moved into a separate repo, like [https://github.com/elbraulio/neuroevolution_snake](https://github.com/elbraulio/neuroevolution_snake)
-
Using multiprocessing (MP) together with Torch CNNs is significantly slower than with Dask.
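For context, here is a minimal, self-contained timing sketch of the comparison. The CNN, the input sizes, and the worker counts are placeholders, not our actual evaluation code:

```python
import time
import multiprocessing as mp

import torch
import torch.nn as nn
from dask.distributed import Client

def evaluate(_seed: int) -> float:
    """Dummy evaluation: build a small CNN and run one forward pass."""
    torch.set_num_threads(1)              # avoid oversubscribing the cores
    net = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Flatten(),
                        nn.Linear(8 * 62 * 62, 4))
    with torch.no_grad():
        return net(torch.randn(1, 3, 64, 64)).sum().item()

if __name__ == "__main__":
    n = 32

    start = time.time()
    with mp.Pool(4) as pool:
        pool.map(evaluate, range(n))
    print("multiprocessing:", time.time() - start)

    client = Client(n_workers=4, threads_per_worker=1)
    start = time.time()
    client.gather(client.map(evaluate, range(n)))
    print("dask:", time.time() - start)
    client.close()
```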
-
Here is the memory consumption for a simple BipedalWalker-v3 run over 100 generations, with very short evaluations per individual.
![image](https://user-images.githubusercontent.com/4362465/102321729-…
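For reproducibility, a sketch of how per-generation memory could be logged with `psutil` (this is an assumption about the measurement, not necessarily how the plot above was produced):

```python
import os
import psutil

def rss_mib() -> float:
    """Resident memory of this process plus all child workers, in MiB."""
    proc = psutil.Process(os.getpid())
    total = proc.memory_info().rss
    for child in proc.children(recursive=True):
        try:
            total += child.memory_info().rss
        except psutil.NoSuchProcess:
            pass                  # worker exited between listing and query
    return total / 2**20

# Hypothetical usage inside the generation loop:
# for gen in range(100):
#     evaluate_population()       # placeholder, not the repo's function
#     print(f"gen {gen}: {rss_mib():.1f} MiB")
```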
-
As we learn about the influence of hyperparameters on the project, we tend to use more and more default values for them. Also, as the system grows, the number of parameters grows as well.
…
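One way to keep config files short as the parameter count grows is to write out only the values that differ from the defaults. A sketch, with made-up field names:

```python
from dataclasses import dataclass, asdict

# Illustrative config section with defaults; field names are made up.
@dataclass
class OptimizerCfg:
    type: str = "CMA-ES"
    population_size: int = 200
    sigma: float = 1.0

def non_default_values(cfg) -> dict:
    """Return only the fields that differ from the dataclass defaults."""
    defaults = asdict(type(cfg)())
    return {k: v for k, v in asdict(cfg).items() if v != defaults[k]}

cfg = OptimizerCfg(population_size=500)
print(non_default_values(cfg))   # {'population_size': 500}
```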
-
I think that this should have a skidl interface: a basic rectangle packer with margins for rivers. Then add nmigen for a logic solver.
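As a rough illustration of what the skidl side could consume, here is a minimal netlist sketch. The parts and nets are placeholders, it assumes the KiCad symbol libraries are installed, and the rectangle packer / nmigen pieces are not shown:

```python
from skidl import Part, Net, generate_netlist

# Placeholder circuit: a resistor driving an LED.
r = Part("Device", "R", value="1K")
led = Part("Device", "LED")

vin, gnd = Net("VIN"), Net("GND")
vin += r[1]          # resistor pin 1 to the input net
r[2] += led["A"]     # resistor pin 2 to the LED anode
led["K"] += gnd      # LED cathode to ground

generate_netlist()   # the packer/placer would consume this connectivity
```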
-
Was it a deliberate decision not to use linear algebra for neural-network forward propagation and the like? I would assume it would speed up the computations significantly.
Crates like `nalgebra` would …
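For illustration (written in NumPy to keep the sketches here in one language; `nalgebra` provides the equivalent matrix and vector types in Rust), forward propagation becomes one matrix-vector product per layer instead of a per-neuron loop:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative layer sizes; the weights would come from the genome in practice.
w1, b1 = rng.standard_normal((64, 24)), rng.standard_normal(64)
w2, b2 = rng.standard_normal((4, 64)), rng.standard_normal(4)

def forward(obs: np.ndarray) -> np.ndarray:
    """One matrix-vector product per layer instead of looping over neurons."""
    h = np.tanh(w1 @ obs + b1)
    return np.tanh(w2 @ h + b2)

print(forward(rng.standard_normal(24)).shape)   # (4,)
```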