-
The [tensorforce blog](http://reinforce.io/blog/introduction-to-tensorforce/) mentions state dependent action spaces under the **Further Considerations** section. Is this in the works / coming soon? I…
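While state-dependent action spaces are pending, a common workaround is to mask invalid actions at the policy output before sampling. A minimal sketch (plain Python, not the Tensorforce API; the logits and validity flags are illustrative assumptions):

```python
import math

def masked_softmax(logits, valid):
    # Send logits of actions invalid in the current state to -inf,
    # then normalize, so they receive zero probability.
    masked = [l if v else float("-inf") for l, v in zip(logits, valid)]
    m = max(masked)
    exps = [math.exp(l - m) for l in masked]
    z = sum(exps)
    return [e / z for e in exps]

# Suppose the current state makes the second of three actions invalid.
probs = masked_softmax([1.0, 2.0, 0.5], [True, False, True])
print(probs)  # second action gets probability 0.0
```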
-
arXiv paper tracking
-
Hello there,
I am wondering about the state of the ADC implementation, and what remains to bring it to a functional state.
In the ADC merge commit message, you mentioned that it is still WiP and tha…
-
Hello! I hope that you are fine!
As is known, Menge is based on ORCA, which is a velocity-based model, i.e. every time step we have a feasible velocity, and from this velocity each agent updates its…
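The per-step update being asked about can be sketched as a simple Euler integration of the feasible velocity (a simplified illustration, not Menge's actual integration code; `step_agent` is a hypothetical name):

```python
def step_agent(pos, vel, dt):
    # Euler position update from the feasible velocity the ORCA-style
    # solver returns for this time step.
    return (pos[0] + vel[0] * dt, pos[1] + vel[1] * dt)

pos = (0.0, 0.0)
vel = (1.0, 0.5)            # feasible velocity for this step (assumed)
pos = step_agent(pos, vel, 0.1)
print(pos)                  # (0.1, 0.05)
```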
-
Hello,
I'm trying to see whether I can use `fax` to find gradients of a fixed point of a function (an optimization problem) with respect to the problem parameters.
Consider f(x, y) = -(x**2 + (y[0]-a[0])*…
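The identity such fixed-point differentiation rests on can be checked by hand: for a fixed point x* = g(x*, a), the implicit function theorem gives dx*/da = (∂g/∂a) / (1 - ∂g/∂x). A plain-Python sketch (the contraction map and one-dimensional objective here are illustrative assumptions, not fax itself):

```python
def g(x, a):
    # Gradient-descent map whose fixed point minimizes (x - a)**2,
    # i.e. g(x, a) = x - 0.1 * 2 * (x - a) = 0.8 * x + 0.2 * a.
    return x - 0.1 * 2.0 * (x - a)

def fixed_point(a, x0=0.0, n_iter=200):
    # Iterate the contraction to convergence.
    x = x0
    for _ in range(n_iter):
        x = g(x, a)
    return x

def fixed_point_grad(a, eps=1e-6):
    # Implicit function theorem: dx*/da = (dg/da) / (1 - dg/dx),
    # with the partials estimated by finite differences at x*.
    x_star = fixed_point(a)
    dg_dx = (g(x_star + eps, a) - g(x_star - eps, a)) / (2 * eps)
    dg_da = (g(x_star, a + eps) - g(x_star, a - eps)) / (2 * eps)
    return dg_da / (1.0 - dg_dx)

print(fixed_point(3.0))       # ≈ 3.0, the minimizer of (x - 3)**2
print(fixed_point_grad(3.0))  # ≈ 1.0, since x*(a) = a
```

Libraries like fax apply this same identity with vector-Jacobian products instead of finite differences, avoiding differentiation through the unrolled iteration.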
-
**Submitting author:** @wkirgsn (Wilhelm Kirchgässner)
**Repository:** https://github.com/upb-lea/gym-electric-motor
**Version:** v0.3.1
**Editor:** @Kevin-Mattheus-Moerman
**Reviewer:** @moorepants, …
-
Dear all,
I've started reading MLJBase in an attempt to develop spatial models using the concept of tasks. Is it correct to say that the current implementation of tasks requires the existence of da…
-
Hello, when I use kinova_stand.urdf for reinforcement learning, I get the following errors. How can I fix them?
`[RAISIM_GYM] Visualizing in RaiSimOgre
*** buffer overflow detected ***: python3 …
-
In the existing literature, agents explore the environment and a model is then learned from the interaction data. But I found that in your code, you train the model directly on data from the Dataset. I was wond…
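The offline variant being described, fitting a dynamics model to pre-collected transitions rather than fresh rollouts, can be sketched like this (the linear model, dataset, and names are illustrative assumptions, not the repository's code):

```python
import random

# Toy dataset of (state, action, next_state) transitions, as if logged
# in advance rather than gathered by an exploring agent.
# True dynamics: s' = s + 2 * a.
random.seed(0)
dataset = []
for _ in range(200):
    s = random.uniform(-1.0, 1.0)
    a = random.uniform(-1.0, 1.0)
    dataset.append((s, a, s + 2.0 * a))

# Fit a linear dynamics model s' ~ w_s * s + w_a * a by SGD on the offline data.
w_s, w_a, lr = 0.0, 0.0, 0.1
for _ in range(50):
    for s, a, s_next in dataset:
        err = (w_s * s + w_a * a) - s_next
        w_s -= lr * err * s
        w_a -= lr * err * a

print(round(w_s, 3), round(w_a, 3))  # ≈ 1.0 2.0, recovering the true dynamics
```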
-
> I appreciated understanding some of the history and problems that you stated in the printing section. I felt like in a few sentences I understood why and how printing became so pervasive. I felt the…