-
Hi there, I am trying to use DeepQLearning to solve the discrete Blackjack Gymnasium environment (Blackjack-v1), where the Observation Space is a tuple, Tuple(Discrete(32), Discrete(11), Discrete(2)),…
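Since the question is about feeding a tuple observation space into a DQN solver, a common workaround (not something from DeepQLearning.jl itself — the function name and approach below are purely illustrative) is to flatten the tuple into a single one-hot vector of length 32 + 11 + 2 = 45 before passing it to the Q-network:

```python
# Hypothetical sketch: flatten Blackjack-v1's tuple observation
# Tuple(Discrete(32), Discrete(11), Discrete(2)) into one flat
# one-hot vector, the usual trick when a DQN solver expects a
# fixed-size vector state.

def flatten_blackjack_obs(obs):
    """obs = (player_sum, dealer_card, usable_ace) from Blackjack-v1."""
    sizes = (32, 11, 2)
    vec = [0.0] * sum(sizes)
    offset = 0
    for value, size in zip(obs, sizes):
        vec[offset + value] = 1.0  # one-hot within this sub-space
        offset += size
    return vec

# Example: player sum 14, dealer shows 10, no usable ace
v = flatten_blackjack_obs((14, 10, 0))
```

Exactly three entries of the vector are set to 1.0, one per component of the tuple.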
-
POMDPs.jl [supports state-dependent action spaces](https://juliapomdp.github.io/POMDPs.jl/stable/def_pomdp/#state-dep-action).
However, DeepQLearning.jl always selects from the full action space.
Th…
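One common pattern for handling state-dependent action spaces with a Q-network trained over the full action space (sketched below as a guess at the intended behavior, not DeepQLearning.jl's actual API) is to mask out invalid actions by pushing their Q-values to negative infinity before taking the argmax:

```python
import math

# Illustrative action-masking sketch: given Q-values over the *full*
# action space and a state-dependent set of valid actions, act greedily
# over the valid actions only.

def masked_greedy_action(q_values, valid_actions):
    masked = [q if a in valid_actions else -math.inf
              for a, q in enumerate(q_values)]
    return max(range(len(masked)), key=masked.__getitem__)

# Action 2 has the highest Q-value overall, but only {0, 3} are valid here:
best = masked_greedy_action([0.1, 0.5, 0.9, 0.4], {0, 3})  # -> 3
```

The network still outputs one Q-value per action in the full space; only the action selection is restricted.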
-
What would be a good interface for specifying the exploration policy?
It is implemented differently here and in `DeepQLearning.jl`.
- What is implemented here: Just allows a limited set of pos…
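One possible interface, sketched here purely as a proposal (none of these names exist in either package), is to accept any callable `explore(q_values, step) -> action` rather than a fixed enum of strategies, so users can plug in arbitrary exploration policies:

```python
import random

# Hypothetical interface sketch: the solver takes any callable
# `policy(q_values, step)`; here, epsilon-greedy with a linearly
# decaying epsilon is built as one such callable.

def linear_epsilon(start=1.0, end=0.01, steps=10_000):
    def policy(q_values, step):
        frac = min(step / steps, 1.0)
        eps = start + frac * (end - start)  # linear decay from start to end
        if random.random() < eps:
            return random.randrange(len(q_values))  # explore
        return max(range(len(q_values)), key=q_values.__getitem__)  # exploit
    return policy

# With epsilon fixed at 0 the policy is purely greedy:
greedy = linear_epsilon(start=0.0, end=0.0)
action = greedy([0.1, 0.9, 0.2], step=0)  # -> 1
```

Softmax exploration, UCB-style bonuses, or a constant-epsilon policy would all fit the same signature without the solver needing to know about them.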
-
Dear Developers,
I'm getting the following error when running the code below:
> pearl/neural_networks/common/value_networks.py", line 262, in get_q_values
x = torch.cat([state_batch, action_batc…
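The truncated traceback points at a `torch.cat` of the state and action batches; as a hedged guess, a frequent cause of errors there is the two tensors disagreeing in number of dimensions or batch size (e.g. a 1-D action batch next to a 2-D state batch, fixable with `unsqueeze(-1)`). The shape rule itself can be sketched without torch:

```python
# Sketch of torch.cat's requirement along dim=-1: both tensors must
# have the same number of dimensions and identical leading dimensions.

def can_cat_last_dim(shape_a, shape_b):
    return len(shape_a) == len(shape_b) and shape_a[:-1] == shape_b[:-1]

can_cat_last_dim((32, 8), (32, 1))  # True: batch sizes match
can_cat_last_dim((32, 8), (32,))   # False: 1-D actions need unsqueeze(-1)
```

Printing `state_batch.shape` and `action_batch.shape` just before the failing line would confirm which case applies.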
-
I think that is redundant: we switched to CommonRLInterface, and RLInterface is no longer in the dependencies.
-
Hi sir, I'm getting this error when running main.py.
```
Traceback (most recent call last):
  File "C:/SC/AirSim SC/PEDRA/main.py", line 99, in <module>
    eval(name)
  File "<string>", line 1, in <module>
File …
-
The TRPO and PPO implementations are general enough to be in their own solver package in the POMDPs.jl ecosystem. I've already encapsulated these solvers into the DeepRL module.
Some TODOs:
- [ ] …
-
I'm getting this error when trying AirSim 1.2.4 and 1.2.8.
```
Traceback (most recent call last):
  File "C:/SC/AirSim SC/PEDRA/main.py", line 99, in <module>
    eval(name)
  File "<string>", line 1, in <module>
F…
-
Hi Aqeel,
I'm getting this error after a few hundred iterations. Hitting r and backspace doesn't seem to resolve it.
I've attached the setting.json file with this.
Cheers
![Capture 3](https://us…
-
Since I created v1.0, a few things have broken. Most importantly, the docs don't build. Fixing that mostly means updating the Project.toml files for everything needed to build the docs.