-
To run [https://github.com/tensorflow/agents/blob/master/tf_agents/examples/dqn/dqn_train_eval_rnn.py](https://github.com/tensorflow/agents/blob/master/tf_agents/examples/dqn/dqn_train_eval_rnn.py) th…
-
My environment: RTX 2080 Ti GPU, i9 CPU, 64 GB RAM, NVIDIA-SMI 470.161.03, Driver Version: 470.161.03, CUDA Version: 11.4.
After starting CARLA 0.9.11, I run "python3 dqn_train.py dqn_example/dqn_c…
-
I can't find a solution to this problem; please help!
-
I see this TODO in the code for this file:
https://github.com/tensorflow/agents/blob/master/tf_agents/agents/dqn/dqn_agent.py
```
# TODO(oars): Get DQN working with more than one dim in the…
```
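While that TODO stands, a common workaround is to flatten multi-dimensional observations before they reach the Q-network. The sketch below is illustrative only: `FlattenObservation` and `ToyGridEnv` are hypothetical names, not part of TF-Agents, and the env interface is deliberately minimal.

```python
import numpy as np

# Hedged sketch: flatten 2-D (or higher) observations into a single dim
# so a DQN that only handles one observation dim can consume them.
# `FlattenObservation` is an illustrative wrapper, not a TF-Agents class.
class FlattenObservation:
    def __init__(self, env):
        self.env = env

    def reset(self):
        return np.asarray(self.env.reset()).ravel()

    def step(self, action):
        obs, reward, done = self.env.step(action)
        return np.asarray(obs).ravel(), reward, done


class ToyGridEnv:
    """Tiny stand-in environment that emits a 2-D observation."""

    def reset(self):
        return np.zeros((3, 4))

    def step(self, action):
        return np.ones((3, 4)), 0.0, False


env = FlattenObservation(ToyGridEnv())
print(env.reset().shape)  # (12,)
```

The same idea applies on the network side (a `Flatten` layer as the first layer), but wrapping the environment keeps the agent code untouched.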
-
Need to add a TensorBoard summary writer:
- In the convolutional DQN agent:
https://github.com/im-ant/ElectrophysRL/blob/194e19a66455e9ea57eaa4bc30c041dadcaad62c/dopatorch/agents/dqn/dqn_agent.py#L62
…
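One way to wire this in, as a minimal sketch: let the agent accept any writer object exposing `add_scalar(tag, value, step)`, such as `torch.utils.tensorboard.SummaryWriter`. The `DQNAgent` skeleton and tag names below are illustrative, not the classes from the linked repo.

```python
class DQNAgent:
    """Skeleton showing where TensorBoard logging hooks into a DQN update.

    `writer` is any object with an ``add_scalar(tag, value, step)`` method,
    e.g. ``torch.utils.tensorboard.SummaryWriter``. This wiring is a sketch;
    the linked repo's agent differs in detail.
    """

    def __init__(self, writer=None):
        self.writer = writer
        self.global_step = 0

    def update(self, loss, epsilon):
        # ... the actual gradient step on the Q-network would go here ...
        if self.writer is not None:
            self.writer.add_scalar("train/loss", loss, self.global_step)
            self.writer.add_scalar("train/epsilon", epsilon, self.global_step)
        self.global_step += 1


class ListWriter:
    """Stub writer so the sketch runs without TensorBoard installed."""

    def __init__(self):
        self.records = []

    def add_scalar(self, tag, value, step):
        self.records.append((tag, value, step))


agent = DQNAgent(writer=ListWriter())
agent.update(loss=0.5, epsilon=1.0)
agent.update(loss=0.4, epsilon=0.9)
```

Keeping the writer optional (`writer=None`) means the agent still runs in tests or sweeps where logging is unwanted.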
-
@blavad
**Thanks for the huge repo!**
Hi, I have a query:
I would like to train two DQN agents on the same environment, but independently of each other.
Is that possible? If so, please help me out.
t…
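Whether the repo supports this directly I can't say, but in general nothing prevents instantiating two agents, each with its own value function and replay buffer, and training them on the same environment. The sketch below uses tabular Q-learning as a dependency-free stand-in for DQN; the structure (separate agents, separate buffers, shared env) is the point, and all names are illustrative.

```python
import random

# Hedged sketch: two agents trained independently on the same environment.
# Tabular Q-learning stands in for DQN here; with DQN each agent would own
# its network, target network, and replay buffer instead of a Q-table.
class IndependentAgent:
    def __init__(self, n_states, n_actions, lr=0.1, gamma=0.99, eps=0.1):
        self.q = [[0.0] * n_actions for _ in range(n_states)]
        self.buffer = []          # each agent keeps its OWN replay buffer
        self.lr, self.gamma, self.eps = lr, gamma, eps
        self.n_actions = n_actions

    def act(self, state):
        if random.random() < self.eps:
            return random.randrange(self.n_actions)
        row = self.q[state]
        best = max(row)
        # break ties randomly so early (all-zero) Q-values explore uniformly
        return random.choice([i for i, v in enumerate(row) if v == best])

    def learn(self, s, a, r, s2):
        self.buffer.append((s, a, r, s2))
        target = r + self.gamma * max(self.q[s2])
        self.q[s][a] += self.lr * (target - self.q[s][a])


class ChainEnv:
    """Toy 5-state chain; action 1 moves right, reaching state 4 pays 1."""

    def __init__(self):
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        self.state = min(self.state + 1, 4) if action == 1 else max(self.state - 1, 0)
        reward = 1.0 if self.state == 4 else 0.0
        return self.state, reward, self.state == 4


env = ChainEnv()
agents = [IndependentAgent(5, 2), IndependentAgent(5, 2)]
for agent in agents:              # each agent runs its own episodes
    for _ in range(200):
        s, done = env.reset(), False
        while not done:
            a = agent.act(s)
            s2, r, done = env.step(a)
            agent.learn(s, a, r, s2)
            s = s2
```

Since neither agent reads the other's state, they remain fully independent; the only shared object is the environment instance.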
-
I changed the parameters in examples/dqn.py to this and I get an error:
```
def main():
    env_name = 'CartPole-v1'
    # env_name = 'PongNoFrameskip-v4'
    use_prioritization = True
    use_…
```
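For context on what `use_prioritization = True` typically enables, here is a minimal sketch of proportional prioritized experience replay. This is a generic implementation for illustration, not the buffer from the repo in question, so it won't reproduce or fix the error above by itself.

```python
import random

# Hedged sketch of proportional prioritized experience replay:
# transitions are sampled with probability proportional to priority^alpha,
# and importance-sampling weights correct the resulting bias.
class PrioritizedReplayBuffer:
    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha            # how strongly priorities skew sampling
        self.data, self.priorities = [], []
        self.pos = 0

    def add(self, transition, priority=1.0):
        if len(self.data) < self.capacity:
            self.data.append(transition)
            self.priorities.append(priority ** self.alpha)
        else:                         # ring buffer: overwrite the oldest slot
            self.data[self.pos] = transition
            self.priorities[self.pos] = priority ** self.alpha
        self.pos = (self.pos + 1) % self.capacity

    def sample(self, batch_size, beta=0.4):
        total = sum(self.priorities)
        probs = [p / total for p in self.priorities]
        idxs = random.choices(range(len(self.data)), weights=probs, k=batch_size)
        n = len(self.data)
        # importance-sampling weights, normalized by the max for stability
        weights = [(n * probs[i]) ** (-beta) for i in idxs]
        max_w = max(weights)
        weights = [w / max_w for w in weights]
        return [self.data[i] for i in idxs], idxs, weights

    def update_priorities(self, idxs, td_errors, eps=1e-6):
        # after a learning step, priorities are refreshed from new TD errors
        for i, err in zip(idxs, td_errors):
            self.priorities[i] = (abs(err) + eps) ** self.alpha


buf = PrioritizedReplayBuffer(capacity=100)
for t in range(10):
    buf.add(("transition", t))
batch, idxs, weights = buf.sample(4)
buf.update_priorities(idxs, td_errors=[0.5] * 4)
```

The small `eps` in `update_priorities` keeps zero-TD-error transitions sampleable, which is the standard trick to avoid starving them entirely.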
-
Hi,
while working on a PyTorch DQN agent for BSuite experiments, I noticed quite bad results on the MNIST and Mountain Car experiments. I see that a similar question was addressed [here](https://gi…
-
Hello!
I'm not sure where I should place the DQN.py file, so I'm asking here.
The files from the previously built environment are currently in the Desktop/ML-Agent_Project/Env/GridWorld folder; in which path should I put DQN.py?
-
### Search before asking
- [X] I searched the [issues](https://github.com/ray-project/ray/issues) and found no similar issues.
### Ray Component
RLlib
### What happened + What you expected to hap…