-
Hi Edouard,
I wanted to know whether you have already tested prioritized experience replay for the memory?
I noticed that there is only a standard replay memory in this project, so I have been trying…
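For reference, a minimal proportional prioritized replay buffer (in the spirit of Schaul et al., https://arxiv.org/abs/1511.05952) can be sketched as below. This is an illustrative sketch only, not this project's implementation; the class and method names are hypothetical, and a real implementation would use a sum-tree instead of recomputing probabilities on every sample.

```python
import numpy as np

class ProportionalReplay:
    """Illustrative proportional prioritized replay (hypothetical names).

    Samples transition i with probability proportional to p_i**alpha,
    where p_i is the transition's priority.
    """

    def __init__(self, capacity, alpha=0.6):
        self.capacity = capacity
        self.alpha = alpha                  # priority exponent from the PER paper
        self.data = [None] * capacity
        self.priorities = np.zeros(capacity, dtype=np.float64)
        self.pos = 0                        # next write index (ring buffer)
        self.size = 0

    def add(self, transition):
        # New transitions get the current max priority so each is
        # sampled at least once before its TD error is known.
        max_p = self.priorities[: self.size].max() if self.size else 1.0
        self.data[self.pos] = transition
        self.priorities[self.pos] = max_p
        self.pos = (self.pos + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample(self, batch_size):
        scaled = self.priorities[: self.size] ** self.alpha
        probs = scaled / scaled.sum()
        idx = np.random.choice(self.size, batch_size, p=probs)
        return idx, [self.data[i] for i in idx], probs[idx]

    def update_priorities(self, idx, td_errors, eps=1e-6):
        # p_i = |delta_i| + eps keeps every transition sampleable.
        self.priorities[idx] = np.abs(td_errors) + eps
```

After each learning step, `update_priorities` is called with the batch's TD errors so that high-error transitions are replayed more often.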
-
results on Pong seem to indicate that the experience replay functionality is not working correctly. Performance is terrible, and the agent is far worse than its vanilla alternative. The paper introduc…
-
Thanks for the excellent implementations of multiple classic RL agents. I have tried some of them, and they worked very well.
Just curious: do you plan to add prioritized experience replay to the DQN? I h…
-
In the original PER paper, the importance-sampling weights are normalized by the maximum weight.
However, what the code calls "max_weight" is actually the minimum of the normalized weights...
Wa…
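To make the normalization concrete: in the paper, w_i = (N * P(i))^(-beta), divided by max_j w_j. Because w is decreasing in P, the maximum weight corresponds to the *minimum* sampling probability over the buffer, not to the maximum priority; confusing the two produces exactly the bug described above. A hedged sketch (function name and signature are my own, not the repo's):

```python
import numpy as np

def is_weights(sample_probs, min_prob, n, beta):
    """Importance-sampling weights as defined in the PER paper.

    w_i = (N * P(i))^-beta, normalized by the largest possible weight,
    which is attained at the smallest sampling probability `min_prob`
    over the whole buffer. After normalization all weights lie in (0, 1].
    """
    sample_probs = np.asarray(sample_probs, dtype=np.float64)
    weights = (n * sample_probs) ** (-beta)
    max_weight = (n * min_prob) ** (-beta)   # weight of the rarest transition
    return weights / max_weight
```

With beta = 1 and buffer probabilities [0.5, 0.25, 0.25], the rarest transitions get weight 1 and the most frequently sampled one gets weight 0.5, down-correcting its over-representation.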
-
### 🚀 Feature
Prioritized Experience Replay for DQN
### Motivation
_No response_
### Pitch
_No response_
### Alternatives
_No response_
### Additional context
_No response_
### Checklist
- …
-
File "/home/moderngangster/Codes/APC-Flight/ElegantRL/examples/../elegantrl/agents/AgentSAC.py", line 43, in update_net
obj_critic, state = self.get_obj_critic(buffer, self.batch_size)
File …
-
Hi, thanks for the great work. I am wondering what kind of Prioritized Experience Replay is in your code? It seems quite different from the original paper https://arxiv.org/abs/1511.05952
https://git…
-
Hi! Probably by accident, the usual replay buffer is used in the rainbow file instead of the prioritized one.
-
Should we work on adding Prioritized Experience Replay?