-
Hello,
I tried to run your algorithm, but it fails with the following error:
> line 303, in main
> currentEpisode = np.array(currentEpisode)
> ^^^^^^^^^^^^^^^^^^^^^^^^
> ValueError: se…
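A common cause of a `ValueError` at `np.array(currentEpisode)` is a "ragged" list, i.e. episode steps of unequal length. This is a hedged sketch reproducing that situation and two possible workarounds; the variable names are illustrative, not taken from the repository:

```python
import numpy as np

# Hypothetical ragged episode buffer: rows of unequal length.
currentEpisode = [[0, 1, 2], [3, 4]]

try:
    arr = np.array(currentEpisode)  # recent NumPy raises ValueError here
except ValueError as err:
    print("ragged input:", err)

# Workaround 1: pad every row to a common length before converting.
padded = np.array([row + [0] * (3 - len(row)) for row in currentEpisode])

# Workaround 2: keep an object array of variable-length rows.
obj = np.array(currentEpisode, dtype=object)
```

Checking that all episodes have the same number of steps before the conversion would confirm whether this is the cause.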
-
```
docs/
├── README.md # Overview of the documentation structure
├── getting-started/
│ ├── introduction.md # Introduction to AIBuddies and AI
│ …
-
### Feature Name
Title: Add Q-Learning
### Feature Description
Develop a Q-Learning algorithm as a model-free reinforcement learning method that learns the value of actions in a given state.
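The core of tabular Q-Learning is a single update rule. A minimal sketch, with illustrative names and placeholder hyperparameter values (not the repository's):

```python
import numpy as np

n_states, n_actions = 5, 2
Q = np.zeros((n_states, n_actions))  # Q-table: one value per (state, action)
alpha, gamma = 0.1, 0.99             # learning rate, discount rate

def update(state, action, reward, next_state):
    # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
    td_target = reward + gamma * Q[next_state].max()
    Q[state, action] += alpha * (td_target - Q[state, action])

update(0, 1, 1.0, 2)  # one step of experience moves Q[0, 1] toward the target
```

Because the update only needs `(s, a, r, s')` samples, the method is model-free: no transition model of the environment is required.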
### M…
-
### Feature Name
Adding Deep Q-Networks (DQN) Visualizations
### Feature Description
Introduce Deep Q-Networks that combine Q-Learning with deep neural networks to handle high-dimensional state spa…
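The shift from tabular Q-Learning to DQN is that a function approximator replaces the Q-table: the network maps a state vector to one Q-value per action. A hedged sketch of that interface, with a random linear layer standing in for the deep network (all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
state_dim, n_actions = 4, 2
W = rng.normal(size=(state_dim, n_actions))  # stand-in "network" weights

def q_values(state):
    # A real DQN would run a deep network here; the shape contract is the
    # same: one Q-value per action for the given state vector.
    return state @ W

state = rng.normal(size=state_dim)
action = int(np.argmax(q_values(state)))  # greedy action selection
```

Visualizations for this feature could plot `q_values(state)` per action over training steps, since that vector is what the network learns to estimate.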
-
**This is a(n):**
- [x] New algorithm
- [ ] Update to an existing algorithm
- [ ] Error
- [ ] Proposal to the Repository
**Details:**
[Q Learning](https://en.wikipedia.org/wiki/Q…
-
Hello @AliiRezaei,
Nice work, and thank you for sharing it. I am particularly interested because of your work in C++. I am just wondering: if we change the algorithm from discrete actions to…
-
The following parameters will be considered for the model:
- learning rate
- exploration rate
- discount rate
Make sure to explain in the comments what every parameter means and how they affe…
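The three parameters above could be documented along these lines; the values below are placeholders for illustration, not the repository's choices:

```python
import random

learning_rate = 0.1     # alpha: how far each update moves Q toward the TD target
exploration_rate = 0.1  # epsilon: probability of picking a random action
discount_rate = 0.99    # gamma: weight of future rewards vs the immediate reward

def epsilon_greedy(q_row, n_actions, epsilon=exploration_rate):
    # With probability epsilon explore (random action); otherwise exploit
    # the action with the highest current Q-value.
    if random.random() < epsilon:
        return random.randrange(n_actions)
    return max(range(n_actions), key=lambda a: q_row[a])

chosen = epsilon_greedy([0.0, 1.0], 2, epsilon=0.0)  # epsilon=0 always exploits -> 1
```

Raising `exploration_rate` makes behavior more random early on; raising `discount_rate` toward 1 makes the agent value long-term reward more heavily.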
-
I do have some code for a Stock Trading game that uses Deep Q (just standard Deep Q-Learning with Experience Replay), but I would like to use A3C LSTM with Experience Replay as per the research paper …
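Experience replay, the component mentioned above, is usually a fixed-capacity buffer of transitions sampled uniformly for training. A minimal sketch under that assumption (names are illustrative, not from the poster's code):

```python
import random
from collections import deque

class ReplayBuffer:
    def __init__(self, capacity=10_000):
        # deque with maxlen evicts the oldest transition once full
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch, which breaks correlation between
        # consecutive transitions during training.
        return random.sample(self.buffer, batch_size)

buf = ReplayBuffer()
for t in range(5):
    buf.push(t, 0, 1.0, t + 1, False)
batch = buf.sample(2)
```

The same buffer could be reused whether the learner is standard Deep Q or an A3C-style agent, since it only stores `(s, a, r, s', done)` tuples.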
-
Hi Lucas,
I am implementing different algorithms on the different nets provided in the library, but I want to simulate the network with fixed timing and compare the reward function output for differe…