-
https://math.la.asu.edu/~jtaylor/teaching/Fall2012/STP425/lectures/MDP.pdf summarises Markov decision processes in chapter 3, and chapter 4 has some use cases. There are some symbolic aspects of decisi…
-
-
-
Hi Jaz,
How do your open Markov systems relate to things like Markov Decision Processes and Partially Observable Markov Decision Processes?
From [wikipedia](https://en.wikipedia.org/wiki/Partially…
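For concreteness, the operation that separates a POMDP from a plain MDP is the belief update: the agent never sees the state, only an observation, and maintains a distribution over states. A minimal Bayes-filter sketch, with a made-up two-state, one-action, two-observation model (all numbers are hypothetical, not from any library):

```python
# Minimal POMDP belief update (Bayes filter), a toy sketch.
# b'(s') is proportional to O(o | s', a) * sum_s T(s' | s, a) * b(s)

# Hypothetical transition T[a][s][s'] and observation O[a][s'][o] tables.
T = {0: [[0.9, 0.1], [0.2, 0.8]]}   # one action, two states
O = {0: [[0.7, 0.3], [0.4, 0.6]]}   # two possible observations

def belief_update(b, a, o):
    """Posterior belief after taking action a and observing o."""
    n = len(b)
    # Prediction step: push the belief through the transition model.
    predicted = [sum(T[a][s][s2] * b[s] for s in range(n)) for s2 in range(n)]
    # Correction step: weight by the observation likelihood, then normalize.
    unnorm = [O[a][s2][o] * predicted[s2] for s2 in range(n)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

b = belief_update([0.5, 0.5], a=0, o=0)
print(b)  # posterior over the two states, sums to 1
```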
-
# Reference
- [ ] [Markov - ML-NLP](https://github.com/NLP-LOVE/ML-NLP/blob/master/Machine%20Learning/5.2%20Markov/5.2%20Markov.md)
# Brief
- MDP - Markov Decision Process (Chinese: 马尔可夫决策过程)
Name | …
-
read:
### stochastic programming
mathematical programming book
http://web.mit.edu/15.053/www/AMP.htm
multistage stochastic programming
https://orbi.uliege.be/bitstream/2268/80246/1/MSPchap_pre…
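The smallest worked instance of the two-stage case is the newsvendor problem: commit to an order quantity before demand is known, then recourse is just how much you sell. The sketch below enumerates scenarios directly instead of calling a solver; all figures are made up for illustration:

```python
# Two-stage stochastic program by scenario enumeration: the newsvendor.
# Stage 1: choose order quantity x before demand is known.
# Stage 2: demand d is revealed; we sell min(x, d).
# All numbers are hypothetical.

cost, price = 3.0, 5.0
scenarios = [(0.3, 10), (0.5, 20), (0.2, 30)]   # (probability, demand)

def expected_profit(x):
    return sum(p * (price * min(x, d) - cost * x) for p, d in scenarios)

# With finitely many scenarios the optimum sits at one of the demand
# levels, so enumerating those candidates is enough for this toy.
best_x = max((d for _, d in scenarios), key=expected_profit)
print(best_x, expected_profit(best_x))  # 20, 25.0
```

Real multistage problems replace this enumeration with a scenario tree and a solver, but the structure (here-and-now decision, then expectation over recourse) is the same.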
-
the quantecon crew has some dynamic programming packages that might be relevant for you:
- [ContinuousDPs.jl](https://github.com/QuantEcon/ContinuousDPs.jl)
- [ddp.jl in QuantEcon.jl](https://gi…
-
@anasza, @jagman88, @spencerlyon2
I have just merged our work with @anasza into master. There is *one* implementation of `time_iteration_direct`, `time_iteration`, `value_iteration` which all wor…
albop updated 7 years ago
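For reference, the fixed point a `value_iteration` routine computes can be sketched on a finite toy MDP. This is the generic textbook loop, not the implementation discussed above (which handles continuous states and interpolation); the reward and transition arrays are made up:

```python
# Bare-bones value iteration for a finite MDP (textbook sketch).
import numpy as np

def value_iteration(R, Q, beta, tol=1e-8):
    """R[s, a]: reward; Q[s, a, s']: transition prob; beta: discount."""
    n_states, n_actions = R.shape
    v = np.zeros(n_states)
    while True:
        # Bellman operator: best one-step return under the current v.
        v_new = (R + beta * Q @ v).max(axis=1)
        if np.max(np.abs(v_new - v)) < tol:
            break
        v = v_new
    policy = (R + beta * Q @ v).argmax(axis=1)
    return v, policy

# Hypothetical 2-state, 2-action example: action 0 stays put, action 1
# switches state; staying in state 0 pays 1, leaving state 1 pays 2.
R = np.array([[1.0, 0.0], [0.0, 2.0]])
Q = np.array([[[1.0, 0.0], [0.0, 1.0]],
              [[0.0, 1.0], [1.0, 0.0]]])
v, policy = value_iteration(R, Q, beta=0.9)
print(v, policy)  # approx [10, 11], policy [0, 1]
```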
-
Write a blog post about the visualization of A2C playing Atari Pong. It seems that many actions are about the same most of the time (the horizon is limited by gamma), and only rarely are specific actions _intend…
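As a quick sanity check on "the horizon is limited by gamma": the discount weight gamma**t decays geometrically, and 1/(1 - gamma) is the usual rule of thumb for the effective horizon. A small computation (thresholds chosen arbitrarily):

```python
# How far ahead does a discount factor gamma effectively "see"?
# Rule of thumb: 1 / (1 - gamma); alternatively, the step count at
# which gamma**t falls below some threshold (here 1%).
import math

for gamma in (0.9, 0.99, 0.999):
    rule_of_thumb = 1.0 / (1.0 - gamma)
    steps_to_1pct = math.log(0.01) / math.log(gamma)
    print(gamma, round(rule_of_thumb), round(steps_to_1pct))
```

So with gamma = 0.99 the agent is effectively planning on the order of a hundred frames, which matches the observation that most individual actions matter little.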
-
### Idea Contribution
- [X] I have read all the feature request issues.
- [X] I'm interested in working on this issue
- [X] I'm part of the GSSOC organization
### Explain feature request
Adding proper …