Closed Yuya-Furusawa closed 4 years ago
I reviewed the notes. First, let me make some comments on Fictitious Play.
In the first line, in the cell below the one with the notation for strategy sets and others, “After each round of play, players observe the actual actions choosen by opponents, ...” → “chosen”.
In the cell next to the definition of the weight function, “Each player assesses concerning the behavior of his opponents at each date and contingent on history” → perhaps “Each player forms an assessment of his opponents' behavior at each date, contingent on the history”?
In the next cell,
In the cell that mentions the condition for the convergence of fictitious play,
On the explanation of `play`: the explanation says the `play` method “returns the new normalized actions history …”, but it seems not — the displayed output looks like a mixed strategy profile, not a history.
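To make the distinction concrete, here is a minimal hypothetical sketch (not the notebook's actual code) of a fictitious play belief update: after each round the player's assessment of the opponent is a normalized mixed strategy, not an action history. The function name and the `1/(t+1)` weighting are illustrative assumptions.

```python
import numpy as np

def update_assessment(assessment, opponent_action, t):
    """Average the newly observed action into the belief with weight 1/(t+1).
    This is an illustrative sketch, not the notebook's API."""
    indicator = np.zeros_like(assessment)
    indicator[opponent_action] = 1.0
    return assessment + (indicator - assessment) / (t + 1)

belief = np.array([0.5, 0.5])          # initial belief over 2 opponent actions
for t, a in enumerate([0, 0, 1]):      # observed opponent actions
    belief = update_assessment(belief, a, t + 1)

print(belief)  # a mixed strategy summing to 1, not a history of actions
```

The point of the sketch is that the object being updated stays a probability vector throughout, which matches the displayed output in the notebook.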
“If you don't designate initial actions, they are choosed randomly.” → “If you don't specify initial actions, they are chosen randomly.”
On the explanation of `time_series`: it is not stated that `10` is the length of the output series, though it can be inferred by reading the contents below. As with `num_reps` in `play`, for intuitive use it might be better to put this argument after that of the initial play, so the command becomes `time_series(mp, (1, 1), 10)`. Similarly in the following stochastic fictitious play model.

On the explanation of the graph of the two-action simulation, “… This result is consistent with manu papers.” → “many papers”.
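To illustrate the argument-order suggestion, here is a self-contained sketch with the series length placed after the initial actions. The function name `time_series` mirrors the notebook's, but the body is a stand-in (myopic best response to the empirical belief in matching pennies), not the library's implementation.

```python
import numpy as np

def time_series(payoff_matrices, init_actions, ts_length):
    """Return a (ts_length, 2) array of action pairs.
    Illustrative signature only: the length argument comes last,
    after the initial actions, as suggested above."""
    actions = np.empty((ts_length, 2), dtype=int)
    actions[0] = init_actions
    beliefs = [np.ones(2) / 2, np.ones(2) / 2]   # beliefs about the opponent
    for t in range(1, ts_length):
        for i in (0, 1):
            j = 1 - i
            obs = np.zeros(2)
            obs[actions[t - 1, j]] = 1.0
            # fictitious-play style belief update with weight 1/(t+1)
            beliefs[i] += (obs - beliefs[i]) / (t + 1)
            # best response to the current belief
            # (rows = own actions, columns = opponent's actions)
            actions[t, i] = np.argmax(payoff_matrices[i] @ beliefs[i])
    return actions

mp = (np.array([[1, -1], [-1, 1]]),    # matching pennies, player 0
      np.array([[-1, 1], [1, -1]]))    # matching pennies, player 1
series = time_series(mp, (1, 1), 10)   # length argument last
print(series.shape)  # (10, 2)
```

Putting `ts_length` last keeps the call shape parallel to `play(..., num_reps)`, which is the consistency argument made above.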
On the explanation of the graph of the three-action-game simulation, “… correspond to player's belief for opponent's first, second third action respectively” → “correspond to the player's beliefs about the opponent's first, second, and third actions, respectively”.
In the description of the Model of Stochastic Fictitious Play, “Almost all of the settings are same as original fictitious play model except for paerturbated payoff.” → “Almost all of the settings are the same as in the original fictitious play model except for the perturbed payoff.” ?
“Note that we do not need to consider mixed startegies in this augmented game.” → “mixed strategies”.
“…(i) two-player symmetric game with an interior ESS…”
On “Stochastic Fictitious Play with constant gain” part,
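For reference, a minimal sketch of the two ingredients discussed in this part: the perturbed payoff yields a logit (smoothed) best response, and the constant-gain variant replaces the decreasing `1/(t+1)` weight with a fixed step size. All names and parameter values here are illustrative assumptions, not the notebook's code.

```python
import numpy as np

def logit_response(payoff_matrix, belief, beta=10.0):
    """Logit choice probabilities arising from the perturbed expected payoffs."""
    v = beta * (payoff_matrix @ belief)
    v -= v.max()                 # subtract max for numerical stability
    p = np.exp(v)
    return p / p.sum()

def constant_gain_update(belief, opponent_action, gain=0.1):
    """Belief update with a fixed weight, unlike the 1/(t+1) weight
    of ordinary fictitious play."""
    obs = np.zeros_like(belief)
    obs[opponent_action] = 1.0
    return belief + gain * (obs - belief)

A = np.array([[1, -1], [-1, 1]])     # illustrative 2x2 payoff matrix
belief = np.array([0.5, 0.5])
probs = logit_response(A, belief)    # smoothed best response to the belief
belief = constant_gain_update(belief, 1)
print(probs, belief)
```

With a constant gain, recent observations never stop mattering, which is why the resulting belief process fluctuates rather than converging as in the decreasing-gain case.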
Please ignore the unnecessary comments. I'm going to upload comments for the rest ASAP.
@MKobayashi23m Thank you for the polite check!
We created another repository for the game theory notebooks: https://github.com/QuantEcon/game-theory-notebooks. I'm sorry I forgot to contact you about this; let's continue the discussion there.
(I will close this issue after you check this comment)
@Yuya-Furusawa
Understood! Thank you. I will join in the repository.
@oyamad Please review the notes on learning algorithms (PR):
- Fictitious Play
- Local Interaction
- Best Response Dynamics
- Logit Response Dynamics