KehlRafael / hyper-rational-games


Negative values on payoff matrix #1

Closed KehlRafael closed 3 years ago

KehlRafael commented 3 years ago

Running the classic_pd_negative example with the matrixTranslate call in the hr_game function commented out reproduces this issue. As the values get smaller and smaller, you will see problems such as the sum of all strategy probabilities drifting below or above one and, eventually, increments that all have the same sign, driving every strategy probability to 0 or 1.
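For reference, here is a minimal sketch of the failure mode with a plain replicator step in numpy; the matrix below is a hypothetical all-negative Prisoner's Dilemma, not the repo's actual classic_pd_negative data:

```python
import numpy as np

# Hypothetical payoff matrix in the spirit of classic_pd_negative
# (all payoffs negative); not the repo's actual data.
A = np.array([[-1.0, -10.0],
              [ 0.0,  -8.0]])

x = np.array([0.9, 0.1])  # strategy distribution, sums to 1
dt = 0.01

for _ in range(100000):
    payoffs = A @ x                    # expected payoff of each pure strategy
    mean = x @ payoffs                 # mean population payoff
    x = x + dt * x * (payoffs - mean)  # naive Euler step of a replicator equation

# In exact arithmetic the increments cancel and sum(x) stays 1; in floating
# point the subtraction of close negative numbers loses digits and the
# error accumulates over many steps.
print(abs(1.0 - x.sum()))
```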

KehlRafael commented 3 years ago

Theorem 1.9 of *Game Theory: Decisions, Interaction and Evolution* states that "the optimal action is unchanged if payoffs are altered by an affine transformation". A translation is an affine transformation. Even though this theorem is stated for choice theory, I can probably prove that it also holds for the hyper-rational payoff functions I defined and, therefore, for the hyper-rational replicator equation.

With all that said, I still want to fix the issue described above. I suspect it is a catastrophic cancellation problem, since the numbers involved are very small and very close to each other. I'll come back to this issue whenever I have more time; for now the translation is enough to let me run all my simulations with ease.
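Roughly, the translation I mean is the following sketch (a hypothetical translate_matrix, not necessarily identical to matrixTranslate):

```python
import numpy as np

def translate_matrix(A, margin=1.0):
    """Shift every entry of A by the same constant so that all payoffs
    become positive. Adding a scalar is the same as adding that scalar
    times the all-ones matrix, i.e. an affine transformation of payoffs."""
    beta = margin - A.min()
    return A + beta

A = np.array([[-1.0, -10.0],
              [ 0.0,  -8.0]])
print(translate_matrix(A))  # every entry is now >= margin
```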

KehlRafael commented 3 years ago

For a state $x$ to be a Nash equilibrium of our system we essentially need two conditions to be satisfied, for every vertex $v$, every neighbor vertex $u$ and every strategy $s$: writing $e_s$ for the pure strategy $s$ and $A_{vu}$ for the payoff matrix of $v$ against $u$, we need $e_s^T A_{vu} x_u \le x_v^T A_{vu} x_u$, and the analogous condition $x_u^T A_{uv} e_s \le x_u^T A_{uv} x_v$ on the neighbor's payoff, which a hyper-rational player also weighs.

Since we're dealing with matrices, an affine transformation means multiplying all entries by a constant and adding the all-ones matrix multiplied by another constant. For Theorem 1.9 to hold for our equation we would need a very strong hypothesis on our state $x$, which would not be very useful. What I proved is that Nash equilibria are not affected by the following operations:

1. multiplying all entries of a payoff matrix by a constant $\alpha > 0$;
2. adding to a payoff matrix the matrix $\beta \mathbf{1}$, where $\mathbf{1}$ has all entries equal to 1 and $\beta$ is any constant.

We will see that these constants need not be the same for all payoff matrices, so we can modify only the ones we want to, as long as we perform only the two operations mentioned above. The proof is simple and goes as follows:

Take a steady state $x$, let $\mathbf{1}$ be the matrix whose entries are all equal to 1, and let $\alpha > 0$ and $\beta$ be constants. Recalling that the entries of any strategy distribution always sum to one, so that $e_s^T \mathbf{1} x_u = 1$ and $x_v^T \mathbf{1} x_u = 1$, we then have

$$e_s^T (\alpha A_{vu} + \beta \mathbf{1})\, x_u = \alpha\, e_s^T A_{vu} x_u + \beta.$$

Similarly, we have

$$x_v^T (\alpha A_{vu} + \beta \mathbf{1})\, x_u = \alpha\, x_v^T A_{vu} x_u + \beta.$$

Both sides of the first condition are therefore transformed by the same map $t \mapsto \alpha t + \beta$, which preserves order because $\alpha > 0$, so the first condition for steady states is satisfied.

By the same logic we also have

$$x_u^T (\alpha A_{uv} + \beta \mathbf{1})\, e_s = \alpha\, x_u^T A_{uv} e_s + \beta$$

and

$$x_u^T (\alpha A_{uv} + \beta \mathbf{1})\, x_v = \alpha\, x_u^T A_{uv} x_v + \beta,$$

which satisfies the second condition for steady states. Therefore, steady states are not changed by the two operations listed above.
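A quick numerical sanity check of the identities above; this is a sketch with random matrices and distributions, assuming only numpy (names like x_v and x_u are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n))      # arbitrary payoff matrix, entries may be negative
ones = np.ones((n, n))           # the all-ones matrix, denoted 1 above
alpha, beta = 2.5, 7.0           # alpha > 0
B = alpha * A + beta * ones      # both operations applied at once

x_v = rng.dirichlet(np.ones(n))  # strategy distributions: nonnegative, summing to 1
x_u = rng.dirichlet(np.ones(n))

for s in range(n):
    e_s = np.eye(n)[s]
    # e_s^T (alpha A + beta 1) x_u == alpha e_s^T A x_u + beta
    assert np.isclose(e_s @ B @ x_u, alpha * (e_s @ A @ x_u) + beta)

# x_v^T (alpha A + beta 1) x_u == alpha x_v^T A x_u + beta
assert np.isclose(x_v @ B @ x_u, alpha * (x_v @ A @ x_u) + beta)
```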

With this we prove that the translation made in the algorithm does not affect our results, and this issue is resolved. Anyone is welcome to comment or add to this discussion with ideas to mitigate catastrophic cancellation in the many operations performed by this algorithm; that would be very helpful and insightful for me.
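As a starting point for that discussion, one possible mitigation is to accumulate the dot products with compensated summation and renormalize the state after each step; a sketch with a hypothetical helper, not tied to the repo's internals:

```python
import math
import numpy as np

def replicator_step(x, A, dt):
    """One Euler step that tries to limit floating-point drift: the mean
    payoff is accumulated with math.fsum (compensated summation of the
    rounded products) and the state is renormalized afterwards so the
    probabilities sum to one again."""
    payoffs = A @ x
    mean = math.fsum(x * payoffs)
    x = x + dt * x * (payoffs - mean)
    return x / math.fsum(x)
```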