Closed — oyamad closed this issue 1 year ago
You might also check out the `game_theory` submodule in QuantEcon.py, which contains `support_enumeration`, `vertex_enumeration`, and `lemke_howson` (these perform well thanks to Numba, but only for 2-player games):
```python
import numpy as np
from quantecon import game_theory as gt

# Test 2 - mid rand 6x7 game
p1 = """
253 646 641 395 258 153 375
713 17 145 582 338 258 145
174 265 282 588 80 996 478
346 517 963 829 976 735 334
492 106 199 986 278 658 407
506 288 439 345 549 869 986"""
p2 = """
296 23 932 537 515 130 679
491 49 315 432 977 799 777
284 761 914 268 313 864 25
75 944 444 209 804 452 502
869 837 950 707 755 452 9
579 794 747 272 929 288 780"""
shape = (6, 7)
payoffs = [np.fromstring(p, dtype=int, sep=' ').reshape(shape) for p in [p1, p2]]
g = gt.NormalFormGame(np.dstack(payoffs))
```
```python
%timeit gt.support_enumeration(g)
# 564 µs ± 12.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit gt.vertex_enumeration(g)
# 701 µs ± 5.28 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```
`lemke_howson` only samples one Nash equilibrium:
```python
%timeit gt.lemke_howson(g)
# 9.62 µs ± 61.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
```
(These timings are in Python, not through PyCall.)
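Independently of which routine produces a candidate equilibrium, the result can be sanity-checked with plain NumPy via the best-response condition (a minimal sketch; the `is_nash` helper below is hypothetical and written here for illustration, not part of the QuantEcon API):

```python
import numpy as np

def is_nash(A, B, x, y, tol=1e-9):
    """Check whether the mixed-action profile (x, y) is a Nash equilibrium
    of the bimatrix game with row-player payoffs A and column-player payoffs B:
    x must be a best response to y, and y a best response to x."""
    return (x @ A @ y >= (A @ y).max() - tol and
            x @ B @ y >= (x @ B).max() - tol)

# Matching pennies: the unique equilibrium is uniform randomization.
A = np.array([[1, -1], [-1, 1]])
B = -A
uniform = np.array([0.5, 0.5])
print(is_nash(A, B, uniform, uniform))                 # True
print(is_nash(A, B, np.array([1.0, 0.0]), uniform))    # False: column player
                                                       # can exploit a pure row action
```

QuantEcon's `NormalFormGame` also offers an `is_nash` method on action profiles, which does the same kind of check for N-player games.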
Thank you, I'll have a look on ~~Tuesday~~ the weekend. I am still looking (more generally than for this package) for a proper way to benchmark software in several languages, as I suspect that the different benchmark tools may measure different things...
Hello, thank you, I have added the benchmarks for the 2-player GameTheory.jl `support_enumeration` and `lrsnash` functions... impressive...
Dear @sylvaticus: Congratulations, it's great that you developed a nice solver that also works for N >= 3 player games! (Yes, `GameTheory.hc_solve` is very slow...; I am a developer of GameTheory.jl.)

To your benchmarks (only for 2-player games) you might add `support_enumeration` and `lrsnash` from GameTheory.jl:

On my machine:

For comparison: