sylvaticus / StrategicGames.jl

A set of functions in pure Julia for Game Theory
https://sylvaticus.github.io/StrategicGames.jl/
MIT License

On the benchmarks #2

Closed · oyamad closed this issue 1 year ago

oyamad commented 1 year ago

Dear @sylvaticus: Congratulations, it's great that you developed a nice solver that also works for N>=3 player games! (Yes, GameTheory.hc_solve is very slow...; I am a developer of GameTheory.jl.)

To your benchmarks (which cover only 2-player games) you might add support_enumeration and lrsnash from GameTheory.jl:

# Test 2 - mid rand 6x7 game
using GameTheory, BenchmarkTools   # NormalFormGame, support_enumeration, lrsnash and @benchmark
p1 = [
    253  646  641  395  258  153  375
    713   17  145  582  338  258  145
    174  265  282  588   80  996  478
    346  517  963  829  976  735  334
    492  106  199  986  278  658  407
    506  288  439  345  549  869  986]
p2 = [
    296   23  932  537  515  130  679
    491   49  315  432  977  799  777
    284  761  914  268  313  864   25
     75  944  444  209  804  452  502
    869  837  950  707  755  452    9
    579  794  747  272  929  288  780]
payoff = cat(p1,p2,dims=3)
g = NormalFormGame(payoff)
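
The equilibria themselves (not only the timings) can then be listed; a minimal sketch, assuming the usual GameTheory.jl return format of one tuple of mixed actions per equilibrium:

NEs_se  = support_enumeration(g)   # Nash equilibria via support enumeration (floating point)
NEs_lrs = lrsnash(g)               # extreme Nash equilibria in exact rational arithmetic (lrslib)
length(NEs_se)                     # number of equilibria found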

On my machine:

@benchmark support_enumeration(g)
BenchmarkTools.Trial: 5747 samples with 1 evaluation.
 Range (min … max):  768.502 μs …   8.122 ms  ┊ GC (min … max): 0.00% … 88.61%
 Time  (median):     808.898 μs               ┊ GC (median):    0.00%
 Time  (mean ± σ):   867.158 μs ± 436.857 μs  ┊ GC (mean ± σ):  3.12% ±  5.68%

  ▇██▇▇▆▅▄▃▂▁▁▁▁▁            ▂                                  ▂
  ██████████████████▇▇█▇▇▆▆▅▆█▇▆▅▅▄▅▅▆▅▁▁▅▇▇▃▇▆▅▄▄▄▅▃▃▁▄▁▅▆▆▅▅▅ █
  769 μs        Histogram: log(frequency) by time       1.58 ms <

 Memory estimate: 249.77 KiB, allocs estimate: 3223.
@benchmark lrsnash(g)
BenchmarkTools.Trial: 3004 samples with 1 evaluation.
 Range (min … max):  1.333 ms … 78.700 ms  ┊ GC (min … max): 0.00% … 25.34%
 Time  (median):     1.417 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   1.662 ms ±  4.126 ms  ┊ GC (mean ± σ):  3.53% ±  1.39%

    ▁▆▆▅███▄▆▃                                                
  ▅▇████████████▇▆▆▆▅▄▄▄▃▃▃▂▃▂▂▂▂▂▃▂▂▂▂▁▂▂▂▁▂▁▂▂▁▂▂▁▁▂▁▂▁▂▁▂ ▄
  1.33 ms        Histogram: frequency by time        1.91 ms <

 Memory estimate: 184.99 KiB, allocs estimate: 11058.

For comparison:

@benchmark nash_se(payoff)
BenchmarkTools.Trial: 770 samples with 1 evaluation.
 Range (min … max):  5.554 ms … 17.943 ms  ┊ GC (min … max): 0.00% … 50.41%
 Time  (median):     5.862 ms              ┊ GC (median):    0.00%
 Time  (mean ± σ):   6.496 ms ±  2.381 ms  ┊ GC (mean ± σ):  7.96% ± 12.82%

  ▅█▆▂                                                        
  ██████▆▅▅▄▄▁▁▄▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▁▄▄▇███ ▇
  5.55 ms      Histogram: log(frequency) by time     15.8 ms <

 Memory estimate: 3.71 MiB, allocs estimate: 72139.
@benchmark hc_solve(g)
BenchmarkTools.Trial: 1 sample with 1 evaluation.
 Single result which took 37.879 s (0.05% GC) to evaluate,
 with a memory estimate of 198.34 MiB, over 5924023 allocations.
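
For anyone reproducing the comparison, the StrategicGames.jl side needs roughly this setup (a sketch; the exact structure of the value returned by nash_se is not shown here, so check the package docs):

using StrategicGames
eqs = nash_se(payoff)          # Nash equilibria of the 6x7 game via support enumeration
@benchmark nash_se($payoff)    # interpolating with $ avoids also timing global-variable access
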
oyamad commented 1 year ago

You might also check out the game_theory submodule in QuantEcon.py, which contains support_enumeration, vertex_enumeration, and lemke_howson (which perform well using Numba, but only for 2-player games):

import numpy as np
from quantecon import game_theory as gt

# Test 2 - mid rand 6x7 game
p1 = """
    253  646  641  395  258  153  375
    713   17  145  582  338  258  145
    174  265  282  588   80  996  478
    346  517  963  829  976  735  334
    492  106  199  986  278  658  407
    506  288  439  345  549  869  986"""
p2 = """
    296   23  932  537  515  130  679
 491   49  315  432  977  799  777
 284  761  914  268  313  864   25
  75  944  444  209  804  452  502
 869  837  950  707  755  452    9
 579  794  747  272  929  288  780"""
shape = (6, 7)
payoffs = [np.fromstring(p, dtype=int, sep=' ').reshape(shape) for p in [p1, p2]]
g = gt.NormalFormGame(np.dstack(payoffs))
%timeit gt.support_enumeration(g)
564 µs ± 12.1 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
%timeit gt.vertex_enumeration(g)
701 µs ± 5.28 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

lemke_howson computes only one Nash equilibrium:

%timeit gt.lemke_howson(g)
9.62 µs ± 61.6 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)

(These timings are in Python, not through PyCall.)
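
For completeness, a rough sketch of how the same functions could be driven from Julia through PyCall (the automatic Julia-to-NumPy array conversion and the extra call overhead are assumptions to verify, not something measured above):

using PyCall
gt_py = pyimport("quantecon.game_theory")
g_py  = gt_py.NormalFormGame(payoff)   # reuse the 6x7x2 Julia payoff array from the first snippet
gt_py.lemke_howson(g_py)               # one Nash equilibrium
gt_py.support_enumeration(g_py)        # all equilibria via support enumeration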

sylvaticus commented 1 year ago

Thank you, I'll have a look ~~on Tuesday~~ over the weekend. I am still looking (more generally than just for this package) for a proper way to benchmark software in several languages, as I suspect that the different benchmark tools may measure different things...

sylvaticus commented 1 year ago

Hello, thank you, I have added the benchmarks for the 2-player GameTheory.jl support_enumeration and lrsnash functions... impressive...