Closed eric-s-s closed 7 years ago
I'm not sure I like the sound of this idea: these strategies are really niche and artefacts of the start of the library. They're labelled as cheaters
so don't get used often. I don't think it's worth spending too much time on this and also I'm not a fan of adding a method to the Player
class just to deal with these weird strategies.
A more succinct version: I'm hesitant to add a (minor) layer of abstraction/complexity for the sake of performance in this (niche) case.
But let's wait to hear what @marcharper and @meatballs have to say.
:)
Got it.
So are these strategies basically dead-ends, then? Could/should they be deleted?
> so are these strategies basically dead-ends, then? could/should they be deleted?
That's certainly a valid/logical point of view. I would suggest they're a bit of fun, so I'd personally like to see them stay in the library, but I don't think we should worry about optimising them. EDIT: ... at least if there is a cost, and I think that the slight addition you make here (which is a good fix to the problem) is a cost in terms of code complexity.
Yes, that all makes a lot of sense. I do have the tendency to get caught up on small details, in case the previous PR did not make that obvious.
I agree with @drvinceknight: we should leave them alone. It isn't worth adding anything to Player just for the sake of these strategies, but there is also nothing to be gained by getting rid of them. The classifier dict allows us to exclude them from any tournament.
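For reference, the classifier-based exclusion can be sketched in plain Python. This is an illustrative stand-in, not Axelrod's actual API: the standalone dicts and the filtering expression are hypothetical, though `inspects_source` mirrors one of the library's real classifier keys.

```python
# Illustrative sketch (not Axelrod's actual API): each strategy carries a
# classifier dict, and a tournament can filter on it to exclude cheaters.
strategies = [
    {"name": "TitForTat", "classifier": {"inspects_source": False}},
    {"name": "Geller", "classifier": {"inspects_source": True}},
    {"name": "MindReader", "classifier": {"inspects_source": True}},
]

# Keep only strategies that do not inspect their opponent's source.
fair_players = [s["name"] for s in strategies
                if not s["classifier"]["inspects_source"]]
# fair_players -> ['TitForTat']
```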
I'll wait on @marcharper before closing this and the associated PR, then? Or does one of you close it?
> i do have the tendency to get caught up on small details, in case the previous PR did not make that obvious.
You did awesome work on that PR :) It ended up being quite complicated!
Let's add a note to the top of those files that basically says not to put a lot of time into optimizing them.
We do have some simulation code in MockPlayer, simulate_play, etc. that maybe is worth cleaning up. I haven't looked at it in a while.
I also created a history class some time ago but it too had a lot of additional runtime -- might be worth revisiting: see this branch. With some effort it could be a nice addition to the library, and a bit of extra runtime may be worth it.
> Let's add a note to the top of those files that basically says not to put a lot of time into optimizing them.
Good call. I'll do that now :)
> We do have some simulation code in MockPlayer, simulate_play, etc. that maybe is worth cleaning up. I haven't looked at it in a while.
Might be worth holding off on that till after the dust clears on #884 (responses_test is a heavy user of mock_player.simulate_play, so it might be worth waiting until we're using versus_test throughout).
> I also created a history class some time ago but it too had a lot of additional runtime -- might be worth revisiting: see this branch. With some effort it could be a nice addition to the library, and a bit of extra runtime may be worth it.
Yup :+1: :)
See #944
I was refactoring the tests for Darwin and came across the use of the "inspect" module. It slows down the strategy by roughly a factor of 20; the same goes for Geller and MindReader.
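For a sense of scale, a rough micro-benchmark shows why stack inspection is so costly compared to an ordinary method call. This is illustrative only: the exact `inspect` call the library uses may differ, and absolute timings will vary by machine.

```python
import inspect
import timeit


def peek_caller():
    # Geller-style trick: walk the call stack to see who is calling us.
    # Building FrameInfo objects for every frame is expensive.
    return inspect.stack()[1].function


def plain_move():
    # An ordinary strategy just returns a move.
    return "C"


t_inspect = timeit.timeit(peek_caller, number=1_000)
t_plain = timeit.timeit(plain_move, number=1_000)
# On a typical machine, t_inspect is orders of magnitude larger than t_plain.
```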
I came up with a workaround.
I have a PR that, while not great, clearly demonstrates the idea; I'll submit it, referencing this issue, as soon as this issue is up and running.
Here's the gist of my solution:

- `Player` has a new method, `strategy_for_simulation`.
- In `_strategy_utils`, `limited_simulated_play` now calls `strategy_for_simulation`; all other simulation methods in `_strategy_utils` rely on that method.
- Darwin, Geller and MindReader override the method so that they return some default when another strategy tries to run a simulation on them.
- Geller's `strategy` method calls `strategy_for_simulation` on its opponent.
- MindReader's `strategy` method calls `_strategy_utils.look_ahead`, which calls `strategy_for_simulation`.
- All regular testing is unaffected, as it relies on `mock_player.py`, which uses the `strategy` method.
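The design can be sketched as follows. This is a minimal illustration, not the PR's actual code: the `Player` and strategy classes here are toy stand-ins for the library's, and returning `"C"` is an assumed placeholder default.

```python
class Player:
    """Toy stand-in for axelrod.Player."""

    def __init__(self):
        self.history = []

    def strategy(self, opponent):
        raise NotImplementedError

    def strategy_for_simulation(self, opponent):
        # Default: simulations behave exactly like real play, so ordinary
        # strategies need no changes at all.
        return self.strategy(opponent)


class TitForTat(Player):
    def strategy(self, opponent):
        # Cooperate on the first move, then mirror the opponent's last move.
        return opponent.history[-1] if opponent.history else "C"


class Geller(Player):
    def strategy(self, opponent):
        # Real play: ask the opponent's simulation hook for its next move,
        # instead of the slow inspect-based stack walking.
        return opponent.strategy_for_simulation(self)

    def strategy_for_simulation(self, opponent):
        # When someone simulates *us*, return a cheap default rather than
        # recursing into further simulation.
        return "C"
```

With this split, simulation code only ever calls `strategy_for_simulation`, so two Gellers probing each other bottom out at the cheap default instead of looping.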
On my computer, this cut testing time by over a factor of ten for each strategy: Darwin 0.209s -> 0.004s, Geller 3.167s -> 0.011s, MindReader 3.451s -> 0.209s.