I don't know how feasible this would be, but your bots are controlled by some numerical parameters. It might be possible to let the player adjust them in the app to get a smooth(er) difficulty scale. If that were the plan, I would suggest finding an AI implementation that always works the same way but is completely controlled by those parameters. That would simplify the bot code and at the same time offer more flexibility. You would then have a bunch of default AI parameter sets, as well as the option to manually tweak an additional custom AI (in a settings menu).
Additionally, an ultimate AI would not work on a kingdom-by-kingdom basis but on the whole map at once, and it would usually outrank any of the existing AIs. So, if you want to challenge yourself and other players, you might want to look into this option as well.
The AI doesn't use a lot of numerical parameters right now. Definitely not ones that would make a lot of sense to configure. And I don't plan to rewrite the AI. I already spent too much time doing that :D
I once tried to write a "proper" AI. The basic idea was to try every possible move, take the resulting game state and try every possible move again. At the end, the states were rated and the best one was selected. Of course, this is a simplified explanation. I did tons of optimizations but never achieved satisfactory performance for the bigger maps. I then settled on the current version, which is a bunch of if/else at its core.
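Roughly, the idea was a two-ply brute-force search like the following sketch (the interfaces and names here are invented for illustration, not my actual code):

import java.util.List;

public class BruteForceSketch {

    /** Invented abstraction of a game state, just for illustration. */
    interface State {
        List<Move> getPossibleMoves();
        State apply(Move move);
        int rate(); // heuristic rating of the position
    }

    interface Move {}

    /** Try every move, then every follow-up move, rate the resulting states
     *  and return the first move of the best line. The branching factor makes
     *  this explode on bigger maps. */
    static Move bestMove(State state) {
        Move best = null;
        int bestScore = Integer.MIN_VALUE;
        for (Move first : state.getPossibleMoves()) {
            State afterFirst = state.apply(first);
            // also rate the one-move state, in case there is no follow-up move
            int score = afterFirst.rate();
            for (Move second : afterFirst.getPossibleMoves()) {
                score = Math.max(score, afterFirst.apply(second).rate());
            }
            if (score > bestScore) {
                bestScore = score;
                best = first;
            }
        }
        return best;
    }
}

Even with lots of optimizations on top of something like this, the number of possible moves per turn was simply too large on the bigger maps.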
I wasn't speaking of rewriting the AI at all (at least not in the sense that the AI would behave differently). And yes, technically there aren't many numerical parameters, but there are a bunch of implicit ones. If you take a look at this line and the following lines, you have:
That all combined means that you could define four (or perhaps more later on) numerical parameters in the AI enum and then boil the whole linked method down to what is now the SMART AI, guarding everything with numerical checks. Something like:
removeBlockingObjects(gameState, pickedUpUnits, intelligence.removeThreshold, random);
if (random.nextFloat() <= intelligence.playThreshold) {
    if (intelligence.refinements == 0) {
        // simplest case: just conquer greedily, no refinement passes
        conquerAsMuchAsPossible(gameState, pickedUpUnits);
    } else {
        defendMostImportantTiles(gameState, pickedUpUnits, placedCastleTiles);
        for (int i = 0; i < intelligence.refinements; i++) {
            // alternate between conquering and re-evaluating the defense
            conquerAsMuchAsPossible(gameState, pickedUpUnits);
            sellCastles(gameState.getActiveKingdom(), placedCastleTiles);
            pickUpAllAvailableUnits(gameState.getActiveKingdom(), pickedUpUnits);
            defendMostImportantTiles(gameState, pickedUpUnits, placedCastleTiles);
        }
    }
}
protectWithLeftoverUnits(gameState, pickedUpUnits);
I did that with three parameters, but you could as well use more, of course.
No guarantee that this would work exactly, but I see no immediate reasons why it shouldn't either. Your current AI levels would then be described by (removeThreshold, playThreshold, refinements):
Easy: (0.3, 0.5, 0)
Medium: (0.7, 1, 0)
Hard: (1, 1, 1)
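Just to illustrate, the enum could look roughly like this (field names and values are of course only the ones from my suggestion above, not your actual code):

public enum Intelligence {
    // (removeThreshold, playThreshold, refinements) as listed above
    EASY(0.3F, 0.5F, 0),
    MEDIUM(0.7F, 1F, 0),
    HARD(1F, 1F, 1);

    public final float removeThreshold; // chance to remove a blocking object
    public final float playThreshold;   // chance to act at all in a given turn
    public final int refinements;       // extra conquer/defend passes

    Intelligence(float removeThreshold, float playThreshold, int refinements) {
        this.removeThreshold = removeThreshold;
        this.playThreshold = playThreshold;
        this.refinements = refinements;
    }
}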
You would have to tweak the code a bit to prepare for intelligence.refinements > 1, because in these cases the AI should be able to undo unit acquisition/upgrading as well as castle acquisition, and only the castle aspect is covered so far.
Personally, I would like to try playing against a (1, 1, 0) AI. And if the whole bot code were controlled by a few parameters, it would be possible for the player to input these parameters to create a custom AI level (if you had a settings menu).
I understand now. Putting the parameters in the enum is a good idea. But I wouldn't want to make a UI for configuring it because:
It could go into some config that advanced users like you could edit.
I tweaked the difficulty of the bots at levels easy and medium (7f0daa5aae0856b4f01e14cea58c05722569c7a4) but I'm not sure if it was enough. Looking for feedback.
Hey, (quick) comment from my side. I haven't checked these changes yet, but I can and will in the near future as I now have everything set up to quickly adapt to the newest version. But let me tell you this from my experience so far with the released version:
The dumb bots are mostly really easy to beat (you can win almost every generated map if you know the game a bit). That they only act roughly every other turn makes them - not quite predictable, but - simpler, because the AIs forgive mistakes by not exploiting your weaknesses immediately. I can see why you want to offer an AI level that every new player can "learn with", but at least for me, any easier AI would not pose any difficulty, which makes the game a bit boring. That said, can you provide a short summary of what the dumb bot does differently now, so that I can focus my tests on those aspects? I checked the diff, but I was a bit overwhelmed by the amount of changed lines.
The medium bots are comparable to some human players, I'd say. They don't play optimally and they don't make use of castles, but they can actually beat you relatively quickly if you make mistakes that leave you vulnerable to a certain degree. What I learned in my games is that this medium AI is more dangerous the smaller the map is. For a long time I didn't play medium or large maps because I thought that these larger maps just make the games last longer without actually changing the gameplay. Now that I mainly play on PC with a larger screen, I tend to play more medium-sized maps. And these matches are (a bit) easier because - I guess - the AI has no single enemy to focus on. While on small maps it is possible that some opponent targets you exclusively because you are the only neighbor or the AI thinks you are the easier target, on larger maps it is more likely that the bots will fight each other, leaving you the option to take over their territory.
I don't know if you (Sesu) can tweak the AIs to adapt better to the map size, but if you (a player) have problems consistently beating an AI level in the current (release) version, try larger maps or be more selective about which matches you start in the first place. The starting point decides a lot of what is possible for you to achieve. On medium, I only start roughly every third generated map, and with these I have a good win rate.
What I saw in the changes is that there are now two out of three bots that essentially won't play every single turn. That's bad! Every advanced player now only has two options left: use the best AI with all the capabilities that only this level has, or play against AIs that are passive some portion of the time. I really want variety among the AIs that use chanceToConquerPerTurn = 1F, otherwise - at my current level - the game will be too easy for me (or too hard, because I haven't played against the smart AI yet at all). If you want to lower the difficulty of the two easier AIs, then please do this by introducing new levels in addition to the three available ones, so that advanced players can still have their level of difficulty. You effectively made the game more appealing to new players by worsening the experience for established players (again, without having played against the new AIs yet, but that much I can say from the diff alone). That said, I was always a bit confused by how much the smart AI can do that no other level can. It's quite a huge step from medium to smart. Some middle point between the old medium and smart would be appreciated.
Thanks for your feedback here as well as on the commit. I will do some more tweaks. It makes sense to have at least one AI level between medium and hard that is not handicapped with chanceToConquerPerTurn < 1. But that alone is probably not enough of a difference to medium.
I am not sure if you correctly identified the reason for medium maps feeling easier. My guess is that because of the law of large numbers, there is simply less luck involved and you can win more games with skill.
I see how you prefer the easy bots to be more predictable. But I find it quite challenging to have them act suboptimally and predictably at the same time. I will have a look at it again, though.
You commented on the commit that you trick the AI by forcing it to remove a tree. I don't consider this a bad thing. You need to have some ways to outsmart them. There are many small (and big) things that could be improved even for the smart level. For now, I think they are good enough.
Hey, I wasn't complaining about the possibility that you can trick the AI into certain moves. Rather, I would complain that this possibility becomes an even bigger weakness the smarter the AI is (or is supposed to be). On dumb, tree removal is almost 50:50; on medium it's roughly 3:1; but smart will remove all trees regardless. Ideally, at least the smart AI would assign some heuristic value to each move (removing trees included) and then decide whether removing is preferable or not. As said before, I mostly play against medium, so this trick works most of the time; if I played against smart, it would always work. So the more difficult AI is easier to "outsmart" as you said. That just feels a bit wrong to me. But I'm no AI specialist myself, so I'm just talking from the player's perspective of how I would assume a human opponent would act at different levels of experience.
After all, a singular regular tree (not a palm tree) will not cause you any problems, unless the -1 income is really the issue here. So the expert AI (or me, if I weren't trying my best to get rid of ANY tree) could ignore it, unless it had units without any serious use this turn. But generally, any other move is more valuable than removing a singular tree. Even defensively protecting your kingdom! But again, I'm no AI expert, so I can only estimate how difficult programming the bots is. Props to you that the AI is as capable as it is today.
And regarding the law of large numbers: possibly, but you can't deny that a neighbor of yours has more ways to expand on a larger map than on a smaller one, because on a smaller map this neighbor might be in a dead end only connected through you. And on smaller maps this will statistically happen more often. Or perhaps I developed more skill while transitioning to larger maps, so smaller maps felt harder because I wasn't as good back then. Who really knows?
PS:
But I find it quite challenging to have them act suboptimally and predictably at the same time.
That is the quintessential problem with everything AI. If you aren't limited by computational power / time, finding the optimal solution is a piece of cake. Finding a solution that is human-like (because human players mostly are just that: somewhat predictable and suboptimal) is hard. Perhaps try my noise idea. I can't guarantee that it works, but in the field of game development, adding some statistical noise works surprisingly well in many domains, be it map/terrain generation, textures, or sound design. Why should AI be the exception?
So the more difficult AI is easier to "outsmart" as you said.
You do have a point there.
Applying noise to the optimal solution could work well for the smarter bot levels. But it would make the easy bots more intelligent (which I don't want) unless the noise is big enough to essentially make the decision random again.
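If I tried it, it would probably be something as simple as this sketch, where noiseStrength would just be yet another per-level parameter (names are made up):

import java.util.Random;

public class NoisyScoring {
    /** Adds Gaussian noise to a raw move score. With noiseStrength = 0 the
     *  ranking of moves stays optimal; larger values blur the decision
     *  towards random. */
    static float noisyScore(float rawScore, float noiseStrength, Random random) {
        return rawScore + (float) random.nextGaussian() * noiseStrength;
    }
}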
So, I had this page open the whole day and was thinking about this every now and then. I'm wondering whether the problem with a "training AI" (essentially the lowest level of bots that you are trying to produce here) could be that it shouldn't be based on the same scheme that the smarter ones are based on. I know it was me who suggested streamlining the AI code, but perhaps the training AI has to be substantially different to accomplish its task of being really easy to beat.
Perhaps this AI level should just follow strict step-by-step rules, with each rule being overly concrete and involving only very little randomness. Like you would introduce a newcomer to a new topic: start concrete, later develop a more abstract view of the domain.
What I mean by "overly concrete" is that you would specify the unit strength, for example. The current AI code at times selects the best suitable unit based on what is available and what it could acquire. A concrete rule instead could be: "Step 1: Use peasants to clear as many trees as possible". If the bot happened to not have any (or enough) peasants (and couldn't acquire any), this step would (partially) fail and some trees would be left for the next turn. If some kingdom only had spearmen (left), well, spearmen aren't peasants, so they wouldn't be "wasted" on trees. The only randomness would be the order in which the bot tries to remove the trees.
This example rule comes full circle to what I said yesterday about trees/gravestones. It's way more natural that simpler AIs would use up more units for such mundane tasks than do any strategic planning about what to conquer.
And this training AI doesn't have to be good. Its only reason to exist would be to help players get good enough to face the next level of AIs. So, this training AI might: (a) never use castles, (b) never use units stronger than spearmen (or only if it sees you using stronger units), (c) never acquire more units than the kingdom can pay for each turn (essentially keeping its payout each turn at or above 0), (d) whatever else you could think of.
The AI could look like this:
Step 1: Use peasants to clear as many trees (or gravestones) in its kingdom as possible.
Step 2: Use peasants to clear as many unprotected trees/gravestones in neighboring kingdoms as possible.
Step 3: Use spearmen to clear weakly protected trees/gravestones in neighboring kingdoms.
Step 4: Use spearmen to kill enemy peasants where they aren't guarded.
Step 5: Use spearmen to conquer opposing capitals.
...
If any step isn't executable because of missing prerequisites (no trees or no peasants, for example), just skip it. And this style would certainly also be suited to having a conditional break. After every other step, for example, you could toss a coin (metaphorically) and end the turn for this kingdom on heads. And while this is certainly quite similar to the current dumb AI, I could see it differing enough to be an appealing training AI. It's very predictable, and it wouldn't weigh up too many options at once or strategize a lot. If you play according to a few basic rules, this AI will never be a threat to you. But that again could be the point where I struggle to design this basic AI, because I can only really relate to new players that play like I used to play. So, other newcomers might still have a hard time against this idea of mine.
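To make the step idea a bit more tangible, here is a very rough sketch (all names invented, nothing based on your actual classes):

import java.util.List;
import java.util.Random;

public class TrainingBotSketch {

    /** One strictly ordered rule; it does nothing if its prerequisites
     *  (e.g. no peasants, no trees) are not met. */
    interface Step {
        void execute();
    }

    /** Executes the fixed list of steps in order. After every other step,
     *  a (metaphorical) coin is tossed and the turn may end early. */
    static void playTurn(List<Step> steps, Random random) {
        int executed = 0;
        for (Step step : steps) {
            step.execute();
            executed++;
            // conditional break: 50% chance to end the turn after every second step
            if (executed % 2 == 0 && random.nextBoolean()) {
                return;
            }
        }
    }
}

Each of the steps 1-5 above would then be one Step implementation, and trying another step order is just reordering the list.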
Your ideas do make sense to me but I see a problem with conquering and defending. The conquering logic you describe is almost the opposite of the optimal one. I actually tried this order before and it made the AI too stupid to make any sense. The bots were constantly taking unimportant tiles from each other but not making real progress in defeating anyone. This is why I settled for conquering (and defending) random tiles in the current iteration of the AI.
The conquering logic you describe is almost the opposite of the optimal one.
Well, isn't that the point of a dumb AI? If it were optimal, it wouldn't be easy to beat.
I actually tried this order before and it made the AI too stupid to make any sense. The bots were constantly taking unimportant tiles from each other but not making real progress in defeating anyone.
Now, on the one hand I feel dumb for rehashing an idea you already went through and tossed/rejected, but on the other hand I'm not totally convinced that my idea isn't actually an improvement over what you tried earlier on. I still haven't had time to get hands-on experience with your committed version, but one of the core ideas of my suggestion was that the AI won't primarily do stupid stuff. Or at least that you can keep it from doing stuff that is too dumb. That's why there are strictly ordered steps. And if an AI implementation acts too dumb, you can always try another step order.
This is why I settled for conquering (and defending) random tiles in the current iteration of the AI.
With no weights on either the defensive or the offensive side, how will this iteration be any better than your spin on my idea? Sure, randomness can favor you (but mostly won't), but if your bots defend and attack random tiles, how won't that play out the same as you described, with AIs taking useless tiles from each other instead of (sort of) strategically making moves? Just take these two examples: If defending an inner tile with no neighboring border tile is as good as defending a border tile, some units will be placed in the center and not actually do any defensive work. If conquering a useless tile is as good as conquering an essential opponent tile, the AI won't make any meaningful progress at least half the time (or even more often if you stick with the chanceToConquerPerTurn check for the dumb AI).
Do you follow Tom Scott (plus) on YouTube? Check out his video where a flight instructor guides him over comms to land an autopilot-assisted (simulated) commercial plane. That's essentially the way I imagine a dumb AI working. Assume the AI doesn't really know the game and the instructor doesn't really see the game state, but the instructor still tries to make useful suggestions about what to do next.
And - to stick with this analogy - if the bots don't act the way you imagine, just swap out the instructor. ;) By thinking in steps, you can control when particular moves will be considered. And if the step "conquer any empty and unguarded tile" comes strictly after "conquer opponent capitals", then the AI will ALWAYS prefer capitals over taking "unimportant tiles from each other".
If defending an inner tile with no neighboring border tile is as good as defending a border tile, some units will be placed in the center and not actually do any defensive work
Defending is limited to tiles that are close to the border, even for the weakest AI. But defending the capital is just as likely as defending a tree. If I created steps that say "protect all trees first, then protect all empty tiles, then all capitals", this would be much worse than the random order, because with the random one the capital is at least protected sometimes. Of course I could mix up the priorities, but I can't find steps that are good enough to make a convincing opponent and at the same time not too strong. The same applies to attacking, even more so. I don't follow Tom but I get the idea. The problem is the execution.
So, I just play-tested the dumb AI. It's just a subjective impression, but the new easy AI seems to act dumber than the old one. So, given that this was your intention, I'd say mission accomplished. I just don't know whether this new version is dumb enough by your standards, i.e. whether literally everyone could use this new AI to learn the game, but it very well could be.
But I got the impression from time to time that it was in fact "rampaging" / taking random/unimportant tiles too often for my taste. So, I prefer the slightly harder old AI, as it behaves more naturally. And now that you made the easy AI easier, you definitely need another smart(er) AI, because everyone will want to upgrade to a smarter AI quite soon, and that would only leave two remaining AI levels, whereas the old easy AI could actually be fun to play against for longer.
That said, it acted so dumb that I actually surrendered one match where I - because I was really bored by the AI - made a detrimental move that ruined me. And in another match, one bot had a spearman rather early but didn't use it for several turns, either because it was "used up" defensively, or because the bot didn't get to move that turn. But I really didn't enjoy playing against either dumb version any more. Even the old easy AI is too dumb to challenge me in the slightest. Only with some additional restrictions (like no castles) is it partially enjoyable.
Now that I have experienced again how the easy AI works, I can say for sure that my concept would definitely be too smart for an easy AI. Still, I would like to play against my AI at some point. Perhaps I will implement it myself eventually. It can't be considered an easy AI, but because it follows a different approach, it could serve as a bridge between different difficulties of your defined AIs for some players. It would actually be beneficial for the game to include AIs based on different concepts / implemented by different contributors, because every coder / every concept challenges you differently. Just as different human players require different strategies, different AIs could/should do the same.
Perhaps not ideal, but let me hijack this thread for this issue/question because it's AI-related as well. I played a match against the old medium AI today when - with three players left in the play order blue -> brown -> white - the following situation occurred: I can't provide any pictures of what it looked like the previous turn, but seeing white place its units the way it did is highly suspicious and, as far as I can tell, evidence of faulty AI logic. And depending on the previous turn, even brown might have screwed up its turn, given that white could have split the brown kingdom easily if it hadn't wasted its units on the right. Whatever the intention behind white's move was, the outcome is really far from ideal. The peasants can't be on those tiles for offensive reasons (aka conquering brown territory) because those tiles would have been protected by brown peasants. So they have to be there for defensive reasons. But in that case, white should have used its spearmen more intelligently, defending its other border to brown and the long border to blue/me. It's really bad how white has opened itself up to being taken over by me without even using knights. I wouldn't have expected the dumb AI to play that way, even less so the medium AI. Any thoughts on how that could have happened?
The medium AI has a protectWithUnitScoreTreshold of Integer.MAX_VALUE, which means that no units are reserved for protecting at the beginning of the turn. Then the conquering phase follows. I assume that it was skipped because of the brown tiles that could have been conquered by peasants but weren't. Finally, protectWithLeftoverUnits was called. All the units were used to protect tiles, beginning with the most important ones. Note that only border tiles and their neighbors are candidates for protecting. However, since the AI has smartDefending set to false, all tiles have the same score. Because of that, "random" tiles are selected for protecting. But there is a catch which I now realize is probably problematic. If multiple tiles have the same score, their coordinates are used for determining which one to choose - not actually random. I did it this way to have consistent bot behaviour when re-playing a seed. This causes tiles with higher x coordinates to be prioritized. This is why only the tiles at the border to the brown kingdom were protected. I will address this... Maybe using the hashcode of the coordinates is a better choice.
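For illustration, the tie-break could look something like this (Tile is just a stand-in here; note that a naive 31 * x + y hash would again mostly sort by x, so some bit mixing is needed):

import java.util.Comparator;

public class TileTieBreaker {

    /** Minimal stand-in for a map tile: coordinates plus a protection score. */
    static class Tile {
        final int x, y;
        final int score;
        Tile(int x, int y, int score) { this.x = x; this.y = y; this.score = score; }
    }

    /** Mixes the coordinates into a scrambled int, so the tie-break stays
     *  deterministic for a given seed but is not ordered by x or y. */
    static int coordinateHash(int x, int y) {
        int h = 31 * x + y;
        h ^= h >>> 16;
        h *= 0x45d9f3b;
        h ^= h >>> 16;
        return h;
    }

    /** Orders tiles by score; ties are broken by the scrambled coordinate hash. */
    static final Comparator<Tile> BY_SCORE_THEN_HASH =
            Comparator.<Tile>comparingInt(t -> t.score)
                    .thenComparingInt(t -> coordinateHash(t.x, t.y));
}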
against the old medium AI
since the AI has smartDefending set to false
It's always possible that I messed this up (I have more than one version built so that I can switch for testing), but I'm pretty sure that the version I am using right now is from before the smartDefending flag was added (I have no end game detection in this version), and as far as I understand, this means that all AIs will always act "smart". So this should not be the issue here.
If multiple tiles have the same score, their coordinates are used for determining which one to choose - not actually random.
That might be part of the issue, but the more problematic thing here is that apparently the protectWithLeftoverUnits method isn't taking into account what already placed units do to the defensive scores of each tile. If you have to place, say, 5 units with this method, it should look something like: find a good place to defend with some unit, place that unit, then check the protection levels again to find a place for the second unit, and so on. If a spearman already defends some tiles, you don't need an additional peasant to protect the same tile twice. And even less do you need another line of spearmen at the same border if another border is left completely unprotected (you only need a second line of defense if you fear being overrun by some enemy, but if you leave other borders open, then the threat is in a completely different location than your doubly defended border line).
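In other words, something like this greedy loop (made-up names again, just to illustrate the idea):

import java.util.List;

public class LeftoverPlacementSketch {

    /** Invented stand-ins for illustration. */
    interface Unit {}
    interface Tile {}
    interface Kingdom {
        Tile findBestTileToProtect(); // re-evaluates the current protection levels
        void place(Unit unit, Tile tile);
    }

    /** Places leftover units one by one, re-checking the protection scores
     *  after each placement instead of scoring all tiles only once. */
    static void protectWithLeftoverUnits(Kingdom kingdom, List<Unit> leftoverUnits) {
        for (Unit unit : leftoverUnits) {
            Tile target = kingdom.findBestTileToProtect();
            if (target == null) {
                break; // nothing left worth protecting
            }
            kingdom.place(unit, target);
            // the next iteration sees the updated protection levels
        }
    }
}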
You didn't specify how old :P Since you have the new menu button, it can't be close to the release version. But it could be before the latest AI changes of course. protectWithLeftoverUnits calls getBestDefenseTileScore after placing a unit which calculates all the scores again. This is why I can't explain this behaviour with smartDefending / the old AI.
Oh, I wasn't aware that AI development moves so fast that "old" has already become "ancient". ;) Another reason why I suggested using more fine-grained versioning, so that I could just say "medium AI in version 1.0.42". That also makes me wonder how much has to change before you release the next official version, and whether that release would then immediately be called "version 2.0.0" (given how much will have changed by then). If so, why use a version triple at all? Or in other words: you already have a powerful versioning system, just use it. Bump the "patch" frequently (perhaps even with each commit). Bump the "minor" more often, whenever you feel you accomplished something new (new map sizes, new graphics, the new end game detection, ...). And release at least all minor updates. My phone is still running a version that behaves very differently from my "live" version.
And, back on topic: I haven't checked the code yet, but getBestDefenseTileScore should subtract some score from each tile that is already covered by some other unit, to make it more appealing to cover new parts of the kingdom. A tile that, including its neighbors, isn't protected yet should get a higher score than a tile that already has protected tiles among itself and its neighbors (where it is up to you and further testing whether it makes a difference if (only) one neighbor is protected vs. the tile itself). Ideally, the higher the number of protected tiles among these seven affected tiles, the lower the score. If that already exists, then there could be a bug in this portion. If it doesn't exist yet, then I would understand how this came to be.
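In pseudo-Java, the kind of scoring I mean would look something like this (interface and parameter names are made up, of course):

import java.util.List;

public class DefenseScoringSketch {

    /** Minimal stand-in for a map tile. */
    interface Tile {
        List<Tile> getNeighbors();    // up to six neighbors on a hex map
        boolean isAlreadyProtected(); // already covered by a unit or castle
    }

    /** Starts from a base value and subtracts a penalty for every
     *  already-protected tile among the candidate and its neighbors, so
     *  uncovered parts of the kingdom become more attractive to defend. */
    static int defenseScore(Tile candidate, int baseScore, int penaltyPerCoveredTile) {
        int covered = candidate.isAlreadyProtected() ? 1 : 0;
        for (Tile neighbor : candidate.getNeighbors()) {
            if (neighbor.isAlreadyProtected()) {
                covered++;
            }
        }
        return baseScore - covered * penaltyPerCoveredTile;
    }
}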
The version will only be bumped when there is an official release. If you need to point to a specific development version, you can simply use the commit hash. It wouldn't be a bad idea to display that somewhere, though. I initially intended to release a 1.1 quickly to address the most severe usability and balancing issues of the original release, the AI being one of them. But some of the changes took longer than I anticipated and I don't have a lot of time to work on the project right now. However, there are only two big TODOs on my list for the next version: balancing the AI and an ingame changelog.
A logic like this does exist but is disabled for AIs that have smartDefending=false in master. Previously, all AI levels had this intelligent behaviour. This is one reason I assumed you played on a relatively new version.
I overhauled the removal of blocking objects. The two dumber AI levels will now try to remove all blocking objects. The smarter levels will remove the "dangerous" objects with a high priority and the other ones only when they have spare units. In addition to that, there is a new difficulty level "hard". The previous level "hard" is now "very hard". @d-albrecht would you mind testing again?
Just pulled the changes and checked the new "hard" mode. I haven't really played against "castle AIs" yet, but anything that doesn't play every round and doesn't play smart (offensively and defensively) has become a bit boring for me. And the "dumbest" AI that plays according to these criteria automatically uses castles. That might still be a point to further improve/smooth the "learning curve". But that's just theory; I can't back it up with experience right now.
So, from a single test game right now I can say, that I used this change to upgrade my default AI level to hard. But I will take another look at the other levels as well. I will get back when I have.
I did a bit of poking around and so far it feels quite right. I'm not sure I can still fully assess the different difficulty levels correctly any more, especially because against the easier levels I tend to mess around more. The result is that I need approximately the same amount of time against the easiest and the "hard" AI (with my reduced attention the easiest can strike back a few times and make me rebuild/reconquer some parts, while against the hard AI I simply take longer because I play more carefully).
But: what is your stance on castles that don't belong to any kingdom? I encountered this situation in my game against a hard AI and it just feels wrong to see the brown castle in the middle remain even though there is no brown kingdom anymore. Or in other words: are castles supposed to get destroyed when they get "carved off" from the actual kingdom (like capitals do), or are they really meant to stay in place like that? Seeing how Slay handles cut-off units, your implementation already behaves differently (cut-off units in Slay stay in place until they starve the next time this color plays) and that's not a bad thing. I just wasn't sure if this was how you imagined castles to behave.
Thanks for testing again! I will ask some other people for feedback as well.
The cut off castle is a bug for sure. I will have a look.
@d-albrecht do you know what happens to a cut off castle in Slay? Does it simply disappear?
I don't know if this question is still relevant (I haven't looked into this issue tracker for a while), but unfortunately, no, I don't. I have a good understanding of how units can die in Slay (direct killing, being replaced by a capital if a small kingdom is cut off with no free tiles, and starving) and I know that units in Slay don't immediately get killed even when cut off (even if that means that pseudo-single-tile kingdoms are "created" with some unit on the only tile). But to be honest, it's a really odd corner case for a lone castle to get cut off, because that means you have to capture up to six tiles that are all still guarded by the castle (requiring a larger number of stronger units), whereas capturing the castle first only takes two knights, and the surrounding tiles can then often be taken even by peasants.
That said, it always feels strange when some unit gets replaced by a capital. If that ever happened, I would imagine that a cut-off two-tile kingdom with castles on both tiles would have one castle replaced by a capital as well. I don't know if you could handle this any differently. But as I said, it's strange to lose units to capitals at all sometimes.
I made the castles simply disappear if they are cut off for now. I agree that units being replaced by the capital is a little weird. But I don't have a better idea either.
Multiple people reported to me that the difficulty is rather high. The difficulty of a session depends on the intelligence of the bot players as well as the generated map. I think that the varying difficulty of the maps is a good thing. I have great fun beating maps that seem almost impossible at first. However, some generated maps are actually impossible on certain bot difficulties. I don't think that I can fix this. That said, if you play against easy bots, impossible maps should not occur regularly. Another problem is that the medium difficulty bots are in practice not that much worse than the smart ones. I would like to improve this by making the easy and medium difficulty bots a little dumber.