ZeroK-RTS / Zero-K-Infrastructure

Website, lobby launcher and server, Steam deployment, .NET-based tools and other vital parts of Zero-K infrastructure
GNU General Public License v3.0

The latest pot idea #2803

GoogleFrog opened this issue 3 years ago (status: open)

GoogleFrog commented 3 years ago

I suppose I had better write this down to avoid the work of re-deriving it in the future.

The primary goal is to let the pot scale up to allow everyone who wants to play a slot in the game. The mechanics:

Pots sometimes check for merges. Merging occurs pairwise when certain conditions hold. The conditions are:

Two pots are merged if at least one has fewer than 10 players.
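The merge rule above could be sketched roughly as follows. This is a hypothetical illustration, not code from the repository: the function name, the list-of-player-counts representation, and the smallest-pair-first policy are all assumptions; the issue only states the sub-10 condition.

```python
# Assumed threshold from the rule above: a pot with fewer than 10
# players is eligible to be merged with another pot.
MIN_PLAYERS = 10

def check_merges(pots):
    """Repeatedly merge pot pairs while some pot is below the threshold.

    `pots` is a list of player counts; merging two pots sums their counts.
    The smallest-pair-first order is an assumption for illustration.
    """
    merged = sorted(pots)
    while len(merged) > 1 and merged[0] < MIN_PLAYERS:
        # Merge the two smallest pots into one and re-sort.
        a = merged.pop(0)
        b = merged.pop(0)
        merged.append(a + b)
        merged.sort()
    return merged
```

For example, pots of 4 and 5 players would first combine into a 9-player pot, which is still below the threshold and so merges again with the next pot.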

Licho1 commented 3 years ago

This is what the queue room was supposed to be doing. I remember you were quite militantly against it. It still needs to solve details like game observing, friends, etc.

sprunk commented 3 years ago

As far as people are aware they are in the same pot, but all the high/low elo people left.

Doesn't this create a leaving spiral that kills the room? If people can still host 32 player rooms, won't this make people switch to manual hosting to work around the juggler?

GoogleFrog commented 3 years ago

This is what the queue room was supposed to be doing. I remember you were quite militantly against it. It still needs to solve details like game observing, friends, etc.

I recall the queue room being different in subtle but important ways. Guiding the flow of people so that they don't fight against it and break it is most of the difficult problem here. What exactly was the queue room proposal? I recall predicting it encouraging some sort of behaviour that would bring down the system. The specifics of what players see and can do are vital. I don't know if game observing or friending is possible. Perhaps splits could split people by friend group, but that risks creating high elo disparities that kill rooms.

Doesn't this create a leaving spiral that kills the room?

Potentially. It is one of the big risks for the system. Having the other players simply no longer be in the room by the time players switch back to the lobby seems like it would partially fix this. Having a potential 7v7 in front of them, with no alternative, would also help. The numbers would have to be calibrated to account for the usual attrition after a large team game. Perhaps splits could only happen at 32 players, or maybe use the waiting list to go even higher.

If people can still host 32 player rooms, won't this make people switch to manual hosting to work around the juggler?

This is a significant unsolved problem.

Licho1 commented 3 years ago

The queue room was basically a battle room where everyone who wants to play teams joins. You can still set the map and so on, but when you start, it will create possibly multiple running Spring games based on Elo/counts.

Another thing is that on starting the game, people who are in MM waiting for "big teams" are auto-joined to the pool of players.

When game ends people are still in the same queue room (they never leave the queue room, making it appear huge). You can spectate all of the child spring games (similar to spectating MM games) but you cannot choose where you play (except for friend/party).
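The start-time fan-out described above (one queue room producing several Elo-adjacent games on start) could be sketched like this. Everything here is an assumption for illustration: the function name, the `(name, elo)` tuple representation, and the 16-player cap are not from the issue, which only says games are created "based on ELO/counts".

```python
def split_into_games(players, max_game_size=16):
    """Partition (name, elo) pairs into Elo-adjacent games on start.

    Players are sorted by Elo and chopped into the fewest games that
    respect `max_game_size`, with sizes kept as even as possible.
    """
    ordered = sorted(players, key=lambda p: p[1])  # ascending Elo
    n_games = -(-len(ordered) // max_game_size)    # ceiling division
    base, extra = divmod(len(ordered), n_games)
    games, i = [], 0
    for g in range(n_games):
        size = base + (1 if g < extra else 0)      # spread the remainder
        games.append(ordered[i:i + size])
        i += size
    return games
```

Because the list is Elo-sorted before slicing, each resulting game groups players of similar skill, which matches the "based on ELO/counts" intent; an alternative would be snake-draft seeding across games to equalize average Elo instead.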

Licho1 commented 3 years ago

These are the things I imagined people would do (besides complaining):

If the game splits into battles A and B and A ends before B, then based on the number of non-ingame players they will decide to either:

But regardless of their choice, they will still be in the same queue room, attracting more people and making it possible to change their decision at any moment.

GoogleFrog commented 3 years ago

My concern with the queue room is people's reaction to the surprise of not being in as large a game as they thought they would be. It doesn't take that many people to bandwagon a force-exit or ruin a game due to frustrated expectations. The UI would at least have to pre-show what the games would be. It could work, but the whole thing hinges on how the UI manages and communicates with players so that they don't break it.

There are also issues such as what to do if one game ends just as another starts (not giving people who are sitting out time to spectate), and how to manage votes. The solution to these problems could create such a split room that we may as well have gone with the latest pot idea, with the big difference being whether the rooms are split on game start or game end.

Licho1 commented 3 years ago

I think the pot has the problem that you don't see that huge clump of people and you cannot talk to them. I think it's important that the clump exists, or people will form it naturally elsewhere. Regarding the "surprise": the room could indicate where the line is, or there could be a final confirm stage which shows your team. You could say no; if enough people say no, the game starts without the nay-sayers, possibly in unified mode.

GoogleFrog commented 3 years ago

I don't think forming the clump elsewhere is likely if there is only one visible game with 10-20 players. Also, if people manage to successfully form it elsewhere, then I wouldn't call that a failure state.

I think a final confirmation stage is too vulnerable to people clicking no, or there being too many people unaware that they have to click anything at all. I think it is an inherent problem for any system to rely on any extra decision making or understanding from players.

Licho1 commented 3 years ago

Because lots of these assumptions are about people's behavior, perhaps the best way is to make it flexible and test it. At the very least I think the pot should display all subpots in the battle list; joining one will cause you to join the corresponding bracket only. We should not hide players. Players attract players.

GoogleFrog commented 11 months ago

To revisit this, the wait list could be used to make the resulting rooms more viable. For example, a 32-player room could split into two 20-player rooms if there are 40 people playing or waiting to play.

I would also change the "upon game end" to "upon game start". As in, if there are 40 people in the room then the start vote causes a split and people end up in 20 player games. The room could even display the games that could be created.
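The split-on-start rule above (40 people playing or waiting triggers a split into two rooms of 20) could be sketched as follows. The threshold comes from the comment; the function name and the even-split policy are illustrative assumptions.

```python
# Threshold from the comment above: 40 people playing or waiting
# causes the start vote to split the room into two games.
SPLIT_THRESHOLD = 40

def rooms_on_start(playing, waiting):
    """Return the room sizes produced when the start vote passes.

    Counts both current players and the wait list; below the
    threshold the room starts as a single game.
    """
    total = playing + waiting
    if total >= SPLIT_THRESHOLD:
        # Split evenly, giving the first room the extra player
        # when the total is odd.
        return [(total + 1) // 2, total // 2]
    return [total]
```

Counting the wait list toward the threshold is the key point: a full 32-player room with 8 people waiting splits into two viable 20-player games rather than leaving the overflow stuck spectating.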

Licho1 commented 10 months ago

Sounds pretty much like the old-style splitter. The code probably still exists: unrestrict the room and auto-split at 40+.

Licho1 commented 8 months ago

A bit of ancient-aliens archeology: found the old-style split. It was removed anno Domini 2016.

    /// <summary>
    ///     Split a too-large game into two equivalent smaller games
    /// </summary>
    /// <param name="context"></param>
    /// <param name="forceStart">Start the game as soon as the split occurs, so the players have no chance to escape</param>
    public static void SplitAutohost(BattleContext context, bool forceStart = false) {
        var server = Global.Server;
        try
        {
            // find the first autohost that isn't running, is empty, and uses the same mode (by name)
            var splitTo =
                server.Battles.Values.FirstOrDefault(
                    x =>
                        !x.Founder.IsInGame && x.NonSpectatorCount == 0 && x.Founder.Name != context.AutohostName && !x.IsPassworded &&
                        x.Founder.Name.TrimNumbers() == context.AutohostName.TrimNumbers());

            if (splitTo != null)
            {
                // set the same map
                server.GhostPm(splitTo.Founder.Name, "!map " + context.Map);

                var db = new ZkDataContext();
                var ids = context.Players.Where(y => !y.IsSpectator).Select(x => (int?)x.LobbyID).ToList();
                var users = db.Accounts.Where(x => ids.Contains(x.AccountID)).ToList();
                var toMove = new List<Account>();

                var moveCount = Math.Ceiling(users.Count / 2.0);

                /*if (users.Count % 2 == 0 && users.Count % 4 != 0) {
                    // in case of say 18 people, move 10 nubs out, keep 8 pros
                    moveCount = users.Count / 2 + 1;
                }*/

                // split while keeping clan groups together
                // note: splitting by clan is disabled - use "x.ClanID ?? x.LobbyID" for clan balance
                foreach (var clanGrp in users.GroupBy(x => x.ClanID ?? x.AccountID).OrderBy(x => x.Average(y => y.EffectiveElo)))
                {
                    toMove.AddRange(clanGrp);
                    if (toMove.Count >= moveCount) break;
                }

                try
                {
                    foreach (var m in toMove) server.ForceJoinBattle(m.Name, splitTo.FounderName);
                    Thread.Sleep(5000);
                    server.GhostPm(context.AutohostName, "!lock 180");
                    server.GhostPm(splitTo.Founder.Name, "!lock 180");
                    if (context.GetMode() == AutohostMode.Planetwars)
                    {
                        server.GhostPm(context.AutohostName, "!map");
                        Thread.Sleep(500);
                        server.GhostPm(splitTo.Founder.Name, "!map");
                    } else server.GhostPm(splitTo.Founder.Name, "!map " + context.Map);
                    if (forceStart)
                    {
                        server.GhostPm(splitTo.Founder.Name, "!balance");
                        server.GhostPm(context.AutohostName, "!balance");
                        server.GhostPm(splitTo.Founder.Name, "!forcestart");
                        server.GhostPm(context.AutohostName, "!forcestart");
                    }

                    server.GhostPm(context.AutohostName, "!endvote");
                    server.GhostPm(splitTo.Founder.Name, "!endvote");

                    server.GhostPm(context.AutohostName, "!start");
                    server.GhostPm(splitTo.Founder.Name, "!start");
                }
                catch (Exception ex)
                {
                    Trace.TraceError("Error when splitting: {0}", ex);
                }
            }
        }
        catch (Exception ex)
        {
            Trace.TraceError(ex.ToString());
        }
    }