jdk2000 opened 4 years ago
In which condition does it happen? I could not reproduce the same behavior with the position given. Presumably it might be related to komi and ruleset (though it didn't happen for chinese and japanese rules). Detailed information for komi, ruleset, and a corresponding sgf file might be helpful.
Thanks for the report. Of course KataGo misplays many positions still, that's why at the start of the game the winrate is 40% or 60%, not 0% or 100%. :)
Still, specifically in this situation, having komi equal to 75% of the area of the board is very extreme. There's no guarantee that KataGo plays well with such a komi because it is so vastly different than any komi you would ever use in a real game, even a game with a reverse komi or a komi handicap.
Could you try to replicate your issue in a position with less extreme settings, and post an SGF if you succeed in doing so?
@lightvector @isty2e Yes. This time I used the Chinese rules and 7.5 komi, and it still happened.
Even slight changes can affect KataGo's win rate a lot.
This is the sgf file. http://gokifu.com/f/37nz.sgf
Haha, okay I think I understand. I'm going to guess that virtually every purely selfplay-trained neural net misevaluates this position - pretty much every neural net that isn't specifically trained on such positions. (Specific training on a variety of positions "like" this might help).
KataGo's g104 run, and Leela Zero LZ157 and LZ272, and Facebook ELFv2 all have the same problem judging the group status.
Here's what I'm now guessing: because looping topologies on groups are rare, I'm guessing that roughly how any zero-trained neural net "detects" life is that if it can trace a path from a real eye, that leads to a real eye, such that the path does not "reverse" course. Basically there is some small set of channels that say things like "to the {north,south,east,west} of me there was a real eye" and each successive layer propagates this wave of information one step further outward along a chain of stones, making sure it flows in the right direction and doesn't reverse course. And when this wave hits a second real eye, or perhaps when it hits a real eye propagation wave going the other way, the group is marked as alive, and then a "this group is alive" wave now propagates outward through the group. Obviously, small eyeshapes are also specially handled and recognized for how many eyes or half-eyes they contribute into these waves.
Anyways, this algorithm seems like it should be extremely natural for a set of convolutional layers to come up with. And it's going to work on 99+% of real situations once the neural net learns local eyeshape values. And it's also pretty hard to see how the net is going to easily learn to encode any sort of location information in these waves - all a wavefront has to record is how many eyes or parts of eyes are behind it and which way it's going.
So then it obviously fails with a loop topology. The eye releases a wavefront that goes out from itself... and comes back to itself, without ever reversing course. So the group is considered alive. If this is indeed roughly the algorithm the net uses, it's in fact actually kind of hard for me to imagine a trivial local perturbation to it that would cause it to get loop topologies right. I'm sure it's quite possible with training, but not via a trivial local adjustment to this algorithm.
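If that guess is roughly right, the loop failure is easy to reproduce in a toy model. Below is a minimal sketch of the hypothesized directed "eye-wave" heuristic on a graph of stones. This is purely my own illustrative construction, not anything from KataGo's code: each real eye emits a wave along the group that never reverses course, and the group is declared alive when a wave reaches an eye.

```python
def wave_says_alive(adjacency, eyes):
    """Toy model of the hypothesized detector.
    adjacency: dict node -> list of neighboring nodes along one group.
    eyes: set of nodes that are real eyes.
    Returns True if a non-reversing wave from some eye reaches an eye."""
    for eye in eyes:
        # Each frontier entry is (node, came_from); the wave never steps back.
        frontier = [(n, eye) for n in adjacency[eye]]
        seen = set(frontier)
        while frontier:
            nxt = []
            for node, came_from in frontier:
                if node in eyes:  # the wave reached an eye: declared alive
                    return True
                for nb in adjacency[node]:
                    if nb != came_from and (nb, node) not in seen:
                        seen.add((nb, node))
                        nxt.append((nb, node))
            frontier = nxt
    return False

# A straight chain with a single eye at one end: correctly judged not alive,
# because the wave dies out without ever meeting a second eye.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(wave_says_alive(chain, {0}))  # False

# A loop with a single eye: the wave circles the loop and re-enters the same
# eye without ever reversing course, so the heuristic wrongly says "alive".
loop = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(wave_says_alive(loop, {0}))  # True (false positive)
```

The loop case shows exactly the described failure: nothing in the wavefront records *which* eye it came from, so a wave returning to its own origin is indistinguishable from one arriving at a second eye.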
Fun! :)
Anyways, yes, this shouldn't matter that much for real play. Game losses due to this should be pretty rare, especially because a decent fraction of the time, if a group really does loop to itself, then suddenly even "false" eyes will become real and the group actually will be alive. But it does make one wonder if we can do better than these silly neural nets...
Thanks for the detailed response! I'm trying to figure out what you're saying, even though it may be difficult for me. :)
@lightvector I think I've understood something from what you said. But I've just found this: there's no eye in the black group, but White still wants to waste a move to kill it.
It seems to be outside the scope of what you said.
Good observation. By the same reasoning, one might expect this eyeless loop to be evaluated the same way that an "endless extremely huge" group is evaluated - if a group is so big that the neural net cannot see the end of the group, it can just keep following and propagating a wave/path down the group forever, and the same will be true for a loop.
A monstrously huge group that the network cannot see the end of might either be considered alive, or it might be considered "uncertain" - in either case, probably the network prefers to have the group be recognizably dead rather than either of those two, so it spends a move. So I think this case fits quite cleanly into the same reasoning as well.
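For a sense of scale on the "cannot see the end of the group" case, this follows from generic convolution arithmetic (the numbers below are illustrative, not KataGo's actual architecture): under the wave picture, information travels at most one intersection per 3×3 layer, so a chain longer than the network's depth can never be fully traced in a single evaluation.

```python
def receptive_radius(n_layers, kernel=3):
    """Each kernel x kernel conv layer grows the receptive field by
    kernel // 2 points in each direction, i.e. the wave advances at
    most that many intersections per layer."""
    return n_layers * (kernel // 2)

# A net with, say, 80 3x3 conv layers can propagate a wave at most
# 80 intersections along a chain in one forward pass; a group whose
# path length exceeds that is effectively "endless" to the net.
print(receptive_radius(80))  # 80
```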
@lightvector Oh, I got it! So that could be a natural flaw in neural networks. I used to think only older AIs, like older versions of "绝艺" (Fine Art, a Go AI developed by Tencent), had this problem; back then, human masters could beat "绝艺" by killing a huge group of stones. As "绝艺" was updated, that stopped happening, so I thought the problem had been fixed. But now even KataGo, the best open-source AI, hasn't fixed it, so I feel a little confused about Go AI: when and how exactly will this problem be solved? Thanks for your friendly reply again! :)
Although given your explanations, it might not help with this particular situation, I wonder if including 10-20% training games where play starts from a randomized board position might help alleviate the issue of blind spots in general. That is, start the play after 100-200 random legal moves have been played. It might also be interesting to do that for the opening, like 4-10 moves played at random, perhaps with the outer two lines removed from the pool.
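The randomized-start idea could be sketched roughly as follows (hypothetical helper and parameters, not KataGo's actual training config; a real implementation would play the random moves through a rules engine to enforce legality and handle captures, rather than just sampling empty points):

```python
import random

def random_start_position(size=19, n_moves=8, margin=2, seed=None):
    """Pick n_moves distinct points, alternating colors, with the outer
    `margin` lines excluded from the candidate pool, as proposed for
    randomized openings. Ignores captures/suicide: illustration only."""
    rng = random.Random(seed)
    pool = [(x, y) for x in range(margin, size - margin)
                   for y in range(margin, size - margin)]
    points = rng.sample(pool, n_moves)
    return [('B' if i % 2 == 0 else 'W', p) for i, p in enumerate(points)]

# e.g. a 6-move randomized opening, avoiding the outer two lines
moves = random_start_position(n_moves=6, seed=42)
```

For the 100-200 move mid-game starts, one would instead sample legal moves sequentially from a rules engine, since by then captures and illegal points matter.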
@jdk2000 Hello. Please forgive me for interrupting with a question unrelated to the subject. I believe the image you pasted is from "lizzie", which you have improved upon? It looks very stylish and highly functional. How can I make this? Also, is it possible for me to get my hands on it?
@hope366 That's okay. The software I use is lizzie improved by Yzy, which has more useful features than the original lizzie. Yzy usually releases new versions on QQ. If you don't have a QQ number and don't use QQ, this may be a little difficult for you. Also, this software is mainly geared towards Chinese people and only has a Chinese interface. So if you're not familiar with Chinese, it's not very convenient to use. However, I will give the QQ group number here: 786173361. The people in the group are very nice, and very willing to help others. If you have trouble joining the QQ group, here's the link to the Baidu-online-storage: https://pan.baidu.com/s/1q615GHD62F92mNZbTYfcxA. Using Google Translate, I believe you can download the version you want.
Thank you for your kind words. I tried to sign up for QQ, but I don't have a smartphone, so I'm stuck at the mobile number verification step. Somehow I will try to make it work.
Sorry, please allow me to speak Chinese. My QQ name is "戴镜轩", and I have been trying to join this group for 6 months, but I still cannot get in. The reason is most likely that an administrator whose QQ name is "Sigmoid" is blocking me from joining, which I am very sorry about. This group was originally meant to benefit Go players everywhere, so why is it so difficult to join? If any administrator of this group sees this reply, please approve my application to join, or point out my mistake directly. Thank you!
But it does make one wonder if we can do better than these silly neural nets...
In the biggest picture, what do you think could be a better approach? And why do you view the nets as silly?