RoboCup-SSL / ssl-rules

Official RoboCup Small Size League Rule Book
https://robocup-ssl.github.io/ssl-rules/
GNU General Public License v3.0

Vision dropouts #8

Closed leonardoSCosta closed 1 year ago

leonardoSCosta commented 4 years ago

From the Rule proposal:

Robot detections from ssl-vision are not always reliable. We work on improving ssl-vision (new cameras, april-tags), but we cannot guarantee that detections will be stable at all times. Teams have to deal with vanishing robots as well.

We propose to stop the game only for catastrophic failures of ssl-vision, like camera failures. Teams have to make sure that the detections are good during their preparation time before the match starts and the game will not be stopped for vision problems afterwards.

We are aware that new teams may not focus on a good filter in the beginning, so they might have more difficulties with bad vision. There are several open-source filters available, though. We’d like to establish a protobuf message that includes filtered vision data including velocities. An initial producer of this new message could be the autoRefs, which have filters and are open-source already. We will not guarantee the availability of such a software, but we will do our best to make it happen.

This change applies to both divisions
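
For illustration, a filtered-vision message along the lines proposed above could carry roughly the following data per frame. This is a hedged sketch only: the proposal does not fix a schema, and none of the names below are taken from an existing SSL package.

```go
// Illustrative sketch only: the proposal above does not define the actual
// protobuf schema, and none of these names come from an existing SSL package.
package filteredvision

// TrackedRobot is one robot after filtering, with an estimated velocity.
type TrackedRobot struct {
	TeamYellow bool    // true for the yellow team, false for blue
	ID         uint32  // pattern ID
	X, Y       float64 // position on the field [m]
	Angle      float64 // orientation [rad]
	VX, VY     float64 // estimated velocity [m/s]
	Confidence float64 // 0..1, how sure the filter is that the robot exists
}

// TrackedBall is the filtered ball state, including velocity.
type TrackedBall struct {
	X, Y, Z    float64 // position [m]
	VX, VY, VZ float64 // estimated velocity [m/s]
	Confidence float64
}

// TrackedFrame is one filtered frame, as a producer such as an autoRef
// could publish it alongside the raw ssl-vision detections.
type TrackedFrame struct {
	TimestampNs int64
	Balls       []TrackedBall
	Robots      []TrackedRobot
}
```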

mickmack1213 commented 4 years ago

@tobiasheineken I don't really see your point. Yes, the vision is better calibrated during the finals, but that doesn't affect the group stage.

During a game both opponents have to deal with the same issues, and thoughts like "If we had better vision during the group phase we might have won the game" aren't really sportsmanlike, are they?

@andre-ryll already mentioned it. It's a shared system and everyone has to deal with the issues. I personally think it's even fairer to have consistent vision during a game, so no team can alter the vision during the game to their advantage.

You've accepted it during preparation, deal with it.

tobiasheineken commented 4 years ago

@mickmack1213 To make it absolutely clear: this is not a fairness issue. If the vision is not modified during a match AND both teams get enough time, with all officials already in place, to check the calibration and raise a complaint, then the game is fair. Even if there are slight changes, as long as they are made in good faith, teams have to deal with it (see ER-Force vs. RoboteamTwente 2019, where Nicolai accidentally removed our ability to see the ball during half time).

However, fairness is not the only issue the rules have to cover. Having bad vision has bad effects on the game:

  1. Very dangerous situations can appear "out of the blue", making the game more luck-based and less skill-based. Also, the robots look kind of dumb in these situations.
  2. Collisions are unavoidable because you simply cannot know where the opponent robot is.
  3. The Autoref is not able to detect fouls anymore. While an AI might use assumptions to continue playing with little to no information, the Autoref simply must not assume that a foul might have happened; it has to be able to prove it (kind of).

I think we all can agree that these points are bad - or at least some of these issues are.

The finals vs. group stage point I wrote earlier proves two things:

  1. Potential: The vision can be calibrated. It's not impossible. We should make sure to continue working towards this potential. Better vision will give better matches - and as the vision is shared software, bad vision is boring because you cannot improve it, especially if you want to change something that reasonable people might disagree on (for example false-positive vs. false-negative trade-offs).
  2. Action: The vision has to be worked on during the tournament. Right now, this is done during the matches because someone complains about issues. If we forbid that, we need some other mechanism in place to make sure the finals are not stuck at the initial calibration. There might be technical support here, if the quality inspector is able to "record" the situations where a robot disappeared: officials could fix these spots after the match has finished. But we have to make sure these improvements are actually done.

Technically, all of these points are irrelevant, as we have a vision expert who should be interrupting the game on their own. The vision expert interrupts the game, the calibration is improved, no team interaction is needed. At least that's the way the old rules were supposed to work.

In reality, it wasn't the vision expert, it was one of the teams. The reason for that is well known: SSL officials aren't always that good. If teams start a complaint, it is no longer an "objective" thing, but it could be because teams are better or worse at handling small hiccups. Which is obviously a bad thing.

This pull request offers a new way out of this problem: if we can find a good "minimum quality", and interrupt the game as soon as the quality drops below it, we replace the vision expert's opinion with a measurable metric. Even better, teams now know what to expect and what their software should be able to handle. As the game will not be stopped inside these parameters, teams have to adopt the "deal with it" mentality that Andre mentioned.

The interesting part is: what is a good minimum quality? If the bar is too high, we interrupt the game all the time. If the bar is too low, all of the issues stated earlier in this comment start to arise. We also move towards a situation where the initial calibration is not improved during the tournament because "it was good enough".

I think that my numbers are a good compromise. They are still very bad and even good teams will struggle if the data is that sparse. But at least they can test it, and these numbers seem to be physically possible to work with. Nicolai's data suggests that they could (maybe should) be made stricter in following years, but we could start with these.

However, Nicolai is right when he wrote that time between detections is not sufficient. There are still other issues (like extreme changes in orientation for some 7/8, changing IDs for the other robots) that have to be covered. Also, I think that Andre's "detections per second" should play a role here. We implemented this very low vision quality in our simulator and I can tell you that the ER-Force autoref - which can already tolerate vision hiccups - will totally stop working if we continuously feed it information that sparse (100ms between each detection for each robot).

So I suggest 0.2s for robots, 0.1s for the ball, and at least 20 detections per second. These values allow for a coordinated match, they are not overly demanding and encourage "deal with it", and they make sure ssl-vision is used to its full potential during RoboCup.
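
For concreteness, here is a minimal sketch of how such thresholds could be checked by a supervising tool. All names (the `quality` package, `Tracker`, and its methods) are hypothetical; nothing below comes from an existing SSL tool.

```go
// Hypothetical checker for the thresholds proposed in this thread.
package quality

import "time"

// Proposed limits from this thread; they are not part of the official rules.
const (
	maxRobotGap = 200 * time.Millisecond // robots: max time without a detection
	maxBallGap  = 100 * time.Millisecond // ball: max time without a detection
	minRate     = 20                     // minimum detections per second per object
)

// Tracker remembers when each object ("ball", "yellow-3", ...) was last seen
// and how many detections arrived so far.
type Tracker struct {
	lastSeen map[string]time.Time
	counts   map[string]int
}

func NewTracker() *Tracker {
	return &Tracker{lastSeen: map[string]time.Time{}, counts: map[string]int{}}
}

// Observe registers one detection of the given object.
func (t *Tracker) Observe(id string, now time.Time) {
	t.lastSeen[id] = now
	t.counts[id]++
}

// GapViolations lists the objects that have been undetected for longer than
// the proposed limit. A complete tool would also verify the per-second rate
// from t.counts and exclude balls that are plausibly hidden in a robot shadow.
func (t *Tracker) GapViolations(now time.Time) []string {
	var bad []string
	for id, seen := range t.lastSeen {
		limit := maxRobotGap
		if id == "ball" {
			limit = maxBallGap
		}
		if now.Sub(seen) > limit {
			bad = append(bad, id)
		}
	}
	return bad
}
```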

andre-ryll commented 4 years ago

@tobiasheineken I appreciate your feedback but your posts tend to be rather long. Please try to condense them a little.

Of course bad vision has some impact on the game, but I wouldn't paint it as black as you did. If you have a specified minimum vision quality, you can adjust your path planning accordingly to use larger safety margins so you don't crash into opponents. Of course this may set you back in case you have a more fearless opponent. That's also what sport is about: weighing rewards against risks. Apart from that, we have the second vision blackout challenge this year, encouraging teams to experiment with onboard vision. With that you have extra information for your risk planning. I see no technical difficulties here which cannot be solved.

The primary point remaining in this issue is how to define the quality. I would say 0.2s of blackout for robots sounds fair. The ball is a difficult matter. It naturally vanishes in robot shadows. We could only measure its quality while it is visible and all robots keep some minimum distance to it (say, 0.5m). That poses additional problems: if the ball vanishes completely because of very bad vision, we cannot even judge whether it is in a robot shadow or just not detected. Any idea on how to handle the ball would be great!

MathewMacDougall commented 4 years ago

@andre-ryll

Apart from that we have the second vision blackout challenge this year, encouraging teams to test with onboard vision

While I agree that onboard vision is absolutely one way to help solve the problem and is worth investigating (especially for the ball disappearing in robot shadows), I don't think it should be used as a primary reason to accept significantly lower SSL Vision quality.

My understanding is that the SSL is set up the way it is, with cameras and SSL Vision, to allow teams to focus more on strategy and multi-agent coordination than on perception, because this is the SSL's contribution to RoboCup's 2050 goal of beating the FIFA champions. Allowing vision quality to be "too low" puts more emphasis on onboard vision or other perception solutions, and detracts from the main issues the SSL is trying to solve / research. Perhaps this is worth bringing up in the post-RoboCup meeting this year, since I believe we will be discussing the "purpose" of the league at that time.

I also agree that 0.2s for robot blackout is a good number.

tobiasheineken commented 4 years ago

@andre-ryll There are two remaining points in this issue

  1. How to define "sufficient quality"
  2. What to do if the quality is not met.

Otherwise, I agree with your comment. Simply stating "unless the ball is covered by a shadow" is easy when writing the rules, but hard, or very hard, to actually enforce. If we assume that a ball cannot teleport, we should be able to check the blackout using some assumptions (sketched in code after this list):

  1. When the ball disappears: We can easily calculate whether the ball is shadowed right now. We know the height of the robot, the positions of the robot and the camera, and the position of the ball. If it's shadowed, we continue with:
  2. Assumption: We assume that the ball stays inside some robot's shadow while we don't see it. We continuously calculate where the ball might be right now. According to our implementation, these shadows are tiny, resulting in a small area.
  3. When the ball reappears: Use a ball model to predict how fast the ball must have been moving initially and where it started, find the point where it left the shadow, and calculate whether the blackout was ok.
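
A rough sketch of how step 1 could be computed, assuming a single point camera at a known height and treating the robot as a cylinder; `Vec2`, `ballShadowed` and all parameters here are hypothetical and not taken from ssl-vision or any autoref.

```go
// Geometric sketch: is the line of sight from the camera to a ball on the
// ground blocked by a robot cylinder? Assumes the camera is higher than the
// robot. All names are illustrative.
package shadow

import "math"

type Vec2 struct{ X, Y float64 }

func (a Vec2) Sub(b Vec2) Vec2      { return Vec2{a.X - b.X, a.Y - b.Y} }
func (a Vec2) Add(b Vec2) Vec2      { return Vec2{a.X + b.X, a.Y + b.Y} }
func (a Vec2) Scale(s float64) Vec2 { return Vec2{a.X * s, a.Y * s} }
func (a Vec2) Dist(b Vec2) float64  { return math.Hypot(a.X-b.X, a.Y-b.Y) }

// ballShadowed reports whether the ray from the camera (at camHeight above
// camXY) to a ball lying on the ground passes through the robot cylinder,
// i.e. whether the ball could plausibly be hidden in that robot's shadow.
func ballShadowed(camXY Vec2, camHeight float64, ball, robot Vec2, robotRadius, robotHeight float64) bool {
	// XY position of the line of sight where it crosses the robot's top height;
	// the ray drops linearly from (camXY, camHeight) to (ball, 0).
	t := (camHeight - robotHeight) / camHeight
	atTop := camXY.Add(ball.Sub(camXY).Scale(t))

	// Between z = robotHeight and z = 0 the ray runs from atTop to the ball.
	// It hits the cylinder iff that segment comes closer than robotRadius
	// to the robot centre in the XY plane.
	return distPointSegment(robot, atTop, ball) <= robotRadius
}

// distPointSegment returns the distance from point p to the segment a-b.
func distPointSegment(p, a, b Vec2) float64 {
	ab := b.Sub(a)
	lenSq := ab.X*ab.X + ab.Y*ab.Y
	if lenSq == 0 {
		return p.Dist(a)
	}
	ap := p.Sub(a)
	t := math.Max(0, math.Min(1, (ap.X*ab.X+ap.Y*ab.Y)/lenSq))
	return p.Dist(a.Add(ab.Scale(t)))
}
```
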
g3force commented 4 years ago

I suggest postponing this rule change to next year.

We can use the time to work on tools to supervise the quality and try to come up with reasonable numbers, instead of guessing numbers now that cannot be checked by any existing software anyway.

tobiasheineken commented 4 years ago

While I'm always in favor of "not changing too many things at once", I think postponing this is a mistake. Vision issues are a major cause of delays, and if we want to reduce them, we have to work here.

How about not changing anything in the rules themselves, but adding software support to detect any vision dropout of 0.2s for any robot, while also adding a note that teams should be able to handle 0.2s gaps at 20 fps? In an ideal world, this could even trigger ssl-vision to record images whenever the invariant is broken, so the calibration can be fixed post-match (which is one of the reasons I personally complain during matches: to fix it for future situations).

g3force commented 4 years ago

Isn't this what I said? Do not change the rules, but work on the software.

I've also started with something like you suggested. You are welcome to contribute or implement your own tool: https://github.com/RoboCup-SSL/ssl-quality-inspector