RoboCupAtHome / RuleBook

Rulebook for RoboCup @Home 2024
https://robocupathome.github.io/RuleBook/

Sharing arena setup data #257

Closed LoyVanBeek closed 4 years ago

LoyVanBeek commented 7 years ago

At RoboCup in Japan this year, we'll have approximately 35 teams.

If all teams have to (deep-)learn the objects and have mapping slots, that is not going to work logistically.

I've heard some ideas going around at the German Open about distributing the work of gathering pictures of objects over the teams. E.g. team A takes an agreed number of pictures of objects U and V, team B of objects X and Y, etc. We agree on a deadline in the team leader meeting and everyone uploads their pictures to some server. You get access to the pictures after you put your own images in.
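The proposed split could be as simple as a round-robin assignment of the object list over the teams, so each team photographs roughly the same number of objects. A minimal sketch (team and object names are hypothetical):

```python
# Round-robin assignment of objects to teams for picture gathering.
def assign_objects(teams, objects):
    assignment = {team: [] for team in teams}
    for i, obj in enumerate(objects):
        # Cycle through the teams so the workload stays balanced.
        assignment[teams[i % len(teams)]].append(obj)
    return assignment

teams = ["Team A", "Team B"]
objects = ["object U", "object V", "object X", "object Y"]
print(assign_objects(teams, objects))
# {'Team A': ['object U', 'object X'], 'Team B': ['object V', 'object Y']}
```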

For mapping, we can agree on a common map format (e.g. a ROS .pgm image) and have one team do it for a few points.
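For reference, the ROS map_server convention pairs the .pgm occupancy image with a small YAML metadata file. A minimal sketch of what sharing that metadata could look like; the field names follow the map_server convention, but the concrete values and filename are made up, and the hand-rolled parser only handles this flat key/value subset:

```python
# Example map_server-style metadata for a shared arena map (values invented).
MAP_YAML = """\
image: arena.pgm
resolution: 0.05        # metres per pixel
origin: [-10.0, -10.0, 0.0]
occupied_thresh: 0.65
free_thresh: 0.196
negate: 0
"""

def parse_map_metadata(text):
    """Tiny parser for the flat key/value subset used above."""
    meta = {}
    for line in text.splitlines():
        line = line.split('#', 1)[0].strip()  # drop comments
        if not line:
            continue
        key, _, value = line.partition(':')
        value = value.strip()
        if value.startswith('['):
            meta[key.strip()] = [float(v) for v in value.strip('[]').split(',')]
        else:
            try:
                meta[key.strip()] = float(value)
            except ValueError:
                meta[key.strip()] = value
    return meta

print(parse_map_metadata(MAP_YAML)['resolution'])  # 0.05
```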

@kyordhel @balkce what do you think about this?

balkce commented 7 years ago

Teams should be allowed to do whatever they want with their captured data/finished models. If multiple teams want to organize themselves to share the workload, all the better.

However, I don't want to force teams into doing this, and I'm not comfortable awarding points for it.

A faster setup, and collaboration with other teams for the success of all, should be reward enough.

LoyVanBeek commented 7 years ago

That was my initial reaction as well, but it also provides an avenue for more standardization, perhaps. At least we could point out that this opportunity exists, I think.

balkce commented 7 years ago

Standardization is a double-edged sword: it makes it easier for setup, but it blocks innovation ("everybody uses this, because everybody uses this").

I think this year we can point out to teams that sharing information is not only allowed but encouraged, because of logistical limitations. And if some teams want to share information, we can probably give them first dibs on the arena to gather it.

kyordhel commented 7 years ago

@LoyVanBeek, we have 3 arenas. This means 12 teams per arena, which is doable. The 15 minutes of exclusive arena use must be scheduled on the 1st day for mapping purposes (if a team has trouble and loses their slot, too bad).

About objects: we will not allow any team to kidnap them this time. Each arena will have one set of objects, and I expect 12 teams made up of fully grown scientists to be professional enough to share them. As always, many teams will buy their own set of objects. I would like to avoid putting the responsibility of sharing object data on a single team. Each team may have different hardware, and their algorithms may be sensitive to different training conditions, so that could cause unfair [dis]advantages.

And for Turing's sake, your robot cannot deep-learn all objects in your house. Teams need to come up with some quicker solution for fast, online learning (To-Do for next year?).

I back up @balkce on the points question. Teams must be eager to cooperate and share knowledge without any reward; I am completely against raising a participation market. For standardization we have the Standard Platform Leagues; I see no reason to make OPL more standard.

LoyVanBeek commented 7 years ago

OK, sounds reasonable.

BTW: Online learning is already promoted a bit this year by pairing two unknown objects of the same class.

We'll just point it out during the first team leader meeting.

kyordhel commented 7 years ago

Arena distribution

The teams will be distributed over the arenas as follows. Although some trustees think that all teams can be distributed evenly, I would rather not mix leagues, or in any case keep SSPL isolated, since their capabilities are far different from the others.

OPL (0x10 teams) - Arena 1
DSPL (0x0a teams) - Arena 2
SSPL (0x08 teams) - Arena 3

LoyVanBeek commented 7 years ago

Actually, I don't fully agree with some of the arguments: some standardization can help promote innovation, and one reason the league isn't moving forward as fast as we want is that teams don't share their software. Standardization enables this and allows teams to focus on the next big issue.

balkce commented 7 years ago

Like I said: double-edged sword. Standardization is good when you want something to solve problem A quickly so you can start solving problem B, but because of this, problem A won't get any new solutions.

In any case, back to the main topic of this issue: I think we agree on letting teams share what they want to share, and letting them deal with the data migration between platforms (which I guess won't be trivial). If we want to encourage them to do so, maybe we can talk about giving the first mapping/data-gathering slots to those groups of teams that have agreed to share their data.

At the end of the day, I agree with @kyordhel: the number of teams per arena is not completely out of the ordinary (we've had 20+ teams share one arena, and the most I'm seeing for Nagoya is 16). So this solution is definitely optional.

LoyVanBeek commented 7 years ago

+1 on this. I don't want to make things obligatory, just promote them, and maybe facilitate them if that's easy.

kyordhel commented 7 years ago

@LoyVanBeek @balkce

Without wanting to start a big debate on the topic here: the platform used by virtually all teams is pretty much a standard by now. However, since ROS and all its stacks became available, I've noticed a decrease in the performance of the teams, because many just tune the packages or use them out of the box, and, since they are the standard, other "divergent" algorithms are not being developed. What I fear may happen if the league encourages the use of standard models of the objects is that teams will master operating on those models, but will be helpless without them. Innovation will also stop, because for obvious reasons I wouldn't dare test, in a competition, an algorithm requiring data not included in those models.

LoyVanBeek commented 7 years ago

ROS is currently a de facto standard in RoboCup@Home. This does not mean one team's ROS node works with another team's ROS-based stack, or maybe only up to some level. Teams are also not collaborating or reusing each other's software, at least not to my knowledge.

Could we set up a directory of all teams' software repositories, for other teams to browse around in? RoboCup is supposed to be open source, but I'm not so sure all teams really have their sources available for other teams to browse.

The RoboCup@Home Wiki is not maintained AFAIK.

kyordhel commented 7 years ago

Regarding the info in the TDPs, many teams use default ROS packages with small adaptations (or none, as in the case of turtlebot-based robots). This is not the case for experienced @Homers, who mostly use their own solutions tuned for their robots. The difference in performance is more than obvious to me.

Why do I mention this? Because it is closely related to the wiki and the issue of teams not sharing code. It would be far better if teams could have access to the core solutions used by other teams (properly documented), and not to an indistinguishable catkin_ws directory with everything mixed together.

In theory, all teams should be responsible for keeping the Wiki updated (uploading videos, TDPs, links to websites, adding special info), but they barely do that, so asking them to split their code following a View-Controller model (i.e. a Library-RosNode pattern) is a pipe dream.
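The Library-RosNode split amounts to keeping the core logic in a plain, ROS-free library that other teams can reuse and test directly, with the ROS node reduced to message plumbing. A minimal sketch, with an invented example capability; the commented-out wrapper assumes rospy and is illustrative only:

```python
# Library half: plain Python, no ROS imports, reusable and testable as-is.
class GreetingPlanner:
    """Core logic: pick a spoken greeting for a recognized person."""
    def __init__(self, known_people):
        self.known_people = set(known_people)

    def greeting_for(self, name):
        if name in self.known_people:
            return "Welcome back, %s!" % name
        return "Hello, I don't believe we've met."

# RosNode half (illustrative sketch, not runnable without ROS):
#
# import rospy
# from std_msgs.msg import String
#
# def main():
#     rospy.init_node('greeter')
#     planner = GreetingPlanner(rospy.get_param('~known_people', []))
#     pub = rospy.Publisher('say', String, queue_size=1)
#     rospy.Subscriber('recognized_person', String,
#                      lambda msg: pub.publish(planner.greeting_for(msg.data)))
#     rospy.spin()

planner = GreetingPlanner(["Loy"])
print(planner.greeting_for("Loy"))  # Welcome back, Loy!
```

Because the library half has no ROS dependency, a non-ROS team could wrap the same class in their own middleware.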

The OC could also help organize this, but as they barely answer emails, I'm not sure we can entrust them with a time-consuming task like this one.

LoyVanBeek commented 7 years ago

How do other leagues manage this, if they do so at all?

kyordhel commented 7 years ago

To be honest, I have no idea. We could ask @moriarty; I understand he has worked with @Work.

rventura commented 7 years ago

Fostering code sharing is a very good idea. The ROS community grew up pretty much based on that kind of sharing.

In @Home, since the solutions tend to be so much about putting together functionalities (speech interaction, navigation, perception, etc.), this is even more relevant, and it could help newcomer teams a lot in raising their performance w.r.t. the ever-growing difficulty level of @Home.

Also, there is a big push nowadays at the EU commission level to foster robot functionality modules sharing, and there are EU projects running on that.

On the other hand, I remember that, many years ago, in the simulation league, winning teams were forced to disclose their source code, which led to significant discussion since it allowed newcomers to win the competition just by hacking on top of the previous year's winners (true story!). But this is not what we are discussing: one thing is sharing the whole code tree, another is releasing individual modules.

One possible way of fostering code sharing would be to give points for sharing modules, as long as they can be validated by other teams. For instance, team A shares its navigation code, and if team B (or more) states that, yes, it worked for us, then a certain amount of points is given, e.g., on a per-module basis.
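The per-module scoring idea could look like the sketch below: a shared module earns points only once other teams confirm it worked for them. The point values, cap, and module names are invented for illustration:

```python
# Award points per validated shared module, capped so one popular module
# cannot dominate the ranking (values are hypothetical).
def module_points(validations_by_module, points_per_validation=50, cap=150):
    """validations_by_module: module name -> set of validating team names."""
    return {
        module: min(points_per_validation * len(validators), cap)
        for module, validators in validations_by_module.items()
    }

scores = module_points({"team_a_navigation": {"Team B", "Team C"},
                        "team_d_speech": set()})
print(scores)  # {'team_a_navigation': 100, 'team_d_speech': 0}
```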

My two cents...

Best, Rodrigo Ventura

kyordhel commented 7 years ago

@rventura More like four cents I must say.

To me, giving points to a team for sharing code is something that must be discussed by the EC/Trustees and cannot be taken lightly. A commission would also be required to analyze the software to make sure it is shareable and understandable by others (see below). In addition to your true story (I loved it), another risk is that friendly teams claim to share code to get the bonus when they don't (e.g. Golem claiming to use Pumas' Blackboard, and Pumas claiming to use Golem's sound localization).

Why is this? Since we made sharing code mandatory for qualification, teams upload the whole catkin_ws, and it is pretty much garbage. In almost all cases there is no documentation, there are files in all kinds of languages (like Spanish or Chinese), co-dependencies are not clear, the contribution graph is a mess, and no conventions are followed regarding the use of the repository or the naming of files, variables, etc. So getting something out of there requires close collaboration between teams.

However, an award for getting your open-sourced, ROS-decoupled library/shared object used by another team sounds pretty reasonable... although teams often don't care about awards.

rventura commented 7 years ago

Indeed, just forcing code disclosure is not sufficient. Some form of review/curation process is key; that's why I suggested some form of peer review. E.g., we could require the disclosed source to be blind peer-reviewed by 2-3 other teams, or even by non-league members. And an award for the best contribution could come on top of the awarded points (which could be variable, depending on the code's peer reviews).

In the same way the peer-review process is key for high-quality papers in top conferences/journals, peer review of code contributions to the community is also key for high-quality, reusable code. I know ROS has some form of code review process that packages have to go through before being featured on the ROS pages.

LoyVanBeek commented 7 years ago

I've been thinking about how to promote code sharing, but haven't been able to come up with a good set of rules for it. One idea was: give the using team some points, but also give the 'used' team some points, which may add up if many teams use a package.

A signal that a package is used by others is the dependency counter in ROS: how many other packages depend on it.
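The dependency-counter idea can be sketched as a reverse-dependency count over a map from each package to its declared dependencies (e.g. scraped from the teams' package.xml files). The package names below are hypothetical:

```python
from collections import Counter

# Count, for each package, how many other packages declare it as a dependency.
def reverse_dependency_counts(deps_by_package):
    counts = Counter()
    for pkg, deps in deps_by_package.items():
        for dep in deps:
            if dep != pkg:  # ignore accidental self-dependencies
                counts[dep] += 1
    return counts

deps = {
    "team_a_nav": ["shared_world_model"],
    "team_b_manipulation": ["shared_world_model", "team_b_msgs"],
    "team_c_speech": [],
}
counts = reverse_dependency_counts(deps)
print(counts["shared_world_model"])  # 2
```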

kyordhel commented 7 years ago

While ROS is the de facto standard, @Home has no policy enforcing the use of this framework; therefore, giving an advantage to ROS teams would be unfair to non-ROS teams, even if there is only one. In my opinion, the TC must think beyond ROS and try to find the core of the problem.

On the other hand, it is also unfair that a team scores more because their software became some sort of standard in the league, especially if it makes one team qualify for the 2nd stage/finals, taking the place of another one with similar performance.

I remind you of the Technical Challenge initiative. It was conceived to encourage knowledge sharing, as the winner had to publish results and code. Somehow it didn't work. It seems that solving a complex problem and getting an award was not enough to motivate team collaboration. This makes me think that forcing and rewarding team collaboration will produce a loose collaboration doomed to fail.

Nonetheless, there are other unexplored ways. It has come to my knowledge that some ERL teams are joining efforts and have started a plan of collaboration and code sharing. The first question this triggers in my mind is: what conditions does ERL have, absent in @Home, that trigger team collaboration? There is room for quite some research on how teams interact in competitions. Maybe @Home is ill-designed and should change its competitive scheme to a collaborative one, in which several robots work concurrently on a task and score only if the task is completed (Rescue has something like this).

komeisugiura commented 7 years ago

I agree that counting the number of users of a piece of software is objective and quantitative.

The point is who collects such information.

Since the Exec/TC/OC/referees already have a lot to do on site, I don't want them to spend extra effort on this. Instead, I'd like teams to convince the Exec/TC/OC/referees that their software is useful, well documented, and easy to use. Teams may or may not use the number of users as evidence.

Would it be possible to introduce explicit criteria on such aspects in the Open/Technical Challenges or Finals? Currently we have the criterion "Contribution to @Home", so we could generalize it to e.g. "Open-source Software Contributions".

johaq commented 4 years ago

I think this can be closed as well. Teams should want to cooperate on their own.