Closed: cburbridge closed this issue 9 years ago.
Umer had a prototype working, still in MATLAB. It worked OK for the examples he tested. I will ping him about progress.
Thanks for the list! Can we attach names to these tasks?
Quite a number of these things on the list are related to #20 so I feel the info_terminal group should be looking into the staff interface as well.
For the gesture detection, we need a working prototype ASAP as I feel this is a critical functionality and we need to assess its reliability to be able to decide upon its inclusion in the deployed system.
@urafi How is the stop gesture recognition coming along? When do you think we can have a ROS component for that? Does not have to be perfect at this point, let's take what we have and see how well it integrates into the scenario.
I suggest putting it into the activity_recognition repo and opening a dedicated issue there to discuss its integration.
I will put it in strands_perception_people, where the upper body detector also resides.
OK, fine by me, let the repo maintainer @cdondrup have his say on this.
Since the head orientation is there as well, I think we can put it there. I will add some remarks in the issue.
Hey,
so how is this planned: do you provide the interface layouts and we send you pictures, music etc. that you then fill in before the deployment, or do you provide the interface and layout and we can add the content ourselves?
Regarding replacing the card symbol to start the info-terminal: as Michi and I discussed during the winter school, maybe looking and waving at the robot (both taken together) would make a natural as well as distinct sign for the robot to start displaying the info-terminal.
There are 3 elements of the info-terminal that you will be able to edit during the deployment:
The layout of each page will be fixed, with just the content being editable. Editing/adding content will be done from a web page that you can view on your desktop machine. The photo album editor (already implemented) allows you to upload new photos in JPG format and delete existing photos. The current events manager will allow you to upload a photo and some text, and delete old events. The menu will be editable in the same way, but will take a full week's menu (only the next meal is shown to the user).
okay! that sounds great!
And since we also talked about displaying news on Henry:
would it be possible to add a link to www.orf.at, one of Austria's main news pages?
Could we restrict the internet function to just that site, so that users cannot surf anywhere else? Could you also test whether Henry's display shows that site properly and whether it is "easy" to use via the touchscreen, or whether it is too small and rather difficult to click headlines and navigate through the webpage?
Umer is working on the waving gesture. So the plan is to replace the cards.
(Y) cool!
I sent out one of our lunch menus via email, as GitHub did not accept a PDF attachment.
Here's a link to the menu
Umer's simple stop-gesture detector developed at the winter school does not seem fit for deployment. He is working on the full, proper skeleton-based one, but we have yet to get a timeline for that.
Also, from limited experience, a skeleton-based one will struggle with several people. @urafi, we need to be aware of the requirements/challenges:
Please, @urafi, comment if you think a feasible implementation will be ready for inclusion in a month's time. Otherwise, I suggest postponing the integration of this feature and going for something simpler instead, based on an analysis of a person's trajectory and "facing the robot" time or similar.
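If we do go for the simpler fallback, a rough sketch of such a heuristic could look like the following. All names and thresholds here are hypothetical, just to illustrate the idea; this is not an existing component:

```python
# Hypothetical sketch of a simple "interested person" heuristic based on
# how long someone stays close to and roughly facing the robot.
import math
import time

CLOSE_ENOUGH = 2.0        # metres, assumed threshold
FACING_TOLERANCE = 0.5    # radians, assumed threshold
MIN_FACING_TIME = 3.0     # seconds of sustained facing before we trigger

facing_since = None

def person_facing_robot(person_xy, person_heading, robot_xy):
    """True if the person is close and their heading points towards the robot."""
    dx = robot_xy[0] - person_xy[0]
    dy = robot_xy[1] - person_xy[1]
    if math.hypot(dx, dy) > CLOSE_ENOUGH:
        return False
    bearing_to_robot = math.atan2(dy, dx)
    diff = abs(math.atan2(math.sin(person_heading - bearing_to_robot),
                          math.cos(person_heading - bearing_to_robot)))
    return diff < FACING_TOLERANCE

def update(person_xy, person_heading, robot_xy):
    """Call once per tracker update; returns True once the person has been
    facing the robot long enough to count as 'interested'."""
    global facing_since
    if person_facing_robot(person_xy, person_heading, robot_xy):
        if facing_since is None:
            facing_since = time.time()
        return time.time() - facing_since >= MIN_FACING_TIME
    facing_since = None
    return False
```

Something like this could be fed from the existing people tracker and head orientation estimate, instead of a full gesture detector.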
hey :-)
I just wanted to give some feedback: the info-terminal GUI looks great! Really cool :-) I showed it to my colleague and she also considers it very nice, clear and stylish! :-)
cheers Denise
Hi @urafi, any news on the stop/waving gesture detector? Do you think an implementation will be available before the code freeze? Or do we have to rely on an alternative? Thanks, cheers
I think in the last AAF hangout we agreed that it is not fit for deployment and decided not to include it, but @urafi should still respond to this ASAP. The info-terminal task has been amended in the following ways (right, @yianni?):
The schedule suggested by @denisehe looks like this: I suggest that "Lobby" is translated into all the suitable waypoints in the lobby, and likewise "Ambulance", with Henry roaming between those waypoints. In terms of tasks, this means that info-terminal has to provide a "patroller task" that can be configured with the waypoint sets it patrols actively, and adjusts the waiting time according to the FreMEn experience. Please, @mzillich, comment on this and assign people to the required sub-tasks. I'm happy to discuss in person on Thursday when I'll be in Vienna.
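To make the "patroller task" idea a bit more concrete, here is a rough sketch of the control loop I have in mind. The waypoint names, the FreMEn query and the duration bounds are all placeholders, not existing interfaces:

```python
# Hypothetical patroller loop: visit a configured set of waypoints and wait
# at each one for a duration scaled by the predicted interaction probability.
import time

LOBBY_WAYPOINTS = ["Lobby1", "Lobby2", "Lobby3"]   # placeholder names
MIN_WAIT, MAX_WAIT = 60, 300                       # seconds, assumed bounds

def predicted_interaction_probability(waypoint, when):
    """Placeholder for a FreMEn query returning P(interaction) in [0, 1]."""
    return 0.5

def go_to(waypoint):
    """Placeholder for navigation to the waypoint."""
    print("navigating to", waypoint)

def serve_info_terminal(duration):
    """Placeholder for running the info-terminal screen for `duration` seconds."""
    print("showing info-terminal for", duration, "s")

def patrol(waypoints):
    while True:
        for wp in waypoints:
            p = predicted_interaction_probability(wp, time.time())
            # wait longer where an interaction is more likely
            wait = MIN_WAIT + p * (MAX_WAIT - MIN_WAIT)
            go_to(wp)
            serve_info_terminal(wait)

# patrol(LOBBY_WAYPOINTS)
```

The actual task would of course be wrapped as an executive task rather than a bare loop; the sketch is just about how waypoint sets and FreMEn-adjusted waiting could fit together.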
sorry, I didn't mean @yianni (he's bellbot after all), but I meant @cburbridge...
@cburbridge is not the leader of that task, he's pretty much up to his eyeballs with security.
I know he isn't, but @mzillich is, and he missed the meeting we agreed this, so I wanted someone else to confirm what we agreed upon. I'm not trying to sneakily steal him... (or, I won't admit it).
I am pretty sure that is what we discussed: drop the gesture recognition to make sure we have a robust system that does not inadvertently approach people who don't want to see it.
@yianni, can you poke @urafi? I'm not sure he reads his GitHub notifications...
@gestom wrote:
the FreMEn is so far tailored to model probabilities of binary states, e.g. the probability that someone will interact with the robot at a particular place if the robot waits for a certain amount of time. Therefore, it will simply cause the robot to visit more populated places more frequently.
Depending on the robot's strategy (e.g. taking into account the extra energy spent while moving), it can prefer to wait at a given location instead of going somewhere else.
We might also represent the chance that an interaction will happen within 'n' minutes as 'n' different actions, where n \in {1,2,3,4,5}. A robot arriving at a particular position would then choose an 'optimal' (again depending on its strategy) time to wait.
@gestom : is the above already implemented, and running in the system?
Apart from the 'We might ... time to wait' part, it is implemented and was running when we demonstrated the info-terminal. As far as I know, the robot now waits for one minute and assumes that it can get to a given spot in 2 minutes. The robot uses a Monte-Carlo-based method to choose a location to go to, and its preferences (weights for the Monte Carlo sampling) are calculated as

w_j = a * H_j(t) + (1 - a) * p_j(t),   (1)

where p_j(t) is the probability of an interaction at location j and time t. H_j(t) quantifies how much a visit to location j will improve the FreMEn-based probabilistic model that constitutes p_j(t). Thus, the constant 'a' in (1) defines an exploration/exploitation ratio. A higher 'a' will cause the robot to prefer locations where the probability of interaction is about 0.5, leading to good models of p_j(t). A lower 'a' will cause the robot to visit places where it is likely to be interacted with, but the obtained probabilistic models of the interaction will not be that good.
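For clarity, equation (1) in code form. The p_j(t) and H_j(t) values would come from FreMEn; here they are just stand-in numbers, and the sampling step is the Monte Carlo location choice described above:

```python
# Sketch of the Monte-Carlo location choice using w_j = a*H_j(t) + (1-a)*p_j(t).
# p_j: probability of interaction at location j at time t (from FreMEn)
# H_j: expected improvement of the FreMEn model from visiting location j
# a  : exploration/exploitation ratio
import random

def choose_location(p, H, a):
    """Sample the next location with probability proportional to its weight."""
    weights = [a * H_j + (1 - a) * p_j for p_j, H_j in zip(p, H)]
    return random.choices(range(len(weights)), weights=weights, k=1)[0]

# Stand-in values for three candidate waypoints:
p = [0.1, 0.5, 0.8]   # interaction probabilities
H = [0.4, 0.9, 0.2]   # expected model improvement
print(choose_location(p, H, a=0.3))  # low a -> exploit likely-interaction spots
```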
Just a comment on the fixed schedule that @denisehe suggested:
The robot is supposed to learn such a schedule by itself; that is one of the (scientific) aims of the STRANDS project. If we fix the schedule, how can the robot improve its behaviour over time?
IMHO, the system should be doing info-terminal by default and bellbot and walking groups on demand. It should LEARN FROM EXPERIENCE that the bellbot and walking tasks are scheduled on a regular basis and do info-terminal near the areas where the bellbot and walking groups start, so that it can reach these promptly when requested. Also, it should LEARN that sometimes there is a better chance of getting interactions at the ambulance rather than at the lobby. This is what the project promised to demonstrate.
@mzillich According to your email... the timetable posted by Marc can be written into the robot's routine. I am happy to write down such a routine, but each task needs to have its action defined (by the team responsible for it); I am not going to write the actions myself. However, writing down the routine was the kind of easy solution we used last year, and it fits the security scenario much better. As pointed out by @gestom, we should aim not to fix the routine if possible.
@mudrole1 I think there is no need to write the routine, as we will be using the Google Calendar interface I made: https://github.com/strands-project/strands_executive/pull/132. That is also what we need priorities for: we will have a "FreMEn-ised" low-priority routine that suggests info-terminal tasks all the time (except when overridden by some higher-priority tasks).
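Just to illustrate what I mean by the priority logic (this is not the strands_executive API, only a toy sketch with made-up names):

```python
# Toy sketch: a default low-priority info-terminal task is always available,
# and a calendar task (bellbot, walking group, ...) overrides it whenever
# its priority is higher.
from collections import namedtuple

Task = namedtuple("Task", ["name", "priority"])

DEFAULT_TASK = Task("info_terminal_patrol", priority=1)   # always suggested

def next_task(calendar_tasks):
    """Pick the highest-priority task; fall back to the info-terminal patrol."""
    candidates = list(calendar_tasks) + [DEFAULT_TASK]
    return max(candidates, key=lambda t: t.priority)

print(next_task([]))                                # -> info_terminal_patrol
print(next_task([Task("bellbot", priority=5)]))     # -> bellbot
```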
These are the info-terminal team members
There are still quite a few things to be done in order to be able to deploy the info-terminal:
Most of these tasks are quite minor fixes/improvements to the functionality that we developed during the winter school, but it worries me that we never got a draft of an upper-body/gesture detection to replace the circle detector. @mzillich, do you know the state of this? Is there any code on GitHub from the sub-group working on this? If this is not going to work, then we will have to fall back to pre-scheduled info-terminal waypoints only.
@bfalacerda @ToMadoRe @gestom @akshayats @mudrole1 Please check the list, add to it anything I missed.