robotology / peripersonal-space

This repository deals with the implementation of peripersonal space representations on the iCub humanoid robot.
GNU General Public License v2.0

handling of multiple events (stimuli) correctly #27

Closed matejhof closed 8 years ago

matejhof commented 8 years ago

During the processing of multiple concurrent stimuli around here: https://github.com/robotology/peripersonal-space/blob/pps-with-modulations/modules/visuoTactileRF/vtRFThread.cpp#L835, it seems that:

  1. The events buffer is not ready for that. https://github.com/robotology/peripersonal-space/blob/pps-with-modulations/modules/visuoTactileRF/vtRFThread.cpp#L323
  2. The taxel representation (taxelPWE.h) cannot hold more than one event, hence in https://github.com/robotology/peripersonal-space/blob/pps-with-modulations/modules/visuoTactileRF/vtRFThread.cpp#L835 the last event will overwrite any previous ones.
matejhof commented 8 years ago

Regarding 1) - putting multiple events in the buffer and learning from them - I don't see any straightforward way of handling that. I guess we would need to keep track of the identity of the objects across the buffer. For now I think we can skip that and assume that during learning there will be only one stimulus at a time. For the activations, we should support multiple events - no buffer is needed there - and I'm working on that.

alecive commented 8 years ago

I think that the role of identifying which event is which should be given to the visuoTactileWrapper: it is the one that will cope with multiple events and will provide a consistent interface (i.e. if you have a vector of multiple events, the second slot should always refer to the same event). But I agree with you that it is tricky: perhaps, instead of dealing with every possible combination of multiple events, we should agree on a specific number or type of events we can identify, and the array of events that is exchanged will be fixed-size (e.g. 10), with zeros where an event is not available.

matejhof commented 8 years ago

That could also be a way. For now, though, I'd like to pursue what I started - enabling activations when faced with multiple events. I'm modifying taxelPWE to hold a vector<IncomingEvent4TaxelPWE> Evnts.
https://github.com/robotology/peripersonal-space/blob/pps-with-modulations-devMultiEvent/lib/include/iCub/periPersonalSpace/taxelPWE.h#L47 (change not committed yet). When computing the response, it will be computed for all events and the maximum taken. I think that is a reasonable solution for the time being. For the activation, no matching of objects over time is needed.

matejhof commented 8 years ago

For our current needs, this can IMO be considered solved. Multiple events are handled correctly, and the maximum of the activations is taken if multiple events enter the RF of a particular taxel. Learning works only for one stimulus - as before.

alecive commented 8 years ago

:+1: