isi-vista / adam

Abduction to Demonstrate an Articulate Machine
MIT License

Experimental Affordance Learning #1129

Closed lichtefeld closed 2 years ago

lichtefeld commented 2 years ago

As part of Milestone 5 we implemented an Affordance Learning module that attempts to ground affordances in observable features of a given scene. During this process we discovered that some affordances may not align with well-represented visual features and may instead be experimental or learned. To account for this, we've proposed a second way of storing and retrieving affordances, where the learner stores such affordances in a mapping keyed by a named concept.

Technical Implementation

This new affordance learning will take the form of a map from concept tokens (strings) to affordances. Additionally, we will add a method to query the affordance learner for the affordances learned for a given object. Both observable and experimental affordances will be stored in this mapping.
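For concreteness, here is a minimal sketch of what such a mapping and query method might look like. The class and method names (`AffordanceStore`, `add_affordance`, `affordances_for`) are illustrative assumptions, not the actual ADAM API:

```python
from collections import defaultdict
from typing import Dict, Set


class AffordanceStore:
    """Illustrative sketch only -- not the actual ADAM implementation.

    Stores both observable and experimental affordances keyed by
    concept token, per the proposal above.
    """

    def __init__(self) -> None:
        # Map from concept tokens (strings) to the set of affordances
        # associated with that concept.
        self._affordances: Dict[str, Set[str]] = defaultdict(set)

    def add_affordance(self, concept_token: str, affordance: str) -> None:
        """Record that the named concept carries the given affordance."""
        self._affordances[concept_token].add(affordance)

    def affordances_for(self, concept_token: str) -> Set[str]:
        """Return the affordances learned for a given object concept."""
        return set(self._affordances.get(concept_token, set()))


store = AffordanceStore()
store.add_affordance("apple", "can-be-eaten")
assert store.affordances_for("apple") == {"can-be-eaten"}
```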

Out of Scope

One could see a future version of this system linking with the semantic learner, enabling consolidation of affordances onto a higher-level descriptor node, where "this semantic node's affordances are true for all* its children nodes."

Also out of scope, but plausibly interesting, is using learned affordances to ask whether they apply to other object concepts without observing the situation, e.g. querying a domain expert (or maybe even just a Google search) to determine whether an affordance applies to a given concept.

spigo900 commented 2 years ago

Didn't see this above, so reminder to ourselves: We also want a way to query for object concepts that have a given affordance, e.g. "can be eaten". We need this for the backfilling experiment (writeup TBD 🙃).
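A reverse lookup over the same mapping would be enough to support this. A hypothetical method that could be added to the `AffordanceStore` sketch from the first comment (names are assumptions):

```python
def concepts_with_affordance(self, affordance: str) -> Set[str]:
    """Return all concept tokens whose stored affordances include the
    given one, e.g. concepts_with_affordance("can-be-eaten")."""
    return {
        concept
        for concept, affordances in self._affordances.items()
        if affordance in affordances
    }
```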

lichtefeld commented 2 years ago

Closed by #1141