isi-vista / adam

Abduction to Demonstrate an Articulate Machine
MIT License

Implement mapping affordance learner #1141

Closed · lichtefeld closed this issue 2 years ago

lichtefeld commented 2 years ago

@spigo900 Here's the initial implementation of the mapping affordance learner so you can take a look at the interface to return the affordance concepts given an object concept. Let me know if you foresee any issues with the integration with the contrastive learner.
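
For discussion, here is a rough sketch of the shape of that interface. All names below are simplified stand-ins, not the actual classes in the branch:

```python
from dataclasses import dataclass, field
from typing import AbstractSet, Dict, Set


@dataclass(frozen=True)
class ObjectConcept:
    """Illustrative stand-in for a recognized object concept."""
    debug_string: str


@dataclass(frozen=True)
class AffordanceConcept:
    """Illustrative stand-in for an affordance concept, e.g. 'can be eaten'."""
    debug_string: str


@dataclass
class MappingAffordanceLearner:
    """Sketch of a learner that maps object concepts to observed affordances."""

    _concept_to_affordances: Dict[ObjectConcept, Set[AffordanceConcept]] = field(
        default_factory=dict
    )

    def observe(self, object_concept: ObjectConcept, affordance: AffordanceConcept) -> None:
        # Record that this affordance co-occurred with a recognized object concept.
        self._concept_to_affordances.setdefault(object_concept, set()).add(affordance)

    def affordances_for_concept(
        self, object_concept: ObjectConcept
    ) -> AbstractSet[AffordanceConcept]:
        # Return the affordance concepts mapped to a given object concept.
        return frozenset(self._concept_to_affordances.get(object_concept, frozenset()))
```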

spigo900 commented 2 years ago

@lichtefeld Thanks for sharing. I don't see any problems that should cause a crash when running the contrastive object learner.

On the other hand, I don't think we can modify the current contrastive object learner to do anything with this style of affordance, if that's what you meant. The current contrastive object learner just updates object patterns, while this affordance learner relies on those object patterns to learn, so there's nothing it learns that we could push back into the object learner. The learned information is just concept -> affordance associations, and the affordance concepts don't have their own patterns for us to use. So I don't think we can use this affordance learner with the current contrastive object learning approach.
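
To make that asymmetry concrete, here is roughly how I picture the two stores (type names here are illustrative, not the real classes):

```python
from typing import Dict, Set

# What the object learner effectively has: per-concept graph patterns that
# contrastive updates can modify.
object_patterns: Dict["ObjectConcept", "PerceptionGraphPattern"] = {}

# What the affordance learner effectively has: bare concept-to-affordance
# associations, with no pattern on the affordance side to refine or push back.
concept_to_affordances: Dict["ObjectConcept", Set["AffordanceConcept"]] = {}
```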

I'm not immediately seeing anything we can learn contrastively with this kind of affordance. We can still diff the lists of affordances between objects as we've discussed, which seems useful but doesn't involve learning from contrastive example pairs. I would have to think about this more.

lichtefeld commented 2 years ago

@spigo900 Agreed overall. We could propagate these mapped affordance nodes in when we recognize an object, but I haven't implemented the logic to do so yet. That would give us additional pattern nodes to potentially pay attention to in contrastive learning, but it also means we need to tackle the 'adding nodes back into pattern graphs' problem.

spigo900 commented 2 years ago

@lichtefeld But we can't propagate that way, can we? The problem in my mind is that the mapped affordance nodes are based on us recognizing an object. If we don't recognize the object, we can't add the mapped affordance. If we add the affordance to the object pattern, then to recognize the affordance we need to recognize the object, but to recognize the object we need to recognize the affordance. Does this make sense?

ETA: I don't think I understood -- how would we propagate the nodes when we recognize the object?

lichtefeld commented 2 years ago

Ah right 🙃 I've made a cycle. I suppose the only weight/confidence we could track for the affordance contrastively isn't related to the graph pattern for the object concept, but rather to the weight of the affordance appearing at all... (which should always be 100%).

I think the best we can do with these affordances is provide differences/similarities between the two object concepts then.
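
Sketching what that comparison could look like (placeholder names, with plain strings for affordances just to keep it self-contained):

```python
from typing import AbstractSet, Tuple


def affordance_diff(
    affordances_a: AbstractSet[str], affordances_b: AbstractSet[str]
) -> Tuple[AbstractSet[str], AbstractSet[str], AbstractSet[str]]:
    """Return (shared, only_in_a, only_in_b) for two objects' affordance sets."""
    shared = affordances_a & affordances_b
    only_in_a = affordances_a - affordances_b
    only_in_b = affordances_b - affordances_a
    return shared, only_in_a, only_in_b


# Hypothetical example comparing two object concepts:
shared, only_cup, only_ball = affordance_diff(
    {"can be picked up", "can hold liquid"},   # cup
    {"can be picked up", "can be thrown"},     # ball
)
# shared == {"can be picked up"}
```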

spigo900 commented 2 years ago

@lichtefeld I'm thinking about how to produce string descriptions in post_decode for the UI. I think we are missing some methods we would need to be able to show those descriptions.

We will probably want to show something like can be SLOT2 in "SLOT1 eats SLOT2", right? I am thinking the integrated learner has to generate this string. So I think we want the integrated learner to be able to get (a) the affordance slot, and (b) the action concept so it can look up (c) the template for the action. It doesn't look like the learner stores these things. Does that sound right? Am I missing any public method/etc. that gives the info we need?
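
To make the request concrete, the string-building step I have in mind is roughly the following; everything here is hypothetical, not existing methods:

```python
def describe_affordance(affordance_slot: str, action_template: str) -> str:
    """Build a UI description like: can be SLOT2 in "SLOT1 eats SLOT2".

    affordance_slot  -- the slot the object fills in the action, e.g. "SLOT2"
    action_template  -- the surface template for the action concept,
                        e.g. "SLOT1 eats SLOT2"
    """
    return f'can be {affordance_slot} in "{action_template}"'


# Hypothetical usage inside post_decode, assuming the integrated learner could
# look up (a) the slot, (b) the action concept, and (c) its template:
#     slot = affordance_learner.slot_for(affordance_concept)               # (a)
#     action = affordance_learner.action_concept_for(affordance_concept)   # (b)
#     template = language_generator.template_for(action)                   # (c)
#     description = describe_affordance(slot, template)
```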