maddalihanumateja / Share-With-Images

AR iOS app for a vision-based tangible interface

How will the user communicate with our system? How should the system respond? #16

Open maddalihanumateja opened 6 years ago

maddalihanumateja commented 6 years ago

#6 got me thinking about what a language for communication via objects or pictures would look like in our project. How have past experiments approached this? What did participants (maybe participants with dementia) do when provided with a system similar to what I'm building? How does the system provide intelligent assistance (assuming the absence of a human guide)?

We can also think about what form these templates take (maybe we can look at how linguistics approaches this, and at what AI assistants or chatbots do, and adapt it to this setting).

maddalihanumateja commented 6 years ago

I've encountered templates as a way to program smart cards in our assisted living reference paper. That's different from how I look at these sharing images or objects: I want an assistive AI to be able to figure out what information could be missing, additional, or optional, and to suggest/prompt the user to present that information.

This is the related image from the paper:

[Screenshot: template example from the assisted living paper]

A similar message (SMS) composition grid appears in this reference paper, in which the researchers built a TUI (the example use case they mention is older adults with some special needs). There is a message composition process where "in the first step user selects message core (e.g. “Please bring me”), then time (e.g. “Today”), after that place is selected (e.g. “Pharmacy”) and object at the end (e.g. “Medicines”). Result of selections is a string with value “Please bring me medicines from the pharmacy today.” which is semantically organized by the application inside the NFC probe device."
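
To make that fixed-order composition concrete, here's a minimal Swift sketch of the process the paper describes. The type names and the sentence assembly are my own hypothetical choices, not taken from the paper or our codebase:

```swift
// Sketch of the fixed-order composition described in the paper.
// Type and case names are hypothetical placeholders.
enum MessageCore: String {
    case pleaseBringMe = "Please bring me"
}

struct ComposedMessage {
    let core: MessageCore
    let time: String    // e.g. "today"
    let place: String   // e.g. "the pharmacy"
    let object: String  // e.g. "medicines"

    // Assemble the filled slots into a sentence, mirroring the paper's example.
    var text: String {
        "\(core.rawValue) \(object) from \(place) \(time)."
    }
}

let message = ComposedMessage(core: .pleaseBringMe,
                              time: "today",
                              place: "the pharmacy",
                              object: "medicines")
print(message.text) // "Please bring me medicines from the pharmacy today."
```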

maddalihanumateja commented 6 years ago

What I'm hoping to implement, however, is an assistive process that doesn't force the user into a specific structured form of communication with the TUI objects. I want the "assistant" to adapt to the user's input (subject first, object first, ...) just as in a normal conversation with an assistant. I'm also assuming that this AI assistant can allow a single user to interact with the interface independently. The references I cited previously are old papers; I need to search for newer ones that might have already implemented something similar to what I'm thinking.
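
A rough sketch of the order-agnostic slot filling I have in mind (all names here are hypothetical, not from the current code): slots can be filled in whatever order the user presents them, and the assistant computes which ones are still missing so it can prompt for them.

```swift
// Sketch of an order-agnostic slot-filling assistant.
// Slot and type names are hypothetical placeholders.
enum Slot: String, CaseIterable {
    case subject, action, object, time, place
}

struct SharingRequest {
    private(set) var filled: [Slot: String] = [:]

    // Accept input in whatever order the user presents it.
    mutating func fill(_ slot: Slot, with value: String) {
        filled[slot] = value
    }

    // Slots the assistant should still prompt for.
    var missing: [Slot] {
        Slot.allCases.filter { filled[$0] == nil }
    }
}

var request = SharingRequest()
request.fill(.object, with: "medicines")   // user led with the object
request.fill(.action, with: "bring")
print(request.missing.map(\.rawValue))     // ["subject", "time", "place"]
```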

maddalihanumateja commented 6 years ago

Another reference for this (that I remember from a comp. cognitive science course) is the chapter on concepts in Thagard's "Mind" textbook, where we can think of our basic SharingImage types as concepts described by certain slots plus rules for how to fill in those slots. This process may involve multiple people in our case, so we could figure out where an AI/human helper fits in.
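
Under that view, each SharingImage type could be a concept (frame) whose slots carry fill rules. A minimal sketch, assuming hypothetical names and rules of my own invention:

```swift
// Sketch of a concept ("frame") with slots and fill rules, loosely
// following the slots-and-rules view of concepts in Thagard's "Mind".
// All names and rules here are hypothetical.
struct ConceptSlot {
    let name: String
    let required: Bool
    let accepts: (String) -> Bool  // rule for valid fillers
}

struct Concept {
    let name: String
    let slots: [ConceptSlot]
}

let bringRequest = Concept(name: "BringRequest", slots: [
    ConceptSlot(name: "object", required: true,  accepts: { !$0.isEmpty }),
    ConceptSlot(name: "place",  required: true,  accepts: { !$0.isEmpty }),
    ConceptSlot(name: "time",   required: false, accepts: { !$0.isEmpty })
])

// A helper (AI or human) can focus its prompts on required slots.
let requiredSlots = bringRequest.slots.filter(\.required).map(\.name)
print(requiredSlots) // ["object", "place"]
```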

maddalihanumateja commented 6 years ago

While trying to resolve #22 I started drawing graphs showing relations between SharingImage objects, which looked similar to what you would get from <subject, predicate, object> triplets. That reminded me of RDF triples from the data-modeling lit survey I did a couple of years back. I tried searching for RDF databases that already had some concepts defined for our general set of actions, but I think it'll be easier if I just implement something similar on my own (for a small set of actions).
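
For a small set of actions, a minimal in-memory triple store might be all we need. A Swift sketch (hypothetical names, not using any actual RDF library): store <subject, predicate, object> triples and query them by pattern, where a nil component matches anything.

```swift
// Minimal in-memory <subject, predicate, object> triple store,
// a sketch for a small fixed set of actions. Names are hypothetical.
struct Triple: Hashable {
    let subject: String
    let predicate: String
    let object: String
}

struct TripleStore {
    private(set) var triples: Set<Triple> = []

    mutating func add(_ s: String, _ p: String, _ o: String) {
        triples.insert(Triple(subject: s, predicate: p, object: o))
    }

    // Query with optional pattern components; nil matches anything.
    func match(subject: String? = nil,
               predicate: String? = nil,
               object: String? = nil) -> [Triple] {
        triples.filter {
            (subject ?? $0.subject) == $0.subject &&
            (predicate ?? $0.predicate) == $0.predicate &&
            (object ?? $0.object) == $0.object
        }
    }
}

var store = TripleStore()
store.add("user", "requests", "medicines")
store.add("medicines", "locatedAt", "pharmacy")
print(store.match(predicate: "requests")) // all "requests" triples
```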