Closed bodiroga closed 7 years ago
Hi, @bodiroga . I'm no longer a member of the Mycroft team, and as such am not contributing to this repository anymore. I will be continuing my context work on my fork of Adapt, and will contribute upstream as progress is made (barring divergence of the forks). My schedule does not currently allow for regular development.
@penrods has recently written about his plans for conversational context within Mycroft, and he may be able to share more on those developments.
Hi @clusterfudge!
Many thanks for your fast answer, and I'm sorry to hear that you are no longer working with the Mycroft team. Good luck with your new projects ;)
It would be great if @penrods could write a little about the plan for the "context" integration and the general idea for the implementation. My idea is to develop something around this concept, so knowing whether the Mycroft team has already discussed this topic would be awesome!
Many thanks for all the work you guys have done and best regards,
Aitor
Hi, @bodiroga and Sean (@clusterfudge)! My thoughts around context in Mycroft are jelling as I get to know this codebase better and think about what I had implemented in Christopher (the technology I built before Mycroft). In Christopher, there were several types of context that I will be integrating:
One tricky part in migrating this into Mycroft is how Adapt maps utterances to intents. I don't believe analyzing the words of an utterance alone can determine the intent. Instead, I think it will be necessary to identify the likely target Skills and give them all a chance to report how confident they are that the utterance was directed at them. The system can then either take the response from the highest-confidence Skill, or ask the user for clarification when two matches are close enough in confidence to be ambiguous. (I hadn't resolved the ambiguity issue in Christopher yet; I just took the highest confidence, which worked really well on its own.)
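As a rough illustration of the dispatch described above, here is a minimal sketch in Python. Everything here is hypothetical (the `confidence` method, the margin constant, the return shapes); it is not Adapt or Mycroft code, just the idea of "take the top score, or ask for clarification when two scores are nearly tied":

```python
# Hypothetical confidence-based skill dispatch (illustrative, not an Adapt API).
# Each candidate skill scores how likely the utterance is directed at it;
# the dispatcher picks the winner or flags ambiguity on a near tie.

AMBIGUITY_MARGIN = 0.1  # assumed threshold; would need tuning in practice


def dispatch(utterance, skills):
    """Return ("handle", skill) or ("clarify", [skill_a, skill_b])."""
    scored = sorted(
        ((skill.confidence(utterance), skill) for skill in skills),
        key=lambda pair: pair[0],
        reverse=True,
    )
    best_score, best_skill = scored[0]
    # Two near-confidence matches: ask the user instead of guessing.
    if len(scored) > 1 and best_score - scored[1][0] < AMBIGUITY_MARGIN:
        return ("clarify", [best_skill, scored[1][1]])
    return ("handle", best_skill)
```

Taking only the highest score (as Christopher did) is the `("handle", skill)` branch; the `("clarify", ...)` branch is the unresolved ambiguity case.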
This is completely doable using Adapt, but it requires breaking some of the expectations of Skill writers and/or adding new capabilities. For example, Skills will need something like a handle_conversation(self, utterance) function that might be invoked even when they are not the target of the conversation. E.g. the LightsSkill might receive "how about tomorrow" because the user turned on the lights recently, putting it in the Recent list, but the phrase was really directed at the WeatherSkill, which the user had invoked right after turning on the lights.
I hope that helps, feel free to discuss and give feedback!
Context management (i.e. the ability to incorporate context when parsing an utterance) has been incorporated into Adapt. Managing context (providing/observing context and pushing it back into Adapt at parse time) is the responsibility of Adapt's users. A sample in-memory context manager implementation has been provided, but it can be replaced by anything that is API-compatible.
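To illustrate what "replaced by anything that is API-compatible" might look like, here is a minimal in-memory stand-in. The method names (`inject_context`/`get_context`) and shapes are my assumptions about the sample manager's interface; the authoritative definition is in the adapt repository linked below, not in this sketch:

```python
# Minimal in-memory context store (illustrative sketch; method names are
# assumptions, not guaranteed to match adapt's sample ContextManager).

from collections import deque


class InMemoryContextManager:
    def __init__(self, max_entries=10):
        # Newest entries first; bounded so stale context ages out.
        self.entries = deque(maxlen=max_entries)

    def inject_context(self, entity, metadata=None):
        """Record an entity observed in conversation (providing context)."""
        self.entries.appendleft({"entity": entity, "metadata": metadata or {}})

    def get_context(self, max_frames=None):
        """Return recent entries, newest first, for re-injection at parse time."""
        items = list(self.entries)
        return items if max_frames is None else items[:max_frames]
```

The point of the interface split is that a persistent store (Redis, SQLite, a per-user session object) could implement the same two methods and drop in without Adapt noticing.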
https://github.com/MycroftAI/adapt/commit/01bf0e969beb4c197eea18c79786a0183620b5bc
Hi Mycroft Team!
Awesome library/tool! Until now I have been playing with api.ai, wit.ai, and similar online intent parsers, but my plan is to use an offline open-source solution to speed up processing, improve my privacy, and avoid requiring an internet connection: Adapt looks really great!
I have seen that you have a branch to develop "context" capabilities within Adapt, but there's only a single commit by @clusterfudge and development seems to have halted. Is anybody working on it right now? What kind of data structure did you plan to use? It would be awesome if @clusterfudge could write up what his plan was ;)
Many thanks for all your work, guys, and keep up the good work!
Aitor