I'm afraid Amazon doesn't give you any tools to deal with this — you can't specify a single "fallback" intent.
I've personally found that doing robust state management on your own is a decent way to handle this. If your skill keeps a strong internal model of "given what the user has done so far, these are the intents that are valid inputs", you can respond with an appropriate failure message whenever someone triggers an intent that isn't valid in the current state. (This is essentially the advice that Amazon gives people as well.)
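Here's a minimal, framework-agnostic sketch of that idea in Python. The state names, intent names, and helpers (`STATE_VALID_INTENTS`, `handle_request`, `dispatch`) are all hypothetical and just illustrate the pattern: every incoming intent is checked against the set of intents that are valid for the current session state before it's dispatched.

```python
# Hypothetical sketch of state-based intent validation for an Alexa skill.
# All names here are made up for illustration, not part of any real SDK.

# For each skill state, the intents that make sense as the next input.
STATE_VALID_INTENTS = {
    "MAIN_MENU": {"StartQuizIntent", "AMAZON.HelpIntent", "AMAZON.StopIntent"},
    "AWAITING_ANSWER": {"AnswerIntent", "RepeatIntent", "AMAZON.HelpIntent"},
}

def handle_request(intent_name, session_state):
    """Dispatch an intent, or return a targeted failure message if the
    intent isn't a valid input in the current state (e.g. Alexa matched
    something unexpected)."""
    valid = STATE_VALID_INTENTS.get(session_state, set())
    if intent_name not in valid:
        return {
            "speech": "Sorry, that doesn't match anything you can do right now. Please try again.",
            "reprompt": "You can say help to hear your options.",
            "end_session": False,
        }
    return dispatch(intent_name, session_state)

def dispatch(intent_name, session_state):
    # Placeholder for the skill's real per-intent handlers.
    return {"speech": f"Handling {intent_name}.", "end_session": False}
```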
That doesn't solve the problem of the user saying something that doesn't match any intent, sadly. In practice, I've found that as a skill grows in complexity, the likelihood that Alexa will mistakenly match an arbitrary intent instead of matching nothing tends to grow as well. I haven't done any rigorous measurement, but anecdotally and intuitively it feels like short one-word utterances also increase the chance that Amazon will mistakenly trigger the intent they belong to (usually undesirable, but a good thing in this case :) )
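One workaround that follows from that observation (an inference on my part, not something Amazon documents) is a dedicated "catch-all" intent whose sample utterances are exactly those short throwaway words, so unmatched speech is more likely to land on it and you can reply with a "please try again" message. A tiny sketch, assuming the classic plain-text sample utterances format where each line is an intent name followed by a sample phrase; the intent name and word list are made up:

```python
# Hypothetical: generate sample-utterance lines for a makeshift catch-all
# intent. Output lines look like "CatchAllIntent blah", matching the
# classic plain-text sample utterances format.
CATCH_ALL_INTENT = "CatchAllIntent"
SHORT_UTTERANCES = ["blah", "oops", "what", "huh", "um", "nothing"]

lines = [f"{CATCH_ALL_INTENT} {word}" for word in SHORT_UTTERANCES]
print("\n".join(lines))
```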
Going to close this for now, but feel free to follow up.
Just wondering: is there an intent that can be used when the user says something that doesn't match any of the defined intents?
For example, if you say "oops", "blah", etc., Alexa just selects the nearest intent. I wanted it to say something like "this doesn't match any intents, please try again."
Any pointers would be great.