Closed SimoTod closed 7 years ago
I have this exact same issue, has there been any updates?
Same, recently I've never seen any of my state-specific unhandled intents get called. It would be great to hear about plans to address or if there is something we're missing.
👍
I have the same issue. One intent is called even if I say something that should be unhandled.
`Amazon.LaunchIntent` is not a valid intent; please try using `LaunchRequest` in its place.
Please take a look at the Standard Request Types reference for more information.
Same here. I put in some gibberish and it still called one of my customized intents.
I am having this issue also. I played around some. This is not a very good solution but it is working for me.
I added these utterances:
UnhandledIntent a
UnhandledIntent basd fasd
UnhandledIntent cat video fun
UnhandledIntent hdfasdf sdfas ewqrqwe
Does anyone have a better workaround? Also, if we work together we can refine this list. This also appears to be an issue with more than just this SDK. See https://forums.developer.amazon.com/questions/4856/intent-triggering-without-utterance-match.html
Presently, Alexa's NLP is horrible. Yes, you can bypass Alexa's NLP for a 3rd party service. See my post here on how to achieve it.
same here
Also having this issue.
Has anyone checked this in the test area? I think it is an Alexa issue: if I say "bla bla", Alexa resolves it to the first utterance with two words, no matter how that utterance is defined.
I think it's more a problem with how Alexa's NLP works than with the SDK. But in that case I'm wondering why we have the `Unhandled` intent at all.
When I try to use the testing environment on Alexa portal, whatever content I feed into the system an intent is called.
So from my testing, 'Unhandled' is only called when some intent is received that is not handled by your handler or state handler. This IS useful, as it means you can have a really large intent schema with a bunch of different intents without having to cover every intent in every state handler. The problem lies in the NLP itself, which (at least in the skills test environment online) always returns one of your intents, even if you write complete gibberish. This is pretty much resolvable through @pheintzelman's solution above, but that is kinda sad...
Just to clarify: issue is still open for me and I'm still looking for a resolution that's not hacky, if anyone knows of one. Let me know if you find something.
Same issue. I think it has nothing to do with this SDK however, as the utterance tester in the dev console consistently resolves gibberish test utterances to valid defined intents (somewhat randomly).
I don't think this can be fixed, since the issue lies in the NLP. In my skill, I check the slot value each time; if it is empty/undefined, I manually emit the Unhandled event. The problem remains when the NLP resolves to an utterance without a slot, though.
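For reference, here is a minimal sketch of that slot-check workaround in alexa-sdk v1 handler style. The `AgeIntent` name and the `age` slot are illustrative, not from this thread:

```javascript
// Sketch of the slot-check workaround: the NLP matched an intent,
// but if it filled no slot we treat the request as unhandled.
// "AgeIntent" and the "age" slot are hypothetical names.
const handlers = {
  'AgeIntent': function () {
    const slot = this.event.request.intent.slots.age;
    if (!slot || !slot.value) {
      // Intent matched but the slot is empty/undefined: fall back.
      this.emit('Unhandled');
      return;
    }
    this.emit(':tell', `Got it, you are ${slot.value} years old.`);
  },
  'Unhandled': function () {
    this.emit(':ask', "Sorry, I didn't get that.", 'How old are you?');
  }
};
```

As the comment notes, this only helps for intents that carry a slot; a gibberish match to a slot-less intent still goes through undetected.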
@knowlsie is right; Unhandled was meant to handle cases where an intent is sent to a state which doesn't implement that intent. To illustrate, let's say you had a shopping skill with two states:
FillingCart
ConfirmOrder
'AMAZON.YesIntent' may be in your intent schema, but perhaps it's only valid when the skill is in the ConfirmOrder state (as a response to the question "You have x, y, and z in your cart. Would you like to place this order?").
So if someone says 'yes' while in the FillingCart state and an AMAZON.YesIntent is sent to the skill, the developer can safely handle this in the Unhandled intent.
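The fall-through behaviour described above can be illustrated with a small self-contained sketch. The `dispatch` function below is a simplification for illustration, not the SDK's actual implementation, and the intent names other than the built-in `AMAZON.*` ones are made up:

```javascript
// Simplified model of the SDK's dispatch: an intent sent to a state
// that doesn't implement it falls through to that state's 'Unhandled'.
const stateHandlers = {
  'FillingCart': {
    'AddItemIntent': () => 'Added to your cart.',
    'Unhandled': () => 'You can add items, or say checkout when done.'
  },
  'ConfirmOrder': {
    'AMAZON.YesIntent': () => 'Order placed!',
    'AMAZON.NoIntent': () => 'Order cancelled.',
    'Unhandled': () => 'Please answer yes or no.'
  }
};

// Illustrative dispatch: look up the intent in the current state,
// falling back to that state's 'Unhandled' function.
function dispatch(state, intentName) {
  const handlers = stateHandlers[state];
  return (handlers[intentName] || handlers['Unhandled'])();
}
```

So `dispatch('ConfirmOrder', 'AMAZON.YesIntent')` places the order, while `dispatch('FillingCart', 'AMAZON.YesIntent')` hits FillingCart's `Unhandled`, because a bare "yes" makes no sense while the cart is being filled.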
If you have any specific cases where NLP is an issue, feel free to cut another issue or discuss on the Alexa forums. These are both great sources of information.
Closing this.
I don't think this should be closed as the behaviour is inconsistent.
When I implement an 'Unhandled' intent on a handler with no state, this 'Unhandled' intent is called when the user says something that cannot be matched to an utterance/intent (even though I don't specify any 'Unhandled' utterances, which would be absurd because it's impossible). This gives me the possibility to respond to unknown input, which is nice.
But when I define an 'Unhandled' intent inside a state, that 'Unhandled' intent doesn't get called when the user says something unknown; instead the default stateless 'Unhandled' intent is called. I think this is wrong.
By defining an 'Unhandled' intent inside a state I want to be able to respond according to the current state (obviously). Example: I ask the user how old she/he is. The user then says something unknown, whereupon my 'Unhandled' intent should be called. There I want to respond with something like "Sorry, I didn't understand. How old are you?". This does not work, as currently only the default 'Unhandled' intent is called. A workaround could be to read the state attribute inside the default 'Unhandled' intent and then "redirect" to the state-specific 'Unhandled' function. But this goes against the current way of handling state, as I don't want the state handling inside the intent; I want it outside.
// 'Unhandled' event in stateless handler
'Unhandled'() {
    if (this.handler.state) {
        // pass to state-specific 'Unhandled' handler
        this.emitWithState('Unhandled');
    } else {
        // default 'Unhandled' handling
        this.emit(':ask', 'I didn\'t understand you. What did you say?', 'What did you say?');
    }
}
I agree with @feedm3; this should not be closed. @knowlsie is correct: the skills test service emulator clearly demonstrates that entering utter gibberish causes the NLP to match against the wrong intent. I am not looking for a workaround that will have to be removed at some point; this should be fixed.
That bit is a non-SDK issue though. If you need that behaviour changed, it's probably worth asking about this on the Alexa forums like @bclement-amazon suggested. On the other hand, it may be worth updating documentation here to clarify this.
So is the only way to detect and handle an invalid utterance to do as @pheintzelman suggested? The Unhandled intent is fired when testing with the service simulator, but I'm unable to get it to fire when testing with a physical device.
The Unhandled property for the current state will get called if the user invokes an intent which is not in the current state.
Consider the following states and intents:
//...
'START', {
    'AMAZON.YesIntent': function () {
    },
    'AMAZON.NoIntent': function () {
    }
}
//...
'QUIZ', {
    'AMAZON.YesIntent': function () {
    },
    'Unhandled': function () {
    }
}
If the user is in QUIZ mode and says 'no', AMAZON.NoIntent is not defined in the QUIZ state, so QUIZ's 'Unhandled' function will be called.
This behaviour should not differ on device vs. simulator.
When I type an edge-case utterance like "banana" into the simulator, it fires the Unhandled intent. If I do the same with a physical device, Alexa responds with either "Hmm, I'm not sure" or "Sorry, I don't know that one."
Any idea why I might be seeing this varying behavior?
edit: Additionally, I don't have any states configured.
same issue for me. Alexa NLP engine works inconsistently.
NLP will always resolve to an intent before calling your skill. The Unhandled intent is only called if the resolved intent is not part of the current state. E.g. you have state A with intent A.P and state B with intent B.Q. If you are in state A and user input is resolved to intent Q, this triggers A.Unhandled, since intent Q is not defined in state A.
This has nothing to do with whether your code handles unhandled intents correctly. It's to do with the fact that the NLP does not resolve the intent correctly before sending it.
If I have an intent with an utterance of "known intent" and you then say "unknown intent", the NLP will resolve this incorrectly to the known intent, purely because it has two words!
If a user says something unknown, it resolves incorrectly and calls the resolved intent, which then behaves incorrectly. The NLP should recognise it as unknown.
Chris
Here are screenshots I made. Here we have one intent, called Test, with one utterance, "Known Intent".
Saying any two-word utterance to the skill will execute this intent:
"request": {
"type": "IntentRequest",
"requestId": "amzn1.echo-api.request.3a3220ef-4438-49cb-ac04-6706fe868da9",
"timestamp": "2018-03-23T13:19:35Z",
"locale": "en-GB",
"intent": {
"name": "Test",
"confirmationStatus": "NONE"
}
}
:-(
Apologies for another post. In the above test, it doesn't matter what you say to the skill; it will always resolve to this intent :-(
@ChrisAdamsUK
Yo, this is actually the expected behaviour if you only have one intent in a skill. You see, the NLP always gives you some guess at the user's intent, even if what they said barely matches any of the intents you have registered. If you want to differentiate, you'll need to add more intents to your skill. 👍
This means that when a user says any nonsense the skill will execute code behind.
If I have an intent with an utterance of "do critical function" which deletes records from my database, and the user says something unknown that resolves to it as the "best match", then the skill will perform this critical function. Yes, I would add a confirmation, but that wouldn't make sense if the user said something meaningless.
There is no way to differentiate in the request JSON whether the user said the correct terminology, as both calls would be identical.
If the words don't match, it should send an unknown intent.
Chris
So, there is no such thing as an "Unknown" intent. The Alexa Skill/Amazon's NLP always sends one of the intents you have registered, and no other intents. The "Unhandled" case is only called when your skill code doesn't have a particular intent covered in a particular state. So the SDK code then goes 'damn, I got a "KnownIntent" call and Chris hasn't written code to handle "KnownIntent" in this case, I'll call "Unhandled."' Basically, there is no "Unhandled" intent as such - "Unhandled" is more a case for handling unexpected intents.
This means that when a user says any nonsense the skill will execute code behind.
Yup, that is exactly the case. You always get some intent, and you always have to handle it. This is why you need to confirm important stuff, because otherwise we risk picking up the wrong thing. If your skill asks for confirmation at the wrong time, the user will realise that Alexa misheard, and just correct the issue. Don't worry, users get used to it, because Alexa NLP is far from perfect!
However, this is also what the "Unhandled" case is for. If some intent is sent that your skill doesn't expect in its current state, you hit the "Unhandled" case which you can just use to ask again. So if you hear "Delete the records" and you're not expecting them to say that at this point in time, you can deal with it!
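As a hedged sketch of that confirm-before-destructive-action pattern, in alexa-sdk v1 style (the `DeleteRecordsIntent` name and the `pendingDelete` session attribute are hypothetical):

```javascript
// Sketch: guard a destructive intent behind a yes/no confirmation,
// and route an unexpected 'yes' to 'Unhandled'. Names are illustrative.
const confirmHandlers = {
  'DeleteRecordsIntent': function () {
    // Don't delete yet; remember that a confirmation is pending.
    this.attributes.pendingDelete = true;
    this.emit(':ask', 'Are you sure you want to delete your records?');
  },
  'AMAZON.YesIntent': function () {
    if (this.attributes.pendingDelete) {
      this.attributes.pendingDelete = false;
      this.emit(':tell', 'Records deleted.');
    } else {
      // A 'yes' with nothing pending is unexpected: fall through.
      this.emit('Unhandled');
    }
  },
  'Unhandled': function () {
    this.emit(':ask', 'Sorry, what would you like to do?');
  }
};
```

This doesn't stop the NLP from mis-resolving gibberish to `DeleteRecordsIntent` in the first place, but it does mean a mis-heard utterance can never delete anything without an explicit "yes" at the right moment.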
There is no way to differentiate in the request JSON whether the user said the correct terminology as both calls would be identical.
This is also (pretty much) correct. You only get passed the intent, and you just have to hope that the NLP got it right (unless you check slots, or roll your own oronym checker as I've done for one skill in the past...)
If you really want to catch nonsense, you could just add another intent with a bunch of random strings that is specifically designed to catch nonsense! Call it "UnknownIntent", add some common words, and it will catch everything that isn't "KnownIntent". Then just don't handle UnknownIntent at all in your code, and whenever it gets sent your code will hit the Unhandled case. This isn't ideal practice, because it'll cause Alexa to ask the user to repeat themselves all the time, but it does work.
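A sketch of that catch-all idea using the old intent-schema format from this era; the intent names and filler utterances are made up, and you would tune the word list yourself:

```json
{
  "intents": [
    { "intent": "KnownIntent" },
    { "intent": "CatchAllIntent" }
  ]
}
```

with sample utterances along the lines of:

```
KnownIntent known intent
CatchAllIntent the
CatchAllIntent something
CatchAllIntent a thing
CatchAllIntent whatever that was
```

Since `CatchAllIntent` has no handler in the skill code, every request the NLP resolves to it lands in the Unhandled case.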
Thanks, I appreciate all this, my main issue is that you cannot be sure if the correct intent has been called and the response you send will be out of context.
If I have intents Save and Delete with utterances "save data" and "delete my data" respectively, and the user says "save my data", then because this utterance is not defined the NLP may resolve it to "delete my data" as the closest match. Responding with "Are you sure you want to delete all your data?" would really freak them out. :-)
I appreciate though that Alexa can handle more requests if it does best match :-)
Chris
Yup, it's a normal issue, and there's not much we can do about it as devs without the actual NLP implementation changing. 😞
On the bright side, if the user says "save my data", AVS will 99% of the time choose the intent with "save data" rather than the one with "delete my data". It's pretty good at that sort of thing. 😄
Except in my case :-( I have an intent utterance of "Create Item" and a different intent with an utterance of "How long until item". If the user says "How long TO item" it incorrectly resolves to "Create Item".
It won't do this once i add all the alternatives though.
:-)
Ah, bad luck there. You can always write code to help you generate all the possible utterances you can imagine.* Alternatively you can use someone else's code, or a website like this: http://alexa-utter-gen.paperplane.io/
*If you like/wanna learn some CS Theory, Context-Free Grammars are super cool for this.
Thanks, I'll look into this. My next issue is why unknown slot items don't always pass through on the request. Sometimes when I build the skill they come through, sometimes they don't :-(
The app calls one of the defined intents even if the input doesn't match any utterance. This is a simple skill to replicate the issue.
index.js
Intent Schema
Utterances
TestIntent test