Closed AngavaiS closed 6 years ago
Hi, thanks for the kind words about virtual google. In this case, since you're interacting exclusively with an emulator, Google is not able to provide the permissions you need.
The way to address this is with the addFilter method, which lets you manipulate the request before it reaches your service. In this example, the request is modified so that it includes the device location (for a location permission); depending on your case, you can set whatever you need:
You can do:
ga.addFilter((request) => {
  request.originalRequest.data.device = {
    location: {
      formatted_address: "633 ashbourne dr, Sunnyvale, CA, 94087",
      coordinates: { latitude: 37.3609787, longitude: -122.0293281 }
    }
  };
});
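Independent of the library, the filter is just a function that mutates the request object before it is dispatched. A minimal, self-contained sketch of that mechanism (here `applyFilters` is a stand-in for the library's internal dispatch step, not part of its public API):

```javascript
// Sketch of an addFilter-style hook: registered filters each get a
// chance to mutate the request before it reaches the service.
const filters = [];

function addFilter(filter) {
  filters.push(filter);
}

// Stand-in for the library's internal step that runs filters
// right before sending the request to your fulfillment.
function applyFilters(request) {
  for (const filter of filters) {
    filter(request);
  }
  return request;
}

// Register a filter that injects a fake device location,
// mirroring the example above.
addFilter((request) => {
  request.originalRequest.data.device = {
    location: {
      formatted_address: "633 ashbourne dr, Sunnyvale, CA, 94087",
      coordinates: { latitude: 37.3609787, longitude: -122.0293281 }
    }
  };
});

const request = { originalRequest: { data: {} } };
applyFilters(request);
console.log(request.originalRequest.data.device.location.formatted_address);
// "633 ashbourne dr, Sunnyvale, CA, 94087"
```

Because the filter runs before every request, the injected location is visible to your handler exactly as if the real Assistant had supplied it.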
On Fri, Jul 20, 2018 at 11:42 AM AngavaiS notifications@github.com wrote:
Please look into the issue below; I am completely new to voice, so I may sound silly. I tried your virtual Google assistant and it's awesome and so helpful. I am facing the issue below; can you suggest how to proceed? When my request hits the intent with the permission "actions_intent_PERMISSION", the response I receive is not proper, or I don't know how to check the received speech response. Please help me.
it('should return Store Hours Intent', async () => {
  ga.utter("store hours").then((payload) => {
    console.log("OutputSpeech: " + JSON.stringify(payload));
  });
});
After receiving the response, how can I set the permission through unit testing code? Thanks in advance.
OutputSpeech: {"payload":{"google":{"expectUserResponse":true,"richResponse":{"items":[{"simpleResponse":{"textToSpeech":"PLACEHOLDER"}}]},"userStorage":"{\"data\":{}}","systemIntent":{"intent":"actions.intent.PERMISSION","data":{"@type":"type.googleapis.com/google.actions.v2.PermissionValueSpec","optContext":"To address you by name and know your location","permissions":["NAME","DEVICE_PRECISE_LOCATION"]}}}},"outputContexts":[{"name":"1537461222114/contexts/_actions_on_google","lifespanCount":99,"parameters":{"data":"{}"}}]}
Thanks, Angavai S.
Hi Juan,
Thanks a lot for your quick response. I tried as per your comments and it's working fine. Thanks a lot for all your help :)
Thanks, Angavai S.
Hi Juan,
Thanks for all your help. I do have one more question. Since I am a tester who is just learning unit testing, this question may sound silly.
Does the library support conversational testing, even when the conversation contains utterances that are not intents?
describe('Location Details and change location', function () {
  it('should return welcome intent', async function () {
    res2 = await ga.utter('open Starbucks');
  });
  it('should return location intent', async function () {
    res2 = await ga.utter('get me nearest store');
  });
  // The below two are not intents.
  it('should return different location question', async function () {
    res2 = await ga.utter('different Location');
  });
  it('should return different location details', async function () {
    res2 = await ga.utter('Walnut Creek CA');
  });
});
Thanks, Angavai S.
Hi, this library is specifically for unit testing, and it does support conversational utterances in the way you indicate. Be careful of a couple of things. This is a Google Assistant emulator, so:
res2 = await ga.utter('open Starbucks');
will work if your provided Dialogflow model has "open Starbucks" as a sample utterance, but it won't work if you are trying to reach the actual Starbucks action.
Other than that, the context is carried over as long as you stay in the same virtual Google instance and the session has not been closed, so conversational tests should work.
Another restriction we have at the moment is that we match utterances to intents based only on the sample utterances, so if you have more than one intent using the same utterance, you may get a different intent than the one you expect.
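To illustrate why duplicate sample utterances are ambiguous (this is an illustration of the restriction, not the library's actual matching code), a naive matcher that scans intents in order returns the first intent whose samples contain the utterance, so a second intent with the same sample is shadowed:

```javascript
// Hypothetical intent model: both intents share the sample "store hours".
const intents = [
  { name: "StoreHoursIntent", samples: ["store hours", "when are you open"] },
  { name: "HolidayHoursIntent", samples: ["store hours"] } // duplicate sample
];

// Naive first-match lookup over sample utterances.
function matchIntent(utterance) {
  const normalized = utterance.trim().toLowerCase();
  for (const intent of intents) {
    if (intent.samples.includes(normalized)) {
      return intent.name;
    }
  }
  return "FallbackIntent";
}

console.log(matchIntent("store hours")); // "StoreHoursIntent" — HolidayHoursIntent is never reached
console.log(matchIntent("gibberish"));   // "FallbackIntent"
```

The takeaway: keep sample utterances unique across intents if your tests depend on which intent gets matched.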
Thanks for your quick reply, Juan. I have created a sample Dialogflow model and took some code from dev for my learning. The tests are not executing in order.
I could see that the test below executes first, and hence it always reaches the fallback intent, so I was a little confused.
it('should return different location details', async function () {
  res2 = await ga.utter('Walnut Creek CA');
Can you help me with that?
The execution order of your tests usually depends on the test library you are using, but in this case they seem to run in alphabetical order (hence that one running first). If the different steps are dependent, you can put all of them inside the same test:
it("Complex test", async function () {
  res2 = await ga.utter('open Starbucks');
  // ... your assertion here
  res2 = await ga.utter('get me nearest store');
  // ... another assertion here
  res2 = await ga.utter('different Location');
  // ... another assertion here
  res2 = await ga.utter('Walnut Creek CA');
});
I changed the invocation name. Please check the results below.
No intentName matches utterance: fremont CA. Using fallback utterance: talk to rolls voice.
Do I need to do something here? I assume "Fremont CA" itself should not be an intent?
Thanks,
Angavai S.
In your model, if you have a sample that is only a city, it should match this intent. It should look like this:
{
  "id": "4698ffa2-41e2-44a3-91e7-7070b3a1f97e",
  "data": [
    {
      "text": "sample City",
      "alias": "city",
      "meta": "@MY_CITIES",
      "userDefined": true
    }
  ],
  "isTemplate": false,
  "count": 0,
  "updated": 1509058203
}
If yours doesn't have the meta label, it may be that it expects only one specific value instead of any city. If that's not the case, could you share with us how your intent samples are defined? The file should look like intentName_usersays_en_us.json.
Hello Juan,
Thanks for your response. I had a chat with the dev team. As you said, the Dialogflow model was not updated to support this. It's working now :)
Thanks a lot for all your help! You are amazing.
I have one more question, too. I am trying to log the responses, but they are not written in the proper order in the console or in the log file. I added a timeout and tried again; when I verify the logs, it's a little confusing.
describe('Location Details and change location', function () {
  this.timeout(1000); // Added the timeouts
  it('should return welcome intent', async function () {
    res2 = await ga.utter('talk to rolls voice');
  });
  it('should return location intent', async function () {
    res2 = await ga.utter('get me nearest store');
  });
  // The below two are not intents.
  it('should return different location question', async function () {
    res2 = await ga.utter('different Location');
  });
  it('should return different location details', async function () {
    res2 = await ga.utter('Walnut Creek CA');
  });
});
Thanks, Angavai S.
I am trying to write the response but its not writing in the proper order in the console or in log file.
This seems to be the same issue you had before: tests run in alphabetical order instead of the sequence you wrote them in. Putting all the utters in the same test should give you what you expect.
Hello Juan,
Please see the results. I have attached the log details. They are not in sequential order.
describe('TC_Location_001 - Verify whether Google responds with the proper location details for Location Intent.', function () {
  this.timeout(10000);
  it("Complex test", async function () {
    res2 = await ga.utter('Talk to rolls voice'); });
    Log.logData(res2);
    res2 = await ga.utter('nearest store');
    Log.logData(res2);
    res2 = await ga.utter('different location');
    Log.logData(res2);
    res2 = await ga.utter('Fremont CA');
    Log.logData(res2);
  });
});
=> nearest store
[2018-08-13T15:13:17.727] [INFO] info -
[2018-08-13T15:13:17.727] [INFO] info - * Response
[2018-08-13T15:13:17.727] [INFO] info - Okay, I recommend Rolls Voice at Union Square . You can ask me to find another location.
=> different location
[2018-08-13T15:13:20.602] [INFO] info - * Response
[2018-08-13T15:13:20.602] [INFO] info - What location are you interested in? Tell me a City and State or a zip code.
=> Fremont, CA
[2018-08-13T15:13:22.373] [INFO] info - * Response
[2018-08-13T15:13:22.373] [INFO] info - Okay, I recommend Newpark Mall at Fremont CA. Would you like to find another location?
=> Talk to rolls voice
[2018-08-13T15:13:22.411] [INFO] info - * Response
[2018-08-13T15:13:22.411] [INFO] info - Hello, and welcome to rolls voice.You can ask me to find a location for nearest rolls voice. Please check back in!
[2018-08-13T15:13:22.411] [INFO] info -
You have closed your "it" block right after your first utterance.
I tried removing the braces, but still no luck :(
describe('TC_Location_001 - Verify whether Google responds with the proper location details for Location Intent.', function () {
  this.timeout(10000);
  it("Complex test", async function () {
    res2 = await ga.utter('Talk to rolls voice');
    Log.logData(res2);
    res2 = await ga.utter('nearest store');
    Log.logData(res2);
    res2 = await ga.utter('different location');
    Log.logData(res2);
    res2 = await ga.utter('Fremont CA');
    Log.logData(res2);
  });
});
You can add ga.addFilter((request) => console.log(request)); before the first utter in order to check that your intents are being called correctly based on the utterances.
Other than that, try appending something to the logs to verify whether they are out of order as you mention, or whether they are hitting incorrect intents; something like Log.logData("1", res2) for the first one, then Log.logData("2", res2), and so on.
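A minimal sketch of that numbered-logging idea, where each logged response is prefixed with its step number so out-of-order lines are immediately visible (`Log` here is a hypothetical stand-in for whatever logger the project uses, not a real module):

```javascript
// Hypothetical logger: prefixes each payload with its step number
// so the expected sequence can be checked at a glance in the output.
const Log = {
  logData(step, payload) {
    const line = `[step ${step}] ${JSON.stringify(payload)}`;
    console.log(line);
    return line; // returned for convenience, e.g. for assertions
  }
};

const first = Log.logData("1", { speech: "welcome to rolls voice" });
const second = Log.logData("2", { speech: "nearest store is Union Square" });
```

If the `[step N]` prefixes come out in order, the logging itself is fine and any confusion is coming from which intents the utterances matched.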