1. "A flag to indicate that this service will let the system know it has failed to parse the voice data into actionable outcomes."
It says "let the system know", but is this an update of unhandledParsedVoiceData?
2. "If the voice assistant service fails to handle the recognized voice audio data as something within the context of the app or the parsed command is beyond its control, the data will be passed through an update to its VoiceAssistantServiceData in the form of an array of VoiceRecognitionResult items. This info can be passed to subscribers of this data which should include the onboard voice recognition/assistant that handles the SDL app VR synonyms."
If there are multiple subscribers, which one will handle the data?
3. "Once the app is no longer in an HMI level that allows for the service to be active, either the default or previously active voice assistant service with their onlyActiveWhenVisible set to false will be reactivated."
What if there are no services that meet the conditions?
4. "Adding an ability to be a voice assistant service for apps like navigation complicates the overall flow of the service. However, by limiting this to only HMI_FULL status, we can create an expected flow that must be followed."
From the SDL side, does this mean that setting onlyActiveWhenVisible=true is mandatory for a navigation app?
5. Who keeps track of the wakeWords supplied in the manifest? Are the voice assistant consumers supposed to be listening for the wake words at all times?
6. VoiceAssistantServiceData only has a param for unhandledParsedVoiceData. What happens when the voice data is parsed successfully? How are the service consumers notified by the voice assistant that there is an action to take place after parsing the voice commands?
7. I am trying to understand what needs to happen before the provider receives an OnVoiceSessionStarted. Does an app consumer do a PerformAudioPassThru request to get the audio data and then notify the voice assistant of the info received? I think a simple approach could be to add a serviceID parameter to the PerformAudioPassThru and EndAudioPassThru RPCs. By supplying the serviceID, a consumer app would specify that the audio pass thru data should be forwarded to the voice assistant via OnVoiceSessionStarted (see the sketch following these questions).
8. Where is the AudioInputCapabilities enum defined?
9. "If the service has the capability to obtain audio data directly from the microphone's source (phone's microphone, Bluetooth connected microphone, etc) this should be used as the preferred method."
The internal microphone is preferred to which method? The phone's microphone over the vehicle's embedded microphone?
10. "If the service has listed the ability to parse audio data via AUDIO_PASSTHRU, the normal flow for the audio passthrough feature should be used."
Should the provider or the consumer use audio pass thru?
11. "If the voice assistant service has the onlyActiveWhenVisible flag set to true, the voice assistant service will only be active when the user has put that app into HMI_FULL."
How would the provider be triggered by a consumer when the provider is in HMI_FULL? Why wouldn't the provider just use PerformAudioPassThru RPCs in this case?
12. containsData seems like a redundant parameter. Isn't it possible to check for the existence of bulk data without this parameter?
13. triggerInfo and recognizedWakeWord also seem like redundant parameters. Only one of these seems necessary.
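As a rough illustration of the serviceID idea in 7. (the param name comes from that question; the type, optionality, and exact placement are assumptions rather than anything from the proposal):

<function name="PerformAudioPassThru" functionID="PerformAudioPassThruID" messagetype="request">
    ....
    <param name="serviceID" type="String" mandatory="false">
        <description>
            Hypothetical param: if set, the captured audio pass thru data
            should be forwarded to the voice assistant app service with this
            ID via an OnVoiceSessionStarted notification.
        </description>
    </param>
</function>

The same param would presumably be mirrored on EndAudioPassThru so the session could be ended for that service.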
1. It is describing a capability of the app service. So if the app service supports this feature, the flag will be sent and the IVI system should be prepared to handle the data after the app service has returned the unhandledParsedVoiceData.
2. It should go in priority order based on previously imposed app service rules on default active service.
3. Then there will be no active voice assistant app service.
4. Yes. If they wish to expose a voice assistant for their navigation experience it should only be exposed during HMI_FULL.
5. That would be up to the app integrating the voice assistant app service. For most use cases I imagine only the IVI system would index these values.
6. When the data is parsed successfully there is nothing to pass back to the other apps. This is not a voice recognition service, it is a voice assistant service. This means the data will be handled by the service.
7. Per the proposal: "If a voice assistant app service is active, when the user presses the 'Push-To-Talk' button or equivalent, this should initiate a voice session based on the supplied audioInputCapabilities in the manifest."
So when the user presses the PTT button, it will start the voice session in the voice assistant. This can also be started from wake words if the IVI system has implemented that capability.
8. This is a typo; the type should be AudioInputMode.
9. Direct input into the voice assistant is preferred. This includes both the mobile device and vehicle microphones. As the proposal mentions this will circumvent the extra processing and sending time of the standard APT.
10. The Voice Assistant will start the audio pass through session via the regular RPCs.
11. In order to be tied to the PTT button or wake words, the app must be an active service.
12. No preference here.
13. There could be multiple wake words which might be useful for metrics for the app or preconditions to be met. However, just sending wake words misses the PTT button activation. Therefore both are necessary.
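Putting 12 and 13 together, here is a rough sketch of how the session-start notification could carry both triggers (types and optionality are assumptions, not the proposal's exact definition):

<function name="OnVoiceSessionStarted" functionID="OnVoiceSessionStartedID" messagetype="notification">
    <description>
        Sent to the active voice assistant service to initiate a voice session.
    </description>
    <param name="triggerInfo" type="String" mandatory="false">
        <description>
            Assumed shape: describes what triggered the session, e.g. a
            "Push-To-Talk" button press.
        </description>
    </param>
    <param name="recognizedWakeWord" type="String" mandatory="false">
        <description>
            Assumed shape: the wake word that started the session, populated
            only when the IVI implements wake word detection.
        </description>
    </param>
</function>

A PTT activation would send triggerInfo alone, while a wake word activation would populate both, which is why neither param can fully replace the other.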
2. "It should go in priority order based on previously imposed app service rules on default active service."
How will a subscriber declare that it won't handle the data and the HMI should continue to the next service consumer?
4. Is that something that can be more clearly stated in the proposed solution and that we can ensure through the app certification guides?
2. It's providers that handle the data, not consumers. Each provider works in the same way as the first.
4. Sure, open to suggestions on the language.
👍
I think it would be useful for the voice assistant to be able to pass information acquired from the request back to the consumer. For example, imagine a nav app acting as a voice assistant consumer. The user presses the PTT button and says "Wakeword, find a list of nearby gas stations." The voice assistant would fulfill that request, but the voice assistant does not have a method of passing it back to the consumer nav app.
Sorry this is still unclear to me. Let me know if this is a correct description of the flow:
Let's say there is a consumer app that has a "push to talk" button presented on its UI/template. The user presses the button. The consumer app gets the OnButtonPress notification and looks at the voice assistant manifest for the audioInputMode. The consumer sends Core an OnVoiceSessionStarted notification with the "preferred" audio input mode. Core passes the notification to the active voice assistant. The voice assistant then uses the method described by the audioInputMode to collect the rest of the user's voice request.
👍
OK got it, but if the provider is on a mobile device I don't think the phone's internal microphone should be allowed to be used. A mobile voice assistant should only be able to use "direct microphone input" if it is connected to the vehicle's bluetooth microphone or other bluetooth headset. The user should not have to wonder why speaking to their car's microphone "isn't working" when really their phone's microphone isn't able to hear them properly.
I think the proposal should note that mobile voice assistants should only list MICROPHONE_DIRECT as the supported audio input mode if they have access to a handsfree bluetooth microphone.
👍
When a voice assistant is in HMI_FULL and it has onlyActiveWhenVisible set to true, is its only use case to be notified to start a voice session from the IVI consumer?
Sorry it is difficult to understand how an app service provider is useful when it is required to be in full. What kind of app would require this and how would it be used?
I would suggest removing it if there is no preference from the author. I don't think any other RPCs that use bulk data need a param to notify that there is bulk data included.
👍
The Steering Committee voted to keep this proposal in review, to allow the author time to respond to the comments and for discussion to continue on the review issue.
@joeygrover-san, thank you for your reply.
1. What are the possible cases where supportsFailureFallback is set to false?
2. We assume the following flow, so please let us know if it's right:
i. The currently active service provider cannot process the data and sends unhandledParsedVoiceData.
ii. The HMI acquires unhandledParsedVoiceData and detects that there is data that could not be processed.
iii. The HMI activates another service provider according to the priority of the policy table.
iv. The newly activated service provider processes unhandledParsedVoiceData.
4. If the destination is set to the Navi application by voice recognition when the Navi application is not FULL, is the following flow correct?
Precondition: The Navi app is LIMITED and inactive as a Voice Assistant service. Another Voice Assistant service, which is not the Navi app, is active.
i. The user presses the PTT button, then sets the destination by speaking.
ii-a. When the active Voice Assistant service recognizes the destination setting, a SendLocation RPC is sent and the destination is set in the Navi application.
ii-b. If unrecognized, it sends unhandledParsedVoiceData and entrusts the processing to another service provider.
@Sohei-Suzuki-Nexty
1. I think it would only be if the app used direct microphone input and didn't want to (or wasn't able to) form that data into a pass-through for the head unit or other apps to use.
2. Yeah, that appears correct (see the sketch below).
4. (ii-a) That would depend on the voice assistant's implementation, but it would be a good implementation for them to do it that way.
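Since the fallback flow in 2. revolves around one param, a rough sketch of it may help (the param name and the array of VoiceRecognitionResult items come from the discussion above; placement and optionality are assumptions):

<struct name="VoiceAssistantServiceData">
    ....
    <param name="unhandledParsedVoiceData" type="VoiceRecognitionResult" array="true" mandatory="false">
        <description>
            Parsed voice data the active service could not handle, passed on
            so the IVI or the next provider in priority order can act on it.
        </description>
    </param>
</struct>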
I am a bit concerned about possible confusion caused to the user by the "navigation" voice assistant app service switching to be the current assistant when that app is in FULL and being deactivated when it's away. It's not made explicitly clear in the proposal whether onlyActiveWhenVisible overrides the existing app service flows for becoming the active service, or only for releasing the active service. I think that this will complicate the HMI flows laid out in the original proposal if the user has to choose the onlyActiveWhenVisible service as the primary service and it's not always active. Basically, selection of preferred voice assistants and HMI activation of those services could get hairy pretty quickly. I think we need an alternative flow that makes this clearer. Here's my possible proposal:
Apps can request an entitlement (through policies) from the OEM for "foreground override." If granted, the head unit should provide an additional option to the user in the standard HMI flow where the user chooses their preferred app service. The additional option is to allow a voice assistant app service to become the current voice assistant app service when they are in HMI_FULL or LIMITED. These apps must support fallback (this would be an app certification requirement).
It's also important to remember point 3 of the app services proposal section "Activating Service Providers":
When an app service provider is changed to be the "Activated" service of that type, the user should be notified.
This means that the user should always be aware of the voice assistant that is listening to them.
Alternate options / additions:
@joeljfischer-san, thank you for your reply. My questions have been answered.
The more I dig into this proposal and try to make the "temporary active" service use case work, the less I believe the extra technical debt is worth it. It adds a sizable amount of complexity and creates numerous corner cases, both of which will likely lead to a very poor user experience. Therefore, I would like to revise the proposal to remove this part of the feature as well as add another aspect that I believe will solve the problem in a more straightforward way.
The main goal is to allow apps in the foreground to start a voice recognition session, and therefore I believe through some HMI guidelines we can define how that would work in addition to this proposal.
Specifically, we will define that when the Push-To-Talk button is short pressed, the active voice service will be sent an OnVoiceSessionStarted notification and follow the flow provided in the proposal. When the PTT button is long pressed, the app in the foreground (HMI_FULL) will receive the OnVoiceSessionStarted notification instead. It could then move through its defined process for VR. This will define a consistent user experience that solves the initial problem with much less complexity.
The Steering Committee voted to return this proposal for revisions. The author will revise the proposal to remove the “temporary active” component, and to add details of the feature to the HMI Integration Guidelines, specifically regarding where an OnVoiceSessionStarted notification is sent based on short and long presses of the Push-To-Talk (PTT) button. Full details of these revisions can be found in this comment from the author.
The author has updated this proposal to incorporate the requested revisions, and the revised proposal is now in review until June 23, 2020.
I just started reading the proposal today. I'm doing the best I can to provide valuable feedback. If any of the feedback was already discussed then please accept my apologies:
9. Independently of the method used to reach the vehicle microphone(s), I highly recommend using this mic over any other. APT is one way to get vehicle microphone data and we know it has a large number of performance issues. MICROPHONE_DIRECT contains a lot of assumptions. Are you assuming that the app could use HFP in order to read audio from the connected microphone? Using HFP introduces a delicate issue where IVIs think the phone starts a phone call. On Ford IVI this will result in a full screen overlay blocking any app attempts to show an alert. Another concern is that we would become more dependent on the mobile OS producers, where APIs may have certain conditions like phone OS permissions, the app having to be in the foreground, etc.
I would appreciate if the author can elaborate more on the abilities to get Bluetooth microphone data. The proposal should contain some discoveries on how the microphone can be accessed (e.g. on iOS, using AVCaptureSession or similar). As HFP is used, I suppose this proposal requires additions to the HMI integration guideline so that fetching audio data doesn't show up as a phone call.
15. The param audioInputCapabilities is of type "String". Would it make more sense to have the type AudioInputMode, or is there a specific reason for using "String"?
16. Based on the result of 9., I would recommend providing in-depth descriptions for each element.
17. Also, based on 10. and due to the concerns of mobile OS dependencies, I would suggest a more SDL based solution as an additional InputMode. I just want to throw in the idea of receiving data through the Audio Service at the SDL protocol level, or creating a new service type for audio input. This vastly increases scope; however, the concerns of mobile OS dependencies are quite severe.
18. I couldn't find notes describing how an app can signal that it finished the voice assistant session. Can you elaborate on how an app could do this? I feel this is important, as the IVI must stop listening for wake words during an active voice assistant session.
19. I also couldn't find how an app could output audio and interact with the driver in a dialog. Should these apps use an Audio Service session? Or send a Speak? I think this is a very important aspect of voice assistance and must be well elaborated.
20. The proposal states: "The application will be responsible for displaying an alert to the user in this use case." Would it be nice if voice assistant applications had the ability to show a more powerful popup that can be altered during the session? I know Alert supports it, but there may be more needed.
9. I agree that APT is not ideal and that's why the additional option was added.
This would work like how any other app accesses microphone data while connected to the supported bluetooth profiles (HSP/HFP). For example, the Google Assistant does this already; it obtains mic data from the connected BT device using the supported profiles. Any IVI implementations of bluetooth stacks and HMI decisions around these profiles I would consider out of scope for SDL, but there could be some guidelines added that, in the case of this app service being published and active, the HMI should not present the potential audio capture as a phone call.
For Android, here is a gist of how to accomplish this https://gist.github.com/shivarajp/86b05ae10dbd6456aa53.
15. Yes, it should actually be AudioInputMode.
16. What do you mean each element? Like each audio input mode? I want to provide any information necessary, but I'm having trouble understanding what the ask is.
17. I think this would be an interesting idea, but I would also ask that it be put into another proposal that can enhance this one. This feature can work without it and solves a major problem from multiple partners so I would like to have a more in depth discussion on this protocol change outside of this proposal because I think its scope could affect other aspects of the library. For example, it could be a potential replacement for APT in some instances.
18. The original thought was that the end of either the APT session or the bluetooth audio input session would be the signal that a voice session had ended. That doesn't cover the "conversation" use case though, so you're right, we need to create a more SDL focused idea. We could do one of the following:
1. An OnVoiceSessionEnded notification sent from the service.
2. A param on the VoiceAssistantServiceData of voiceSessionActive.
3. Rename OnVoiceSessionStarted to OnVoiceSessionUpdate or similar and include a boolean param voiceSessionActive.
4. A voiceSessionState param with values ACTIVE, INACTIVE, PAUSED, CANCELED.
19. I'm not opposed to either, but I can see how simply using the connected audio output might not work because apps have to be given audio focus. If the driver was listening to the radio and hit the PTT button activating the service, the audio from the voice service wouldn't likely just come over A2DP. I might have to think on this one a little more to provide a better solution, but as of now I think using the audio service to output seems like the path of least resistance.
20. I agree that it would be very helpful to be able to use better alerts. I see this one as also being an improvement on the proposal, since there could be other aspects that could use this ability. However, for solving it right now, the app could use a PerformInteraction pop up. In the proposal I'm using alert as a general term and not specifying an SDL alert. The language in the proposal should be updated to reflect that.
9. I would double check that again. For Ford SYNC3 we have added individual behavior for Google Assistant or Siri where only a note appears on the top bar. I only have limited time to test different implementations, so I installed voice recording apps and tested their behavior. If I record voice memos with apps, they appear as phone calls on SYNC3. Your proposal would require additional HMI integration guidelines, but I also suggest doing some investigation into whether the HMI can distinguish VoiceAssistant-apps from other apps like conference/VOIP apps.
I think one point that was mentioned before isn't properly addressed. Using the car mic makes the most sense for your feature; however, the input type only assumes a connection to that mic. This feature is basically useless if the connection to the car mic doesn't exist. If the phone isn't connected to a bluetooth device, this feature should not activate over the direct input type. I can see this being a dangerous situation for drivers trying to pick up their phone once they realize the phone mic is used. At least the UX is affected, as it's likely that the phone mic won't pick up the driver's voice from inside a pocket.
Also I would like to have the issue addressed that being connected to a BT device doesn't mean the phone is connected to the vehicle. This could be a problem with BT headphones in the car/bag e.g. coming from the office.
16. Sorry I wasn't clear with my message. I know you're trying to describe the input types in the proposal but proper descriptions in the mobile API are required so that developers understand their meaning.
<enum name="AudioInputMode" since="X.X">
<element name="AUDIO_PASSTHRU">
<description>PLEASE DESCRIBE THIS MODE</description>
</element>
<element name="MICROPHONE_DIRECT">
<description>PLEASE DESCRIBE THIS MODE</description>
</element>
</enum>
17. That's fair.
18. I would appreciate you, as the author, revising the proposal and extending the API to allow status updates for voice assistant sessions. From my limited experience with embedded voice assistant systems, I at least know there are states like LISTENING, SPEAKING, CONFIRM, CANCEL or TIMEOUT. This way the head unit could maybe assist with the HMI.
19. I think it is very important to know how the app is supposed to speak to the user. Especially due to the complexity of the HMI matrix, the servicing app may be in the background and not able to play audio.
20. I think it would be best if defined information is packaged in the Service RPCs so that the HMI can present a voice session overlay/view. The defined information could be a set of VrHelpItems and much more.
Toyota cannot accept this proposal for now. We suppose the proposed "Definition of Push-to-Talk Button HMI Guidelines" relies heavily on the behavior of the HMI.
9. I had no knowledge that Ford had done any customization for this so yes I only checked for Google Assistant and had no reason to check further. I see you had different results with a different app so I will have to actually build out my own POC app to test this to ensure this is the case.
With this proposal, and the possibility of the newly added VoiceSessionState, if the IVI gets a status that a voice session is active (LISTENING, SPEAKING, CONFIRM) then it could avoid displaying any phone related HMI screens.
If we can't come to an agreement on how this will work with the bluetooth mic, it should be known that this feature is not useless without this ability. If you look back to previous proposals, the functionality provided, even if using APT, is still necessary for partners. The ability to define voice assistant services and essentially allow apps to receive the PTT activation when in HMI_FULL is a need from partners. The proposal will still accomplish those goals and can be improved upon in future updates.
16. Ok, here are some descriptions:
<enum name="AudioInputMode" since="X.X">
<element name="AUDIO_PASSTHRU">
<description>This mode describes the ability to use the SDL Audio Pass Thru feature to obtain audio data.</description>
</element>
<element name="MICROPHONE_DIRECT">
<description>This mode implies the use of the in-vehicle microphone without any SDL specifics. It is intended to be used with the bluetooth profiles HSP/HFP.</description>
</element>
</enum>
18. I've stated previously that I am not an expert in this field and am open to any feedback on voice assistants. I agree that adding the status as different states is likely the best path forward. I'm thinking it might be best to do a combination of 3 & 4, allowing the status to be updated from both ends if possible. It will require adding a state machine of sorts to the proposal.
<enum name="VoiceSessionState" since="X.X">
<element name="LISTENING">
<description>The Voice Assistant is currently listening for voice data. No audio output should be happening at this time.</description>
</element>
<element name="SPEAKING">
<description>The Voice Assistant is currently speaking. Audio input should not be happening at this time.</description>
</element>
<element name="CONFIRM">
<description>The Voice Assistant is waiting for confirmation that the current selection is accepted.</description>
</element>
<element name="CANCEL">
<description>The voice session has been canceled by the user and all current audio data should be discarded. No actions should be taken with this voice session.</description>
</element>
<element name="TIMEOUT">
<description>If no detectable voice data is found within a period of time, the session will timeout. All pop ups or notifications should be dismissed.</description>
</element>
</enum>
This status should be added to the VoiceAssistantServiceData as well as the renamed OnVoiceSessionUpdate. The status in the service data will pertain to the actual status of the session, while the status contained in the OnVoiceSessionUpdate will represent the requested status via some user interaction, e.g. the user presses the PTT button, the user cancels the voice session, etc.
I will appreciate any feedback on the descriptions of these enums or if you'd like to see more.
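To make the two-sided status concrete, here is a possible shape for the renamed notification (a sketch assuming the rename and the param discussed above, not a final definition):

<function name="OnVoiceSessionUpdate" functionID="OnVoiceSessionUpdateID" messagetype="notification">
    <description>
        Replaces OnVoiceSessionStarted. Sent to the active voice assistant
        service to request a state change for the current voice session.
    </description>
    <param name="voiceSessionState" type="VoiceSessionState" mandatory="true">
        <description>
            The requested state of the voice session, e.g. LISTENING after a
            PTT button press or CANCEL when the user dismisses the session.
            The actual state is reported back through VoiceAssistantServiceData.
        </description>
    </param>
</function>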
19. If it is agreeable then, the audio service should be used to output voice from the assistant. In combination with the new VoiceSessionState, the HMI will know if it should output the audio provided on this service from the app.
20. If I understand you correctly this would be for the more integrated voice assistant dialog/pop up right? I'm a little hesitant to create a new UI element and then require it for this feature. If this is a must have we can move forward with designing such a way to accomplish it.
So if we are passing the status of the voice assistant session in the VoiceAssistantServiceData, maybe we can add some items there. This would include removing the language that the app must create the alert/pop up and putting that burden on the HMI instead. The HMI would be displaying the app service data, not performing an alert or pop up from the app. The HMI would do this for as long as the voice session was active.
<struct name="VoiceAssistantServiceData" since="X.X">
<description>This data is related to what a voice assistant service would provide.</description>
....
<param name="vrHelpTitle" type="String" maxlength="500" mandatory="false" since="x.x">
<description>
VR Help Title text.
If omitted on supported displays, the default module help title shall be used.
If omitted and one or more vrHelp items are provided, the request will be rejected.
</description>
</param>
<param name="vrHelpItems" type="VrHelpItem" minsize="1" maxsize="100" array="true" mandatory="false" since="x.x">
<description>
VR Help Items.
If omitted on supported displays, the default SmartDeviceLink VR help / What Can I Say? screen shall be used.
If the list of VR Help Items contains nonsequential positions (e.g. [1,2,4]), the RPC shall be rejected.
If omitted and a vrHelpTitle is provided, the request will be rejected.
</description>
</param>
<param name="recognizedVoiceStrings" type="VoiceRecognitionResult" array="true" mandatory="false">
<description>This array will contain parsed and recognized voice data handled by the service from the last user voice interaction. This should be an ordered list based on the confidence score of the VoiceRecognitionResult.</description>
</param>
</struct>
The HMI can take this information and display it in their own voice assistant HMI UI elements. There would also need to be additional HMI Guidelines that state: "When a voice session is active, APT dialogs should not be displayed. Instead, the HMI should display the current voice session's app service data in a UI element. This will include potential selections from the last voice audio data received. It should include a way to cancel the voice session."
21. Numbering this as a concern from Toyota. The HMI guidelines are required to allow the cases where an app is in HMI_FULL and should be the one to have a voice session initiated. Obviously it is up to the OEM if they want to support the voice assistant app service or not, and if they wish not to, then they do not need to follow the HMI Guidelines for it. I do not see this as a reason to not accept the proposal.
The Steering Committee voted to keep this proposal in review, to allow for additional discussion on the review issue.
9. I think that an exceptional behavior can be implemented in the HMI with additional information like VoiceSessionState. This could go hand in hand with 20. However, I believe the SDLC members will need to know how the HMI should behave. For the Ford team we would need to align with the requirements of the telephony team to allow different behavior of a device if that device starts a "phone call" but one application of that device is performing some voice session. That said, I can foresee race conditions where the HMI is told after the app has initiated the "phone call".
Also I would like to have the issue addressed that being connected to a BT device doesn't mean the phone is connected to the vehicle. This could be a problem with BT headphones in the car/bag e.g. coming from the office.
Can you please elaborate how to resolve this issue? The app would need to know that the device is HFP connected to the IVI.
16. Thank you :+1:
18. I think this section is becoming difficult to keep track of. I'm happy to support you, but I believe a sequence or activity diagram would help to proceed. Please let me know how you feel about it.
19. Please note that this could require SDL security to be deployed to that application in order to work with the audio service. At least for Ford this will be required, as we protect the audio and video services. Other than that, I personally don't see many issues here. I think StartAudioStream, StopAudioStream and OnVideoDataStreaming may be appropriate HMI APIs this proposal can leverage here.
20. I think some additional information as you proposed makes sense. This way the HMI can provide a UI to help. The HMI should take the app's theme into account and maybe show the app icon or a special voice assistant icon as well. Just throwing in some ideas...
21. Please note the proposal https://github.com/smartdevicelink/sdl_evolution/issues/387 SDL-0135 "Push To Talk" Key support. It's deferred due to the predecessor of this proposal which is also deferred.
I think the button integration into SDL needs to be resolved as the whole voice assistant experience heavily depends on the available triggers.
9. On Android the app is informed of the transport that is connected via bluetooth SPP, and that can be used to compare against when retrieving the bluetooth profile connection state of HFP.
However, it could be done through the HMI using the new proposal SDL-0280. The bluetooth MAC address is sent to Core through the RAI request for this feature. Using this information, the HMI could know that the MAC address of the mobile device matches the MAC address of the device connected over HFP. Though for iOS, I don't believe it is possible to retrieve the MAC address, so this might not work.
For the race conditions, how does Ford do this today with their special integration of the Google Assistant? I think a real world solution would be the best place to start on how we could handle this on a more general level.
I agree the HMI flow needs to be defined clearly for OEMs to understand what they should do and when. This does align with points 18 and 20 on the need for such diagrams to be provided. I'm having my doubts that we can actually use this part of the feature as more and more points of contention keep arising. Maybe it's better to simply start with APT only and work on the new AudioInputStream service proposal separately.
19. Understood. I can add that as a potential downside that some OEMs might require security libraries to use this feature.
20. I think adding some mock ups on how this would look into the proposal would be helpful.
21. I think the proposal shows exactly how it should work and after much consideration I do not see another way that works as simply as what is provided. Like I've stated previously, if the OEM does not want to implement this feature then they have that option. However, as discussed with simply subscribing to a PTT button, the HMI flows get even more complicated and confusing to the user.
I tried to keep this feature simple for the first iteration, but it is seemingly less likely that it can be. So I would request this be returned for revisions for the following:
I ask this because it could take some time to create the proper flows and think through some use cases and I don't want to hold up other proposals getting through the process if possible.
We did not have a quorum present during the 2020-07-07 Steering Committee meeting, so voting on this proposal could not take place. This proposal will remain in review until 2020-07-14.
The Steering Committee voted to return this proposal for revisions, per the author's request. The author will revise the proposal as outlined in this comment.
Closing as inactive. The issue will remain in a returned for revisions state and unlocked so the author (@joeygrover) can notify the Project Maintainer when revisions have been submitted.
Hello SDL community,
The review of the revised proposal "SDL 0300 - Voice Assistant App Service" begins now and runs through June 23, 2020. The original review took place April 21 - May 12, 2020. The proposal is available here:
https://github.com/smartdevicelink/sdl_evolution/blob/master/proposals/0300-Voice-Assistant-App-Service.md
Reviews are an important part of the SDL evolution process. All reviews should be sent to the associated Github issue at:
https://github.com/smartdevicelink/sdl_evolution/issues/1000
What goes into a review?
The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of SDL. When writing your review, here are some questions you might want to answer:
More information about the SDL evolution process is available at
https://github.com/smartdevicelink/sdl_evolution/blob/master/process.md
Thank you, Theresa Lech
Program Manager - Livio theresa@livio.io