1.
Currently, REMOTE_CONTROL and APP_SERVICES capabilities can already be updated through it, so the idea is to extend this list by VIDEO_STREAMING capabilities.
APP_SERVICES capabilities are not updated through the OnSystemCapabilityUpdated notification from HMI. DISPLAYS and REMOTE_CONTROL are updated through this RPC.
2.
I don't agree it is necessary to add the OnSystemCapabilitySubscription notification. The HMI should always send capability updates when they change; there is no need to notify the HMI that it should start sending notifications.
3.
In "Mobile libraries changes (Java Suite and iOS)", you state:
Mobile applications should be able to update the streaming content window to the new video streaming capabilities received in OnSystemCapabilityUpdated.
It's not just the mobile application that needs to update. Changes will need to occur in the video lifecycle managers to support updating values and passing those down to the app. Additional public APIs may be needed to support this case (telling the developer to update their view based on new streaming values). Please investigate this and update the proposal accordingly.
@JackLivio
Thank you for catching this. The proposal will be updated: the APP_SERVICES item will be excluded and DISPLAYS included.
Dynamic resolution switching requires actions in both the HMI and the mobile application.
The reason to have the OnSystemCapabilitySubscription notification is to let the HMI know whether a particular application supports resolution switching. The HMI will then be able to send the OnSystemCapabilityUpdated notification with updated VIDEO_STREAMING capabilities to that specific application only. In this way both the HMI and the mobile application stay in sync regarding the current resolution.
4. What are the use cases for dynamically updating the values of these parameters during the ignition cycle?
@Sohei-Suzuki-Nexty
@dboltovskyi
@JackLivio
A new parameter could be added to the RegisterAppInterface request, e.g. isResolutionSwitchingSupported, which would be transferred to HMI within the BC.OnAppRegistered notification (as part of the HMIApplication struct). In this way the OnSystemCapabilitySubscription notification is not required.

The Steering Committee acknowledged that items 1 and 2 in the comments have been agreed upon and will be necessary revisions. However, as items 3 and 4 are still awaiting resolution/confirmation, the Steering Committee voted to keep this proposal in review for another week. The author will respond to item 3, and the commenter will confirm if their questions in item 4 have been answered.
@atiwari9 -san
@Sohei-Suzuki-Nexty
VideoStreamingCapability struct.
2. I'm sorry to come in late, but how is this flag going to be populated? Does the developer set this if they support it? Where do they do that? Does the library automatically set this flag? How does this affect the HMI's behavior? If an app connects that doesn't support the feature, how will it handle that as compared to one that does? I understand the new values won't be sent, but what is the HMI supposed to do in place of the feature? This seems like another arbitrary flag that adds complications and extra bookkeeping and therefore shouldn't be added.
3. Agreeing with Joel here. There is a lot of hand waving taking place here. Without including how the mobile libraries will implement this feature I do not feel the proposal should be accepted. Leaving this as a future detail simply limits the feature from ever being used until those modifications are proposed.
5. I have a feeling larger changes are going to have to be implemented to support this, including protocol changes. Based on the current protocol spec, the actual resolution is determined between the StartService and StartServiceACK for the video service. The provided information in the VIDEO_STREAMING_CAPABILITY gives the app information on what to use as baseline values to start the video service. Changing these values mid-stream will result in additional needs for altering the stream that are not provided in this proposal.
@joeljfischer
Changes will need to occur in the video lifecycle managers to support updating values and passing those down to the app
Together with @dboltovskyi we spent some time developing and testing a POC for the scenario where VideoStreamingCapabilities could be changed before streaming. During development, no additional lifecycle changes were required, though another listener and a subscription were introduced in order to handle the incoming OnSystemCapabilityUpdated notification and pass the new data to VideoStreamManager. I believe the code impact related to updating streaming capabilities in the middle of the stream should be minor, but the POC for that case is still in progress.
Additional public APIs may be needed to support this case (telling the developer to update their view based on new streaming values). Please investigate this and update the proposal accordingly.
In terms of this proposal, no public API available to mobile developers should be changed. We expect changes only in the audio/video stream managers, and the lifecycle is not supposed to change.
@atiwari9 -san
@Sohei-Suzuki-Nexty
@joeygrover
By default, the proposed isResolutionSwitchingSupported flag is going to be set to true automatically by the library. However, the app developer would still be able to override the default value. If this flag is set to false, the HMI may not allow collapsing of the app's window.
We're going to provide more details in a revision of the proposal, based on updates in the mobile library, once the PoC is ready. Some results have already been mentioned in https://github.com/smartdevicelink/sdl_evolution/issues/983#issuecomment-610072231
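For illustration, a minimal Java sketch of what the developer override might look like on the app side. The setter is hypothetical (the flag is only proposed here); context and MyPresentation stand for the app's Context and its SdlRemoteDisplay subclass; the rest uses the existing VideoStreamManager API:

```java
// Hypothetical setter: isResolutionSwitchingSupported is proposed in this thread,
// not part of the current java_suite API. The library would default it to true.
VideoStreamingParameters parameters = new VideoStreamingParameters();
parameters.setResolutionSwitchingSupported(false); // opt out of dynamic switching

sdlManager.getVideoStreamManager().start(new CompletionListener() {
    @Override
    public void onComplete(boolean success) {
        if (success) {
            // Start projection with the app-provided streaming parameters.
            sdlManager.getVideoStreamManager().startRemoteDisplayStream(
                    context, MyPresentation.class, parameters, false /* encrypted */);
        }
    }
});
```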
Having a StartServiceACK response with updated values is another approach which could be used instead of the OnSystemCapabilityUpdated notification. We're going to mention it as an alternative in the revision of the proposal.
The existing height and width response parameters may need to be extended with diagonalScreenSize, pixelPerInch and scale.
Also, some analysis is required to check whether this solution covers the case of changing capabilities in the middle of the streaming process.
2. I would like to see this section still fleshed out more. While I know each OEM can and will have different UX requirements, it is very helpful to all OEMs when there is at least a reference that can be used. For this piece, I would really like to see what the HMI will do between these different cases. Mobile dev concerns can be moved to 3.
3. Ok, yes I would like to see how this is handled and will reserve additional comments until this is included.
5. I don't think I explained the situation well enough. The solution in the proposal is ok, but there are protocol-level changes that need to happen too. The actual resolution of the stream is negotiated in the StartService and StartServiceAck packets for the video service. So if the resolution changed, the HMI sends the OnSystemCapabilityUpdated with the new resolution, the library will have to inform the VideoStreamingManager, which will have to send a new StartService with the potential new resolution supplied in the OnSystemCapabilityUpdated, and the IVI system will have to respond with a StartServiceAck that confirms this new resolution of the stream. Then the video streaming manager will have to update the supplied display/codecs/etc. Therefore this is not an alternative, but an extension that should be required as well.
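For illustration, a rough Java-side sketch of that sequence (the helper names are placeholders, not existing java_suite API; the two-argument VideoStreamingParameters.update call mirrors the POC code shown later in this thread):

```java
// Rough sketch of the renegotiation sequence described above; helper names are
// placeholders, not existing library methods.
void onVideoCapabilityChanged(VideoStreamingCapability newCapability) {
    VideoStreamingParameters params = new VideoStreamingParameters();
    params.update(newCapability, vehicleMake); // fold the new values into streaming params

    stopCurrentVideoService();         // 1. tear down the running stream
    sendVideoStartService(params);     // 2. protocol-level StartService with the new resolution
    // 3. only the width/height/codec confirmed in StartServiceAck are considered valid
    // 4. recreate the virtual display / encoder with the ACKed values and resume streaming
}
```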
@atiwari9 -san
@Shohei-Kawano
App developers would in theory need to support all videoStreamingCapabilities combinations even without this proposal, since an OEM can pick any resolution. But that's not really the case in practice, is it? In reality the OEM and app developer perform integration testing and ensure that all the use cases work fine. That would be the case with dynamically switching videoStreamingCapabilities as well; most likely an OEM would have two different resolutions to switch between. But we can't limit what videoStreamingCapabilities an OEM can or can't use. It would be up to the OEM and app developers to do the integration testing for the supported videoStreamingCapabilities, which of course is a finite number.

4. @atiwari9 I don't think that your logic works out. Even if app developers did have to support "infinite" capability combinations (though in reality they're mostly various versions of 16:9 or close to it), now you're adding exponentially onto the infinite amount. Stating that it's up to the app developer to do their testing for a given OEM ignores the fact that if you are adding "smaller" windows, the aspect ratios of those windows are going to be all over the place. Some may be portrait, some may be square, some may be landscape but much smaller.
The fact that you're greatly adding onto the burden of app developers is the real issue with this point. App developers go from having their aspect ratio be "landscape, around 16:9," to having their views required to be able to stream in many additional aspect ratios and screen sizes (1 inch square, 3 inch high 1 inch wide, etc.).
I don't think that hand-waving away that fact by "app developers will have to do integration testing" is enough to account for the large increase in the burden on developers you're adding with this point. It at least needs to be addressed in the downsides that you're adding an exponential burden onto navigation app developers.
A quorum was not present during the 2020-04-07 Steering Committee Meeting, so this proposal will remain in review until our meeting on 2020-04-14.
@joeljfischer
Ok, so what is the argument here: that app developers might need to support both portrait and landscape modes? Or is it that the window size would be smaller, etc.? This proposal just aims to add the capability for the HMI to switch from one resolution to another upon some trigger (which the HMI decides). Now for a practical use case, it would be:
And of course the HMI could have a regular screen layout in either portrait or landscape mode. We do not tell the HMI that it can only have a landscape or portrait screen; in reality it is up to the HMI to decide what layout is best. So in this case, if OEM1 wants portrait and OEM2 wants landscape, the app partner would still need to support both? Moreover, since apps primarily run on phones, which support split screen/PIP/portrait/landscape modes all on one device, I do not see that the basic use case of switching resolution is that hard for an app to adapt to.
The only valid point seems to be super small resolutions which apps might not be able to adapt to due to the variety of content and interactions etc. (if the app does not support PIP or a floating window on the phone anyway). In that case you may propose to increase minValue for the resolution params to a certain plausible value, which we can discuss.
Refer to this video for split screen on Android.
@atiwari9 -san
@Sohei-Suzuki-Nexty
4. @atiwari9 I believe he is simply saying that since the playback window is intended to be smaller, why not just have the IVI system take the higher-resolution stream from the application and convert it for the smaller window resolution.
5. Also still looking for a response here.
@joeljfischer @joeygrover @Sohei-Suzuki-Nexty
4. Please let me put more details here. Let's imagine we have a projection mobile application which streams data to the IVI system. By some action on the IVI system this application is switched from Full to Minimized mode.
We may have the following examples:
In example 1) the whole video content is resized. The video frame as well as the UI elements (buttons and labels) are resized proportionally by the same scale (e.g. 2). Currently this can be implemented completely in the IVI system.
In example 2) the video frame is resized, however the size (in inches) of UI elements remains pretty much the same as in Full screen mode. Retaining the size of UI elements was the main goal of the 0179-pixel-density-and-scale proposal. However, this cannot be achieved without updates on the Mobile side.
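As a rough numeric illustration (the values are made up; scale is the parameter from proposal 0179): the app renders into a logical surface of resolution / scale, so the head unit can use scale to control how much of the smaller window the UI elements occupy.

```java
// Illustration only: relationship between the reported resolution, the scale
// parameter, and the logical surface the app renders into (arbitrary values).
int minimizedWidthPx = 640;   // resolutionWidth reported for the minimized window
int minimizedHeightPx = 384;  // resolutionHeight reported for the minimized window
float scale = 2.0f;           // scale from VideoStreamingCapability (proposal 0179)

int logicalWidth = (int) (minimizedWidthPx / scale);   // 320 logical units
int logicalHeight = (int) (minimizedHeightPx / scale); // 192 logical units
// With scale = 1.0 the app would render at 640x384 and its buttons would shrink
// together with the window (example 1); with scale = 2.0 it renders at 320x192,
// so buttons occupy a larger share of the minimized window and stay closer to
// their full-screen physical size (example 2).
```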
The VideoStreamManager constructor was modified and an additional subscription was added, like this:
this.internalInterface.addOnSystemCapabilityListener(SystemCapabilityType.VIDEO_STREAMING, new OnSystemCapabilityListener() {
    @Override
    public void onCapabilityRetrieved(Object capability) {
        VideoStreamingParameters params = new VideoStreamingParameters();
        params.update((VideoStreamingCapability) capability, vehicleMake); // Streaming parameters are ready, time to stream
        VideoStreamManager.this.parameters = params;
        virtualDisplayEncoder.setNewParams(params); // here setting new params
        if (isStreaming()) {
            stopStreaming();
        }
        resumeStreaming();
    }

    @Override
    public void onError(String info) {
    }
});
So we are subscribed to `ON_SYSTEM_CAPABILITY_UPDATED`, and what goes into `stopStreaming()` and `resumeStreaming()` is:
for `stopStreaming()`:
if (remoteDisplay != null) {
    remoteDisplay.stop();
}
if (virtualDisplayEncoder != null) {
    virtualDisplayEncoder.shutDown();
}
stateMachine.transitionToState(StreamingStateMachine.STOPPED);
and for `resumeStreaming()`:
if (stateMachine.getState() != StreamingStateMachine.STOPPED) {
    return;
}
startEncoder();
where `startEncoder()` restarts `SdlRemoteDisplay`.
The only problem here is the restart: during this process `SdlRemoteDisplay` is recreated, so the state of the View would be lost. Right now we are working on handling this scenario, trying to implement new `Surface` creation with new parameters for `SdlRemoteDisplay` and just resizing `SdlRemoteDisplay`.
@joeygrover
Currently we do not expect changes on the protocol level.
We also do not expect new StartService / StartServiceAck messages from Mobile and SDL.
Please take a look at https://github.com/smartdevicelink/sdl_evolution/issues/983#issuecomment-613582128
We assume that currently, when the mobile library receives StartServiceAck from SDL, it performs some validation of the parameters. The same validation logic may be applied when the mobile library receives OnSystemCapabilityUpdated with new video capabilities.
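For illustration, a minimal Java sketch of the kind of validation meant here. The helper itself is hypothetical; only the VideoStreamingCapability and ImageResolution getters are existing API:

```java
// Hypothetical helper: sanity-check the capability received in
// OnSystemCapabilityUpdated before restarting the stream, mirroring the
// checks applied to StartServiceAck parameters.
private boolean isCapabilityUsable(VideoStreamingCapability capability) {
    if (capability == null || capability.getPreferredResolution() == null) {
        return false;
    }
    ImageResolution resolution = capability.getPreferredResolution();
    return resolution.getResolutionWidth() != null
            && resolution.getResolutionHeight() != null
            && resolution.getResolutionWidth() > 0
            && resolution.getResolutionHeight() > 0;
}
```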
The Steering Committee voted to keep this proposal in review, so SDLC Members and the Project Maintainer can respond to the latest comments from the author.
The PM worked to summarize our feedback to the open items below. Thanks!
2. When you show this flow in the revised proposal, please make sure the default for isResolutionSwitchingSupported is FALSE.
3. This solution doesn't allow the developer to make changes to their UI size. Again, for the solution this proposal is trying to reach, Public API changes will be needed. The current code solution would actually result in the same outcome as the IVI simply resizing the stream.
4. The options presented display a poor UX for option 2 that should be clear. Seeing a navigation app being pushed to a minimized view with having its buttons overtake most of the display area would be a very awful UX. The user wouldn't be able to see the map, only buttons. Whereas, simply resizing the video stream is a much less complicated solution and makes using a navigation app in smaller window more viable. Instead of recreating these UI elements to fit in an infinite set of resolutions, it makes more sense to resize the given stream, and when a user presses on the window, bring the app into HMI_FULL and display the app in the normal resolution.
5. If the solution is accepted with changing resolutions on the mobile side, protocol changes are needed. The stream params must be renegotiated. The resolution can't simply be changed without notification. Please update the solution to reflect this requirement.
Per the author's request, the Steering Committee voted to defer this proposal. This will allow the author time to investigate a new solution that resolves the issues identified by the Project Maintainer in this comment. Once the author has identified a new solution, the proposal can be brought back into review to discuss the proposed revisions.
The author has advised that they have investigated alternative solutions and are ready to bring this proposal back into review to discuss those solutions and their proposed revisions. This proposal will now be in review until 2020-05-12.
We would appreciate it if the Steering Committee could review this comment and provide feedback.
We would like to stress that the main goal of the proposal is to cover the following use cases related to dynamic resolution switching:
The picture shown previously in case 2 (big buttons) is just an example for scale=2. If scale=1.25 (or 1.5), UI elements won't look so large and they will be more usable compared to scale=1.
The HMI may follow the behavior of case 1 (small buttons) if the mobile application does not support dynamic resolution switching.
In order to support case 2 we propose the following new solution.
Part 1) Negotiation of video streaming capabilities (VSCs) between HMI and Mobile application
a) Introduce a new optional availableVideoStreamingCapabilities array parameter under the SystemCapability structure. It would be populated by the HMI within the UI.GetCapabilities response and would include all available video streaming capabilities the HMI can handle.
b) The mobile application would be able to obtain these available VSCs through a GetSystemCapability request/response of VIDEO_STREAMING type. Within this RPC the mobile application would also be able to subscribe to VSC updates (a usage sketch is shown at the end of Part 1 below).
c) Introduce a new SelectDisplayCapability RPC:
+ <function name="SelectDisplayCapability" functionID="SelectDisplayCapabilityID" messagetype="request" since="6.x">
+ <description>Provides information about supported display capabilities</description>
+ <param name="supportedVideoStreamingCapabilities" type="VideoStreamingCapability" array="true" minsize="1" maxsize="100" mandatory="true">
+ <description>List of supported video streaming capabilities</description>
+ </param>
+ </function>
Through this RPC the mobile application would be able to provide the list of supported VSCs to the HMI, and the HMI would adapt its internal behavior accordingly.
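For reference, a minimal sketch of the app side of step b) in the Java Suite library, using the existing SystemCapabilityManager subscription API; the availableVideoStreamingCapabilities array itself is only proposed here, so its getter does not exist yet:

```java
// Subscribe to VIDEO_STREAMING capability updates on the app side; the listener
// fires whenever the HMI sends OnSystemCapabilityUpdated for this type.
sdlManager.getSystemCapabilityManager().addOnSystemCapabilityListener(
        SystemCapabilityType.VIDEO_STREAMING, new OnSystemCapabilityListener() {
            @Override
            public void onCapabilityRetrieved(Object capability) {
                VideoStreamingCapability vsc = (VideoStreamingCapability) capability;
                // availableVideoStreamingCapabilities is the array proposed in a);
                // a corresponding getter would be generated once the RPC spec changes.
            }

            @Override
            public void onError(String info) {
                // Capability of this type is not available on this head unit.
            }
        });
```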
Part 2) Resolution Switching
In order to notify the mobile application about the current VSC, the HMI may use the existing OnSystemCapabilityUpdated notification of VIDEO_STREAMING type with the existing videoStreamingCapability parameter populated.
The mobile application would receive such updates if it has previously subscribed to them.
Resolution switching would be done according to existing stop streaming and start streaming flows:
Hi @dboltovskyi, thank you for providing this new solution. The Project Maintainer has reviewed and provided feedback below:
3. The proposal still needs to address the original concern about developers changing the UI size and Public API changes needed (including manager changes).
6. The SelectDisplayCapability RPC should be renamed because it's not a display capability and you cannot select it. This new RPC should also be reflected in the Resolution Switching diagram.
7. We need to handle the case for apps that won't support this feature. Can you please provide information on how this would work?
8. We should not add two parameters under the SystemCapability structure for the same capability type. This will cause large issues in the library manager layer. Either this should be added as a nested array in the VideoStreamingCapabilities struct itself, or a new SystemCapabilityType should be defined.
9. We would like the proposal to include that the following should be added to the HMI Integration Guidelines:
@theresalech Thank you for the review.
3. We have a completely working PoC for Android, and details about the changes in the mobile library are going to be provided in a separate comment.
6. Agreed, the current SelectDisplayCapability is not the best name. We have to find a more suitable one (e.g. SupportedCapabilities).
I would propose to show the VSC negotiation on a separate diagram, since the Negotiation and Resolution Switching sequences do not strictly depend on each other.
7. If the app doesn't support this feature, it won't send the SelectDisplayCapability request with supported VSCs. The HMI may handle this app in 2 ways:
8. The new array parameter describes all available VSCs supported by the HMI, whereas the existing non-array parameter relates to the current VSC selected by the HMI.
9. Scaling (increasing) of UI elements for a smaller app window is the main idea of this proposal. It is assumed that touchable UI elements (buttons) would work in this case as well. Available and supported VSCs would be aligned during the negotiation procedure, and it's up to the HMI to support (or not) touch events for a particular resolution.
@theresalech 3. Attaching the most important code changes from the POC with a detailed description of why each change is needed and how it works.
In the SDLManager class:
@Override
public void stopVideoService(boolean withPendingRestart) {
if(proxy.getIsConnected()){
proxy.endVideoStream();
}
if (withPendingRestart && videoStreamManager != null && proxy.getIsConnected()){
proxy.startVideoService(videoStreamManager.getLastCachedIsEncrypted(),
videoStreamManager.getLastCachedStreamingParameters());
}
}
As you can see, here we've extended startVideoService with VideoStreamingParams to give the actual values for streaming, and introduced a new boolean in stopVideoService so that the caller can indicate whether a pending restart is planned.
In the SDLRemoteDisplay class:
public void resizeView(final int newWidth, final int newHeight) {
uiHandler.post(new Runnable() {
@Override
public void run() {
try {
Constructor<? extends ViewGroup.LayoutParams> ctor =
mainView.getLayoutParams().getClass().getDeclaredConstructor(int.class, int.class);
mainView.setLayoutParams(ctor.newInstance(newWidth, newHeight));
mainView.requestLayout();
invalidate();
} catch (Exception e) {
e.printStackTrace();
}
}
});
}
SDLRemoteDisplay (the class intended for extension by third-party developers) is extended with a resizeView method in order to resize only the layout, keeping all UI controls (Buttons, TextViews) the same size. In this way the goal of our type of scaling is achieved.
In the VirtualDisplayEncoder class:
public void pauseEncoding() {
    try {
        if (encoderThread != null) {
            encoderThread.interrupt();
            encoderThread = null;
        }
        if (mVideoEncoder != null) {
            mVideoEncoder.stop();
            mVideoEncoder.release();
            mVideoEncoder = null;
        }
        if (inputSurface != null) {
            inputSurface.release();
            inputSurface = null;
        }
    } catch (Exception ex) {
        // The original snippet was truncated here; closing the try block with a
        // simple log, matching the existing error handling in this class.
        ex.printStackTrace();
    }
}
This method is quite similar to the existing stop method, with one huge difference: it does not release the VirtualDisplay. If we ever do that, according to the [Android documentation](https://developer.android.com/reference/android/hardware/display/VirtualDisplay#release()), everything else will be destroyed (surface, remoteDisplay), which would produce a loss of state, because RemoteDisplay once disposed has to be re-instantiated.
Another change in VirtualDisplayEncoder:
if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.LOLLIPOP && virtualDisplay != null) {
virtualDisplay.setSurface(null);
virtualDisplay.setSurface(inputSurface);
}
else {
virtualDisplay = mDisplayManager.createVirtualDisplay(TAG,
predefined_width, predefined_height,
streamingParams.getDisplayDensity(), inputSurface, DisplayManager.VIRTUAL_DISPLAY_FLAG_PRESENTATION);
}
In the encoder we check whether the current VirtualDisplay is null. If it is, we create one; otherwise we reattach the surface that was released by pauseEncoding.
And in VideoStreamingManager:
this.internalInterface.addOnSystemCapabilityListener(SystemCapabilityType.VIDEO_STREAMING, new OnSystemCapabilityListener() {
@Override
public void onCapabilityRetrieved(Object capability) {
VideoStreamingParameters params = new VideoStreamingParameters();
params.update((VideoStreamingCapability)capability, vehicleMake); //Streaming parameters are ready time to stream
VideoStreamManager.this.parameters = params;
virtualDisplayEncoder.setStreamingParams(params);
restartStreaming();
}
@Override
public void onError(String info) {
}
});
public void restartStreaming(){
if (this.isStreaming()) {
if(virtualDisplayEncoder!=null){
virtualDisplayEncoder.pauseEncoding();
}
stateMachine.transitionToState(StreamingStateMachine.STOPPED);
this.internalInterface.stopVideoService(true);
}
}
In VideoStreamingManager we have a subscription to the VideoStreaming capability in order to trigger the restart process using the flow above. restartStreaming disposes the current encoder and surface and interrupts the encoder thread to stop frame sending. Afterwards it signals internalInterface, passing withPendingRestart = true in order to restart the service.
Hi @dboltovskyi and @kostyaBoss, please find responses from the PM below:
3. Thank you. Can you please provide the changes required for iOS? We will need to be able to review both implementations simultaneously to ensure alignment among the libraries.
6. We would like to recommend a generalized way for the mobile device to notify Core about its capabilities, similar to how Core notifies an app about its System Capabilities. When an app wants to notify Core about its capabilities, the app should send Core a notification, OnAppCapabilitiesUpdated. This would be used in place of the proposed RPC SelectDisplayCapability. Note it is assumed that Core is automatically subscribed to this notification.
<function name="OnAppCapabilityUpdated" functionID="OnAppCapabilityUpdatedID" messagetype="notification" since="x.x">
<description>A notification to inform SDL Core that a specific app capability has changed.</description>
<param name="appCapability" type="AppCapability" mandatory="true">
<description>The app capability that has been updated</description>
</param>
</function>
<struct name="AppCapability" since="x.x">
<param name="capabilityType" type="AppCapabilityType" mandatory="true">
<description>Used as a descriptor of what data to expect in this struct. The corresponding param to this enum should be included and the only other param included.</description>
</param>
<param name="videoStreamingCapability" type="AppVideoStreamingCapability" mandatory="false">
<description>Describes supported capabilities for video streaming </description>
</param>
</struct>
<enum name="AppCapabilityType" since="x.x">
<description>Enumerations of all available app capability types</description>
<element name="VIDEO_STREAMING"/>
</enum>
<struct name="AppVideoStreamingCapability" since="x.x">
<description>Contains information about this app's video streaming capabilities.</description>
<param name="supportedResolutions" type="ImageResolution" mandatory="false" array="true">
<description>Supported streaming resolutions</description>
</param>
</struct>
7. Sounds good, thanks.
8. We understand what the proposal is stating and how you see these parameters working. However, we see issues with this implementation, as noted in our comment previously. Please advise how you can address these issues (either this should be added as a nested array in the VideoStreamingCapabilities struct itself or a new SystemCapabilityType be defined.).
9. We were under the impression that the purpose of this proposal was to resize the video streaming window, not to scale the UI elements. If the purpose is to scale UI, the motivation section should be updated to reflect this. We can agree to allowing some touches on UI elements of the screen if touch events are offset and there's a minimum window size required for all integrations. We are strongly against any touch events being passed through to the app in PIP (1 in the comment above). For split screen (2), we can agree to some limited pass through of the touch events to the app, given a few restrictions. (2) changes to this:
If the change is a split-screen type change (cannot be handled by scaling the original video), then go through this flow. If the app does not support your custom split-resolution, either don't allow the user to put the app in split screen, or scale the video stream to a size that fits the window and use "black bars" to fill the rest of the window. Touches may be passed to the app in certain limited cases. (1) The window must have a width of at least 3" and a height of at least 3", (2) The touches must be offset for the developer so that 0,0 continues to be in the top-left corner of the app window. If these requirements are not met, either a system menu / buttons should be displayed when selected, or the selection should bring the user immediately back to the full-screen app.
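To illustrate requirement (2), here is a minimal numeric sketch of the offset involved; the window-origin values are hypothetical and would come from the OEM's HMI layout:

```java
// Illustration of the touch-offset requirement: coordinates delivered to the
// app must be relative to the app window, with (0,0) at its top-left corner.
int windowOriginX = 640;  // hypothetical: left edge of the app window on the display
int windowOriginY = 0;    // hypothetical: top edge of the app window on the display

int rawTouchX = 700;      // touch position on the physical display
int rawTouchY = 120;

int appTouchX = rawTouchX - windowOriginX;  // 60, as delivered to the app
int appTouchY = rawTouchY - windowOriginY;  // 120, as delivered to the app
```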
8. In order to resolve this point we propose to introduce a new VIDEO_STREAMING_OPTIONS system capability type:
<enum name="SystemCapabilityType">
...
+ <element name="VIDEO_STREAMING_OPTIONS"/>
</enum>
And add a new videoStreamingOptions array parameter to the SystemCapability struct:
<struct name="SystemCapability">
...
+ <param name="videoStreamingOptions" type="VideoStreamingCapability" mandatory="false" array="true">
</struct>
The HMI would be able to pass all supported VSCs within this new parameter.
The mobile application would then be able to obtain all available VSCs through a GetSystemCapability request/response with systemCapabilityType=VIDEO_STREAMING_OPTIONS.
The name VIDEO_STREAMING_OPTIONS is just an example and may be changed to a more suitable one.
9. We agree that the motivation section needs to be updated to reflect the possibility of scaling the UI.
We may add a specific section with guidelines for the HMI on how it should create the projection areas and how touch events need to be handled. However, we believe that is not in the scope of this proposal; it has to be a broader topic for discussion. The current proposal relates to VSC changes and to how the mobile application supports the VSC provided by the head unit. The way the head unit uses that VSC is up to the OEM.
6. We like the PM's suggestion and would like to clarify a few items:
1) We assume Core would have to transfer the OnAppCapabilityUpdated notification to the HMI.
2) The HMI would need to somehow match its available system VSCs against the app's supported VSCs.
In order to simplify this process we would propose to use the existing SystemCapabilityType enum and VideoStreamingCapability struct in the new notification:
<function name="OnAppCapabilityUpdated" functionID="OnAppCapabilityUpdatedID" messagetype="notification" since="x.x">
<param name="appCapability" type="AppCapability" mandatory="true" />
</function>
<struct name="AppCapability" since="x.x">
<param name="appCapabilityType" type="SystemCapabilityType" mandatory="true" />
<param name="videoStreamingOptions" type="VideoStreamingCapability" mandatory="false" array="true" />
</struct>
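For illustration, a short Java sketch of how an app might send this notification once RPC classes are generated from the XML above; OnAppCapabilityUpdated, AppCapability and the videoStreamingOptions setter are proposed here and do not exist in the library yet, while ImageResolution and VideoStreamingCapability are existing structs:

```java
// Hypothetical generated classes based on the XML above.
VideoStreamingCapability fullSize = new VideoStreamingCapability();
fullSize.setPreferredResolution(new ImageResolution(1280, 768));

VideoStreamingCapability splitSize = new VideoStreamingCapability();
splitSize.setPreferredResolution(new ImageResolution(640, 768));

AppCapability appCapability = new AppCapability(SystemCapabilityType.VIDEO_STREAMING);
appCapability.setVideoStreamingOptions(Arrays.asList(fullSize, splitSize));

// The app notifies Core about the VSCs it supports; Core forwards this to the HMI.
sdlManager.sendRPC(new OnAppCapabilityUpdated(appCapability));
```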
During the 2020-05-12 meeting, the author noted that they will be posting information regarding the iOS implementation (item 3) once available, they agree with the PM’s stance on item 6, and they have shared their responses to items 8 and 9 on the review issue. The PM asked that responses to items 3 and 6 also be posted to the review issue for greater Steering Committee visibility. The Steering Committee voted to keep this proposal in review to allow the author and PM more time to discuss the open items on the review issue.
@theresalech Please note we've posted a response for item 6 with some suggestions; hence items 6, 8 and 9 are awaiting the PM's review.
Hi @dboltovskyi, thank you! Once item 3 has been answered by the Luxoft team, we will respond to all open items collectively.
@theresalech Thank you for update.
The Steering Committee voted to keep this proposal in review, to allow time for the author to comment with changes required for iOS implementation (open item 3), and for the PM and SDLC members to review and respond.
Hello Steering Committee members, let me introduce the iOS changes (following point 3).
Objective-C classes and methods to update:
In the following method the client app view gets resized to self.streamManager.videoScaleManager.appViewportFrame
- (void)sdl_applyDisplayDimensionsToViewController:(UIViewController *)viewController {
const CGRect appFrame = self.streamManager.videoScaleManager.appViewportFrame;
...<app state and size validation logic>...
viewController.view.frame = appFrame;
viewController.view.bounds = appFrame;
SDLLogD(@"Setting CarWindow frame to: %@", NSStringFromCGSize(appFrame.size));
}
Take for instance the following capabilities JSON with the following parameters:
{
  "preferredResolution": {
    "resolutionWidth": 800,
    "resolutionHeight": 380
  },
  "scale": 2,
  …<protocol, codec, etc. are dropped>…
}
Display size: 800x380; scale factor: 2.0; view size: (800x380) / 2.0 ==> 400x190
The method syncFrame generates screenshots 15 times a second (this value is adjustable):
- (void)syncFrame {
//...<validate state parameters logic>...
UIImage *screenshot = nil;
const CGRect bounds = self.streamManager.videoScaleManager.appViewportFrame;
UIGraphicsBeginImageContextWithOptions(bounds.size, YES, 1);
CGContextRef context = UIGraphicsGetCurrentContext();
switch (self.renderingType) {
case SDLCarWindowRenderingTypeLayer: {
[self.rootViewController.view.layer renderInContext:context];
} break;
case SDLCarWindowRenderingTypeViewAfterScreenUpdates: {
[self.rootViewController.view drawViewHierarchyInRect:bounds afterScreenUpdates:YES];
} break;
case SDLCarWindowRenderingTypeViewBeforeScreenUpdates: {
[self.rootViewController.view drawViewHierarchyInRect:bounds afterScreenUpdates:NO];
} break;
}
screenshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
CGImageRef imageRef = screenshot.CGImage;
CVPixelBufferRef pixelBuffer = imageRef ? [self.class sdl_createPixelBufferForImageRef:imageRef usingPool:self.streamManager.pixelBufferPool] : nil;
if (pixelBuffer != nil) {
const BOOL success = [self.streamManager sendVideoData:pixelBuffer];
if (!success) {
SDLLogE(@"Video frame will not be sent because the video frame encoding failed");
}
CVPixelBufferRelease(pixelBuffer);
} else {
SDLLogE(@"Video frame will not be sent because the pixelBuffer is nil");
}
}
- (void)resumeVideo {
self.shouldAutoResume = NO;
if (self.protocol == nil) {
SDLLogV(@"No session established with head unit. Cannot continue video.");
return;
}
if (self.isVideoSuspended) {
[self.videoStreamStateMachine transitionToState:SDLVideoStreamManagerStateReady];
} else {
[self sdl_startVideoSession];
}
}
- (void)suspendVideo {
if (!self.protocol) {
SDLLogV(@"No session established with head unit. Cannot suspend video.");
return;
}
[self.touchManager cancelPendingTouches];
if (self.isVideoConnected) {
[self.videoStreamStateMachine transitionToState:SDLVideoStreamManagerStateSuspended];
} else {
[self sdl_stopVideoSession];
}
}
Request Video Capabilities and subscribe for SDLSystemCapabilityTypeVideoStreaming notifications; the callback is -sdl_displayCapabilityDidUpdate:
- (void)didEnterStateVideoStreamStarting {
SDLLogD(@"Video stream starting");
__weak typeof(self) weakSelf = self;
[self sdl_requestVideoCapabilities:^(SDLVideoStreamingCapability * _Nullable capability) {
SDLLogD(@"Received video capability response, Capability: %@", capability);
[weakSelf sdl_applyVideoCapability:capability];
// Apply customEncoderSettings here. Note that value from HMI (such as maxBitrate) will be overwritten by custom settings.
for (id key in weakSelf.customEncoderSettings.keyEnumerator) {
weakSelf.videoEncoderSettings[key] = [weakSelf.customEncoderSettings valueForKey:key];
}
if (weakSelf.dataSource != nil) {
SDLLogV(@"Calling data source for modified preferred resolutions");
weakSelf.preferredResolutions = [weakSelf.dataSource resolutionFromHeadUnitPreferredResolution:weakSelf.preferredResolutions.firstObject];
SDLLogD(@"Got specialized video resolutions: %@", weakSelf.preferredResolutions);
}
[weakSelf.systemCapabilityManager subscribeToCapabilityType:SDLSystemCapabilityTypeVideoStreaming withObserver:weakSelf selector:@selector(sdl_displayCapabilityDidUpdate:)];
[weakSelf sdl_sendVideoStartService];
}];
}
Note here the newly introduced flag shouldAutoResume, which indicates that once suspended the video streaming must resume again.
- (void)sdl_displayCapabilityDidUpdate:(SDLSystemCapability *)systemCapability {
SDLVideoStreamingCapability *videoCapability = systemCapability.videoStreamingCapability;
SDLLogD(@"Video capabilities notification received: %@", videoCapability);
self.videoStreamingCapability = videoCapability;
self.shouldAutoResume = YES;
dispatch_async(dispatch_get_main_queue(), ^{
[self suspendVideo];
// videoStreamingCapability will be applied later when video resumes
});
}
If shouldAutoResume is set to YES (as in the previous method), then the following method auto-resumes video streaming:
- (void)didEnterStateVideoStreamSuspended {
SDLLogD(@"Video stream suspended");
[self sdl_disposeDisplayLink];
if (self.shouldAutoResume) {
self.shouldAutoResume = NO;
dispatch_async(dispatch_get_main_queue(), ^{
if (self.videoStreamingCapability) {
[self sdl_applyVideoCapability:self.videoStreamingCapability];
}
[self resumeVideo];
});
}
[[NSNotificationCenter defaultCenter] postNotificationName:SDLVideoStreamSuspendedNotification object:nil];
}
Known limitations and additional info. When streaming video the SDL library uses the following code:
switch (self.renderingType) {
    case SDLCarWindowRenderingTypeLayer: {
        [self.rootViewController.view.layer renderInContext:context];
    } break;
    case SDLCarWindowRenderingTypeViewAfterScreenUpdates: {
        [self.rootViewController.view drawViewHierarchyInRect:bounds afterScreenUpdates:YES];
    } break;
    case SDLCarWindowRenderingTypeViewBeforeScreenUpdates: {
        [self.rootViewController.view drawViewHierarchyInRect:bounds afterScreenUpdates:NO];
    } break;
}
Using this approach it is impossible to capture Metal-based layers (*1); such an attempt results in a black square instead of the expected image.
(*1) Metal-based layers include, but are not limited to, a video player layer, SCNView and the SceneKit framework, etc.
The Steering Committee voted to keep this proposal in review, to allow time for the PM to respond to the latest comments from the author.
Hi @dboltovskyi, @kostyaBoss, @yoooriii please see our responses to 3, 6, 8, and 9.
3. Please note that this is not a code review. This must be considered example code and will be required to go through a full code review when the PR is put to Github.
Android:
We can't change public methods without a breaking change (stopVideoService). It is also unnecessary; these changes could occur in the VideoStreamingManager itself. Simply use the service listener: once the service is stopped, if the flag is set in the manager to restart the service, it does so then.
StreamingStateMachine: a PAUSED state makes sense.
SdlRemoteDisplay: adding a new method as a callback to developers, like onResizeView, that the developers can override would be helpful. I would imagine it returns a boolean to state whether they wish for the resizing to continue or not.
iOS: There are many issues with the presented iOS code. I feel that there must have been a major miscommunication. The goal was to see the public API changes and a description of other changes that may need to take place.
a. You provide no public APIs. That is, given the iOS code, it is impossible for the developer to actually support a subset of the head unit's provided video streaming capabilities. This was the primary point of reviewing iOS code.
b. SDLCarWindow: The code you provide is effectively unchanged from the code in the library and was unnecessary to provide. If there are differences that I missed, please point them out more effectively.
c. SDLStreamingVideoLifecycleManager:
i. You provided new private methods but did not indicate how or why they would be used. The library already has ways to suspend and resume video streaming (e.g. if the app goes into the background on the phone). You have not explained why your new methods are necessary or why they exist.
ii.
Request Video Capabilities and subscribe for SDLSystemCapabilityTypeVideoStreaming notifications
This is already done, unless you mean your VIDEO_STREAMING_OPTIONS type, which we dispute is necessary (see above). It would be better to roll it all into the existing type rather than having to subscribe to two different types in this manager. That will become complicated.
iii.
Note here the newly introduced flag shouldAutoResume, which indicates that once suspended the video streaming must resume again.
What cases would this be false? We always want to autoResume, right?
iv.
- (void)sdl_displayCapabilityDidUpdate:(SDLSystemCapability *)systemCapability {
Why are you checking the videoCapabilities in the displayCapabilityDidUpdate method? That doesn't seem to be correct.
v.
- (void)didEnterStateVideoStreamSuspended
This code doesn't make sense. When the stream is suspended it should not be immediately resumed like this. Then the autoResume flag is turned to false, though it doesn't explain why. Suspension can happen for more reasons than because the video capabilities are changing.
vi.
Using this approach It is impossible to capture metal based layers (*1) such an attempt results in a black square instead of expected image.
This is entirely irrelevant to this proposal. If you want to change anything about this, open a new proposal.
6. Agree to the added points for the OnAppCapability flow, but it should follow the original naming proposed by the Project Maintainer. This would include the capability remaining VideoCapabilities as discussed in point 8.
We do not want to create a subset of SystemCapability enum values that only apply to app capabilities, or a subset of system capabilities. Otherwise we would simply continue to use all of the SystemCapability RPCs and structs.
8. I believe adding this as a param to the already existing capability will make the most sense. There should be no issue including an array of objects based on itself. It also ensures apps only have to handle one capability and all capabilities related to that feature are included in a single struct. This will help prevent cases of race conditions.
<struct name="VideoStreamingCapability" since="4.5">
<!- Existing params ->
<param name="additionalVideoStreamingCapabilities" type="VideoStreamingCapability" array="true" minvalue="1" maxvalue="99" mandatory="false" since="X.X">
</param>
</struct>
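As an illustration of how an app could consume the nested array, a short Java sketch; getAdditionalVideoStreamingCapabilities is assumed to be the getter generated from the param above, while the rest is existing API:

```java
// Pick out the head unit's additional streaming options this app can render.
List<VideoStreamingCapability> additional =
        receivedCapability.getAdditionalVideoStreamingCapabilities(); // proposed getter
List<VideoStreamingCapability> usableByApp = new ArrayList<>();
if (additional != null) {
    for (VideoStreamingCapability option : additional) {
        ImageResolution res = option.getPreferredResolution();
        // Example filter: this app only renders landscape windows.
        if (res != null && res.getResolutionWidth() >= res.getResolutionHeight()) {
            usableByApp.add(option);
        }
    }
}
```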
9. We disagree, we feel that guidelines on how a feature should be used are in-scope to a proposal. If guidelines are not provided, OEMs could create conflicting implementations that app developers will have to handle themselves.
Hi @joeljfischer , @theresalech! I've created a fork with a specific branch where the code of the POC for this proposal can be found. Using this link, you can check all the changes for the Android platform; I believe this will help us avoid miscommunication and improve efficiency. There are still things to do (both technical cleanup and additional logic implementation), but the general implementation of the feature has been introduced. Please let me know if there are additional concerns.
Dear Steering Committee members I have just created a POC. In this POC I implemented the proposed functionality for iOS platform. It was tested on core 6.1.0. Please take a look at the changes proposed. In this POC I updated both the SDL library itself and 'SmartDeviceLink-Example-ObjC' project to use and test the proposed functionality. (Please note the SWIFT example project did not change and may or may not work).
The POC is available at: yoooriii/sdl_ios/sdl0296_upd_video_streamcap
The proposed change does not alter the existing public API, though it adds a few new methods to it.
b). SDLCarWindow
- (void)updateVdeoStreamingCapability:(SDLVideoStreamingCapability *)videoStreamingCapability {
[self sdl_applyDisplayDimensionsToViewController:self.rootViewController];
}
c). SDLStreamingVideoLifecycleManager
In the method didEnterStateVideoStreamStarting it subscribes for notifications from SDLSystemCapabilityManager instead of sending a capability request.
- (void)didEnterStateVideoStreamStarting {
[self.systemCapabilityManager unsubscribeFromCapabilityType:SDLSystemCapabilityTypeVideoStreaming withObserver:self];
[self.systemCapabilityManager subscribeToCapabilityType:SDLSystemCapabilityTypeVideoStreaming withObserver:self selector:@selector(sdl_displayCapabilityDidUpdate:)];
}
i). In the new implementation, when the video capability changes, the SDL library closes the current video session and opens a new one; when reconnecting, the SDL library renegotiates the video parameters with the head unit.
ii). Please see the above method didEnterStateVideoStreamStarting; it subscribes for capability changes.
iii). The newly introduced flag shouldAutoResume is set to NO by default and becomes YES when new video capabilities arrive while the SDL library is in the Ready state (SDLVideoStreamManagerStateReady). Maybe the name shouldAutoResume is ambiguous in this context; in that case I am ready to rename it.
iv). SDLStreamingVideoLifecycleManager subscribes for notifications from SDLSystemCapabilityManager instead of sending a capability request. As soon as either the current or newly updated capabilities arrive, the method sdl_displayCapabilityDidUpdate gets called.
v). I agree it was a misconception on my part. Video streaming does not suspend but disconnects first and then reconnects again. The video streaming state machine goes through the following states: SDLVideoStreamManagerStateReady --> SDLVideoStreamManagerStateShuttingDown --> SDLVideoStreamManagerStateStopped, and after that video streaming restarts: SDLVideoStreamManagerStateStopped --> SDLVideoStreamManagerStateStarting --> SDLVideoStreamManagerStateReady. The state SDLVideoStreamManagerStateSuspended is not used in this case.
vi). Yes, sure, it is not relevant, though since the client app can be of any kind I thought it was worth mentioning. For instance, a video player will not work out of the box, in comparison with the Android counterpart.
Hello SDL community,
The review of the revised proposal "SDL 0296 - Possibility to update video streaming capabilities during ignition cycle" begins now and runs through June 30, 2020. The original review of this proposal took place March 24 - June 16, 2020. The proposal is available here:
https://github.com/smartdevicelink/sdl_evolution/blob/master/proposals/0296-Update-video-streaming-capabilities-during-ignition-cycle.md
Reviews are an important part of the SDL evolution process. All reviews should be sent to the associated Github issue at:
https://github.com/smartdevicelink/sdl_evolution/issues/983
What goes into a review?
The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of SDL. When writing your review, here are some questions you might want to answer in your review:
More information about the SDL evolution process is available at
https://github.com/smartdevicelink/sdl_evolution/blob/master/process.md
Thank you, Theresa Lech
Program Manager - Livio theresa@livio.io