smartdevicelink / sdl_evolution

Tracking and proposing changes to SDL's public APIs.
https://smartdevicelink.github.io/sdl_evolution/

[Rejected] SDL 0091 SDLCarWindow Video Projection Developer Interface #277

Closed. theresalech closed this issue 6 years ago.

theresalech commented 7 years ago

Hello SDL community,

The review of the revised proposal "SDL 0091 - SDLCarWindow Video Projection Developer Interface" begins now and runs through October 17, 2017. Reviews of previous iterations of "SDLCarWindow Video Projection Developer Interface" took place September 27 - October 3, 2017 and September 6 - September 12, 2017. The proposal is available here:

https://github.com/smartdevicelink/sdl_evolution/blob/master/proposals/0091-SDLScreen-SDLWindow-Projection.md

Reviews are an important part of the SDL evolution process. All reviews should be sent to the associated Github issue at:

https://github.com/smartdevicelink/sdl_evolution/issues/277

What goes into a review?

The goal of the review process is to improve the proposal under review through constructive criticism and, eventually, determine the direction of SDL. When writing your review, here are some questions you might want to answer:

More information about the SDL evolution process is available at

https://github.com/smartdevicelink/sdl_evolution/blob/master/process.md

Thank you, Theresa Lech

Program Manager - Livio theresa@livio.io

joeljfischer commented 7 years ago

The app developer uses the carWindow property on the streaming media manager to obtain the SDLCarWindow instance. Then to begin capture, encoding and streaming of view hierarchy video frames, the developer assigns the root view controller to be projected to the streamingViewController property on the car window instance.
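For illustration, the flow described in that passage might look roughly like this in app code (a minimal sketch; carWindow and streamingViewController are the proposal's names, not shipped API):

    // Minimal sketch of the proposed usage; carWindow and streamingViewController
    // are the names used in the proposal, not the shipped SDL iOS API.
    SDLCarWindow *carWindow = self.sdlManager.streamManager.carWindow;

    // Assign the view controller whose view hierarchy should be captured,
    // encoded, and streamed to the head unit.
    carWindow.streamingViewController = self.projectionRootViewController;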

Would this be better as a change to the SDLStreamingMediaConfiguration? Or are they expected to be able to set the streamingViewController multiple times?

SDLInterfaceManager

As I noted in my review for this component, I don't think this name is descriptive for what it does. A better name might be SDLHapticItemLocator.

If the head unit communicates that it handles focus and selection, the SDLInterfaceManager will use the BOOL appHandlesTouches property on the SDLStreamingMediaConfiguration class to determine whether to iterate over the views contained in the SDLCarWindow streamingViewController property to determine and send the haptic spatial configuration to the head unit.

I also don't think appHandlesTouches is a good name, as this property doesn't seem to indicate whether or not the app handles touches, it indicates whether or not the app handles sending haptic rects. Perhaps a BOOL hapticLocatorEnabled.

The SDLCarWindow class will also include the previously app-level common logic that interacts with the existing SDLStreamingMediaManager class to stream pixel buffers and receive audio data.

I don't think I understand what this means, can you clarify?

@property (strong, nonatomic, nullable) UIViewController *streamingViewController;

Should this be rootViewController to match UIWindow?

In choosing a projection API that differs from the standard UIScreen/UIWindow model for displaying content on an external screen, we are asking developers to adopt a model that is somewhat unfamiliar.

I think this is kinda the same, it's not a UIWindow, but it's supposed to be the same thing.

Additional Thoughts: I have a few more thoughts.

  1. Because the CarWindow isn't something we are passing to the developer, but something they have to retrieve, I can see some confusion resulting. For example, what is the object lifecycle like? Is it initialized when the SMM is, or after it's started? The obvious answer would seem to be to let them retrieve it before the SMM connection has solidified, but that has issues, such as the negotiation of the screen size. This means that the dev needs to know when the streaming starts, check the screen size, and perhaps adjust their content.

  2. How are we handling the screen size with the set view controller? The VC they set is unlikely to be at the same resolution as the negotiated screen size.

  3. What happens if they set the streamingViewController to nil? Do we send the background frames that also occur when the app goes into the background? Those frames are designed to tell the user to bring the app back to the foreground, so the message wouldn't be proper.

It seems to me that the dev should be passed a CarWindow when it's ready to be used. Perhaps through a delegate or a block interface of some sort.

davidswi commented 7 years ago

This is a large comment with multiple topics, so I will respond by topic to enable more granular discussion.

JF: Would this be better as a change to the SDLStreamingMediaConfiguration? Or are they expected to be able to set the streamingViewController multiple times?

DS: It is possible the app developer will need to change the view controller being projected through the course of the session. For example, if the user selects a 3D map view rather than the standard 2D map, the app may need to switch view controllers rather than just swapping views. Since we can't know in advance how the app developer has implemented the views to be projected, it's better to allow flexibility, IMHO. That said, I could see a majority use case where the view controller is not changed. I'm in favor of allowing it to be set in the SDLStreamingMediaConfiguration and changeable through a property.

davidswi commented 7 years ago

JF> As I noted in my review for this component, I don't think this name is descriptive for what it does. A better name might be SDLHapticItemLocator.

DS> The term 'haptic' is also a bit of a misnomer because sending focusable regions to the head unit does not imply the head unit provides haptic feedback. It means only that the head unit is managing focusing and selection. How it provides feedback about which region has focus and whether it has been selected is up to the OEM. I'd suggest SDLSpatialItemLocator or SDLFocusableItemLocator. My personal preference would be SDLFocusableItemLocator because the point of sending the regions to the head unit is to enable a focus and select user interaction model rather than direct touch. This is industry standard convention and Android and iOS both use it.

davidswi commented 7 years ago

JF> I also don't think appHandlesTouches is a good name, as this property doesn't seem to indicate whether or not the app handles touches, it indicates whether or not the app handles sending haptic rects. Perhaps a BOOL hapticLocatorEnabled.

DS> Building on my last comment, I'd suggest SDLSpatialItemLocatorEnabled or SDLFocusableItemLocatorEnabled.

davidswi commented 7 years ago

JF> I don't think I understand what this means, can you clarify?

DS> I need to make the language clearer. The point is that SDLCarWindow provides the logic every app developer would otherwise have to re-implement in order to project their view hierarchy.

davidswi commented 7 years ago

JF> Should this be rootViewController to match UIWindow?

DS> Yes, I agree.

davidswi commented 7 years ago

JF> I think this is kinda the same, it's not a UIWindow, but it's supposed to be the same thing.

DS> Yes, but we have not followed the UIScreen/UIWindow object model that OS X and iOS use in order for applications to place windows on multiple screens. I would prefer it to be the same rather than almost the same.

davidswi commented 7 years ago

JF> Additional Thoughts: I have a few more thoughts.

Because the CarWindow isn't something we are passing to the developer, but something they have to retrieve, I can see some confusion resulting. For example, what is the object lifecycle like? Is it initialized when the SMM is, or after it's started? The obvious answer would seem to be to let them retrieve it before the SMM connection has solidified, but that has issues, such as the negotiation of the screen size. This means that the dev needs to know when the streaming starts, check the screen size, and perhaps adjust their content.

DS> iOS has the same issue with an external AirPlay screen. The screen can come and go depending on whether the iOS device is on the same network, AirPlay is enabled on the external device, etc. iOS solves the problem in two ways. First, the app can query [UIScreen screens] to get the connected external screens. Second, it can register for notifications of screen connection and disconnection. At the moment, we are targeting a single screen in the vehicle, but one could imagine scenarios where the SDL app is projecting content to the center console screen and the instrument cluster so the design should allow for the possibility of multiple car windows.
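For reference, the UIKit pattern being described is roughly the following (standard UIKit calls, shown only to illustrate the model an SDL equivalent could mirror):

    // Standard UIKit external-screen handling, for reference only.
    NSArray<UIScreen *> *screens = [UIScreen screens]; // main screen plus any connected external screens
    NSLog(@"%lu screen(s) currently connected", (unsigned long)screens.count);

    [[NSNotificationCenter defaultCenter] addObserverForName:UIScreenDidConnectNotification
                                                      object:nil
                                                       queue:[NSOperationQueue mainQueue]
                                                  usingBlock:^(NSNotification *note) {
        UIScreen *connectedScreen = note.object;
        // Create a UIWindow for connectedScreen and begin presenting content on it.
    }];

    [[NSNotificationCenter defaultCenter] addObserverForName:UIScreenDidDisconnectNotification
                                                      object:nil
                                                       queue:[NSOperationQueue mainQueue]
                                                  usingBlock:^(NSNotification *note) {
        // Tear down any window that was attached to the screen that went away.
    }];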

davidswi commented 7 years ago

JF> How are we handling the screen size with the set view controller? The VC they set is unlikely to be at the same resolution as the negotiated screen size.

DS> Again, iOS provides a model for handling a different aspect ratio and resolution on an external display. The app developer needs to read the UIScreenMode properties for the screen and adjust presentation of content accordingly. I think the SDL API needs to provide a similar model.
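For reference, the iOS model referred to here reads the external screen's mode roughly like this (standard UIKit; an SDL analogue would expose the negotiated head-unit resolution instead):

    // Standard UIKit: read the external display's preferred mode and adapt layout.
    UIScreen *externalScreen = [UIScreen screens].lastObject;
    UIScreenMode *preferredMode = externalScreen.preferredMode;
    CGSize nativeSize = preferredMode.size; // native pixel resolution of the external display

    // Lay out (or scale) the projected content for this size rather than
    // assuming the phone screen's dimensions or aspect ratio.
    NSLog(@"External display preferred mode: %.0f x %.0f", nativeSize.width, nativeSize.height);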

davidswi commented 7 years ago

JF> What happens if they set the streamingViewController to nil? Do we send the background frames that also occur when the app goes into the background? Those frames are designed to tell the user to bring the app back to the foreground, so the message wouldn't be proper.

DS> We need to distinguish between swapping view controllers (as I said above, I think we need to support this) and explicitly stopping streaming. I think it would be better to say there is not a nil'able property, but rather startStreamingWithRootViewController and stopStreaming. startStreamingWithRootViewController should fail if streaming is already in progress, requiring the app developer to stop streaming first before switching.

When the SDLCarWindow is in the stopped state, it would be better to display the last frame for the first 100ms or so, then terminate video projection automatically if streaming is not resumed.
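A hypothetical shape for the explicit start/stop interface described above (selector names are illustrative only, not an accepted design):

// Hypothetical sketch of an explicit start/stop interface on the proposed
// SDLCarWindow class; names are illustrative.
@interface SDLCarWindow (ProposedStreamingControl)

/// Begins capturing, encoding, and streaming the given view controller's hierarchy.
/// Returns NO (and sets *error) if streaming is already in progress.
- (BOOL)startStreamingWithRootViewController:(UIViewController *)rootViewController
                                       error:(NSError **)error;

/// Stops streaming; per the comment above, the last frame could be held briefly
/// before video projection is terminated.
- (void)stopStreaming;

@end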

davidswi commented 7 years ago

JF> It seems to me that the dev should be passed a CarWindow when it's ready to be used. Perhaps through a delegate or a block interface of some sort.

DS> I think the delegate interface is reasonable for several reasons. First, the app developer is likely to have a single class that manages video projection rather than multiple. Second, the app needs to know when the screen is connected/disconnected, which may or may not occur at the same time the transport/proxy are connected/disconnected. Third, a delegate interface is better for supporting multiple screens becoming ready which, as I said above, should be supported by the API.
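One possible shape for such a delegate (purely illustrative; the protocol and selector names below are assumptions, not part of any accepted design):

#import <UIKit/UIKit.h>

@class SDLCarWindow;

// Hypothetical delegate sketch for car-window lifecycle notifications.
@protocol SDLCarWindowDelegate <NSObject>

/// Called once the head-unit screen size has been negotiated and the car window
/// is ready to accept a root view controller for projection.
- (void)carWindowDidBecomeReady:(SDLCarWindow *)carWindow;

/// Called when the projected screen is no longer available, which may or may not
/// coincide with the transport/proxy disconnecting.
- (void)carWindowDidBecomeUnavailable:(SDLCarWindow *)carWindow;

@end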

joeljfischer commented 7 years ago

DS: It is possible the app developer will need to change the view controller being projected through the course of the session. For example, if the user selects a 3D map view rather than the standard 2D map, the app may need to switch view controllers rather than just swapping views. Since we can't know in advance how the app developer has implemented the views to be projected, it's better to allow flexibility, IMHO. That said, I could see a majority use case where the view controller is not changed. I'm in favor of allowing it to be set in the SDLStreamingMediaConfiguration and changeable through a property.

That seems like a good compromise

DS> The term 'haptic' is also a bit of a misnomer because sending focusable regions to the head unit does not imply the head unit provides haptic feedback. It means only that the head unit is managing focusing and selection. How it provides feedback about which region has focus and whether it has been selected is up to the OEM. I'd suggest SDLSpatialItemLocator or SDLFocusableItemLocator. My personal preference would be SDLFocusableItemLocator because the point of sending the regions to the head unit is to enable a focus and select user interaction model rather than direct touch. This is industry standard convention and Android and iOS both use it.

I like SDLFocusableItemLocator. It also has the advantage of being understandable to laypeople.

DS> Building on my last comment, I'd suggest SDLSpatialItemLocatorEnabled or SDLFocusableItemLocatorEnabled.

I'd remove SDL since it's a property, but agreed.

DS> iOS has the same issue with an external AirPlay screen. The screen can come and go depending on whether the iOS device is on the same network, AirPlay is enabled on the external device, etc. iOS solves the problem in two ways. First, the app can query [UIScreen screens] to get the connected external screens. Second, it can register for notifications of screen connection and disconnection. At the moment, we are targeting a single screen in the vehicle, but one could imagine scenarios where the SDL app is projecting content to the center console screen and the instrument cluster so the design should allow for the possibility of multiple car windows.

DS> I think the delegate interface is reasonable for several reasons. First, the app developer is likely to have a single class that manages video projection rather than multiple. Second, the app needs to know when the screen is connected/disconnected, which may or may not occur at the same time the transport/proxy are connected/disconnected. Third, a delegate interface is better for supporting multiple screens becoming ready which, as I said above, should be supported by the API.

Agreed. Not quite sure how this would look compared to current models of streaming via SDLStreamingMediaManager though.

DS> Again, iOS provides a model for handling a different aspect ratio and resolution on an external display. The app developer needs to read the UIScreenMode properties for the screen and adjust presentation of content accordingly. I think the SDL API needs to provide a similar model.

DS> We need to distinguish between swapping view controllers (as I said above, I think we need to support this) and explicitly stopping streaming. I think it would be better to say there is not a nil'able property, but rather startStreamingWithRootViewController and stopStreaming. startStreamingWithRootViewController should fail if streaming is already in progress, requiring the app developer to stop streaming first before switching.

When the SDLCarWindow is in the stopped state, it would be better to display the last frame for the first 100ms or so, then terminate video projection automatically if streaming is not resumed.

Addressing these together, I think it should react the same whether the dev is streaming via car window (or whatever it's called) or without it. I'm worried that these changes go against the grain of how other streams are handled currently.

davidswi commented 7 years ago

The current APIs for starting and stopping streaming on the SDLStreamingMediaManager appear to be adaptable to SDLCarWindow as follows:

-(void)startVideoSessionWithRootViewController:(UIViewController *)vc startBlock:(SDLStreamingStartBlock)sb;

-(void)startTLSVideoSessionWithRootViewController:(UIViewController *)vc encryptionFlag:(SDLEncryptionFlag)encryptionFlag startBlock:(SDLStreamingEncryptionStartBlock)startBlock;

-(void)stopVideoSession;
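For illustration, a call site for the adapted methods above might look like this (assuming they remain on the streaming media manager, as the existing video session methods do):

    // Illustrative call site for the adapted methods sketched above.
    [self.sdlManager.streamManager startVideoSessionWithRootViewController:self.projectionViewController
                                                                 startBlock:^(BOOL success, NSError *error) {
        if (!success) {
            NSLog(@"Failed to start projected video session: %@", error);
            return;
        }
        // Streaming is up; the car window is now capturing the view hierarchy.
    }];

    // Later, when projection should end:
    [self.sdlManager.streamManager stopVideoSession];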

theresalech commented 7 years ago

The Steering Committee has voted to return this proposal for revisions. The author will revise based on the comments made on this issue. As 2017-09-12 is the last day to accept proposals for consideration in the iOS 5.0 Release, this proposal will be included in an iOS 5.1 Release, scheduled for a few weeks after iOS 5.0. Once this proposal is revised, the new names will also be applied to SDLInterfaceManager, which will still be planned for the iOS 5.0 Release.

theresalech commented 6 years ago

Proposal has been revised to include requested changes, and the revised proposal has now moved into review. The Steering Committee will vote on this proposal during the October 3, 2017 meeting.

joeljfischer commented 6 years ago

Hi, I see a few issues with the current revision of this proposal:

A minor point: some naming in this proposal is outdated (e.g. the SDLInterfaceManager is named the SDLHapticManager).

First, we need to figure out how the integration of the car window will work with the streaming lifecycle manager currently owning the haptic interface and a UIWindow being set on the configuration. How will backward compatibility work in this case?

If we want all developers using streaming outside of OpenGL to use CarWindow, we should deprecate the use of UIWindow in the streaming media configuration. I would recommend that use case. However, currently, the haptic manager is only being created if the window exists, so this will have to change if there's a car window instead of the UIWindow, especially since it wouldn't be created in the configuration so we wouldn't know at initialization time.

I think I would recommend that the streaming media manager deprecate its own haptic manager and instead instantiate it on the car window property as part of the initializer for the car window (i.e. dependency injection).

Second, I'm not sure that the configuration property appHandlesTouches is granular enough, as in this proposal it handles the haptic interface entirely, not just whether the haptic interface handles touches (which I believe is a later proposal, but part of this feature for 5.1?). I would think that we should break out whether the app handles touches and whether it handles haptic data; this could be a single property like enableHapticManager. What's currently happening in 5.0 is that if the UIWindow property on the streaming media configuration is set, the haptic manager exists. On the haptic interface the streaming media manager exposes, there's a property that the developer can turn off to stop sending RPCs, but they have no choice in the fact that if the haptic manager exists, it will at least try to return a view in the touch manager delegate method. However, if the haptic manager is extended to automatically handle touches (even if only some), then there need to be additional controls.
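To make the distinction concrete, the split described here could look something like the following on the streaming media configuration (shown as a category purely for illustration; the property names are assumptions, not an accepted API):

// Hypothetical split of the single appHandlesTouches flag into two independent
// controls; names are illustrative only.
@interface SDLStreamingMediaConfiguration (ProposedHapticControls)

/// Whether the library should locate focusable views and send haptic (spatial)
/// rect data to the head unit.
@property (assign, nonatomic) BOOL enableHapticManager;

/// Whether the library should also handle head-unit touch/selection events
/// automatically, rather than leaving touch handling to the app.
@property (assign, nonatomic) BOOL enableTouchHandling;

@end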

Third, the handling focusable UIButtons section is incorrect. The listed method does not work (it would be providing a category for a property that already exists on the object), and so the haptic manager currently just has a special case for UIButtons as an implementation detail. It's definitely not ideal, but it's workable for now.

Fourth, I'm not certain this proposal does enough to discuss things like how the car window API works, i.e. how it captures the screen and sends it. Has investigation been done into the iOS 11 RPScreenRecording API? (Incidentally, I've done some minor investigation, and while it works quite well, the user needs to accept screen recording each time it is started via a prompt on their device.) What does the performance look like with the method that is being used? This talks a bit about the interface, but the implementation details are very important here. What paths are available, and what are their up/downsides?

Fifth, on the Car Window class, I don't think that the screen dimensions should be a public property, as they are already on the streaming media manager. Additionally, I'm not certain it's clear how the data to be sent is generated and sent. Is it entirely handled by the car window class, or forwarded somehow to the streaming media manager?

GNXClone commented 6 years ago

Hi Joel. Wow ;-)

I've taken over SDLCarWindow from David. In the last few weeks, I've made a lot of changes due to learning the best way to integrate with existing (and future 5.x) SDL code. I've summarized those changes in a revised proposal for SDLCarWindow (0091).

Many of your concerns have been addressed.

Naming: I worked from the release/5.0.0 branch, so all names should be the latest. If not, they will be corrected prior to PR.

1) The revised proposal describes how SDLCarWindow interoperates with SDLStreamingManager, SDLStreamingLifecycleManager, SDLTouchManager, etc. The method used to activate SDLCarWindow is now based on Apple standard techniques for supporting external displays.

2) My thoughts on reporting touch/haptic capability differ in that I feel we can determine haptic support based on the RPC return result, meaning we wouldn't need to add more capability flags and supporting logic. Any head unit which doesn't support haptic will NACK the sendHapticRectData RPC. The ACK for sendHapticRectData could contain more specific capabilities as necessary. Existing head units which lack haptic support will already NACK the RPC. Head units with touch screens would also NACK. SDLCarWindow modifies its behavior based on the result of the RPC. If the head unit doesn't support haptic, the haptic interface is disabled. Touch events (and gestures) are still processed as normal by SDLCarWindow. (Haptic selections arrive as touch events, generated by the HMI using the center coordinate of the rect.)
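A rough sketch of the response-driven behavior described here, for concreteness (hapticInterfaceEnabled and hapticRects are illustrative app-side names; note that this approach is reconsidered later in the thread):

    // Rough sketch of response-driven haptic detection; hapticRects is assumed to be
    // an NSArray<SDLHapticRect *> built from the view traversal, and
    // hapticInterfaceEnabled is an illustrative flag. This idea was later dropped
    // in favor of the existing capability system.
    SDLSendHapticData *hapticRPC = [[SDLSendHapticData alloc] initWithHapticRectData:hapticRects];
    [self.sdlManager sendRequest:hapticRPC withResponseHandler:^(SDLRPCRequest *request, SDLRPCResponse *response, NSError *error) {
        if (error != nil) {
            // The head unit NACKed the RPC: assume focus/selection is not handled
            // by the head unit and disable the haptic interface.
            self.hapticInterfaceEnabled = NO;
            return;
        }
        self.hapticInterfaceEnabled = YES;
    }];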

3) I didn't use the method called out in the first proposal. Instead I check the view's class name for "UIButton".

BOOL canBecomeFocused = ([view isKindOfClass:[UIButton class]] == YES) || (view.canBecomeFocused == YES);

4) Hopefully the revised proposal will answer a lot of this.

Performance is good, unless there is animation. Animation makes it stutter. This has to be resolved, but time has not allowed for that yet.

I haven't looked into the screen recording API. I think forcing the user to allow screen recording each time they run the app might be asking too much. Also, it's iOS 11 only? Shouldn't we also support iOS 10? I think it's common practice to support the two most recent major iOS versions.

We have another test app which streams OpenGL at 30 FPS, WITH animation. That app will likely serve as the source for OpenGL support in SDLCarWindow in the future.

The (non-OpenGL) screen is captured as shown below. This is the code that was delivered as SDLCarWindow from Alexander Muller; it's what David gave me when I asked for the SDLCarWindow source.

- (void)sdl_sendFrame:(CADisplayLink *)displayLink {
    dispatch_async(dispatch_get_main_queue(), ^{
        if (!self.sdlManager.streamManager.isStreamingSupported) {
            return;
        }

        if (self.sameFrameCounter == 30 && ((displayLink.timestamp - self.lastMd5HashTimestamp) <= 0.1)) {
            return;
        }

        if (self.isLockScreenMoving) {
            return;
        }

        self.lastMd5HashTimestamp = displayLink.timestamp;

        CGRect bounds = self.streamingViewController.view.bounds;

        UIGraphicsBeginImageContextWithOptions(bounds.size, YES, 1.0f);
        [self.streamingViewController.view drawViewHierarchyInRect:bounds afterScreenUpdates:YES];
        UIImage *screenshot = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        CGImageRef imageRef = screenshot.CGImage;

        // We use MD5 Hashes to determine if we are sending the same frame over and over. If so, we will only send 30.
        NSString *currentMd5Hash = [SDLUtility md5HashForImageRef:imageRef];
        if ([currentMd5Hash isEqualToString:self.previousMd5Hash]) {
            if (self.sameFrameCounter == 30) {
                //fprintf(stderr, "Same Frame\n");
                return;
            }
            self.sameFrameCounter++;
        } else {
            self.sameFrameCounter = 0;
        }
        //fprintf(stderr, "Frame: %d\n", (int)self.sameFrameCounter);

        self.previousMd5Hash = currentMd5Hash;

        CVPixelBufferRef pixelBuffer = [SDLUtility pixelBufferForImageRef:imageRef
                                                                usingPool:self.sdlManager.streamManager.pixelBufferPool];
        [self.sdlManager.streamManager sendVideoData:pixelBuffer];
        CVPixelBufferRelease(pixelBuffer);
    });
}

5) The dimensions are still public because SDLTouchManager uses them. We can hide these if absolutely necessary. But not sure it's a real problem if someone can access the screen dimensions?

Regards, -Michael

brandon-salahat-tm commented 6 years ago

My thoughts on reporting touch/haptic capability differ in that I feel we can determine haptic support based on the RPC return result, meaning we wouldn't need to add more capability flags and supporting logic. Any head unit which doesn't support haptic will NACK the sendHapticRectData RPC. The ACK for sendHapticRectData could contain more specific capabilities as necessary. Existing head units which lack haptic support will already NACK the RPC. Head units with touch screens would also NACK. SDLCarWindow modifies its behavior based on the result of the RPC. If the head unit doesn't support haptic, the haptic interface is disabled. Touch events (and gestures) are still processed as normal by SDLCarWindow. (Haptic selections arrive as touch events, generated by the HMI using the center coordinate of the rect.)

Is there a benefit to this over letting the head unit explicitly state its preference? Additionally, will this technique support head units that support head unit haptics, as well as app-drawn haptics? I think the idea with adding the flag is that it allows some future proofing. It seems a little unreliable to blindly call the RPC and make assumptions based on the RPC response.

BOOL canBecomeFocused = ([view isKindOfClass:[UIButton class]] == YES) || (view.canBecomeFocused == YES);

Since the canBecomeFocused flag is not functioning correctly on UIButtons, I think this logic needs to be expanded to check for the situations that canBecomeFocused would normally represent.

The view is hidden.
The view has alpha set to 0.
The view has userInteractionEnabled set to false.
The view is not currently in the view hierarchy.

I think it would be good to add a function inside of an extension of the UIButton class that makes this logic very obvious, but it could remain as an if statement as well.
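A sketch of the helper suggested here, written as a UIView category so the traversal code can query any view directly (the category and method names are illustrative):

#import <UIKit/UIKit.h>

// Sketch of the suggested helper; category and method names are illustrative.
@interface UIView (SDLFocusableHelper)
- (BOOL)sdl_isFocusableForHaptics;
@end

@implementation UIView (SDLFocusableHelper)

- (BOOL)sdl_isFocusableForHaptics {
    // canBecomeFocused is not reliable for UIButton outside of focus-based
    // environments, so treat buttons as focusable explicitly.
    BOOL focusable = [self isKindOfClass:[UIButton class]] || self.canBecomeFocused;

    return focusable
        && !self.isHidden
        && self.alpha > 0.0
        && self.userInteractionEnabled
        && self.window != nil; // the view must actually be in the view hierarchy
}

@end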

joeljfischer commented 6 years ago

My thoughts on reporting touch/haptic capability differ in that I feel we can determine haptic support based on the RPC return result, meaning we wouldn't need to add more capability flags and supporting logic. Any head unit which doesn't support haptic will NACK the sendHapticRectData RPC. The ACK for sendHapticRectData could contain more specific capabilities as necessary. Existing head units which lack haptic support will already NACK the RPC. Head units with touch screens would also NACK. SDLCarWindow modifies its behavior based on the result of the RPC. If the head unit doesn't support haptic, the haptic interface is disabled. Touch events (and gestures) are still processed as normal by SDLCarWindow. (Haptic selections arrive as touch events, generated by the HMI using the center coordinate of the rect.)

Like Brandon, I don't see any advantage to ripping out how all of this works now and replacing it with this. And it seems pretty strange to call the send data method and receive capabilities in response. Plus, we already have a capability system accepted, coded, and released.

The dimensions are still public because SDLTouchManager uses them. We can hide these if absolutely necessary. But not sure it's a real problem if someone can access the screen dimensions?

I think there was a misunderstanding. The SDLStreamingMediaManager already has the dimensions, it shouldn't be replicated publicly elsewhere.

theresalech commented 6 years ago

The Steering Committee has voted to return this proposal for revisions, to address the feedback left in the comments, and include changes proposed by Xevo in #305. Members of the Steering Committee will meet on October 4 to discuss these revisions in greater detail.

GNXClone commented 6 years ago

Since the canBecomeFocused flag is not functioning correctly on UIButtons, I think this logic needs to be expanded to check for the situations that canBecomeFocused would normally represent. The view is hidden. The view has alpha set to 0. The view has userInteractionEnabled set to false. The view is not currently in the view hierarchy.

All of these are already checked for by the existing code, except for whether the view is in the hierarchy. If the view is not in the hierarchy, it won't be part of the traversal in the first place. I had also added a check of the width and height to make sure they are > 0, because any view that is one- or zero-dimensional should not be focusable (IMO).

    BOOL canBecomeFocused = ([view isKindOfClass:[UIButton class]] == YES) || (view.canBecomeFocused == YES);
    if ((canBecomeFocused == YES) &&
        (view.frame.size.width > 0) && (view.frame.size.height > 0) &&
        (view.isHidden == NO) && (view.alpha > 0.0) && (view.userInteractionEnabled == YES))
    {
        ....
    }

Like Brandon, I don't see any advantage to ripping out how all of this works now and replacing it with this.

No problem. Consider that idea dropped. I thought it was neat because head units which pre-date "haptics" will already NACK the "haptic" RPC, which makes it easy for SDLCarWindow to determine "haptic" is not supported without additional coding.

I think there was a misunderstanding. The SDLStreamingMediaManager already has the dimensions, it shouldn't be replicated publicly elsewhere.

Yes, it was a misunderstanding. The screen dimensions held by SDLCarWindow are now internal to SDL only, no longer accessible by the app.

GNXClone commented 6 years ago

New Pull Request #305

theresalech commented 6 years ago

Proposal has been revised to include requested changes, and the revised proposal has now moved into review. The Steering Committee will vote on this proposal during the October 17, 2017 meeting.

brandon-salahat-tm commented 6 years ago

In the current proposal, is there any reason not to have SDLCarWindow subclass UIWindow, and allow the proxy to initialize it as an off-screen window? This would allow apps to stream and interact with the interface without needing to modify the view stack that their normal application uses.

I have used this as an interface for template based SDL apps, but I am not sure if there was a technical limitation preventing it for VPM.

GNXClone commented 6 years ago

The reason is, this is the form in which SDLCarWindow was handed to us. SDLCarWindow was written by Alexander Muller of Ford Company. In its current form, Alexander’s implementation has only been modified slightly to work with ‘release/5.0.0’ branch changes.

A previous proposal which included enhancements to Alexander's version did separate the VPM view from the device's view. After several weeks of effort, that proposal was rejected. Thus, we have reverted to the original implementation in order to close this one out in the time frame needed to include SDLCarWindow in the iOS 5.1 release.

Perhaps you can create a new proposal to enhance SDLCarWindow with the interface you have used for template based SDL apps.

brandon-salahat-tm commented 6 years ago

The reason is, this is the form in which SDLCarWindow was handed to us. SDLCarWindow was written by Alexander Muller of Ford Company. In its current form, Alexander’s implementation has only been modified slightly to work with ‘release/5.0.0’ branch changes.

A previous proposal which included enhancements to Alexander's version did separate the VPM view from the device's view. After several weeks of effort, that proposal was rejected. Thus, we have reverted to the original implementation in order to close this one out in the time frame needed to include SDLCarWindow in the iOS 5.1 release.

Perhaps you can create a new proposal to enhance SDLCarWindow with the interface you have used for template based SDL apps.

If implemented in the currently proposed form, a breaking change may be required to separate the view window from the device's view in the suggested way (which is entirely different from what was previously proposed).

Since this would arguably improve the interface from an integration and maintainability perspective, by allowing app developers to decouple their VPM screens and flow if they desire (without the use of swizzling or other alterations to the Apple runtime), Toyota cannot vote to accept a VPM interface that does not allow decoupling of the SDL VPM screens/app flow from the native screens/app flow, barring any significant technical limitations/risks to doing so.

joeljfischer commented 6 years ago

I agree with Brandon here that ideally we ought to allow the developer to not have the view onscreen during the video streaming. Additionally, I believe making SDLCarWindow a UIWindow subclass and providing a separate UIScreen (barring technical limitations) is a good path. It should at least be explored; we don't want to make major changes later if we don't have to. I think the reason the previous version of the proposal was rejected was that it did too much: it tried to rewrite everything, including already implemented features. That doesn't mean that we need to strictly go with what Alex wrote as a POC over a year ago. We still have some time to work this into a good API, but a POC should probably be built simultaneously with the proposal to flesh the interface into the way we want it without fear of technical limitation.

GNXClone commented 6 years ago

SDLCarWindow doesn't need to inherit from UIWindow in order to achieve separate views. Instead of assigning the device screen's root view controller as streamingViewController, simply assign a different view controller which is desired for streaming over VPM. No changes to SDLCarWindow are necessary for this.

The bounds passed to [UIWindow initWithFrame:] do not matter because SDLCarWindow resets the bounds to the SDL screen size prior to streaming. The window's frame is offset by the device's screen width to move the window offscreen.

- (void)sdl_didReceiveVideoStreamStarted:(NSNotification *)notification {
    dispatch_async(dispatch_get_main_queue(), ^{

        // Offset SDL window offscreen from the LCD
        CGRect bounds = CGRectMake(0, 0, self.streamingMediaManager.screenSize.width, self.streamingMediaManager.screenSize.height);
        CGRect frame = CGRectOffset(bounds, UIScreen.mainScreen.bounds.size.width, 0);
        self.streamingViewController.view.window.bounds = bounds;
        self.streamingViewController.view.window.frame  = frame;

        // ... (remainder of the handler elided in this excerpt)
    });
}

Tested and verified as seen here: https://www.dropbox.com/s/fppklz1txuu28tz/SeparateViews.mov?dl=0

    CGRect bounds = CGRectMake(0, 0, 1, 1);
    UIWindow* window = [[UIWindow alloc] initWithFrame:bounds];
    UIStoryboard* storyboard = [UIStoryboard storyboardWithName:@"SDLWindow" bundle:[NSBundle mainBundle]];
    UIViewController* vc = [storyboard instantiateInitialViewController];
    window.rootViewController = vc;
    vc.view.frame = bounds;
    vc.view.bounds = CGRectMake(0, 0, bounds.size.width, bounds.size.height);
    [window addSubview:vc.view];

    // Enable VPM
    [self.lifecycleConfig setAppType:SDLAppHMITypeNavigation];

    // Attach the view controller we want streamed over VPM
    self.lifecycleConfig.streamingViewController = window.rootViewController;

@joeljfischer An app cannot create a usable UIScreen instance without calling non-public APIs because the attributes are read only.

https://developer.apple.com/documentation/uikit/uiscreen/1617838-bounds?language=objc

theresalech commented 6 years ago

The Steering Committee has voted to return this proposal for revisions, based on the comments left on the issue by the Toyota and Livio teams.

The major revisions proposed, which will need to be further investigated for technical feasibility, are:

theresalech commented 6 years ago

The Steering Committee voted to reject this proposal in favor of the previously accepted SDL 0111.