ianthetechie opened 9 months ago
Yep, this is a solid idea. A couple of notes, some new and some carried over from previous discussions:
> Beyond the basics like speed- or speed-limit-based zoom/pitch, I can see needing different versions of the camera parameters for a mobile UI versus CarPlay/Android Auto within the same app.
Yes, good call-out! I think there are a few things packed into this, so let me try to get it all out in the open.
Anything I'm missing?
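To make the per-context idea concrete, here's a hedged sketch of what context-dependent camera parameters could look like. None of these type names exist in Ferrostar today; they're purely illustrative:

```swift
import Foundation

/// Camera parameters the core would vary with speed / speed limit.
/// (Hypothetical names, not Ferrostar's actual API.)
struct CameraParameters {
    var zoom: Double
    var pitch: Double
}

/// The surface rendering the map; each can supply its own parameter set,
/// so the mobile UI and CarPlay can diverge within the same app.
enum CameraContext: Hashable {
    case mobileUI
    case carPlay
}

struct NavigationCameraConfiguration {
    /// Base parameters keyed by context.
    var byContext: [CameraContext: CameraParameters]

    /// Example rule: zoom out a bit as speed (m/s) increases.
    func parameters(for context: CameraContext, speed: Double) -> CameraParameters {
        var params = byContext[context] ?? CameraParameters(zoom: 16, pitch: 45)
        params.zoom -= min(2.0, speed / 15.0)  // crude speed-based adjustment
        return params
    }
}
```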
> Considerations to enable future configurations (e.g. bbox for the next maneuver)
Yeah, that's actually a really good call-out too. Many navigation experiences will want this, so they can quickly show a maneuver and highlight it in a route detail view. I think this should actually be a property on `RouteStep`. Let me know if you agree and I'll capture that in an issue.
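For illustration (these are simplified stand-ins, not Ferrostar's actual type definitions), the stored-field version might look like:

```swift
/// Simplified stand-in types; the real definitions live in the core.
struct GeographicCoordinate {
    var lat: Double
    var lng: Double
}

struct BoundingBox {
    var sw: GeographicCoordinate
    var ne: GeographicCoordinate
}

struct RouteStep {
    var geometry: [GeographicCoordinate]
    // ... other fields elided ...
    /// Precomputed when the route is parsed, so a UI can frame the
    /// maneuver without walking the geometry itself.
    var boundingBox: BoundingBox
}
```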
It should be extremely easy to extend, either by adding fields to the data structures (for things that are useful to a wide range of applications) or by adding extension methods and the like (for things that are used less often or make sense to compute lazily). A sketch of the latter follows below.
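Building on the types sketched above, the extension-method route computes the value on demand instead of storing it, which keeps the core structs lean:

```swift
extension RouteStep {
    /// Computes the bounding box lazily from the step geometry.
    func computedBoundingBox() -> BoundingBox? {
        guard let first = geometry.first else { return nil }
        var minLat = first.lat, maxLat = first.lat
        var minLng = first.lng, maxLng = first.lng
        for coord in geometry.dropFirst() {
            minLat = min(minLat, coord.lat); maxLat = max(maxLat, coord.lat)
            minLng = min(minLng, coord.lng); maxLng = max(maxLng, coord.lng)
        }
        return BoundingBox(
            sw: GeographicCoordinate(lat: minLat, lng: minLng),
            ne: GeographicCoordinate(lat: maxLat, lng: maxLng)
        )
    }
}
```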
@Archdoog to do a SwiftUI first pass at what the core state value looks like.
Ok, so first thoughts on this now that I'm getting into it...
On iOS, the camera constructor we want appears to be `cameraLookingAtCenterCoordinate:acrossDistance:pitch:heading:`.

Some salient points from the discussion with @Archdoog today:

- We can plug our `LocationProvider` implementations into MapLibre (via the `MLNLocationManager` interface on iOS and `LocationEngine` on Android) so that we'll get things like simulation for free.
- In a tracking mode like `followWithCourse`, you can push the user's position down to the bottom of the screen using `contentInset` (sketched below).
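For reference, a minimal MapLibre iOS sketch of those two pieces. I'm assuming the current `MapLibre`/`MLN`-prefixed names (older releases used `Mapbox`/`MGL`) and an already-configured map view:

```swift
import CoreLocation
import MapLibre
import UIKit

/// Swift spelling of cameraLookingAtCenterCoordinate:acrossDistance:pitch:heading:.
func maneuverCamera(center: CLLocationCoordinate2D, heading: CLLocationDirection) -> MLNMapCamera {
    MLNMapCamera(lookingAtCenter: center, acrossDistance: 500, pitch: 45, heading: heading)
}

func configureFollowingCamera(on mapView: MLNMapView) {
    // Keep the camera locked on the user, rotated to match their course.
    mapView.userTrackingMode = .followWithCourse

    // Inset the content area from the top so the tracked position sits
    // near the bottom of the screen rather than at the center.
    let inset = mapView.bounds.height * 0.4
    mapView.contentInset = UIEdgeInsets(top: inset, left: 0, bottom: 0, right: 0)
}
```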
Every frontend will be in charge of camera handling, and in many cases, the user may be allowed to temporarily take over. However, for the case of the "default" camera following the user around during navigation, we want to outsource this logic in a fairly generic manner to the core, which returns the key bits to be translated into camera control of the renderer by the frontend.
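As a rough illustration of that split (all names here are hypothetical), the core could emit a renderer-agnostic value like this, which each frontend translates into its own renderer's camera type:

```swift
import CoreLocation

/// Renderer-agnostic camera state the core could return (hypothetical shape).
struct NavigationCameraState {
    var center: CLLocationCoordinate2D
    var zoom: Double
    var pitch: Double
    var heading: Double
    /// True while the user has temporarily taken over the camera; the
    /// frontend should skip applying core updates until this clears.
    var isUserOverriding: Bool
}

/// Each frontend implements this to translate the generic state into its
/// renderer's camera type (e.g. an MLNMapCamera on iOS).
protocol CameraStateRenderer {
    func apply(_ state: NavigationCameraState)
}
```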
This issue will track individual cases until we finish the next milestone.