maplibre / swiftui-dsl

Making it easy to use MapLibre in SwiftUI
BSD 3-Clause "New" or "Revised" License

Basic support for binding the camera viewport #2

Closed ianthetechie closed 8 months ago

ianthetechie commented 1 year ago

We need to support setting the camera viewport explicitly to a binding, which will update the map. This has a number of thorny issues, which I'll attempt to spell out here. I'm sure I've missed some cases, so feedback welcome.

Methods of camera manipulation

There are three basic types of camera manipulation in MapLibre today.

Group one: direct manipulation of properties

The first family of properties can be set directly via a single family of method overloads, either separately or together. I assume we could expose this via a single modifier with nullable arguments.
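To make the "single modifier with nullable arguments" idea concrete, here is a minimal, self-contained sketch. The `CameraState` type and `updating` method are illustrative stand-ins, not proposed library API:

```swift
// Hypothetical sketch of "one modifier with nullable arguments": nil means
// "leave this property alone"; non-nil values are applied together.
struct CameraState {
    var centerLatitude: Double = 0
    var centerLongitude: Double = 0
    var zoomLevel: Double = 1
    var direction: Double = 0  // degrees clockwise from north
}

extension CameraState {
    /// Apply only the parameters the caller passed; nil leaves a value untouched.
    func updating(latitude: Double? = nil,
                  longitude: Double? = nil,
                  zoomLevel: Double? = nil,
                  direction: Double? = nil) -> CameraState {
        var copy = self
        if let latitude = latitude { copy.centerLatitude = latitude }
        if let longitude = longitude { copy.centerLongitude = longitude }
        if let zoomLevel = zoomLevel { copy.zoomLevel = zoomLevel }
        if let direction = direction { copy.direction = direction }
        return copy
    }
}
```

A real modifier on the view would presumably funnel into something like this internally, so callers can write `camera.updating(zoomLevel: 6)` without spelling out the untouched properties.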

Group two: visibility specification

The second family can be specified via two modifiers: one sets a visible bounding box, and the other specifies a set of coordinates to fit. For the latter, it is probably useful to expose a higher-level API (maybe even make it the default behavior, as in MapKit) that automatically fits all user-added overlays, annotations, etc. (We can file a task for this later if we decide to pursue it.)
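The core of the "fit a set of points" behavior is just a minimal-bounding-box computation over the coordinates. A self-contained sketch, using stand-in `Coordinate`/`BoundingBox` types rather than the library's:

```swift
// Stand-in types for illustration only (not MapLibre types).
struct Coordinate { var latitude: Double; var longitude: Double }
struct BoundingBox { var minLat, minLon, maxLat, maxLon: Double }

/// Compute the smallest box containing all coordinates; nil for an empty input.
func boundingBox(fitting coordinates: [Coordinate]) -> BoundingBox? {
    guard let first = coordinates.first else { return nil }
    var box = BoundingBox(minLat: first.latitude, minLon: first.longitude,
                          maxLat: first.latitude, maxLon: first.longitude)
    for c in coordinates.dropFirst() {
        box.minLat = min(box.minLat, c.latitude)
        box.minLon = min(box.minLon, c.longitude)
        box.maxLat = max(box.maxLat, c.latitude)
        box.maxLon = max(box.maxLon, c.longitude)
    }
    return box
}
```

An auto-fit default would run this over every user-added overlay and annotation and then hand the result to the bounding-box modifier (plus padding). Note that a box spanning the antimeridian would need special handling, which this sketch ignores.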

Group three: direct camera control

You can also construct an MGLMapCamera directly. As far as I'm aware, this is currently the only way to set pitch, which is a bit funky. We might want the other approaches to create cameras internally while additionally exposing this one. Opinions welcome.
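For reference, direct camera construction in MapLibre Native's iOS API looks roughly like this (the initializer shown is from that API; `mapView` is assumed to be an existing MGLMapView, and the numbers are arbitrary):

```swift
// Direct camera construction -- currently the only route to pitch.
let camera = MGLMapCamera(
    lookingAtCenter: CLLocationCoordinate2D(latitude: 46.801111, longitude: 8.226667),
    altitude: 50_000,   // metres above ground level
    pitch: 45,          // degrees tilted from a top-down view
    heading: 0          // degrees clockwise from north
)
mapView.setCamera(camera, animated: true)
```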

Open questions

What's the best way to signify animation parameters? Can/should we be considering making this a bit more swift-y? Stick with the classic Obj-C-style animated parameter? Make animation opt-out? Something else?

How to handle user tracking? It's currently viewed as a separate parameter, but at a higher level, "track the user" is simply another camera mode. Half-baked idea literally off the top of my head, but I think the camera state might be best modeled as a sum type. We have the explicit parameters (center coord, zoom level, direction, and pitch) as one state. Visible coords and bounds are just convenience ways to construct this state. Then there's a separate state which is "follow the user at this zoom level, direction, and pitch."
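A minimal sketch of that sum type, equally half-baked (all names are illustrative, and the coordinate is flattened to two Doubles to keep it self-contained):

```swift
// Camera state as a sum type: either explicit parameters, or "follow the user."
enum CameraMode {
    /// Explicit parameters. Visible-coordinates and bounding-box setters would
    /// be convenience constructors that resolve to this state.
    case explicit(latitude: Double, longitude: Double,
                  zoomLevel: Double, direction: Double, pitch: Double)

    /// Follow the user; the center comes from the location provider,
    /// so only zoom, direction, and pitch are stored.
    case followUser(zoomLevel: Double, direction: Double, pitch: Double)
}
```

One nice property of this shape is that "is the camera tracking the user?" becomes a simple pattern match rather than a separate boolean parameter that can drift out of sync with the camera.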

Related to the above, we will need a way of handling user interaction. The user can of course "break" programmatic tracking of any sort, and this special case will need to be modeled, maybe with read-only observability by the parent (the tracking status is internal @State that will be dependent on user interaction).

ianthetechie commented 1 year ago

@Archdoog I'm starting to write up some of the architectural/design questions now on GitHub and here's the first one :) I'd love your input on this when you get a chance.

ianthetechie commented 1 year ago

Okay, so an MVP of sorts is actually very much within reach! I've pushed an example: https://github.com/stadiamaps/maplibre-swiftui-dsl-playground/blob/main/Sources/MapLibreSwiftUI/Examples/Camera.swift.

I'm pretty happy with the interface and bindings. I've also verified that it works both ways with a modified body like this:

    var body: some View {
        MapView(styleURL: styleURL, camera: $camera)
        .task {
            print("Before camera: \(camera)")
            try! await Task.sleep(for: .seconds(3))

            camera = MapView.Camera.centerAndZoom(switzerland, 6)
            print("After camera: \(camera)")

            try! await Task.sleep(for: .seconds(2))
            print("VERY after camera: \(camera)")
        }
    }

The output is

Before camera: centerAndZoom(__C.CLLocationCoordinate2D(latitude: 46.80111099999998, longitude: 8.226666999999992), Optional(4.0))
After camera: centerAndZoom(__C.CLLocationCoordinate2D(latitude: 46.801111, longitude: 8.226667), Optional(6.0))
VERY after camera: centerAndZoom(__C.CLLocationCoordinate2D(latitude: 46.80111099999998, longitude: 8.226666999999992), Optional(6.0))

That actually surfaces an interesting quirk which surprises me, though I'm not sure it matters for now: as implemented, I'd expect the camera binding to update only after the animation completes.

The big unknown is still whether we can do better with animations, or whether we should just stick with what's already in MapLibre. In any case, that isn't a decision that needs to be made urgently.

Archdoog commented 1 year ago

Working on my implementation of a MapCamera enum, I've landed on this question a few times:

Should the MapCamera contain automation for user location?

With the original MapView, Apple's MapKit, and others, the answer is typically "kind of." Those APIs let you set a user-location-following mode that effectively controls the map camera indirectly. While this mode is active, the binding would just receive a MapCamera update from the MapView every time a location-update pan finishes. When to update the binding is definitely tricky, and depends on the current behavior/mode. Furthermore, the provider of the coordinate data may want to update the location every 60th of a second, every second, or only once in a while. The slower the frequency, the easier it is to just let the camera and animation do their thing. Sixty updates per second may require a totally different mechanism (e.g. CADisplayLink), but it's worth thinking on.
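The "update the binding only when the pan finishes" idea can be sketched independently of MapLibre. This coalescer is illustrative only (the types and the flat `Double` payload are stand-ins): high-frequency updates overwrite a pending value, and only the settle event publishes it outward.

```swift
// Illustrative sketch: absorb per-frame camera updates internally and flush
// a single value to the external binding when motion settles.
final class CameraUpdateCoalescer {
    private var pending: Double?
    private(set) var published: [Double] = []

    /// Called on every internal update, e.g. each animation frame.
    /// Overwrites any not-yet-published value.
    func receive(zoomLevel: Double) {
        pending = zoomLevel
    }

    /// Called when the pan/animation finishes; flush the latest value outward.
    func motionDidSettle() {
        if let value = pending {
            published.append(value)
            pending = nil
        }
    }
}
```

With a 60 Hz provider, this would write the binding once per settled gesture or animation rather than sixty times per second, which matches the behavior observed in the example above where the binding only reflected the final camera.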

Without User Location

This simpler camera option detaches the MapCamera from a location provider. Under this approach, a service could still update the center coordinate continuously at whatever refresh rate it likes.

    enum MapCamera {

        /// Center on a coordinate with default values.
        case center(CLLocationCoordinate2D)

        /// Center on a coordinate with a specific zoom and pitch.
        case center(CLLocationCoordinate2D, Zoom, Pitch)

        /// Pass a specific camera object to display.
        case camera(MGLMapCamera)

        /// Fit the camera exactly to a bounding box.
        case boundingBox(BoundingBox)

        /// Fit the camera to a bounding box with padding.
        case boundingBox(BoundingBox, EdgeInsets)
    }

With User Location

With user location, we now have to assume the camera can be attached to the user's location via a location provider. This can be done with the MapView itself by toggling the follow-user behavior defined by the user tracking mode: https://maplibre.org/maplibre-native/ios/api/Classes/MGLMapView.html#/c:objc(cs)MGLMapView(py)userTrackingMode. The bonus here is that we can let the location provider update the location many times on the MapView before updating the external binding value.

    enum MapCamera {

        /// Center on a coordinate with default values.
        case center(CLLocationCoordinate2D)

        /// Center on a coordinate with a specific zoom and pitch.
        case center(CLLocationCoordinate2D, Zoom, Pitch)

        /// Pass a specific camera object to display.
        case camera(MGLMapCamera)

        /// Fit the camera exactly to a bounding box.
        case boundingBox(BoundingBox)

        /// Fit the camera to a bounding box with padding.
        case boundingBox(BoundingBox, EdgeInsets)

        /// Follow the user's location with a north map orientation.
        case followUser // or followLocation

        /// Follow the user's location and orient the map to their course of travel.
        case followUserWithCourse // or followLocationWithCourse

        // etc.
    }
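The follow cases could then translate into MGLMapView's tracking mode internally. A rough sketch (not drop-in code; the `apply` helper is hypothetical, while the MGLUserTrackingMode values are from the MapLibre Native iOS API):

```swift
// Hypothetical bridge from the enum above to MapLibre's user tracking mode.
func apply(_ camera: MapCamera, to mapView: MGLMapView) {
    switch camera {
    case .followUser:
        mapView.userTrackingMode = .follow
    case .followUserWithCourse:
        mapView.userTrackingMode = .followWithCourse
    default:
        // Explicit center / camera / bounding box cases take manual control.
        mapView.userTrackingMode = .none
    }
}
```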

As a note, with the LocationProvider as a common tool for supplying the centering location data, we could rename User to Location, LocationProvider, or similar. By default, the MapView could contain its standard platform-specific UserLocationProvider, but allow customization.
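A minimal sketch of what such an abstraction could look like (entirely hypothetical naming, with coordinates flattened to two Doubles to stay self-contained; a real version would presumably vend CLLocation values):

```swift
// Hypothetical location provider abstraction. The MapView would default to a
// platform implementation but accept any conformer.
protocol LocationProvider: AnyObject {
    /// Called with each new location; update frequency is up to the provider.
    var onUpdate: ((_ latitude: Double, _ longitude: Double) -> Void)? { get set }
    func start()
    func stop()
}

/// A trivial provider that replays a fixed coordinate, e.g. for previews or tests.
final class StaticLocationProvider: LocationProvider {
    var onUpdate: ((Double, Double) -> Void)?
    private let latitude: Double
    private let longitude: Double

    init(latitude: Double, longitude: Double) {
        self.latitude = latitude
        self.longitude = longitude
    }

    func start() { onUpdate?(latitude, longitude) }
    func stop() {}
}
```

A navigation stack could then inject a simulated or snapped-to-route provider without the MapView knowing the difference.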

Extending This

It would be easy to imagine extensions of this approach covering common use cases like MapCamera.showcase(Route), MapCamera.showcase([Route]), etc. While an enum is nice, maybe another container is better suited to letting users extend MapCamera with custom cameras that build logic on this foundation.
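One shape that "other container" could take is a struct with static factory methods: unlike an enum, downstream code can add its own cases via extensions. A self-contained sketch (all names illustrative; the `describe` closure stands in for whatever the view actually consumes):

```swift
// Hypothetical extensible alternative to the enum: a struct whose "cases"
// are static factories, so users can add their own via extensions.
struct MapCameraSpec {
    /// Placeholder for whatever the MapView would actually consume.
    let describe: () -> String

    static func center(latitude: Double, longitude: Double) -> MapCameraSpec {
        MapCameraSpec { "center(\(latitude), \(longitude))" }
    }
}

// Downstream code adds a domain-specific camera without touching the library:
extension MapCameraSpec {
    static func showcase(routeName: String) -> MapCameraSpec {
        MapCameraSpec { "showcase(\(routeName))" }
    }
}
```

The trade-off is the usual one: the enum gives exhaustive switching inside the library, while the struct gives open-ended extension outside it.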

Thinking Forward

The Basics

    @State var mapCamera: MapCamera = .center(.init(latitude: 45.5, longitude: 45.5), 10, 45)

    var body: some View {
        MapView(styleURL: styleURL,
                camera: $mapCamera)
    }

Injecting a custom location provider

This could be a good way to expose a lot of functionality in an easy-to-understand manner:

    @State var mapCamera: MapCamera = .followLocation

    var body: some View {
        MapView(
            styleURL: styleURL,
            camera: $mapCamera
        )
        .locationProvider(navigationLocationProvider)
    }

Animation

As it sits now, you want the Binding for the map camera to update at the end of the animation process. This is fine so long as we can manipulate the animation or frame rate to fit the use case for navigation or another custom profile. This topic needs a lot more discussion, but I would love it if, for navigation, we could give the MapView a location provider that supplied a realistic user location value every frame, and the MapView could simply follow it, updating the binding at the end of each window of frames. Don't treat the example below as a real proposal; it's just meant to illustrate the idea as inspiration, and implementing it literally likely has many flaws.

    @State var mapCamera: MapCamera = .followLocation

    var body: some View {
        MapView(
            styleURL: styleURL,
            camera: $mapCamera
        )
        .locationProvider(navigationLocationProvider)
        .onCameraAnimation { eventUpdate in
            switch eventUpdate.status {
            case .ended:
                mapCamera = eventUpdate.mapView.camera
            default:
                break
            }
        }
        .cameraBindingUpdate(on: .ended)
        .cameraAnimation {
            // This would likely only "buffer" external mapCamera updates.
            // Updates from gestures within the map would have to ignore this.
            CameraAnimation(
                ...
                // Lots to brainstorm here, but how to control the animation/flow
                // between frequent and infrequent camera updates.
            )
        }
    }

Animation needs a lot more thought, but it's something to think about in this context.