Open DerKarlos opened 2 years ago
Yes, I would very much like to see such an API.
The only issue we have to navigate around is that input depends on a "windowing library". That is for example right now winit. In the future we might replace winit on iOS or Android because it is not sufficient right now.
That's why we already have the `maplibre-winit` crate, which doesn't need to be used on Android, for example.
I would say that traits belonging to input handling should probably live in the `maplibre` crate. Then for each windowing library we create a new crate like `maplibre-winit` or `maplibre-android`.
That is one part of the story. The other is making the map do things; that could be a different kind of API.
I envision an event-based API, in which each processed input (keypresses, but also window resizes) can be sent to the `maplibre` crate. Based on the configuration of `maplibre`, or on user-supplied input handlers, the map can be moved.
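A minimal sketch of such an event-based API, assuming hypothetical type and field names (this is not the actual maplibre-rs API): processed inputs become enum values that are sent to the map, which reacts based on its own logic.

```rust
// Illustrative event type for "processed input"; names are assumptions.
enum InputEvent {
    KeyPress(char),
    WindowResize { width: u32, height: u32 },
}

// Stand-in for the map state owned by the `maplibre` crate.
struct Map {
    zoom: f64,
    viewport: (u32, u32),
}

impl Map {
    // Each processed input is sent to the map; the map decides how to react.
    fn handle_event(&mut self, event: InputEvent) {
        match event {
            InputEvent::KeyPress('+') => self.zoom += 1.0,
            InputEvent::KeyPress('-') => self.zoom -= 1.0,
            InputEvent::KeyPress(_) => {}
            InputEvent::WindowResize { width, height } => self.viewport = (width, height),
        }
    }
}

fn main() {
    let mut map = Map { zoom: 10.0, viewport: (800, 600) };
    map.handle_event(InputEvent::KeyPress('+'));
    map.handle_event(InputEvent::WindowResize { width: 1024, height: 768 });
    assert_eq!(map.zoom, 11.0);
    assert_eq!(map.viewport, (1024, 768));
}
```

The point of the sketch is that the windowing crate only has to translate its native events into this enum, keeping `maplibre` independent of winit.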
As you can see, there are multiple layers which must be thought of. For example, users of the renderer may not care how input is processed, only how it is handled. On the other hand, there might be users who want to port maplibre-rs to another platform; those users would be interested in customizing the input processing.
So winit is only for windows/desktop, not generated according to the target like wgpu? As a newbie I assumed winit should do this, or get improved if we have problems. I see winit as called by the app example, so it will be target-dependent anyway.
This issue was meant for custom controls, not for winit. The event-based API is discussed in issue #91.
The controller API should be `fn window_input(&mut self, event: &WindowEvent)`.
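Built around that signature, a pluggable controller could look like the sketch below. Here `WindowEvent` is a stand-in enum, not winit's type, and the controller struct and its fields are illustrative assumptions.

```rust
// Stand-in for a window event type (not winit's `WindowEvent`).
enum WindowEvent {
    CursorMoved { dx: f64, dy: f64 },
    MouseWheel { delta: f64 },
}

// The proposed controller API as a trait, so users can swap in their own.
trait MapController {
    fn window_input(&mut self, event: &WindowEvent);
}

// A hypothetical default pan/zoom controller.
struct PanZoomController {
    pan: (f64, f64),
    zoom: f64,
}

impl MapController for PanZoomController {
    fn window_input(&mut self, event: &WindowEvent) {
        match event {
            WindowEvent::CursorMoved { dx, dy } => {
                self.pan.0 += dx;
                self.pan.1 += dy;
            }
            WindowEvent::MouseWheel { delta } => self.zoom += delta,
        }
    }
}

fn main() {
    let mut ctrl = PanZoomController { pan: (0.0, 0.0), zoom: 1.0 };
    ctrl.window_input(&WindowEvent::CursorMoved { dx: 3.0, dy: -1.0 });
    ctrl.window_input(&WindowEvent::MouseWheel { delta: 0.5 });
    assert_eq!(ctrl.pan, (3.0, -1.0));
    assert_eq!(ctrl.zoom, 1.5);
}
```

Making `window_input` a trait method rather than a free function is what would let users replace the control reactions without forking.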
I prefer refurbishing instead of detailed plans. We may start with splitting into two crates and do the OS optimisations afterwards.
> traits belonging to input handling should probably live in maplibre crate.
Hm? That means the user cannot change the control reactions, as I intended them to.
In maplibre-JS the input events generate deltas and new set values for the map. The map is an extended camera! The set values can be jumped to or interpolated by the map. We may do the same: the interpolation would be in the map crate and the input reactions in the control crate.
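The jump/interpolate distinction could be sketched as below, assuming hypothetical method names (loosely mirroring maplibre-JS's `jumpTo`/`easeTo`) and an arbitrary smoothing constant:

```rust
// Hypothetical camera with "set values" the map interpolates toward.
struct Camera {
    zoom: f64,
    target_zoom: f64,
}

impl Camera {
    /// Jump immediately to a new set value.
    fn jump_to(&mut self, zoom: f64) {
        self.zoom = zoom;
        self.target_zoom = zoom;
    }

    /// Request an interpolated transition toward a new set value.
    fn ease_to(&mut self, zoom: f64) {
        self.target_zoom = zoom;
    }

    /// Advance the interpolation once per frame (exponential smoothing;
    /// the factor 5.0 is an illustrative choice).
    fn update(&mut self, dt: f64) {
        let t = (dt * 5.0).min(1.0);
        self.zoom += (self.target_zoom - self.zoom) * t;
    }
}

fn main() {
    let mut cam = Camera { zoom: 1.0, target_zoom: 1.0 };
    cam.jump_to(5.0);
    assert_eq!(cam.zoom, 5.0);
    cam.ease_to(10.0);
    for _ in 0..100 {
        cam.update(1.0 / 60.0);
    }
    // After enough frames the camera converges on the set value.
    assert!((cam.zoom - 10.0).abs() < 0.01);
}
```

Under this split, the control crate would only ever call `jump_to`/`ease_to`, while `update` lives in the map crate's render loop.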
> So winit is only for windows/desktop
Winit is only for complete apps that run their entire app lifecycle in it (in Rust). I presume maplibre-rs also targets running as a single embedded view within an externally defined Android app?
May I write an extra issue for winit? Or someone else could; I don't know much about the problem :-(
@DerKarlos What do you want to make an issue about, and what do you expect as response/outcome?
I made this issue about controllers, but it was used for winit. There should be a new issue for winit.
> So winit is only for windows/desktop
It also supports mobile, but not in the way we require it. winit currently only supports full-screen apps. I'm not sure if changing/extending winit is desired, or if we should implement the parts we need ourselves. winit is already quite complex, and probably has a different use case compared to rendering map views within other apps.
I think we have a few too many issues open now about how we want to handle input. I would propose merging all of these discussions into #91.
We also need to think first about how to support the low-level features of input handling; then we can go on to think about defining "different controls".
For example, right now pinch-to-zoom is not working on web, Android, and iOS.
Also, it is not possible to update the view, e.g. from click listeners in browsers. So there is a lot of basic work required.
The core feature of maplibre, rendering the map/scene, should be in a Rust crate like "osm-ml-render". The "default" control should be in a separate Rust crate like "osm-ml-def-controls". A demo application would show the use of both crates.
🤔 Expected Behavior
A user of maplibre-rs could replace the control with a derived, user-defined control.
😯 Current Behavior
The whole repository has to be forked.
💁 Possible Solution
We offer a core Rust crate for rendering OSM, independent of any input UI, and a separate control crate that a user could fork and adapt as wished. The window input events would go into the control instance, and the camera would be moved to change the view of the map.
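A minimal sketch of this split, assuming the crate names from this issue and otherwise illustrative types: the render crate owns the camera and exposes a control trait, and any user-defined control can replace the default without forking the repository.

```rust
// Would live in the rendering crate (e.g. "osm-ml-render").
struct Camera {
    x: f64,
    y: f64,
}

// Trait exposed by the rendering crate so controls are pluggable.
trait Control {
    fn window_input(&mut self, event: &str, camera: &mut Camera);
}

// Default control, shipped in the separate controls crate
// (e.g. "osm-ml-def-controls").
struct DefaultControl;
impl Control for DefaultControl {
    fn window_input(&mut self, event: &str, camera: &mut Camera) {
        match event {
            "pan_left" => camera.x -= 1.0,
            "pan_right" => camera.x += 1.0,
            _ => {}
        }
    }
}

// A user-defined control (here with inverted axes) replacing the default.
struct InvertedControl;
impl Control for InvertedControl {
    fn window_input(&mut self, event: &str, camera: &mut Camera) {
        match event {
            "pan_left" => camera.x += 1.0,
            "pan_right" => camera.x -= 1.0,
            _ => {}
        }
    }
}

fn main() {
    let mut camera = Camera { x: 0.0, y: 0.0 };
    // The application chooses which control to plug in at runtime.
    let mut control: Box<dyn Control> = Box::new(InvertedControl);
    control.window_input("pan_left", &mut camera);
    assert_eq!(camera.x, 1.0);
    assert_eq!(camera.y, 0.0);
}
```

The event type is a plain string here only to keep the sketch short; in practice it would be the processed input event type discussed above.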
🔦 Context
There are different possible uses, and correspondingly different controls, for OSM map and 3D rendering. Even gamification with first- or third-person controls is possible.
💻 Examples
An airplane simulator would have quite a different control but may use the same OSM rendering.