maplibre / maplibre-gl-js

MapLibre GL JS - Interactive vector tile maps in the browser
https://maplibre.org/maplibre-gl-js/docs/

How difficult would it be to implement AR/XR #3395

Open DerKorb opened 10 months ago

DerKorb commented 10 months ago

I am trying to marry the potree viewer for point clouds with maplibre-gl. The potree viewer does render on XR devices, and I am trying to figure out how realistic it would be to do XR here as well. What's necessary for XR is mostly:

Would love to hear some insight from someone in the know about the rendering pipeline here.

HarelM commented 10 months ago

I believe you can create two maps and control the camera of each so that they are in sync with the relevant offset, but I'm sure you already knew that, so I'm not sure I understand the question...
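The two-synced-maps idea could be sketched roughly like this. The `syncCamera` helper is hypothetical; the commented wiring assumes two maplibre-gl `Map` instances (`leftMap`, `rightMap`) and uses only documented methods (`on('move')`, `jumpTo`, `getCenter`, `getZoom`, `getPitch`, `getBearing`) — untested as an actual stereo setup.

```javascript
// Hypothetical sketch: keep two maps in sync with a small per-eye offset.
// Camera state is modeled as a plain object so the logic stands alone.
function syncCamera(source, offsetLngDeg) {
  // Copy the camera state, shifting the center by a small longitude offset.
  return {
    center: { lng: source.center.lng + offsetLngDeg, lat: source.center.lat },
    zoom: source.zoom,
    pitch: source.pitch,
    bearing: source.bearing,
  };
}

// Possible wiring with two maplibre-gl Map instances (assumption, untested);
// a real setup would also need a re-entrancy guard so the two maps don't
// feed each other's move events in a loop:
// leftMap.on('move', () => rightMap.jumpTo(syncCamera({
//   center: leftMap.getCenter(),
//   zoom: leftMap.getZoom(),
//   pitch: leftMap.getPitch(),
//   bearing: leftMap.getBearing(),
// }, eyeOffsetDeg)));
```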

DerKorb commented 10 months ago

I believe this is an approach that could work, but it's not how you usually do it. For one, it would mean loading and storing all your data in memory twice instead of using the same instances. Also, you need to use the WebXR API to make this work, and I am not sure if it supports being handed two instances. Interesting idea for sure; I will look into it, maybe it could be an easy hack for now.

wipfli commented 10 months ago

I guess you would use MapLibre Native for this, no?

DerKorb commented 10 months ago

No, my plan is to use the WebXR API, and that's just HTML and JavaScript/TypeScript. My software should work on the Quest and in the browser.

DerKorb commented 10 months ago

This is how potree viewer does it: https://github.com/potree/potree/blob/develop/src/viewer/viewer.js#L1921

DerKorb commented 10 months ago

I think what I need is the possibility to render(cameraPosition, targetRect). So in principle my question is: how hard would it be to achieve that?

HarelM commented 10 months ago

Target rect = the map's canvas. The camera position can be calculated using camera from/to, or some similar method I can't remember ATM. So in theory, it should be "easy".

DerKorb commented 10 months ago

I am not yet sure how it works exactly, but I am quite certain I need both pictures in the same canvas. Maybe that can be done by copying both canvases into a combined one.
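A minimal sketch of that compositing idea, assuming it runs in a browser: a small pure helper (`compositeViewports`, hypothetical) computes the side-by-side layout, and the commented part then uses the standard 2D-canvas `drawImage` to copy each map canvas into the combined one.

```javascript
// Hypothetical helper: layout for a side-by-side stereo canvas,
// left eye at x = 0, right eye at x = eyeWidth.
function compositeViewports(eyeWidth, eyeHeight) {
  return {
    width: eyeWidth * 2,
    height: eyeHeight,
    left: { x: 0, y: 0, w: eyeWidth, h: eyeHeight },
    right: { x: eyeWidth, y: 0, w: eyeWidth, h: eyeHeight },
  };
}

// Browser-only wiring (not runnable outside the DOM):
// const layout = compositeViewports(leftCanvas.width, leftCanvas.height);
// const combined = document.createElement('canvas');
// combined.width = layout.width;
// combined.height = layout.height;
// const ctx = combined.getContext('2d');
// ctx.drawImage(leftCanvas, layout.left.x, layout.left.y);
// ctx.drawImage(rightCanvas, layout.right.x, layout.right.y);
```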

DerKorb commented 10 months ago

So I looked it up, this is how it works:

// Assumes `xrRefSpace`, `gl`, and `scene` were set up when the XR session
// started; `t` is the frame timestamp in milliseconds.
onXRFrame(t, frame) {
  const session = frame.session;
  const pose = frame.getViewerPose(xrRefSpace);
  if (!pose) return; // tracking lost this frame

  const glLayer = session.renderState.baseLayer;
  // Bind the WebGL layer's framebuffer, which is where any content
  // to be displayed on the XRDevice must be rendered.
  gl.bindFramebuffer(gl.FRAMEBUFFER, glLayer.framebuffer);

  // Clear the framebuffer.
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  // Loop through each of the views reported by the frame and draw them
  // into the corresponding viewport (one per eye on a headset).
  for (const view of pose.views) {
    const viewport = glLayer.getViewport(view);
    gl.viewport(viewport.x, viewport.y, viewport.width, viewport.height);

    scene.draw(view.projectionMatrix, view.transform);
  }

  // Per-frame scene teardown. Nothing WebXR-specific here.
  scene.endFrame();
}

From what I have seen so far, it looks to me like this is something I would have to manage in the painter class?

DerKorb commented 10 months ago

So I managed to add everything that is relevant, but for some reason nothing is rendered (it feels like I might be rendering to the wrong framebuffer). I also noticed that rendering takes about 50 ms (simple-map example) on my PC, which makes adding VR feel pointless anyway, because 20 fps and VR don't mix well.
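For what it's worth, that frame-time figure can be double-checked with a small rolling-average helper like the hypothetical `makeFrameTimer` below; in the browser it would be fed timestamps from `requestAnimationFrame` or MapLibre's `render` event.

```javascript
// Hypothetical helper: rolling average of frame durations in milliseconds.
function makeFrameTimer(windowSize = 60) {
  const samples = [];
  let last = null;
  return function tick(nowMs) {
    if (last !== null) {
      samples.push(nowMs - last);
      if (samples.length > windowSize) samples.shift();
    }
    last = nowMs;
    return samples.length
      ? samples.reduce((a, b) => a + b, 0) / samples.length
      : 0;
  };
}

// Browser wiring (assumption, not runnable here):
// const tick = makeFrameTimer();
// map.on('render', () => console.log('avg frame ms:', tick(performance.now())));
```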

DerKorb commented 10 months ago

I think I got it mostly working; now I only need to manage translating the projection matrix I get from VR (typically it is a camera tracking the HMD, where the position is in meters relative to real space). So my matrix might represent the camera being at (2, 1.8, 2) looking towards (0, 1, 0) or so. MapLibre seems to have quite a different approach. I take it I cannot assume the map to be an object with some size at a certain position?

HarelM commented 10 months ago

Can you clarify the question?

DerKorb commented 10 months ago

So MapLibre works in a way where the map extends the camera, and the camera has a transform. The transform is controlled by center, height, pitch, and I think it was bank angle. My VR headset gives me a transform for both cameras. I tried to set the camera's projection matrix directly, but then I could not find the map anywhere. I think I could take the position of the headset and the position where I want to have the map in the room and calculate the center and angles from those, but that feels quite complicated. I want to find a way to tell the renderer: render the map as if it was on a virtual table at (0/1/0), and the camera is currently at (1/2/1) looking in a direction specified by a quaternion or whatever.

HarelM commented 10 months ago

I don't know why this isn't documented properly, but I think you might be able to use: calculateCameraOptionsFromTo

Or maybe not, IDK...
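One rough way to feed calculateCameraOptionsFromTo from an XR pose, assuming the headset position is given in meters around a chosen map anchor: convert the local offsets to LngLat first. `localMetersToLngLat` is a hypothetical helper using the flat-earth approximation of ~111320 m per degree of latitude; the commented MapLibre call is an untested assumption.

```javascript
// Hypothetical helper: convert local offsets in meters (east, north) around
// an anchor point into LngLat, using the flat-earth approximation of
// ~111320 m per degree of latitude.
function localMetersToLngLat(origin, eastMeters, northMeters) {
  const metersPerDegLat = 111320;
  const metersPerDegLng = 111320 * Math.cos((origin.lat * Math.PI) / 180);
  return {
    lng: origin.lng + eastMeters / metersPerDegLng,
    lat: origin.lat + northMeters / metersPerDegLat,
  };
}

// Possible MapLibre wiring (assumption, untested) with a headset pose in
// meters relative to a chosen anchor `tableOrigin` (WebXR is y-up, so -z
// maps to north and y to altitude):
// const from = localMetersToLngLat(tableOrigin, head.x, -head.z);
// const to = localMetersToLngLat(tableOrigin, look.x, -look.z);
// map.jumpTo(map.calculateCameraOptionsFromTo(from, head.y, to, look.y));
```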

DerKorb commented 10 months ago

That might be a workaround, but I will have to see if it really works, since it is LngLat-based as well. But maybe I can just interpret the headset/eye coordinates as if they were LngLat and it will work.

DerKorb commented 10 months ago

For example, how could I look away from the map using that?

DerKorb commented 10 months ago

Is looking away from the map even possible at all?

HarelM commented 10 months ago

What is the definition of looking away?

DerKorb commented 10 months ago

Imagine standing in a room where the map is on the floor. Now look at the ceiling. Maybe it's easier to describe the core problem this way: I would need to have a first-person-shooter-like view of the map, where you can walk around and look anywhere you want.

HarelM commented 10 months ago

I guess you would need to define where the canvas is in the room and, based on that, decide if and how to render to it... I don't think this is specific to this library, but I might be missing something obvious. If I needed to write something like that in the browser, I would first define where this canvas is, along with its dimensions and angle, before rendering the map to it...

DerKorb commented 10 months ago

Rendering the canvas onto a plane would be trivial. I want a DEM rendered in 3D, so the map needs to be a 3D object in virtual space.

HarelM commented 10 months ago

I don't think you can "zoom out of the map" currently; you'll probably need to render the scene and cut out only the relevant part, I guess. Or maybe define a canvas that's a bit bigger than the scene and cut it, IDK...

DerKorb commented 10 months ago

In VR you can move your camera with 6DoF, but it looks to me like MapLibre only knows 5DoF. In principle my hope was to just write the projMatrix myself, but it feels like that would break a lot of MapLibre internals; I could not get it to work.
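For reference, the 6DoF view matrix a WebXR pose implies can be built by hand from position plus orientation quaternion. The sketch below (hypothetical helper, column-major WebGL convention) shows what the renderer would need to consume to support full 6DoF, which, as noted above, MapLibre does not seem to expose.

```javascript
// Hypothetical: build a column-major view matrix from a 6DoF pose
// (position in meters + orientation quaternion), as WebXR reports it.
// The view matrix is the inverse of the rigid pose transform [R|t]:
// rotation becomes transpose(R), translation becomes -transpose(R)*t.
function viewMatrixFromPose(pos, q) {
  const { x, y, z, w } = q;
  // Rotation matrix R from the quaternion, rows written out (row-major).
  const r = [
    1 - 2 * (y * y + z * z), 2 * (x * y - z * w),     2 * (x * z + y * w),
    2 * (x * y + z * w),     1 - 2 * (x * x + z * z), 2 * (y * z - x * w),
    2 * (x * z - y * w),     2 * (y * z + x * w),     1 - 2 * (x * x + y * y),
  ];
  // -transpose(R) * t: each component dots a column of R with the position.
  const tx = -(r[0] * pos.x + r[3] * pos.y + r[6] * pos.z);
  const ty = -(r[1] * pos.x + r[4] * pos.y + r[7] * pos.z);
  const tz = -(r[2] * pos.x + r[5] * pos.y + r[8] * pos.z);
  // Column-major 4x4 in the WebGL convention.
  return [
    r[0], r[1], r[2], 0,
    r[3], r[4], r[5], 0,
    r[6], r[7], r[8], 0,
    tx,   ty,   tz,   1,
  ];
}
```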