blairmacintyre opened this issue 7 years ago
Hey @blairmacintyre,

Unless you use scaling factors like `--buildingsExtrusionScale` or `--terrainExtrusionScale`, the scale of the models is correct relative to the rest of the geometry within the tile. So, say you have a building A of height 10 meters next to a building B of 30 meters: building A will be one third of the height of building B in the dimensions they are exported to.

But you are right, everything is squashed to fit within the bounds of [-1, 1] on the x- and y-axes, and [0, maxHeight] on the z-axis, with `maxHeight` scaled proportionally to the tile size; you can think of these as normalized coordinates.

To answer your question: yes, it would be possible to output geometry positions where the relative distance between vertices is expressed in meters. I think it's a nice idea.
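To make that concrete, the mapping is conceptually something like this (a sketch of the idea only, not the project's actual code):

```js
// Conceptual sketch: positions within the tile are remapped so the tile spans
// [-1, 1] on x and y; heights get the same scale factor, so proportions
// between features are preserved (10 m vs. 30 m buildings stay at 1:3).
function normalizeToTile(xMeters, tileWidthMeters) {
  // shift so the tile center sits at 0, then scale the half-width to 1
  return (xMeters / tileWidthMeters) * 2 - 1;
}
```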
I poked around in the code, and I can see where scaling occurs, but that's just scaling as you describe above. I assume the actual mapping from map tile to meters is based on latitude, unfortunately; there are lots of places that define this on the web, but I'm not immersed enough in this to whip the change up myself. (Mostly, I've been cheating by using Cesium's math libraries.)
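For reference, the standard Web Mercator relationship is roughly this (my own naming, worth double-checking against one of those references):

```js
// Width of a Web Mercator tile in meters at a given latitude and zoom:
// the equatorial circumference, shrunk by cos(latitude), halved per zoom level.
const EARTH_CIRCUMFERENCE = 40075016.686; // meters at the equator

function tileSizeInMeters(latitudeDeg, zoom) {
  const latRad = (latitudeDeg * Math.PI) / 180;
  return (EARTH_CIRCUMFERENCE * Math.cos(latRad)) / Math.pow(2, zoom);
}

// e.g. tileSizeInMeters(45, 16) ≈ 432 m per tile side
```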
I'd also love an option for using the elevation heights correctly (e.g., instead of normalizing the elevation, just use the meters above sea level), so that I have the correct altitude.
My interest in this is that I want to create virtual reality and augmented reality experiences that are based in the real world, with a simple model of the real world used as a backdrop (rendered in VR, perhaps rendered as a ghost or just for occlusion in AR). So these models are a great first step. One of the folks on the tangram gitter pointed me at this; ideally, I want to be able to generate models like this on the fly in the future!
btw, I love some of the other work you're doing on the tangram renderer and so on, the project is great. In our Argon AR web project (argonjs.io), we have a notion of "realities" that present a view of reality which can be used instead of "the world around you." For example, we can have one for panoramic geocoded images (hopefully 360 video soon), and one for using Streetview (so you can walk along a street anywhere to experience an AR application as if you were there). I came upon this stuff because I want to create a "3D virtual version of the world" that will let people go anywhere on the earth and see "some approximation" of the real world there, from a first-person, ground-level perspective.
Our realities are implemented as web pages, so the tangram stuff, or the models here, or other sites that use mapzen like vizicities, are "almost there"! But none of the real time ones do what you're experimenting with here, I think: generate a mesh from elevation data, and then put the buildings and other features on it.
> I'd also love an option for using the elevation heights correctly (e.g., instead of normalizing the elevation, just use the meters above sea level), so that I have the correct altitude.
That sounds fair to me. I can surely add an option like `--meters` to save all of the model vertices in meters, for both buildings and elevation.
> I want to be able to generate models like this on the fly in the future!
It could be possible to port this with emscripten (with the potential drawback of library size). Or, depending on the needs of your client application, you could stand up a server hosting this project; a simple forward Node server that generates models on the fly could be enough for a demo or for experimentation/research.
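Something along these lines (a sketch only; the `obj-export` command name and its flags are placeholders for however the exporter is actually invoked):

```js
// Minimal forward server: each request shells out to the exporter CLI and
// streams the resulting OBJ back. Placeholder command/flags; no caching.
const http = require('http');
const { execFile } = require('child_process');
const fs = require('fs');

http.createServer((req, res) => {
  const params = new URL(req.url, 'http://localhost').searchParams;
  const lat = params.get('lat');
  const lon = params.get('lon');
  const zoom = params.get('zoom') || '16';

  execFile('obj-export', // placeholder for the real CLI entry point
    ['--lat', lat, '--lon', lon, '--zoom', zoom, '--out', '/tmp/tile.obj'],
    (err) => {
      if (err) {
        res.writeHead(500);
        return res.end('export failed');
      }
      res.writeHead(200, { 'Content-Type': 'text/plain' });
      fs.createReadStream('/tmp/tile.obj').pipe(res);
    });
}).listen(3000, () => console.log('model server listening on :3000'));
```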
> btw, I love some of the other work you're doing on the tangram renderer and so on, the project is great.
Thanks :)
> Our realities are implemented as web pages, so the tangram stuff, or the models here, or other sites that use mapzen like vizicities, are "almost there"! But none of the real time ones do what you're experimenting with here, I think: generate a mesh from elevation data, and then put the buildings and other features on it.
This is still a little experimental for some of the features that are exposed so far, but I can see how useful that could be for your project, especially if you need to think of the models in terms of meters to properly experience them in AR. I'll look into this option next when I get a chance!
Thanks for the ideas. I hope you can add the meters soon! 👍
The simple server might be a good start, as you say, for simple experimentation. Although, as I think about how I'm going to use this, I realize I probably need to enhance things to be able to relate the meshes to meaningful features from the OpenStreetMap data (e.g., know which building is which, which mesh is the basemap, which meshes are roads and which roads they are, etc.).
For example, I'd need to know which mesh is the ground (so that I can use it as the "ground" in VR, or snap the user to it in AR), which meshes are buildings (so I can render them in VR in a different style than the ground), and which are things like roads.
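If the exporter could tag meshes with names (an assumption on my part; I don't think it does this today, and the naming convention below is hypothetical), the client side would be straightforward, e.g. with three.js:

```js
// Sketch: sort loaded meshes by role, assuming a hypothetical naming
// convention ("ground", "building_*", "road_*") in the exported OBJ.
import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader.js';

new OBJLoader().load('tile.obj', (root) => {
  const ground = [], buildings = [], roads = [];
  root.traverse((node) => {
    if (!node.isMesh) return;
    if (node.name === 'ground') ground.push(node);
    else if (node.name.startsWith('building')) buildings.push(node);
    else if (node.name.startsWith('road')) roads.push(node);
  });
  // e.g. style `buildings` differently in VR, use `ground` for snapping in AR
});
```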
Finally, I assume (naively) that since the mesh is based on a top-down height map, it should be possible to take imagery (e.g., of the sort OpenStreetMap or Mapzen provides) and use it as a texture. It would help to understand how to do this (e.g., are the UV coordinates set up so that, if I had tile imagery at the right zoom level, I could use it as a texture?). This question is probably a result of my unfamiliarity with the data.
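Assuming the UVs do span [0, 1] across the tile (which I'd need to verify against the exporter's output), draping a raster tile would look something like this in three.js:

```js
import * as THREE from 'three';

// Example z/x/y raster tile URL; any imagery tiled on the same scheme and
// zoom as the exported model should line up if the UVs cover [0, 1].
const tileTexture = new THREE.TextureLoader().load(
  'https://tile.openstreetmap.org/16/19294/24640.png'
);

// `groundMesh` stands in for the terrain mesh loaded from the exported OBJ.
function drapeImagery(groundMesh) {
  groundMesh.material = new THREE.MeshBasicMaterial({ map: tileTexture });
}
```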
I really do appreciate you taking the time to talk to me!
I second this request (e.g., having the model units be in meters). I want to use the models to give a backdrop to architectural renders, but without proper scaling, that's difficult to do. Right now, it appears to output the model with a fixed total size of 1x1x1, I think?
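In the meantime, a possible workaround (assuming the normalized [-1, 1] output described above, and borrowing the tile-width formula from earlier in the thread) would be to rescale the model after loading:

```js
// Convert one normalized unit (half the tile's [-1, 1] span) into meters,
// then apply it as a uniform scale to the loaded model.
const EARTH_CIRCUMFERENCE = 40075016.686; // meters at the equator

function metersPerNormalizedUnit(latitudeDeg, zoom) {
  const latRad = (latitudeDeg * Math.PI) / 180;
  const tileWidth = (EARTH_CIRCUMFERENCE * Math.cos(latRad)) / Math.pow(2, zoom);
  return tileWidth / 2;
}

// e.g. with three.js: model.scale.setScalar(metersPerNormalizedUnit(45, 16));
```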