3dcitydb / 3dcitydb-web-map

Cesium-based 3D viewer and JavaScript API for the 3D City Database
Apache License 2.0
377 stars · 125 forks

Directly load data without export #65

Closed: ahmadzainuririzaldi closed this issue 3 years ago

ahmadzainuririzaldi commented 3 years ago

I want to create an application with these features:

  1. Every user can upload a 3D file (CityJSON or GML).
  2. The 3D file uploaded in step 1 can be viewed on the map by all users.

I have read the documentation and tried the sample project, and everything works. My current steps are:

  1. Import the CityJSON manually via the Importer/Exporter tool.
  2. Export manually via the Importer/Exporter tool.
  3. Open the exported result in the web-map client.

But the steps I would like are:

  1. Import the CityJSON programmatically via my application (I can handle this by running a CLI command from my programming language).
  2. Open the 3D data in the web-map client without an export step (load it directly via an API, as with GeoServer).

Is it possible?

Thanks
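The CLI-driven import in step 1 could be scripted roughly like this. This is only a sketch: the `impexp` command name and its flags are assumptions that depend on your Importer/Exporter version, so check its CLI help for the exact syntax.

```python
import subprocess

# NOTE: "impexp" and its flags below are assumptions -- consult the
# CLI help of your Importer/Exporter version before relying on them.
def build_import_command(input_file, config_file):
    """Assemble the argv list for a batch CityJSON/CityGML import."""
    return ["impexp", "import", "--config", config_file, input_file]

def run_import(input_file, config_file):
    # Run the importer as a child process; raise if it exits non-zero.
    subprocess.run(build_import_command(input_file, config_file), check=True)
```

Keeping the command assembly in a separate function makes it easy to unit-test the argument handling without actually launching the importer.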

thomashkolbe commented 3 years ago

Currently, there is no web service or REST-based API for the 3DCityDB that would allow a web client to visualize the data directly from the database (neither the 3DCityDB Web-Map Client nor any other web client I am aware of can connect directly to PostGIS).

A solution for your intended application could be to add an "upload" button to the 3DCityDB Web-Map Client, which opens a file browser for the user to select a local file. The workflow would then be:

  1. Upload the file to your web server via an HTTP POST request.
  2. On the server, start a batch CityJSON or CityGML import, followed by a batch KML/COLLADA/glTF export that generates all files needed for the visualization.
  3. Move the resulting visualization files to a folder that your web server can serve.
  4. In the 3DCityDB Web-Map Client, automatically add a new layer referring to the location of the visualization export on your web server.

Since the city model import and the visualization export will take some time (depending on the size of the model), you might need to reload the entire configuration of the 3DCityDB Web-Map Client after a while, either manually by the user or via a process that repeatedly checks whether the visualization model has been generated and is accessible from your web server.
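The server-side part of this workflow could be planned as an ordered list of commands, sketched below. Every command name, subcommand, and flag here is an assumption (as are the paths); adapt them to the actual CLI of your Importer/Exporter version.

```python
from pathlib import Path

# Sketch of the server-side pipeline: import -> export -> publish.
# All command names and flags are assumptions -- check the CLI help
# of your Importer/Exporter version for the real syntax.
def plan_pipeline(upload_path, config_file, serve_dir):
    """Return the ordered shell commands for one uploaded file."""
    upload = Path(upload_path)
    export_dir = Path("/tmp") / f"export-{upload.stem}"   # scratch space
    target = Path(serve_dir) / upload.stem                # web-served folder
    return [
        # 1. batch import of the uploaded CityJSON/CityGML file
        ["impexp", "import", "--config", str(config_file), str(upload)],
        # 2. batch KML/COLLADA/glTF visualization export
        ["impexp", "export-vis", "--config", str(config_file),
         "--output", str(export_dir)],
        # 3. move the result to a folder the web server serves;
        #    the new web-map layer would then point at `target`
        ["mv", str(export_dir), str(target)],
    ]
```

Running these steps sequentially (e.g. with `subprocess.run(cmd, check=True)` over the list) gives a simple stop-on-failure batch job.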

If you run this setup with Docker, you could even instantiate a new Docker container for each upload process and delete the container after the visualization export has been performed.
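A per-upload container could be launched along these lines. The image name and environment variable are assumptions loosely based on the public 3DCityDB Docker images; verify them against the 3DCityDB Docker documentation.

```python
# Sketch: one throwaway 3DCityDB container per upload process.
# The image name and environment variable are assumptions -- see the
# 3DCityDB Docker documentation for the actual ones.
def docker_commands(upload_id, password):
    """Build the `docker run` and matching cleanup argv lists."""
    name = f"citydb-upload-{upload_id}"
    run_cmd = [
        "docker", "run", "-d",               # detached
        "--name", name,                       # named, so we can remove it later
        "-e", f"POSTGRES_PASSWORD={password}",
        "3dcitydb/3dcitydb-pg",               # assumed image name
    ]
    cleanup_cmd = ["docker", "rm", "-f", name]  # delete after the export
    return run_cmd, cleanup_cmd
```

Pairing each run command with its cleanup command up front makes it harder to leak containers when an import or export step fails.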

Hope this helps,
Thomas Kolbe

ahmadzainuririzaldi commented 3 years ago

Thanks for your response. OK, I see your scenario, but I have one question.

My application will hold about one hundred thousand buildings. Is your scenario still reliable at that scale?

thomashkolbe commented 3 years ago

Yes, this should be possible. Cloud hosting platforms that let people upload and share large files often use HTTP POST for the upload as well. The import into the 3DCityDB can take some time, depending on the file sizes, the complexity of the dataset, and your server hardware. To give you an impression: importing the entire Tokyo dataset (2 million buildings from a compressed ZIP archive of 9.5 GB, including textures) takes between 30 minutes and 2 hours on a modern desktop PC with an SSD and 16 GB RAM. Generating the entire visualization export takes another 8 hours.

Example settings and Tokyo datasets: https://wiki.tum.de/display/gisproject/Semantic+3D+City+Model+of+Tokyo

But if you provide this interactive import functionality, you will probably not be asking users for such large datasets anyway.

ahmadzainuririzaldi commented 3 years ago

Thanks, thomashkolbe.

Your explanation is about one large file. What about thousands of small files?

The scenario for my application is: each user can upload one or two buildings in a single file, and then maybe next week they upload two more buildings. My questions are:

  1. Will my application end up with thousands of layers? Or
  2. Must I delete the generated tiles and regenerate them whenever a user uploads a new building?

Thanks in advance

thomashkolbe commented 3 years ago

In this case I would recommend running one instance of the 3DCityDB that accumulates all uploaded models over time, with a nightly visualization export that regenerates all tiles.

If somebody uploads a new (small) dataset, launch another 3DCityDB instance in a Docker container as described above. After importing the few buildings and generating a visualization export (which will be fast), you can generate a 3DCityDB web-client project that contains two layers: the first layer links to the visualization model generated from the first 3DCityDB instance, and the second layer links to the visualization export of the newly added buildings.

If the user (or an administrator) approves the uploaded models, they can be merged into the first instance and become part of the growing city model. After merging the new models into the first 3DCityDB instance, the Docker container with the temporary second 3DCityDB can be deleted again.
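The two-layer project described above might be assembled like this. The dictionary keys are assumptions, not the web-map client's exact configuration schema; they only illustrate combining the nightly export with a fresh upload export.

```python
# Sketch of the two-layer web-client project described above.
# The keys below are assumptions, not the 3DCityDB web-map client's
# actual layer schema -- check its documentation for the real options.
def build_project_layers(base_tiles_url, new_tiles_url):
    """Combine the nightly city-model export with a fresh upload export."""
    return [
        {"name": "city-model",      # layer 1: accumulated model, rebuilt nightly
         "url": base_tiles_url,
         "active": True},
        {"name": "pending-upload",  # layer 2: newly uploaded buildings
         "url": new_tiles_url,
         "active": True},
    ]
```

Once the pending upload is approved and merged into the main database, the second layer can simply be dropped from the project and its tiles deleted along with the temporary container.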

ahmadzainuririzaldi commented 3 years ago

Thanks for your answer, sir.