cs-utulsa / VRSandbox

Sandbox for testing VR
MIT License

Test Scanning Options #7

Closed ProfessorAkram closed 1 year ago

Ku-Bri commented 1 year ago

PROGRESS

Looking at viability of several promising methods of creating floor plans from scans and pursuing automated 2D floorplan to 3D model system.

Got the FloorplanToBlender program running and currently trying to refine the output to be usable.

Looking into ways to convert single mesh into room with objects automatically (AI detection).
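Tools like FloorplanToBlender start by segmenting wall pixels in the 2D plan image before extruding them into 3D. As a rough illustration of that segmentation step (not the tool's actual code — the tiny `plan` array below is a made-up stand-in for a real floor-plan image), connected-component labeling is enough to separate distinct wall structures:

```python
import numpy as np
from scipy import ndimage

# Hypothetical 8x8 binary floor plan: 0 = empty space, 1 = wall pixel
plan = np.zeros((8, 8), dtype=np.uint8)
plan[0, :] = 1        # top outer wall
plan[7, :] = 1        # bottom outer wall
plan[:, 0] = 1        # left outer wall
plan[:, 7] = 1        # right outer wall
plan[4, 2:6] = 1      # interior partition, not touching the outer walls

# Label 4-connected wall regions; each label is one wall structure
labels, n = ndimage.label(plan)
# n == 2: the connected outer-wall loop plus the interior partition
```

Real tools add door/window detection on top of this, which is exactly where the output quality problems noted below show up.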

SLOWDOWNS

The FloorplanToBlender program is not detecting windows and doors correctly, and there are oddities in the walls.

COMPLETED

After modifying floor plans obtained from scans to best suit the FloorplanToBlender automation, the results are not of high enough quality for our purposes. It might be possible to modify the program to produce acceptable results, but not without time-intensive research, development, and testing, which is outside the scope of my immediate goals.

RESOURCES

FloorplanToBlender from github: https://github.com/grebtsew/FloorplanToBlender3d/blob/master/README.md

PROGRESS IMAGES


Ku-Bri commented 1 year ago

PROGRESS

Working on refining the Matterport scanning process to produce better mesh results, in the hope that the more clearly defined the items in the room are, the easier it will be to automate separating those items into distinct objects.

Still trying to identify viable options for segmenting the items into separate objects from the single mesh we get from point clouds.
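One crude first pass at this segmentation problem is separating floor points from above-floor points by height before attempting object clustering. This is a simplified sketch under assumed units (metres, z-up), not our actual pipeline — real tools typically fit the floor plane with RANSAC instead of taking the minimum z:

```python
import numpy as np

def split_floor_points(points, floor_tol=0.05):
    """Split an Nx3 point cloud into floor vs. above-floor points,
    treating everything within floor_tol of the lowest z as floor."""
    z = points[:, 2]
    floor_z = z.min()
    mask = z <= floor_z + floor_tol
    return points[mask], points[~mask]

# Toy cloud: 100 floor points near z=0 and 50 "chair" points near z=0.5
rng = np.random.default_rng(0)
floor = np.column_stack([rng.random((100, 2)), rng.random(100) * 0.02])
chair = np.column_stack([rng.random((50, 2)), 0.5 + rng.random(50) * 0.02])
cloud = np.vstack([floor, chair])

floor_pts, object_pts = split_floor_points(cloud)
```

The remaining above-floor points would then be clustered (e.g. by proximity) into candidate objects, which is the hard part still under investigation.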

Looking at options for buying or building a mount that will work for spaces requiring lower scans, or scans up against a wall, to capture the detail needed to separate objects from the wall.

Going to use an iPad for a photogrammetry comparison of room 2055, using (for the most part) the same tripod locations that were used in the Matterport scans.

SLOWDOWNS

Known issue: the tripod's legs limit how close to walls and other objects the camera can be placed, and when the tripod is set as low as possible, the legs are visible in the scans. The lowest scan possible with the current tripod puts the camera base at 14 1/8".

COMPLETED

Took 84 scans of Rayzor Hall 2055 with the Matterport. Scans 1-24 were taken at a camera-base height of 63 5/8", scans 25-50 at 35 3/8", and scans 51-84 at 14 1/8". The scan is still processing. Immediately known issues: the tripod limits how close to walls we can scan, as well as how low (the minimum height is 14 1/8"), and at the lowest height the tripod legs are visible in the scan.

RESOURCES

Some links I used to get started with the Matterport:
https://matterport.com/matterport-academy
https://www.youtube.com/watch?v=VTvFVUZhm7Y
https://www.wegetaroundnetwork.com/topic/5495/10-scanning-tips-for-new-matterport-users/

Tripod potentials: https://www.amazon.com/Legged-Thing-stabilizer-Monopods-attachement/dp/B07RX18KCP/

https://www.amazon.com/Celestron-Astronomy-Telescope-Compatible-Telescopes/dp/B0B7CPR7VT/

For building one: https://www.youtube.com/watch?v=ZsIhGy_Helk

PROGRESS IMAGES

18dmontgomery commented 1 year ago

PROGRESS

During my research into Matterport scanning, I explored the various add-ons and features it offers. One of the first options I looked into was MatterPak, which provided a .obj file containing mesh objects. However, we encountered a challenge as there was no effective way to differentiate one object from another within the file. Consequently, this approach did not prove useful for our purposes.

Next, I delved into e57 files, which offered a more promising outcome. These files contained a significant amount of data, although sorting through it required considerable effort. While it provided a step forward, we continued to seek a solution that would better suit our needs.

Ultimately, I pursued the option of outsourcing BIM (Building Information Modeling) files to Matterport. This avenue yielded favorable results, providing us with a 3D representation of the space. However, it is important to note that the BIM files did not include textures.

SLOWDOWNS

One of the significant considerations during this process is the time required for processing the BIM files. Typically, it takes approximately 2-4 business days to complete this task. It is essential to account for this timeline when planning projects involving BIM file conversion.

Moreover, it is crucial to consider the cost associated with outsourcing BIM files to Matterport. I reached out directly to Matterport and received the following price breakdown for furniture and architecture BIM files:

These costs provide a perspective on the financial implications associated with generating BIM files through outsourcing.

COMPLETED

As part of my work, I successfully generated two BIM files—one for the North Campus and another for Dr. Gamble's office. These files now serve as 3D representations of the respective spaces.

RESOURCES

For further information on the Scan to BIM process, the following resource from Matterport's support website provides valuable insights: Scan to BIM - Matterport Support

PROGRESS IMAGES

Scan 1

Scan 2

Ku-Bri commented 1 year ago

PROGRESS

Photogrammetry Exploration/Comparison Continued:

I used the iPad to take 1,014 photos of Rayzor Hall 2055 at three different levels: approximately 7', 5'3", and 21". Using a computer in 2055, I imported the photos via cable and uploaded them to Reality Capture. I simplified both the iPad photo model and the Matterport model of 2055 inside Reality Capture, because without simplification Blender could not open them.

RealityScan vs. Polycam for objects:

Ava scanned a chair using RealityScan photos, and I scanned a chair using Polycam Photo and Polycam LiDAR. I put all three models into Blender to compare the meshes. The chair was chosen because it is a complicated object with a semitransparent backing, so it is a good indicator of each scanning program's success.
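Beyond eyeballing the meshes in Blender, the comparison could be made quantitative with a chamfer-style distance between point samples taken from each mesh. This is only a suggested metric, not something we ran — the brute-force version below works for small samples:

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric mean nearest-neighbour distance between two Nx3 point
    sets sampled from the meshes being compared (brute force; fine for
    a few thousand points, lower is better)."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return d.min(axis=1).mean() + d.min(axis=0).mean()

# Sanity check on a toy sample: identical clouds have distance zero,
# while an offset copy scores strictly worse.
pts = np.random.default_rng(1).random((200, 3))
zero = chamfer_distance(pts, pts)
worse = chamfer_distance(pts, pts + 1.0)
```

Scoring each scanned chair against a single reference sample this way would rank RealityScan, Polycam Photo, and Polycam LiDAR on the same scale.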

SLOWDOWNS

The process of taking the photos is tedious and took between 1.5 and 2 hours. The time for aligning images varies but took between 30 minutes and an hour, and Reality Capture crashed several times. I am not familiar with Reality Capture, and manipulating the scans to get a better model is very time consuming.

COMPLETED

Took photos for VR production in Reality Capture. Created a 3D mesh in Blender from the photos processed in Reality Capture. Scanned a chair using RealityScan, Polycam Photo, and Polycam LiDAR and compared the meshes in Blender.

Opened both the iPad photo model and the Matterport model in Blender to compare the meshes. While both meshes are severely lacking in refinement, some of which was lost in the simplification process in Reality Capture, each has noticeable advantages over the other. The model created from the iPad photos has much clearer and more accurate textures, though any reflective, transparent, or semitransparent surfaces resulted in "holes" in the model (not actual holes; the model is topologically intact). The model created from the Matterport scans, while lacking complete detection of many objects (e.g., chair legs), has no holes in the basic structure of the room, and the scan seemed to pick up the basic flatness of surfaces better than the photos did. For both the Matterport and iPad photo methods, I took scans/photos from three different heights; it is possible that some of the differences are due to what information each method was able to capture at the highest, middle, and lowest heights.

PROGRESS IMAGES

Screenshot 2023-06-15 133021 Screenshot 2023-06-15 133053 Screenshot 2023-06-15 133123

IPAD RealityCapture-Simplified Blender IPADRCSimplifiedBlend

Matterport RealityCapture-Simplified to Blender MPRCSimplifiedBlend

Ku-Bri commented 1 year ago

PROGRESS

Testing modified photogrammetry image capture:

I am testing a new pattern of image capture to get a more detailed, complete, and accurate model.

Because Blender can only open simplified exports from Reality Capture, and that simplification process seems to make any improvements in the image-capture process irrelevant, I intend to take another set of images to create model-3, with a greatly reduced image count. I would like to verify my hypothesis that a quicker, sparser image covering can produce nearly the same results as the first two models. If so, this would greatly shorten the scanning process and allow more time for modeling in Blender based on the Reality Capture exports.

Use the models from the scan to create an object separated model:

After speaking with Ava about what she is currently working on, I have asked her to create a model of 2055 with separated objects (similar to the BIM models we paid Matterport to make), but using the scan model as a reference and guide to speed up the creation process. She is recording her time spent on this specific task, which I hope will allow us to make a relative comparison, in time and money, between creating the models ourselves from the scans and outsourcing.

SLOWDOWNS

Both creating models in Blender and taking the new photos are slow processes.

Difficulty aligning photos that Reality Capture cannot automatically place: right now, approximately 1005 of 1511 photos are aligned in model-2.

Unfortunately, Blender crashes every time I try to open a mesh exported from Reality Capture. If I simplify the mesh in Reality Capture before exporting, Blender can open it, but a significant amount of geometric data is lost in the process. I believe the necessity of simplifying the model in Reality Capture makes any differences in the image-capture process irrelevant.
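To give a sense of what a simplification pass does to the geometry: one common decimation approach is vertex clustering, where all vertices falling in the same grid cell collapse to their mean. This is a generic sketch of the technique, not Reality Capture's actual algorithm:

```python
import numpy as np

def vertex_cluster(verts, cell=0.1):
    """Collapse vertices sharing a grid cell of size `cell` to their
    mean position -- a simple stand-in for mesh decimation. Detail
    finer than `cell` is lost, which is the data loss described above."""
    keys = np.floor(verts / cell).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    counts = np.bincount(inv).astype(float)
    out = np.zeros((len(counts), 3))
    for axis in range(3):
        out[:, axis] = np.bincount(inv, weights=verts[:, axis]) / counts
    return out

# 1000 toy vertices in the unit cube collapse to at most 4**3 = 64
# clusters with cell=0.25 -- a drastic reduction
verts = np.random.default_rng(2).random((1000, 3))
simple = vertex_cluster(verts, cell=0.25)
```

Whatever method Reality Capture uses, the trade-off is the same: a coarser target resolution means a mesh Blender can open, at the cost of fine surface detail.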

COMPLETED

The first set of images (model-1): I took photos of 2055 with the iPad Pro for photogrammetry in Reality Capture. Using advice from Reality Capture and other sources, I followed the contour of the walls to photograph the room, and, following advice from Matterport and other sources, I did this at three different levels. Then, at certain points that appeared to be junctions of large amounts of visual data, I captured images (facing out from that point) in a circle in that area.

The second set of images (model-2): I took photos at two different levels, facing the walls and following the contours of the walls and large objects, spending more time on denser areas in order to fill some of the holes/indistinct regions. I took more photos in the highly dense areas under the tables, hoping to capture more data to create a better-formed mesh there, but having more photos seems to be a detriment at this point.

The third set of images (model-3): I used the RealityScan app on the iPad, which uses a maximum of 200 images to create the model. While there are some obvious deficits in the completeness of the model produced, I think it would serve about as well as the other two as a reference for building our own model. One problematic area, to be explored more at a later time, is using the scan textures to texture the models (both the Matterport BIM models and our own). The lack of clear visual data will likely be most noticeable in the texturing process.

PROGRESS IMAGES

Model 1 ~850 images

2055_Model1_3Heights

Model 2 ~1050 images

2055_Model2_MidLowCeiling

Model 3 ~200 images

2055_Model3

Ku-Bri commented 1 year ago

PROGRESS

Creating object separated models using the scan data as a reference:

Starting from the North Campus Matterport MatterPak download, which includes .obj and .mtl files, I am using the single-mesh model as a reference for creating an object-separated model similar to the North Campus BIM model we purchased from Matterport.

The goal is to record the time it takes to create an equivalent model of North Campus to compare to the time/money/manpower expense necessary to obtain the Matterport BIM model. Since the BIM model was created from the Matterport scan, I will do the same. This comparison will not currently include the scanning and single mesh model creation process, which could be covered in subsequent tests. In addition to the comparison itself, this is intended to be the basis of a submission to the IEEE DTPI 2023 conference (submission due July 27th).

I have found an add-on that gives Blender CAD-like precision modeling. It requires the same version of Python that your Blender version runs on to be installed on your device. The 2055 computers have Blender 3.5. Using this page I can find the Python version each Blender release requires: https://svn.blender.org/svnroot/bf-blender/tags/ - for Blender 3.5 it is Python 3.10.9. Install link: https://www.python.org/downloads/release/python-3109/
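Rather than cross-referencing the SVN tags page, the bundled Python version can also be read directly from Blender's own Python console (Scripting workspace); the same snippet works in any Python interpreter:

```python
import sys

# In Blender's Python console this reports the interpreter Blender
# ships with, which is the version the add-on's dependencies must match
major, minor, micro = sys.version_info[:3]
print(f"Python {major}.{minor}.{micro}")  # e.g. Python 3.10.9 under Blender 3.5
```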

I have nearly completed the North Campus model.

SLOWDOWNS

Before the production comparison I researched how to effectively and efficiently create a similar quality model in Blender. To this end, I looked for add-ons for Blender that enable CAD-like creation and editing as well as specific functions within Blender that could speed up the modeling process. After finding the add-on, I took time to learn how to use the tools to best benefit.

The 2055 computer did not have Python installed, and I needed administrator access to install it in order to use the add-on. To move forward, I set everything up on my personal computer instead: Blender 3.4, plus Python 3.10.8, which solved the Solver Module issue.

COMPLETED

Free non-destructive CAD add-on for Blender: https://github.com/hlorus/CAD_Sketcher
CAD Sketcher shortcut sheet: https://makertales.gumroad.com/l/dpzwt

Built-in Blender add-ons: turn on Bool Tool and Modifier Tools.

Tutorials:
https://www.youtube.com/watch?v=1jNDLUDL0gc
https://www.youtube.com/watch?v=4Al3yw2klHA

PROGRESS IMAGES

CADsketcher

North Campus BIM:

Screenshot 2023-06-27 143629

North Campus BK:

Screenshot 2023-06-27 143415

North Campus BIM:

Screenshot 2023-06-27 143603

North Campus BK:

Screenshot 2023-06-27 143445

ProfessorAkram commented 1 year ago

CLOSED:

Research shows that scanned data will not yield the results we want. Moving forward, we will develop room models referencing scanned data and CAD files. Progress continues in issue cs-utulsa/VIRSA#11.