NVlabs / BundleSDF

[CVPR 2023] BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects
https://bundlesdf.github.io/

cleaned_mesh.obj doesn't look like the object #114

Closed · monajalal closed this issue 11 months ago

monajalal commented 11 months ago

I created a longer video with 714 frames (the previous one had 480 frames). Now I am getting a different error.

(py38) root@ada:/home/mona/BundleSDF# python run_custom.py --mode run_video --video_dir /home/mona/BundleSDF/cup --out_folder /home/mona/BundleSDF/cup/out --use_segmenter 1 --use_gui 1 --debug_level 2

MESSAGES

[2023-11-06 12:15:41.011] [warning] [FeatureManager.cpp:1695] after ransac, frame 16992993180193521242 and 16992992859297534238 has too few matches #0, ignore
#optimizeGPU frames=10, #keyframes=13, #_frames=19
16992992705850898180 16992992726532797110 16992992732537614939 16992992735873464498 16992992807927311950 16992992827274530208 16992992853960119968 16992992855294470787 16992992859297534238 16992993180193521242 
[2023-11-06 12:15:41.011] [warning] [Bundler.cpp:894] frame 16992993180193521242 few global_corres, mark as FAIL
[2023-11-06 12:15:41.011] [warning] [Bundler.cpp:920] OptimizerGPU begin, global_corres#=0
global_corres=0
maxNumResiduals / maxNumberOfImages = 216000 / 10 = 21600
m_maxNumberOfImages*m_maxCorrPerImage = 10 x 0 = 0
m_solver->solve Time difference = 11.085[ms]
[2023-11-06 12:15:41.028] [warning] [Bundler.cpp:924] OptimizerGPU finish
[2023-11-06 12:15:41.028] [warning] [Bundler.cpp:67] forgetting frame 16992993180193521242
[2023-11-06 12:15:41.028] [warning] [FeatureManager.cpp:469] forgetting frame 16992993180193521242
[bundlesdf.py] processNewFrame done 16992993180193521242
[bundlesdf.py] rematch_after_nerf: True
[2023-11-06 12:15:41.028] [warning] [Bundler.cpp:961] Welcome saveNewframeResult
[2023-11-06 12:15:41.048] [warning] [Bundler.cpp:1110] saveNewframeResult done
[2023-11-06 12:15:45.040] [warning] [Bundler.cpp:49] Connected to nerf_port 9999
[2023-11-06 12:15:45.040] [warning] [FeatureManager.cpp:2084] Connected to port 5555
default_cfg {'backbone_type': 'ResNetFPN', 'resolution': (8, 2), 'fine_window_size': 5, 'fine_concat_coarse_feat': True, 'resnetfpn': {'initial_dim': 128, 'block_dims': [128, 196, 256]}, 'coarse': {'d_model': 256, 'd_ffn': 256, 'nhead': 8, 'layer_names': ['self', 'cross', 'self', 'cross', 'self', 'cross', 'self', 'cross'], 'attention': 'linear', 'temp_bug_fix': False}, 'match_coarse': {'thr': 0.2, 'border_rm': 2, 'match_type': 'dual_softmax', 'dsmax_temperature': 0.1, 'skh_iters': 3, 'skh_init_bin_score': 1.0, 'skh_prefilter': True, 'train_coarse_percent': 0.4, 'train_pad_num_gt_min': 200}, 'fine': {'d_model': 128, 'd_ffn': 128, 'nhead': 8, 'layer_names': ['self', 'cross'], 'attention': 'linear'}}
[bundlesdf.py] last_stamp 16992993180193521242
[bundlesdf.py] keyframes#: 13
[tool.py] compute_scene_bounds_worker start
[tool.py] compute_scene_bounds_worker done
[tool.py] merge pcd
[tool.py] compute_translation_scales done
translation_cvcam=[ 0.0573355  -0.06019406 -0.23087394], sc_factor=4.756784035291714
[nerf_runner.py] Octree voxel dilate_radius:1
level 0, resolution: 16
level 1, resolution: 20
level 2, resolution: 24
level 3, resolution: 28
level 4, resolution: 34
level 5, resolution: 41
level 6, resolution: 49
level 7, resolution: 59
level 8, resolution: 71
level 9, resolution: 85
level 10, resolution: 102
level 11, resolution: 123
level 12, resolution: 148
level 13, resolution: 177
level 14, resolution: 213
level 15, resolution: 256
GridEncoder: input_dim=3 n_levels=16 level_dim=2 resolution=16 -> 256 per_level_scale=1.2030 params=(20411696, 2) gridtype=hash align_corners=False
sc_factor 4.756784035291714
translation [ 0.0573355  -0.06019406 -0.23087394]
[nerf_runner.py] denoise cloud
[nerf_runner.py] Denoising rays based on octree cloud
[nerf_runner.py] bad_mask#=411
rays torch.Size([115401, 12])
Start training
[nerf_runner.py] train progress 0/2001
[nerf_runner.py] Iter: 0, valid_samples: 655360/655360, valid_rays: 2048/2048, loss: 25.0223598, rgb_loss: 24.8045540, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.0021485, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 0.0514414, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.1642172, 

[nerf_runner.py] train progress 200/2001
[nerf_runner.py] train progress 400/2001
[nerf_runner.py] train progress 600/2001
[nerf_runner.py] train progress 800/2001
[nerf_runner.py] train progress 1000/2001
[nerf_runner.py] train progress 1200/2001
[nerf_runner.py] train progress 1400/2001
[nerf_runner.py] train progress 1600/2001
[nerf_runner.py] train progress 1800/2001
[nerf_runner.py] train progress 2000/2001
cp: cannot stat '/home/mona/BundleSDF/cup/out//nerf_with_bundletrack_online/image_step_*.png': No such file or directory
[nerf_runner.py] query_pts:torch.Size([9261000, 3]), valid:413232
[nerf_runner.py] Running Marching Cubes
[nerf_runner.py] done V:(11468, 3), F:(22716, 3)
[acceleratesupport.py] OpenGL_accelerate module loaded
[arraydatatype.py] Using accelerated ArrayDatatype
project train_images 0/13
/home/mona/BundleSDF/nerf_runner.py:1530: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  uvs_unique = torch.stack((uvs_flat_unique%(W-1), uvs_flat_unique//(W-1)), dim=-1).reshape(-1,2)
project train_images 1/13
project train_images 2/13
project train_images 3/13
project train_images 4/13
project train_images 5/13
project train_images 6/13
Traceback (most recent call last):
  File "run_custom.py", line 223, in <module>
    run_one_video(video_dir=args.video_dir, out_folder=args.out_folder, use_segmenter=args.use_segmenter, use_gui=args.use_gui)
  File "run_custom.py", line 107, in run_one_video
    run_one_video_global_nerf(out_folder=out_folder)
  File "run_custom.py", line 152, in run_one_video_global_nerf
    tracker.run_global_nerf(reader=reader, get_texture=True, tex_res=512)
  File "/home/mona/BundleSDF/bundlesdf.py", line 785, in run_global_nerf
    mesh = nerf.mesh_texture_from_train_images(mesh, rgbs_raw=rgbs_raw, train_texture=False, tex_res=tex_res)
  File "/home/mona/BundleSDF/nerf_runner.py", line 1511, in mesh_texture_from_train_images
    locations, distance, index_tri = trimesh.proximity.closest_point(mesh, pts)
  File "/opt/conda/envs/py38/lib/python3.8/site-packages/trimesh/proximity.py", line 153, in closest_point
    all_candidates = np.concatenate(candidates)
  File "<__array_function__ internals>", line 200, in concatenate
ValueError: need at least one array to concatenate
Process Process-5:
Traceback (most recent call last):
  File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/mona/BundleSDF/bundlesdf.py", line 89, in run_nerf
    join = p_dict['join']
  File "<string>", line 2, in __getitem__
  File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/managers.py", line 835, in _callmethod
    kind, result = conn.recv()
  File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/connection.py", line 250, in recv
    buf = self._recv_bytes()
  File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/connection.py", line 414, in _recv_bytes
    buf = self._recv(4)
  File "/opt/conda/envs/py38/lib/python3.8/multiprocessing/connection.py", line 379, in _recv
    chunk = read(handle, remaining)
ConnectionResetError: [Errno 104] Connection reset by peer
[2023-11-06 12:16:53.551] [warning] [Bundler.cpp:59] Destructor
[2023-11-06 12:16:53.553] [warning] [Bundler.cpp:59] Destructor
(py38) root@ada:/home/mona/BundleSDF# 

Furthermore, the mesh_cleaned.obj makes no sense (is it just the handle of the cup?).

Screenshot from 2023-11-06 15-20-27

First frame of the following (mask):

Screenshot from 2023-11-06 15-21-33

Screenshot from 2023-11-06 15-22-12

Depth: Screenshot from 2023-11-06 15-22-43

I use this script to convert ROS 2 bags to depth images:

from pathlib import Path
from datetime import datetime

import numpy as np
import cv2
from rosbags.highlevel import AnyReader

# create reader instance and open for reading

with AnyReader([Path('/home/mona/rosbag2_2023_11_06-14_34_30')]) as reader:
    connections = [x for x in reader.connections if x.topic == '/camera/camera/aligned_depth_to_color/image_raw']

    for connection, timestamp, rawdata in reader.messages(connections=connections):
        msg = reader.deserialize(rawdata, connection.msgtype)
        timestamp_dt = datetime.fromtimestamp(msg.header.stamp.sec + msg.header.stamp.nanosec * 1e-9)
        timestamp_str = timestamp_dt.strftime("%Y-%m-%d %H:%M:%S.%f")

        # Convert the timestamp to nanoseconds
        timestamp_ns = msg.header.stamp.sec * 1e9 + msg.header.stamp.nanosec

        # Convert the timestamp to the desired numeric representation
        numeric_timestamp = int(timestamp_ns / 1e-9)
        image_data = msg.data.reshape(480, 640, -1) * 1000  # .astype(np.uint16)

        # Take only the first channel (grayscale)
        grayscale_image = image_data[:, :, 0]
        depth_image_name = 'depth/' + str(numeric_timestamp)[:20] + '.png'
        cv2.imwrite(depth_image_name, grayscale_image.astype(np.uint16))
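For comparison, here is a minimal sketch of an alternative depth decode, assuming the aligned depth topic is published as 16UC1 (uint16 depth in millimeters); msg.encoding should be checked to confirm that assumption. It interprets the message buffer directly as uint16 instead of reshaping uint8 data and multiplying by 1000, and the output naming is only illustrative, so it would need to match the RGB naming:

# Sketch only: assumes /camera/camera/aligned_depth_to_color/image_raw is 16UC1
# (uint16 millimeters, little-endian). Verify with msg.encoding before relying on it.
from pathlib import Path

import numpy as np
import cv2
from rosbags.highlevel import AnyReader

bag_path = Path('/home/mona/rosbag2_2023_11_06-14_34_30')
topic = '/camera/camera/aligned_depth_to_color/image_raw'
out_dir = Path('depth')
out_dir.mkdir(exist_ok=True)

with AnyReader([bag_path]) as reader:
    connections = [c for c in reader.connections if c.topic == topic]
    for connection, timestamp, rawdata in reader.messages(connections=connections):
        msg = reader.deserialize(rawdata, connection.msgtype)
        # Reinterpret the raw byte buffer using the message's own height/width,
        # rather than reshaping uint8 data and scaling by 1000.
        depth = np.frombuffer(msg.data, dtype=np.uint16).reshape(msg.height, msg.width)
        stamp_ns = msg.header.stamp.sec * 10**9 + msg.header.stamp.nanosec
        cv2.imwrite(str(out_dir / f'{stamp_ns}.png'), depth)  # already uint16, saved as-is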

and this one to convert to RGB images (which I then feed to XMem for masks, and then convert the RGB masks to grayscale masks; a sketch of that mask conversion follows after the script):

from pathlib import Path
from datetime import datetime

import numpy as np
from PIL import Image
from rosbags.highlevel import AnyReader

# create reader instance and open for reading
with AnyReader([Path('/home/mona/rosbag2_2023_11_06-14_34_30')]) as reader:

    connections = [x for x in reader.connections if x.topic == '/camera/camera/color/image_raw']

    for connection, timestamp, rawdata in reader.messages(connections=connections):
        msg = reader.deserialize(rawdata, connection.msgtype)
        timestamp_dt = datetime.fromtimestamp(msg.header.stamp.sec + msg.header.stamp.nanosec * 1e-9)
        timestamp_str = timestamp_dt.strftime("%Y-%m-%d %H:%M:%S.%f")

        # Convert the timestamp to nanoseconds
        timestamp_ns = msg.header.stamp.sec * 1e9 + msg.header.stamp.nanosec

        # Convert the timestamp to the desired numeric representation
        numeric_timestamp = int(timestamp_ns / 1e-9)
        image_data = np.reshape(msg.data, (480, 640, 3)) # for rgb image

        # uncomment for rgb 
        image = Image.fromarray(image_data)
        # numeric timestamp is 29 digits we only want the first 20 digits
        image.save('rgb/' + str(numeric_timestamp)[:20] + '.png')
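
For the "RGB masks to grayscale masks" step mentioned above, this is a minimal sketch of one way to do it; the folder names xmem_masks and masks are placeholders rather than the exact paths used. Any non-black pixel in the colored XMem output is treated as foreground:

# Sketch only: collapse colored XMem masks into single-channel 0/255 masks.
# The folder names are placeholders (assumption), not the original paths.
from pathlib import Path

import numpy as np
import cv2

src = Path('xmem_masks')   # colored masks exported by XMem
dst = Path('masks')        # single-channel masks
dst.mkdir(parents=True, exist_ok=True)

for p in sorted(src.glob('*.png')):
    color_mask = cv2.imread(str(p), cv2.IMREAD_COLOR)
    gray = cv2.cvtColor(color_mask, cv2.COLOR_BGR2GRAY)
    # Any labeled (non-black) pixel becomes 255; background stays 0.
    binary = np.where(gray > 0, 255, 0).astype(np.uint8)
    cv2.imwrite(str(dst / p.name), binary)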

Also, from the visualization while the command runs, we can see that the orientation of the cup is not always estimated correctly, right?

Screenshot from 2023-11-06 15-28-21 Screenshot from 2023-11-06 15-28-27

monajalal commented 11 months ago

@wenbowen123 you mentioned in https://github.com/NVlabs/BundleSDF/issues/98 that this error can be ignored. Could you please help me figure out how to fix the cleaned_mesh.obj? It doesn't represent the mesh of a cup, as you can see above.

monajalal commented 11 months ago

I ran it again with a blue cup so that it wouldn't interfere with my red nail and the segmentation would be better. I didn't get an error at step 0 this time, but both mesh_cleaned.obj and textured_mesh.obj still look weird and don't resemble the cup. Please let me know if you have any suggestions for getting a better capture.

(py38) root@ada:/home/mona/BundleSDF# python run_custom.py --mode run_video --video_dir /home/mona/BundleSDF/cup --out_folder /home/mona/BundleSDF/cup/out --use_segmenter 1 --use_gui 1 --debug_level 2
MESSAGES

[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.2004053145647049 0.9004648327827454
[loftr_wrapper.py] pair_ids (120,)
[loftr_wrapper.py] corres: (120, 5)
[2023-11-06 13:14:57.165] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.165] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035087690602534 and 16993034825501428111 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034822833023342
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.2002098709344864 0.8244357109069824
[loftr_wrapper.py] pair_ids (103,)
[loftr_wrapper.py] corres: (103, 5)
[2023-11-06 13:14:57.190] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.191] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035087690602534 and 16993034822833023342 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034816161692560
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.2020709216594696 0.9244020581245422
[loftr_wrapper.py] pair_ids (142,)
[loftr_wrapper.py] corres: (142, 5)
[2023-11-06 13:14:57.215] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.216] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035087690602534 and 16993034816161692560 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034664054080245
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20001345872879028 0.8155645132064819
[loftr_wrapper.py] pair_ids (262,)
[loftr_wrapper.py] corres: (262, 5)
[2023-11-06 13:14:57.239] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.240] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035087690602534 and 16993034664054080245 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034657382612024
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20021183788776398 0.9857192039489746
[loftr_wrapper.py] pair_ids (284,)
[loftr_wrapper.py] corres: (284, 5)
[2023-11-06 13:14:57.263] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.264] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035087690602534 and 16993034657382612024 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034649376957911
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20205609500408173 0.9194096326828003
[loftr_wrapper.py] pair_ids (284,)
[loftr_wrapper.py] corres: (284, 5)
[2023-11-06 13:14:57.288] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.289] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035087690602534 and 16993034649376957911 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034646708456934
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20011433959007263 0.9029887318611145
[loftr_wrapper.py] pair_ids (263,)
[loftr_wrapper.py] corres: (263, 5)
[2023-11-06 13:14:57.312] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.312] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035087690602534 and 16993034646708456934 has too few matches #0, ignore
[bundlesdf.py] frame 16993035087690602534 has not suitable ref_frame, mark as FAIL
[2023-11-06 13:14:57.313] [warning] [Bundler.cpp:67] forgetting frame 16993035087690602534
[2023-11-06 13:14:57.313] [warning] [FeatureManager.cpp:469] forgetting frame 16993035087690602534
[bundlesdf.py] processNewFrame done 16993035087690602534
[bundlesdf.py] rematch_after_nerf: True
[2023-11-06 13:14:57.313] [warning] [Bundler.cpp:961] Welcome saveNewframeResult
[2023-11-06 13:14:57.335] [warning] [Bundler.cpp:1110] saveNewframeResult done
[bundlesdf.py] percentile denoise start
depth.shape (480, 640)
mask.shape (480, 640)
valid:  [[False False False ... False False False]
 [False False False ... False False False]
 [False False False ... False False False]
 ...
 [False False False ... False False False]
 [False False False ... False False False]
 [False False False ... False False False]]
valid.sum():  14243
Minimum depth: 0.0
Maximum depth: 65.464
Minimum mask: 0
Maximum mask: 1
[bundlesdf.py] percentile denoise done
[bundlesdf.py] processNewFrame start 16993035088357813678
[bundlesdf.py] process frame 16993035088357813678
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.2016979455947876 0.978831946849823
[loftr_wrapper.py] pair_ids (180,)
[loftr_wrapper.py] corres: (180, 5)
[2023-11-06 13:14:57.388] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.389] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035088357813678 and 16993035041657835748 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034825501428111
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.200063094496727 0.8900307416915894
[loftr_wrapper.py] pair_ids (154,)
[loftr_wrapper.py] corres: (154, 5)
[2023-11-06 13:14:57.413] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.414] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035088357813678 and 16993034825501428111 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034822833023342
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20046207308769226 0.9293252229690552
[loftr_wrapper.py] pair_ids (90,)
[loftr_wrapper.py] corres: (90, 5)
[2023-11-06 13:14:57.443] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.444] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035088357813678 and 16993034822833023342 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034816161692560
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20008330047130585 0.8223296999931335
[loftr_wrapper.py] pair_ids (150,)
[loftr_wrapper.py] corres: (150, 5)
[2023-11-06 13:14:57.474] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.475] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035088357813678 and 16993034816161692560 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034664054080245
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.2009420096874237 0.9040892720222473
[loftr_wrapper.py] pair_ids (307,)
[loftr_wrapper.py] corres: (307, 5)
[2023-11-06 13:14:57.506] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.507] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035088357813678 and 16993034664054080245 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034657382612024
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20089547336101532 0.9631022810935974
[loftr_wrapper.py] pair_ids (282,)
[loftr_wrapper.py] corres: (282, 5)
[2023-11-06 13:14:57.574] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.574] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035088357813678 and 16993034657382612024 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034649376957911
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20001888275146484 0.9685002565383911
[loftr_wrapper.py] pair_ids (308,)
[loftr_wrapper.py] corres: (308, 5)
[2023-11-06 13:14:57.600] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.601] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035088357813678 and 16993034649376957911 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034646708456934
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20171652734279633 0.9068623185157776
[loftr_wrapper.py] pair_ids (330,)
[loftr_wrapper.py] corres: (330, 5)
[2023-11-06 13:14:57.626] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.627] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035088357813678 and 16993034646708456934 has too few matches #0, ignore
[bundlesdf.py] frame 16993035088357813678 has not suitable ref_frame, mark as FAIL
[2023-11-06 13:14:57.627] [warning] [Bundler.cpp:67] forgetting frame 16993035088357813678
[2023-11-06 13:14:57.627] [warning] [FeatureManager.cpp:469] forgetting frame 16993035088357813678
[bundlesdf.py] processNewFrame done 16993035088357813678
[bundlesdf.py] rematch_after_nerf: True
[2023-11-06 13:14:57.627] [warning] [Bundler.cpp:961] Welcome saveNewframeResult
[2023-11-06 13:14:57.649] [warning] [Bundler.cpp:1110] saveNewframeResult done
[bundlesdf.py] percentile denoise start
depth.shape (480, 640)
mask.shape (480, 640)
valid:  [[False False False ... False False False]
 [False False False ... False False False]
 [False False False ... False False False]
 ...
 [False False False ... False False False]
 [False False False ... False False False]
 [False False False ... False False False]]
valid.sum():  14294
Minimum depth: 0.0
Maximum depth: 65.464
Minimum mask: 0
Maximum mask: 1
[bundlesdf.py] percentile denoise done
[bundlesdf.py] processNewFrame start 16993035089025016575
[bundlesdf.py] process frame 16993035089025016575
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20121163129806519 0.9182050228118896
[loftr_wrapper.py] pair_ids (158,)
[loftr_wrapper.py] corres: (158, 5)
[2023-11-06 13:14:57.702] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.703] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089025016575 and 16993035041657835748 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034825501428111
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20231683552265167 0.9367936849594116
[loftr_wrapper.py] pair_ids (183,)
[loftr_wrapper.py] corres: (183, 5)
[2023-11-06 13:14:57.727] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.728] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089025016575 and 16993034825501428111 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034822833023342
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20010095834732056 0.9356865882873535
[loftr_wrapper.py] pair_ids (108,)
[loftr_wrapper.py] corres: (108, 5)
[2023-11-06 13:14:57.752] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.752] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089025016575 and 16993034822833023342 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034816161692560
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20038703083992004 0.8559938669204712
[loftr_wrapper.py] pair_ids (166,)
[loftr_wrapper.py] corres: (166, 5)
[2023-11-06 13:14:57.779] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.779] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089025016575 and 16993034816161692560 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034664054080245
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20092955231666565 0.8868588209152222
[loftr_wrapper.py] pair_ids (258,)
[loftr_wrapper.py] corres: (258, 5)
[2023-11-06 13:14:57.804] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.804] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089025016575 and 16993034664054080245 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034657382612024
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.2007264941930771 0.952292799949646
[loftr_wrapper.py] pair_ids (292,)
[loftr_wrapper.py] corres: (292, 5)
[2023-11-06 13:14:57.829] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.829] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089025016575 and 16993034657382612024 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034649376957911
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20015984773635864 0.9417436718940735
[loftr_wrapper.py] pair_ids (280,)
[loftr_wrapper.py] corres: (280, 5)
[2023-11-06 13:14:57.853] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.854] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089025016575 and 16993034649376957911 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034646708456934
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20022131502628326 0.9273026585578918
[loftr_wrapper.py] pair_ids (287,)
[loftr_wrapper.py] corres: (287, 5)
[2023-11-06 13:14:57.877] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.878] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089025016575 and 16993034646708456934 has too few matches #0, ignore
[bundlesdf.py] frame 16993035089025016575 has not suitable ref_frame, mark as FAIL
[2023-11-06 13:14:57.878] [warning] [Bundler.cpp:67] forgetting frame 16993035089025016575
[2023-11-06 13:14:57.878] [warning] [FeatureManager.cpp:469] forgetting frame 16993035089025016575
[bundlesdf.py] processNewFrame done 16993035089025016575
[bundlesdf.py] rematch_after_nerf: True
[2023-11-06 13:14:57.878] [warning] [Bundler.cpp:961] Welcome saveNewframeResult
[2023-11-06 13:14:57.899] [warning] [Bundler.cpp:1110] saveNewframeResult done
[bundlesdf.py] percentile denoise start
depth.shape (480, 640)
mask.shape (480, 640)
valid:  [[False False False ... False False False]
 [False False False ... False False False]
 [False False False ... False False False]
 ...
 [False False False ... False False False]
 [False False False ... False False False]
 [False False False ... False False False]]
valid.sum():  14498
Minimum depth: 0.0
Maximum depth: 65.464
Minimum mask: 0
Maximum mask: 1
[bundlesdf.py] percentile denoise done
[bundlesdf.py] processNewFrame start 16993035089692230467
[bundlesdf.py] process frame 16993035089692230467
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20026762783527374 0.8782411217689514
[loftr_wrapper.py] pair_ids (157,)
[loftr_wrapper.py] corres: (157, 5)
[2023-11-06 13:14:57.954] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.955] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089692230467 and 16993035041657835748 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034825501428111
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.2011818140745163 0.945624828338623
[loftr_wrapper.py] pair_ids (137,)
[loftr_wrapper.py] corres: (137, 5)
[2023-11-06 13:14:57.980] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:57.981] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089692230467 and 16993034825501428111 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034822833023342
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.2011655867099762 0.7188431024551392
[loftr_wrapper.py] pair_ids (109,)
[loftr_wrapper.py] corres: (109, 5)
[2023-11-06 13:14:58.005] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:58.005] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089692230467 and 16993034822833023342 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034816161692560
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20066553354263306 0.929513156414032
[loftr_wrapper.py] pair_ids (147,)
[loftr_wrapper.py] corres: (147, 5)
[2023-11-06 13:14:58.030] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:58.031] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089692230467 and 16993034816161692560 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034664054080245
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20053265988826752 0.877680242061615
[loftr_wrapper.py] pair_ids (238,)
[loftr_wrapper.py] corres: (238, 5)
[2023-11-06 13:14:58.055] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:58.055] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089692230467 and 16993034664054080245 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034657382612024
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.20018860697746277 0.9063171744346619
[loftr_wrapper.py] pair_ids (252,)
[loftr_wrapper.py] corres: (252, 5)
[2023-11-06 13:14:58.078] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:58.079] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089692230467 and 16993034657382612024 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034649376957911
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.2005765289068222 0.905853271484375
[loftr_wrapper.py] pair_ids (238,)
[loftr_wrapper.py] corres: (238, 5)
[2023-11-06 13:14:58.103] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:58.103] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089692230467 and 16993034649376957911 has too few matches #0, ignore
[bundlesdf.py] trying new ref frame 16993034646708456934
[bundlesdf.py] frame_pairs: 1
[loftr_wrapper.py] image0: torch.Size([1, 1, 400, 400])
[loftr_wrapper.py] net forward
[loftr_wrapper.py] mconf, 0.2000451534986496 0.9235278964042664
[loftr_wrapper.py] pair_ids (262,)
[loftr_wrapper.py] corres: (262, 5)
[2023-11-06 13:14:58.127] [warning] [FeatureManager.cpp:1589] start multi pair ransac GPU, pairs#=1
[2023-11-06 13:14:58.127] [warning] [FeatureManager.cpp:1695] after ransac, frame 16993035089692230467 and 16993034646708456934 has too few matches #0, ignore
[bundlesdf.py] frame 16993035089692230467 has not suitable ref_frame, mark as FAIL
[2023-11-06 13:14:58.127] [warning] [Bundler.cpp:67] forgetting frame 16993035089692230467
[2023-11-06 13:14:58.127] [warning] [FeatureManager.cpp:469] forgetting frame 16993035089692230467
[bundlesdf.py] processNewFrame done 16993035089692230467
[bundlesdf.py] rematch_after_nerf: True
[2023-11-06 13:14:58.128] [warning] [Bundler.cpp:961] Welcome saveNewframeResult
[2023-11-06 13:14:58.148] [warning] [Bundler.cpp:1110] saveNewframeResult done
[2023-11-06 13:15:02.122] [warning] [Bundler.cpp:49] Connected to nerf_port 9999
[2023-11-06 13:15:02.123] [warning] [FeatureManager.cpp:2084] Connected to port 5555
default_cfg {'backbone_type': 'ResNetFPN', 'resolution': (8, 2), 'fine_window_size': 5, 'fine_concat_coarse_feat': True, 'resnetfpn': {'initial_dim': 128, 'block_dims': [128, 196, 256]}, 'coarse': {'d_model': 256, 'd_ffn': 256, 'nhead': 8, 'layer_names': ['self', 'cross', 'self', 'cross', 'self', 'cross', 'self', 'cross'], 'attention': 'linear', 'temp_bug_fix': False}, 'match_coarse': {'thr': 0.2, 'border_rm': 2, 'match_type': 'dual_softmax', 'dsmax_temperature': 0.1, 'skh_iters': 3, 'skh_init_bin_score': 1.0, 'skh_prefilter': True, 'train_coarse_percent': 0.4, 'train_pad_num_gt_min': 200}, 'fine': {'d_model': 128, 'd_ffn': 128, 'nhead': 8, 'layer_names': ['self', 'cross'], 'attention': 'linear'}}
[bundlesdf.py] last_stamp 16993035089692230467
[bundlesdf.py] keyframes#: 7
[tool.py] compute_scene_bounds_worker start
[tool.py] compute_scene_bounds_worker done
[tool.py] merge pcd
[tool.py] compute_translation_scales done
translation_cvcam=[ 0.02024092 -0.09154674 -0.30408731], sc_factor=16.39673238642096
[nerf_runner.py] Octree voxel dilate_radius:1
level 0, resolution: 16
level 1, resolution: 20
level 2, resolution: 24
level 3, resolution: 28
level 4, resolution: 34
level 5, resolution: 41
level 6, resolution: 49
level 7, resolution: 59
level 8, resolution: 71
level 9, resolution: 85
level 10, resolution: 102
level 11, resolution: 123
level 12, resolution: 148
level 13, resolution: 177
level 14, resolution: 213
level 15, resolution: 256
GridEncoder: input_dim=3 n_levels=16 level_dim=2 resolution=16 -> 256 per_level_scale=1.2030 params=(20411696, 2) gridtype=hash align_corners=False
sc_factor 16.39673238642096
translation [ 0.02024092 -0.09154674 -0.30408731]
[nerf_runner.py] denoise cloud
[nerf_runner.py] Denoising rays based on octree cloud
[nerf_runner.py] bad_mask#=0
rays torch.Size([20882, 12])
Start training
[nerf_runner.py] train progress 0/2001
[nerf_runner.py] Iter: 0, valid_samples: 655360/655360, valid_rays: 2048/2048, loss: 25.5224533, rgb_loss: 24.6962833, rgb0_loss: 0.0000000, fs_rgb_loss: 0.0000000, depth_loss: 0.0000000, depth_loss0: 0.0000000, fs_loss: 0.0024072, point_cloud_loss: 0.0000000, point_cloud_normal_loss: 0.0000000, sdf_loss: 0.6999149, eikonal_loss: 0.0000000, variation_loss: 0.0000000, truncation(meter): 0.0100000, pose_reg: 0.0000000, reg_features: 0.1238480,

[nerf_runner.py] train progress 200/2001
[nerf_runner.py] train progress 400/2001
[nerf_runner.py] train progress 600/2001
[nerf_runner.py] train progress 800/2001
[nerf_runner.py] train progress 1000/2001
[nerf_runner.py] train progress 1200/2001
[nerf_runner.py] train progress 1400/2001
[nerf_runner.py] train progress 1600/2001
[nerf_runner.py] train progress 1800/2001
[nerf_runner.py] train progress 2000/2001
cp: cannot stat '/home/mona/BundleSDF/cup/out//nerf_with_bundletrack_online/image_step_*.png': No such file or directory
[nerf_runner.py] query_pts:torch.Size([226981, 3]), valid:93961
[nerf_runner.py] Running Marching Cubes
[nerf_runner.py] done V:(3744, 3), F:(6890, 3)
[acceleratesupport.py] OpenGL_accelerate module loaded
[arraydatatype.py] Using accelerated ArrayDatatype
project train_images 0/7
/home/mona/BundleSDF/nerf_runner.py:1530: UserWarning: __floordiv__ is deprecated, and its behavior will change in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor').
  uvs_unique = torch.stack((uvs_flat_unique%(W-1), uvs_flat_unique//(W-1)), dim=-1).reshape(-1,2)
project train_images 1/7
project train_images 2/7
project train_images 3/7
project train_images 4/7
project train_images 5/7
project train_images 6/7
/home/mona/BundleSDF/nerf_runner.py:1537: RuntimeWarning: invalid value encountered in cast
  tex_image = np.clip(tex_image,0,255).astype(np.uint8)
Done
[2023-11-06 13:16:07.556] [warning] [Bundler.cpp:59] Destructor
[2023-11-06 13:16:07.954] [warning] [Bundler.cpp:59] Destructor

Screenshot from 2023-11-06 16-18-39 Screenshot from 2023-11-06 16-19-18

My first frame looks like the following: Screenshot from 2023-11-06 16-22-00

I have 664 frames.

wenbowen123 commented 11 months ago

Possible duplicate of https://github.com/NVlabs/BundleSDF/issues/115.