Closed hyh16601377106 closed 1 year ago
I'm back. I found that the source code reads 'intrinsic_depth.txt', while the file in my downloaded data is named intrinsics_depth.txt, just one 's' apart. After fixing that, running now fails with ValueError: could not convert string to float:
[`C:\Users\25717\AppData\Local\Microsoft\WindowsApps\ubuntu1804.exe])) run "export PYTHONUNBUFFERED=1 && export PYTHONIOENCODING=UTF-8 && export \"PYTHONPATH=/mnt/c/Users/25717/Desktop/NeuralRecon/NeuralRecon-master:/mnt/e/9-python/2-SoftWareSpace/pycharm/PyCharm 2020.2/plugins/python/helpers/pycharm_matplotlib_backend:/mnt/e/9-python/2-SoftWareSpace/pycharm/PyCharm 2020.2/plugins/python/helpers/pycharm_display\" && export PYCHARM_HOSTED=1 && export PYCHARM_DISPLAY_PORT=63342 && cd /mnt/c/Users/25717/Desktop/NeuralRecon/NeuralRecon-master/tools/tsdf_fusion && /home/peter/miniconda3/envs/py379/bin/python /mnt/c/Users/25717/Desktop/NeuralRecon/NeuralRecon-master/tools/tsdf_fusion/generate_gt.py --save_name all_tsdf_9 --window_size 9"
Traceback (most recent call last):
File "/mnt/c/Users/25717/Desktop/NeuralRecon/NeuralRecon-master/tools/tsdf_fusion/generate_gt.py", line 285, in
Process finished with exit code 1`
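Since the only difference is that one 's', a small defensive helper can make the loader tolerant of both spellings. This is a hypothetical helper of my own, not part of the NeuralRecon repo; the function name and behavior are assumptions:

```python
import os

def find_intrinsics(scene_dir):
    # The repo reads 'intrinsic_depth.txt', but some downloads ship
    # 'intrinsics_depth.txt' (extra 's'); accept either spelling.
    for name in ("intrinsic_depth.txt", "intrinsics_depth.txt"):
        path = os.path.join(scene_dir, name)
        if os.path.exists(path):
            return path
    raise FileNotFoundError("no intrinsics file found in " + scene_dir)
```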
I'm not sure whether posting in Chinese is allowed here; I'm a beginner. Here is what I found: in generate_gt.py, the current code is
```python
for w_idx in range(all_proc):
    ray_worker_ids.append(process_with_single_worker.remote(args, files[w_idx]))
# print(ray_worker_ids)
results = ray.get(ray_worker_ids)
print(results)
```
The problem is that ray.get fails: the loop above does return the list of worker IDs, but ray.get never returns the corresponding results. I tried switching ray versions and still found no solution; this is as far as my analysis goes.
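For context, `ray.get` is supposed to block until every ObjectRef in the list resolves and then return the results in submission order. The same fan-out/gather pattern, sketched with the standard library instead of ray purely to illustrate the intended behavior (`process_one` is a made-up stand-in, not the real worker function):

```python
from concurrent.futures import ThreadPoolExecutor

def process_one(x):
    # stand-in for process_with_single_worker(args, files[w_idx])
    return x * x

# submit() fans the work out (like process_with_single_worker.remote),
# result() gathers it back in order (like ray.get on the list of refs)
with ThreadPoolExecutor(max_workers=4) as pool:
    futures = [pool.submit(process_one, i) for i in range(4)]
    results = [f.result() for f in futures]

print(results)  # prints [0, 1, 4, 9]
```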
Update: the error now looks like this:

```
Traceback (most recent call last):
  File "/mnt/c/Users/25717/Desktop/NeuralRecon/NeuralRecon-master/tools/tsdf_fusion/generate_gt.py", line 287, in <module>
    results = ray.get(ray_worker_ids)
  File "/home/peter/miniconda3/envs/py379/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "/home/peter/miniconda3/envs/py379/lib/python3.7/site-packages/ray/worker.py", line 1625, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(ValueError): ray::process_with_single_worker() (pid=7591, ip=172.22.136.148)
  File "/mnt/c/Users/25717/Desktop/NeuralRecon/NeuralRecon-master/tools/tsdf_fusion/generate_gt.py", line 214, in process_with_single_worker
    cam_intr = np.loadtxt(intrinsic_dir, dtype=float, delimiter=' ')[:3, :3]
  File "/home/peter/miniconda3/envs/py379/lib/python3.7/site-packages/numpy/lib/npyio.py", line 1148, in loadtxt
    for x in read_data(_loadtxt_chunksize):
  File "/home/peter/miniconda3/envs/py379/lib/python3.7/site-packages/numpy/lib/npyio.py", line 999, in read_data
    items = [conv(val) for (conv, val) in zip(converters, vals)]
  File "/home/peter/miniconda3/envs/py379/lib/python3.7/site-packages/numpy/lib/npyio.py", line 999, in <listcomp>
    items = [conv(val) for (conv, val) in zip(converters, vals)]
  File "/home/peter/miniconda3/envs/py379/lib/python3.7/site-packages/numpy/lib/npyio.py", line 736, in floatconv
    return float(x)
ValueError: could not convert string to float:
```

(followed by tqdm progress bars and "read from disk" lines from the ray workers)

Process finished with exit code 1
In generate_gt.py, change the line to `cam_intr = np.loadtxt(intrinsic_dir)[:3, :3]`, i.e. drop `delimiter=' '` and that's it. The numpy version the authors used to read the data was probably older; that's where the problem is. Once loaded, the data is already an array, so there is no need to split elements on ' '. PS: you will hit many similar problems later, e.g. file names, formats, the name format under pose, and so on.
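A minimal reproduction of why dropping `delimiter=' '` helps: the default delimiter treats any run of whitespace as one separator, while an explicit single space turns double spaces into empty fields that numpy then tries, and fails, to convert to float. The matrix values below are made up for illustration, not from ScanNet:

```python
import numpy as np
from io import StringIO

# intrinsics rows separated by *two* spaces, as in many exported files
text = "577.87  0.0  319.5\n0.0  577.87  239.5\n0.0  0.0  1.0\n"

# default delimiter = any run of whitespace -> parses fine
cam_intr = np.loadtxt(StringIO(text))
print(cam_intr.shape)  # (3, 3)

# explicit delimiter=' ' treats every single space as a separator,
# producing empty fields that fail float conversion
try:
    np.loadtxt(StringIO(text), delimiter=' ')
except ValueError as e:
    print("failed as expected:", e)
```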
I'm back. After several days of running I found that this labeling script reads frames by integer index starting from 0, rather than reading the actual file names from the directory as I had assumed. So when you use preprocessed data, it reports file-not-found and similar errors. If you downloaded preprocessed data like I did, you need to edit simple_loader.py under the project's tools directory. In case anyone is unsure how, here is my full code:
```python
class ScanNetDataset(torch.utils.data.Dataset):
    """Pytorch Dataset for a single scene. __getitem__ loads individual frames."""

    def __init__(self, n_imgs, scene, data_path, max_depth, id_list=[]):
        self.n_imgs = n_imgs
        self.scene = scene
        self.data_path = data_path
        self.max_depth = max_depth
        # Build zero-padded frame ids ('000000', '000100', ...) instead of
        # the original integers 0..n_imgs-1, to match the downloaded file names.
        for i in range(n_imgs):
            id_list += [str(i).rjust(4, '0') + '00']
        self.id_list = id_list

    def __len__(self):
        return self.n_imgs

    def __getitem__(self, id):
        """Returns camera pose, depth image and color image for a single frame."""
        id = self.id_list[id]
        # Read camera pose (no explicit delimiter, same fix as in generate_gt.py)
        cam_pose = np.loadtxt(os.path.join(self.data_path, self.scene, "pose", str(id) + ".txt"))
        # Read depth image
        depth_im = cv2.imread(os.path.join(self.data_path, self.scene, "depth", str(id) + ".png"), -1).astype(np.float32)
        depth_im /= 1000.  # depth is saved in 16-bit PNG in millimeters
        depth_im[depth_im > self.max_depth] = 0
        # Read RGB image and resize it to the depth resolution
        color_image = cv2.cvtColor(cv2.imread(os.path.join(self.data_path, self.scene, "color", str(id) + ".jpg")),
                                   cv2.COLOR_BGR2RGB)
        color_image = cv2.resize(color_image, (depth_im.shape[1], depth_im.shape[0]), interpolation=cv2.INTER_AREA)
        return cam_pose, depth_im, color_image
```
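One caveat about the snippet above: `id_list=[]` is a mutable default argument, so successive `ScanNetDataset` instances created without an explicit `id_list` keep appending into the same shared list. A safer version of just the id-building part, as a refactor of my own that assumes the same zero-padded-plus-'00' frame naming as the snippet:

```python
def make_id_list(n_imgs, id_list=None):
    # None as the default sidesteps Python's shared-mutable-default pitfall:
    # a fresh list is created on every call instead of being reused.
    if id_list is None:
        id_list = []
    for i in range(n_imgs):
        # 0 -> '000000', 1 -> '000100', ... (frame ids stride by 100)
        id_list.append(str(i).rjust(4, '0') + '00')
    return id_list
```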
After making the modification proposed above, the following problem occurred when running:
`/home/chengwen/anaconda3/envs/neucon/bin/python3.7 /home/chengwen/NeuralRecon/NeuralRecon/tools/tsdf_fusion/generate_gt.py --save_name all_tsdf_9 --window_size 9
0%| | 0/48 [00:00<?, ?it/s]
(process_with_single_worker pid=4461) read from disk
0%| | 0/48 [00:00<?, ?it/s]
(process_with_single_worker pid=4451) read from disk
(process_with_single_worker pid=4461) scene0000_02: read frame 0/62
0%| | 0/48 [00:00<?, ?it/s]
(process_with_single_worker pid=4460) read from disk
(process_with_single_worker pid=4451) E0905 09:35:44.144559693 5315 chttp2_transport.cc:1103] Received a GOAWAY with error code ENHANCE_YOUR_CALM and debug data equal to "too_many_pings"
(process_with_single_worker pid=4453) read from disk
0%| | 0/48 [00:00<?, ?it/s]
(process_with_single_worker pid=4451) scene0000_00: read frame 0/56
(process_with_single_worker pid=4460) scene0002_00: read frame 0/52
(process_with_single_worker pid=4453) scene0001_01: read frame 0/13
(process_with_single_worker pid=4453) Initializing voxel volume...
(process_with_single_worker pid=4462) read from disk
0%| | 0/48 [00:00<?, ?it/s]
0%| | 0/48 [00:00<?, ?it/s]
(process_with_single_worker pid=4454) read from disk
(process_with_single_worker pid=4461) Initializing voxel volume...
(process_with_single_worker pid=4452) read from disk
0%| | 0/48 [00:00<?, ?it/s]
(process_with_single_worker pid=4459) read from disk
0%| | 0/48 [00:00<?, ?it/s]
(process_with_single_worker pid=4456) read from disk
(process_with_single_worker pid=4462) scene0000_01: read frame 0/60
0%| | 0/48 [00:00<?, ?it/s]
(process_with_single_worker pid=4451) Initializing voxel volume...
Traceback (most recent call last):
  File "/home/chengwen/NeuralRecon/NeuralRecon/tools/tsdf_fusion/generate_gt.py", line 284, in <module>
    results = ray.get(ray_worker_ids)
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/site-packages/ray/_private/client_mode_hook.py", line 105, in wrapper
    return func(*args, **kwargs)
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/site-packages/ray/worker.py", line 1809, in get
    raise value.as_instanceof_cause()
ray.exceptions.RayTaskError(ExecError): ray::process_with_single_worker() (pid=4453, ip=192.168.110.126)
AttributeError: 'function' object has no attribute '_memoize_dic'

During handling of the above exception, another exception occurred:

ray::process_with_single_worker() (pid=4453, ip=192.168.110.126)
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/site-packages/pytools/prefork.py", line 47, in call_capture_output
    stderr=PIPE)
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/subprocess.py", line 800, in __init__
    restore_signals, start_new_session)
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/subprocess.py", line 1551, in _execute_child
    raise child_exception_type(errno_num, err_msg, err_filename)
FileNotFoundError: [Errno 2] No such file or directory: 'nvcc': 'nvcc'

During handling of the above exception, another exception occurred:

ray::process_with_single_worker() (pid=4453, ip=192.168.110.126)
  File "/home/chengwen/NeuralRecon/NeuralRecon/tools/tsdf_fusion/generate_gt.py", line 226, in process_with_single_worker
    save_tsdf_full(args, scene, cam_intr, depth_all, cam_pose_all, color_all, save_mesh=False)
  File "/home/chengwen/NeuralRecon/NeuralRecon/tools/tsdf_fusion/generate_gt.py", line 79, in save_tsdf_full
    tsdf_vol_list.append(TSDFVolume(vol_bnds, voxel_size=args.voxel_size * 2 * l, margin=args.margin))
  File "/home/chengwen/NeuralRecon/NeuralRecon/tools/tsdf_fusion/fusion.py", line 142, in __init__
    }""")
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/site-packages/pycuda/compiler.py", line 358, in __init__
    include_dirs,
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/site-packages/pycuda/compiler.py", line 298, in compile
    return compile_plain(source, options, keep, nvcc, cache_dir, target)
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/site-packages/pycuda/compiler.py", line 93, in compile_plain
    checksum.update(get_nvcc_version(nvcc).encode("utf-8"))
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/site-packages/pytools/__init__.py", line 700, in wrapper
    result = func(*args)
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/site-packages/pycuda/compiler.py", line 16, in get_nvcc_version
    result, stdout, stderr = call_capture_output(cmdline)
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/site-packages/pytools/prefork.py", line 221, in call_capture_output
    return forker.call_capture_output(cmdline, cwd, error_on_nonzero)
  File "/home/chengwen/anaconda3/envs/neucon/lib/python3.7/site-packages/pytools/prefork.py", line 58, in call_capture_output
    raise ExecError("error invoking '{}': {}".format(" ".join(cmdline), e))
pytools.prefork.ExecError: error invoking 'nvcc --version': [Errno 2] No such file or directory: 'nvcc': 'nvcc'

(process_with_single_worker pid=4460) Initializing voxel volume...
(process_with_single_worker pid=4454) scene0003_01: read frame 0/17
(process_with_single_worker pid=4678) read from disk
(process_with_single_worker pid=4452) scene0001_00: read frame 0/10
(process_with_single_worker pid=4454) Initializing voxel volume...
(process_with_single_worker pid=4459) scene0003_00: read frame 0/18
(process_with_single_worker pid=4678) scene0006_02: read frame 0/27
Process finished with exit code 1`
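The FileNotFoundError chain above bottoms out in pycuda shelling out to `nvcc --version`, so the CUDA toolkit's bin directory must be on PATH in the environment the ray workers inherit. A quick pre-flight check you could run before launching generate_gt.py; this is a sketch of my own, not something from the repo:

```python
import shutil

def check_nvcc():
    # pycuda compiles the TSDF fusion kernels by invoking `nvcc`,
    # so the binary must be discoverable on PATH in every worker.
    nvcc = shutil.which("nvcc")
    if nvcc is None:
        raise RuntimeError(
            "nvcc not found on PATH; install the CUDA toolkit or add "
            "its bin directory (e.g. /usr/local/cuda/bin) to PATH"
        )
    return nvcc
```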
I'd like to ask: when I run it, I get an error from the ray library.
My ray version is 1.13.0; give that a try.
Did you ever solve this problem?
I can't reply in Chinese here. But looking at your message, I think your ray install has a problem. Maybe pin ray==1.13, torch 1.6, and so on, and give it another attempt.
You can't just delete ray: it is a parallel computing library that the script depends on. You need to change to the correct version, 1.13.
------------------ Original message ------------------ From: "zju3dv/NeuralRecon" @.>; Sent: Saturday, November 5, 2022, 12:28; To: @.>; Cc: @.>; "State @.>; Subject: Re: [zju3dv/NeuralRecon] problem in running generate_gt.py (Issue #109)
(py379) @.***:~/code/NeuralRecon/NeuralRecon-master$ python tools/tsdf_fusion/generate_gt.py --data_path /home/peter/code/NeuralRecon/scannet --save_name all_tsdf_9 --window_size 9
Traceback (most recent call last):
  File "tools/tsdf_fusion/generate_gt.py", line 275, in <module>
    files = split_list(files, all_proc)
  File "tools/tsdf_fusion/generate_gt.py", line 231, in split_list
    assert len(_list) >= n
AssertionError
How did you solve this problem?
I'm back. I just deleted ray entirely.
(py379) peter@DESKTOP-A5F108R:~/code/NeuralRecon/NeuralRecon-master$ python tools/tsdf_fusion/generate_gt.py --data_path /home/peter/code/NeuralRecon/scannet --save_name all_tsdf_9 --window_size 9
Traceback (most recent call last):
  File "tools/tsdf_fusion/generate_gt.py", line 275, in <module>
    files = split_list(files, all_proc)
  File "tools/tsdf_fusion/generate_gt.py", line 231, in split_list
    assert len(_list) >= n
AssertionError

I also hit this problem. How do I solve it? Can someone give instructions?
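The AssertionError above means there are fewer scene folders than worker processes: split_list refuses to divide a list across more workers than it has items, so with a small partial download you need to reduce the process count or add more scenes. A sketch of what such a split does, reconstructed from the traceback rather than copied from the repo, so the real implementation may differ:

```python
def split_list(_list, n):
    # Requires at least one item per worker; this is the assertion
    # that fires when there are fewer scenes than processes.
    assert len(_list) >= n
    # Deal items round-robin into n chunks.
    return [_list[i::n] for i in range(n)]
```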