Open DabblerGISer opened 7 months ago
My issue is fixed. I entered the Docker container and, under `/app`, ran `pip install -e ./submodules/simple-knn` and `pip install -e ./submodules/diff-gaussian-rasterization` again.
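For anyone hitting the same thing, the fix above can be sketched as a small script. This is a hedged sketch, not the repo's official setup: the `/app` path and the `submodules` layout are taken from the report above, and the `APP_DIR` variable is my own addition so you can adjust it for a different image.

```shell
# Rebuild the two CUDA submodules in editable mode inside the running container.
# APP_DIR is an assumption for illustration; the original report used /app.
APP_DIR="${APP_DIR:-/app}"
if [ -d "$APP_DIR/submodules" ]; then
  pip install -e "$APP_DIR/submodules/simple-knn"
  pip install -e "$APP_DIR/submodules/diff-gaussian-rasterization"
else
  echo "submodules directory not found under $APP_DIR"
fi
```

Rebuilding inside the container matters because the extensions are compiled against the CUDA toolkit visible at install time; wheels built on the host (or in a different base image) can fail at runtime.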
@DabblerGISer what base image did you use? Do you mind sharing your Dockerfile?
You can refer to #1018.
Hi everyone, I tried to build a Docker container locally, and an error occurred when running train.py. The details are as follows:
```
Start training
[INFO] START TRAINING AT: 2024-03-25 10:46:14
Optimizing /data/output
Output folder: /data/output [25/03 10:46:14]
Tensorboard not available: not logging progress [25/03 10:46:14]
Reading camera 32/32 [25/03 10:46:14]
Loading Training Cameras [25/03 10:46:14]
Loading Test Cameras [25/03 10:46:14]
Number of points at initialisation : 4142 [25/03 10:46:14]
Traceback (most recent call last):
  File "train.py", line 228, in <module>
    training(lp.extract(args), op.extract(args), pp.extract(args), args.test_iterations, args.save_iterations, args.checkpoint_iterations, args.start_checkpoint, args.debug_from)
  File "train.py", line 37, in training
    scene = Scene(dataset, gaussians)
  File "/app/scene/__init__.py", line 85, in __init__
    self.gaussians.create_from_pcd(scene_info.point_cloud, self.cameras_extent)
  File "/app/scene/gaussian_model.py", line 208, in create_from_pcd
    dist2 = torch.clamp_min(distCUDA2(torch.from_numpy(np.asarray(pcd.points)).float().cuda()), 0.0000001)
MemoryError: std::bad_alloc: cudaErrorMemoryAllocation: out of memory
```
Do you have any ideas about this issue? Thank you so much!
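One observation that may help narrow this down (my own back-of-envelope check, not from the thread): the log reports only 4142 points, whose raw float32 footprint is tiny, so a `cudaErrorMemoryAllocation` at the very first `.cuda()` call is more likely a broken CUDA context (driver/toolkit mismatch in the image, or the container started without GPU access, e.g. missing `--gpus all`) than genuine memory exhaustion.

```shell
# Sanity check: bytes needed for the initial point cloud from the log.
# 4142 points x 3 float32 coordinates x 4 bytes each.
NUM_POINTS=4142
FOOTPRINT=$((NUM_POINTS * 3 * 4))
echo "$FOOTPRINT bytes"   # ~48 KiB, far below any GPU's capacity
```

If the arithmetic checks out on your data too, the next thing to verify is that `nvidia-smi` works inside the container and that the base image's CUDA version matches the host driver.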