Totoro97 / NeuS

Code release for NeuS
MIT License

RuntimeError #92

Open wukailu opened 1 year ago

wukailu commented 1 year ago

There seems to be a bug in the code:

```
Traceback (most recent call last):
  File "exp_runner.py", line 392, in <module>
    runner.train()
  File "exp_runner.py", line 105, in train
    data = self.dataset.gen_random_rays_at(image_perm[self.iter_step % len(image_perm)], self.batch_size)
  File "/home/kailu/NeuS/models/dataset.py", line 118, in gen_random_rays_at
    color = self.images[img_idx][(pixels_y, pixels_x)]    # batch_size, 3
RuntimeError: indices should be either on cpu or on the same device as the indexed tensor (cpu)
```

`img_idx` is a GPU tensor while `self.images` is a CPU tensor.
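A minimal sketch of the mismatch, with hypothetical small shapes (not the repo's actual data): `self.images` is loaded on the CPU, while `img_idx` arrives on the GPU, and on recent PyTorch versions indexing a CPU tensor with a GPU index raises exactly the `RuntimeError` quoted above.

```python
import torch

# images stays on the CPU, like self.images in models/dataset.py;
# img_idx would normally arrive as a CUDA tensor (imagine .cuda() here).
images = torch.rand(4, 8, 8, 3)
img_idx = torch.tensor(0)
pixels_x = torch.randint(0, 8, (16,))
pixels_y = torch.randint(0, 8, (16,))

# With all indices on the tensor's device, the advanced indexing succeeds.
# A GPU-resident img_idx or pixels_* would trigger the RuntimeError instead.
color = images[img_idx][(pixels_y, pixels_x)]  # shape: (16, 3)
```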

linkejian123 commented 1 year ago

I had the same issue. Solved by using torch 1.8.0 as specified in the requirements file.

MvWouden commented 1 year ago

> I had the same issue. Solved by using torch 1.8.0 as specified in the requirements file.

Worked for me as well, thanks!

sszxc commented 1 year ago

For users like me who have to use higher versions of PyTorch due to GPU limitations, removing some `.cpu()` calls in the code and checking the device of the tensors related to the error message can solve the problem.

winnechan commented 1 year ago

> For users like me who have to use higher versions of PyTorch due to GPU limitations, removing some `.cpu()` calls in the code and checking the device of the tensors related to the error message can solve the problem.

Yes, I only modified the device of some tensors involved in the `gen_random_rays_at` function in `models/dataset.py` to make it work.

fangli333 commented 1 year ago

> For users like me who have to use higher versions of PyTorch due to GPU limitations, removing some `.cpu()` calls in the code and checking the device of the tensors related to the error message can solve the problem.
>
> Yes, I only modified the device of some tensors involved in the `gen_random_rays_at` function in `models/dataset.py` to make it work.

Can you provide more details about how to modify the code?

Earendil-of-Gondor commented 1 year ago

> For users like me who have to use higher versions of PyTorch due to GPU limitations, removing some `.cpu()` calls in the code and checking the device of the tensors related to the error message can solve the problem.
>
> Yes, I only modified the device of some tensors involved in the `gen_random_rays_at` function in `models/dataset.py` to make it work.
>
> Can you provide more details about how to modify the code?

Adding `.cuda()` or `.to(device)` to the images and masks on lines 118 and 119 worked for me.
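A self-contained sketch of that change, with hypothetical small shapes (in the repo it applies to `self.images` and `self.masks` inside `gen_random_rays_at`): move both stacks to the target device once, then index them with same-device index tensors.

```python
import torch

# Pick whatever device is available; on a CUDA machine this mirrors the
# .cuda()/.to(device) fix suggested above.
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

images = torch.rand(2, 4, 4, 3).to(device)   # like self.images.to(self.device)
masks = torch.rand(2, 4, 4, 1).to(device)    # like self.masks.to(self.device)
img_idx = torch.tensor(1, device=device)
pixels_x = torch.randint(0, 4, (5,), device=device)
pixels_y = torch.randint(0, 4, (5,), device=device)

# All tensors now share one device, so the indexing no longer raises.
color = images[img_idx][(pixels_y, pixels_x)]   # shape: (5, 3)
mask = masks[img_idx][(pixels_y, pixels_x)]     # shape: (5, 1)
```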

shuyueW1991 commented 1 year ago

In torch there is a method, `.get_device()`, that helps detect the device in use. With it, I checked the function in `dataset.py` line by line and managed to run the code with the following changes: make sure that the `color` and `mask` in the final return line are also on the CPU, and send `self.images` and `self.masks` to CUDA via `.to(self.device)`, just like `img_idx`, `pixels_y`, and `pixels_x`.
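For anyone doing the same line-by-line check, a quick sketch of inspecting tensor devices: the `.device` attribute and `.is_cuda` flag are the simplest, and on recent PyTorch versions `.get_device()` returns the GPU ordinal for CUDA tensors and -1 for CPU tensors (older releases raised an error on CPU tensors instead).

```python
import torch

t = torch.zeros(3)          # a plain CPU tensor
print(t.device)             # cpu
print(t.is_cuda)            # False

# .get_device() only gives a meaningful ordinal for CUDA tensors.
if torch.cuda.is_available():
    g = t.cuda()
    print(g.get_device())   # GPU ordinal, e.g. 0
```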

ZepSbosnia commented 11 months ago

In my case, modifying it this way works:

```python
def gen_random_rays_at(self, img_idx, batch_size):
    """
    Generate random rays at world space from one camera.
    """
    pixels_x = torch.randint(low=0, high=self.W, size=[batch_size])
    pixels_y = torch.randint(low=0, high=self.H, size=[batch_size])
    img_idx = img_idx.to(self.device)
    self.images = self.images.to(self.device)
    self.masks = self.masks.to(self.device)
    color = self.images[img_idx][(pixels_y, pixels_x)]    # batch_size, 3
    mask = self.masks[img_idx][(pixels_y, pixels_x)]      # batch_size, 3
    p = torch.stack([pixels_x, pixels_y, torch.ones_like(pixels_y)], dim=-1).float()  # batch_size, 3
    p = torch.matmul(self.intrinsics_all_inv[img_idx, None, :3, :3], p[:, :, None]).squeeze()  # batch_size, 3
    rays_v = p / torch.linalg.norm(p, ord=2, dim=-1, keepdim=True)    # batch_size, 3
    rays_v = torch.matmul(self.pose_all[img_idx, None, :3, :3], rays_v[:, :, None]).squeeze()  # batch_size, 3
    rays_o = self.pose_all[img_idx, None, :3, 3].expand(rays_v.shape)  # batch_size, 3
    return torch.cat([rays_o, rays_v, color, mask[:, :1]], dim=-1).cuda()
```

UpsilonYHZ commented 2 months ago

> In my case, modifying it this way works:
>
> ```python
> def gen_random_rays_at(self, img_idx, batch_size):
>     """
>     Generate random rays at world space from one camera.
>     """
>     pixels_x = torch.randint(low=0, high=self.W, size=[batch_size])
>     pixels_y = torch.randint(low=0, high=self.H, size=[batch_size])
>     # img_idx = img_idx.to(self.device)
>     self.images = self.images.to(self.device)
>     self.masks = self.masks.to(self.device)
>     color = self.images[img_idx][(pixels_y, pixels_x)]    # batch_size, 3
>     mask = self.masks[img_idx][(pixels_y, pixels_x)]      # batch_size, 3
>     p = torch.stack([pixels_x, pixels_y, torch.ones_like(pixels_y)], dim=-1).float()  # batch_size, 3
>     p = torch.matmul(self.intrinsics_all_inv[img_idx, None, :3, :3], p[:, :, None]).squeeze()  # batch_size, 3
>     rays_v = p / torch.linalg.norm(p, ord=2, dim=-1, keepdim=True)    # batch_size, 3
>     rays_v = torch.matmul(self.pose_all[img_idx, None, :3, :3], rays_v[:, :, None]).squeeze()  # batch_size, 3
>     rays_o = self.pose_all[img_idx, None, :3, 3].expand(rays_v.shape)  # batch_size, 3
>     return torch.cat([rays_o, rays_v, color, mask[:, :1]], dim=-1).cuda()
> ```

It works! A big thanks to you!