Hangz-nju-cuhk / Rotate-and-Render

Code for Rotate-and-Render: Unsupervised Photorealistic Face Rotation from Single-View Images (CVPR 2020)

Error when running v100_test.sh #35

yapengyu opened this issue 3 years ago

yapengyu commented 3 years ago

dataset [AllFaceDataset] of size 8 was created
Testing gpu [0]
Network [RotateSPADEGenerator] was created. Total number of parameters: 225.1 million. To see the architecture, do print(network).
start prefetching data...
Process Process-1:
Traceback (most recent call last):
  File "/ssd/anaconda3/envs/rr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/ssd/anaconda3/envs/rr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/ssd/experiment/Rotate-and-Render/data/data_utils.py", line 146, in prefetch_data
    prefetcher = data_prefetcher(dataloader, opt, render_layer)
  File "/ssd/experiment/Rotate-and-Render/data/data_utils.py", line 99, in __init__
    self.preload()
  File "/ssd/experiment/Rotate-and-Render/data/data_utils.py", line 124, in preload
    self.next_input = get_multipose_test_input(data, self.render_layer, self.opt.yaw_poses, self.opt.pitch_poses)
  File "/ssd/experiment/Rotate-and-Render/data/data_utils.py", line 65, in get_multipose_test_input
    = render.rotate_render(data['param_path'], real_image, data['M'], yaw_pose=pose)
  File "/ssd/experiment/Rotate-and-Render/models/networks/rotate_render.py", line 80, in rotate_render
    rendered_images, depths, masks, = self.renderer(vertices_ori_normal, self.faces_use, texs)  # rendered_images: batch 3 h w, masks: batch h w
  File "/ssd/anaconda3/envs/rr/lib/python3.6/site-packages/torch/nn/modules/module.py", line 889, in _call_impl
    result = self.forward(*input, **kwargs)
  File "/ssd/anaconda3/envs/rr/lib/python3.6/site-packages/neural_renderer-1.1.3-py3.6-linux-x86_64.egg/neural_renderer/renderer.py", line 68, in forward
    return self.render(vertices, faces, textures)
  File "/ssd/anaconda3/envs/rr/lib/python3.6/site-packages/neural_renderer-1.1.3-py3.6-linux-x86_64.egg/neural_renderer/renderer.py", line 148, in render
    vertices = nr.look(vertices, self.eye, self.camera_direction)
  File "/ssd/anaconda3/envs/rr/lib/python3.6/site-packages/neural_renderer-1.1.3-py3.6-linux-x86_64.egg/neural_renderer/look.py", line 41, in look
    if up.ndimension() == 1:
AttributeError: 'NoneType' object has no attribute 'ndimension'
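For context, the traceback shows that renderer.py calls nr.look(vertices, self.eye, self.camera_direction) without passing an up vector, so up is still None when look.py reaches the up.ndimension() check. A two-line reproduction of just that failure:

up = None          # what look() receives when no up vector is supplied
up.ndimension()    # AttributeError: 'NoneType' object has no attribute 'ndimension'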

yapengyu commented 3 years ago

Then I modified neural_renderer/look.py to handle up the same way look_at.py does. The original signature is:

def look(vertices, eye, direction=[0, 1, 0], up=None):

and I changed it to:

def look(vertices, eye, direction=[0, 1, 0], up=[0, 1, 0]):
    ...
    if isinstance(up, list) or isinstance(up, tuple):
        up = torch.tensor(up, dtype=torch.float32, device=device)
    elif isinstance(up, np.ndarray):
        up = torch.from_numpy(up).to(device)
    elif torch.is_tensor(up):
        up = up.to(device)  # .to() is not in-place, so the result has to be assigned
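For reference, the pattern being copied from look_at.py is just argument coercion: accept a list/tuple, numpy array, tensor, or None and always end up with a float tensor on the right device. Below is a minimal self-contained sketch of that idea; the helper name _as_float_tensor is hypothetical and not part of neural_renderer:

import numpy as np
import torch

def _as_float_tensor(value, device, default):
    # Coerce value (None, list/tuple, numpy array, or tensor) to a float32
    # tensor on `device`, falling back to `default` when value is None.
    if value is None:
        value = default
    if isinstance(value, (list, tuple)):
        return torch.tensor(value, dtype=torch.float32, device=device)
    if isinstance(value, np.ndarray):
        return torch.from_numpy(value).float().to(device)
    if torch.is_tensor(value):
        return value.float().to(device)  # .to() returns a new tensor, so it must be assigned
    raise TypeError('unsupported type: {}'.format(type(value)))

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
up = _as_float_tensor(None, device, default=[0, 1, 0])
print(up.ndimension())  # 1, so a later `if up.ndimension() == 1:` check no longer crashes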

A new error appeared:

dataset [AllFaceDataset] of size 8 was created
Testing gpu [0]
Network [RotateSPADEGenerator] was created. Total number of parameters: 225.1 million. To see the architecture, do print(network).
start prefetching data...
Process Process-1:
Traceback (most recent call last):
  File "/ssd/anaconda3/envs/rr/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
    self.run()
  File "/ssd/anaconda3/envs/rr/lib/python3.6/multiprocessing/process.py", line 93, in run
    self._target(*self._args, **self._kwargs)
  File "/ssd/experiment/Rotate-and-Render/data/data_utils.py", line 146, in prefetch_data
    prefetcher = data_prefetcher(dataloader, opt, render_layer)
  File "/ssd/experiment/Rotate-and-Render/data/data_utils.py", line 99, in __init__
    self.preload()
  File "/ssd/experiment/Rotate-and-Render/data/data_utils.py", line 124, in preload
    self.next_input = get_multipose_test_input(data, self.render_layer, self.opt.yaw_poses, self.opt.pitch_poses)
  File "/ssd/experiment/Rotate-and-Render/data/data_utils.py", line 65, in get_multipose_test_input
    = render.rotate_render(data['param_path'], real_image, data['M'], yaw_pose=pose)
  File "/ssd/experiment/Rotate-and-Render/models/networks/rotate_render.py", line 80, in rotate_render
    rendered_images, depths, masks, = self.renderer(vertices_ori_normal, self.faces_use, texs)  # rendered_images: batch 3 h w, masks: batch h w
ValueError: not enough values to unpack (expected 3, got 1)
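This second error is unrelated to the up fix: rotate_render.py expects the renderer to return three values (rendered_images, depths, masks), and "got 1" suggests the modified render() instead returned a single batch-1 tensor (or a 1-tuple), so Python iterates over its first dimension. A minimal reproduction, assuming a batch size of 1:

import torch

fake_output = torch.zeros(1, 3, 4, 4)  # a lone image tensor instead of a (images, depths, masks) triple

try:
    rendered_images, depths, masks = fake_output  # iterates over dim 0, which has length 1
except ValueError as e:
    print(e)  # not enough values to unpack (expected 3, got 1)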

yapengyu commented 3 years ago

Could you help fix this problem/error? Thanks!

MingzSu commented 3 years ago

Same problem here. Have you solved it?

JunYoungOH97 commented 3 years ago

pip uninstall neural_renderer
pip install git+https://github.com/Oh-JunYoung/neural_renderer.git@at_assert_fix
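After reinstalling, a quick sanity check is to confirm that the look() the renderer actually imports now tolerates a missing up vector; this assumes the fork keeps the same module layout as upstream neural_renderer:

import inspect
import neural_renderer as nr

# Inspect the reinstalled look(): its source should default or handle up=None
# instead of calling up.ndimension() on None.
print(inspect.signature(nr.look))
print(inspect.getsource(nr.look))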

MingzSu commented 3 years ago

pip uninstall neural_renderer
pip install git+https://github.com/Oh-JunYoung/neural_renderer.git@at_assert_fix

Thanks, yes, you are right. I found that the problem was neural_renderer: I couldn't install the original neural_renderer before, so I used someone else's version, but its render function had been modified. Now I have found the right way to install neural_renderer, and this demo runs successfully. The right reference: https://zhuanlan.zhihu.com/p/346139061

JunYoungOH97 commented 3 years ago

I'm glad the problem was solved.