Open yangt1013 opened 3 years ago
From `models/networks/rotate_render.py` (the code around the failing call):

```python
if erode:
    with torch.cuda.device(self.current_gpu):
        rendered_images = self.renderer(vertices_ori_normal, self.faces_use, texs)  # rendered_images: batch * 3 * h * w, masks: batch * h * w
        rendered_images, depths, masks = self.renderer(vertices_ori_normal, self.faces_use,
                                                       texs)  # rendered_images: batch * 3 * h * w, masks: batch * h * w
        masks_erode = self.generate_erode_mask(masks, kernal_size=15)
        rendered_images = rendered_images.cpu()
        if grey_background:
            rendered_images_erode = masks_erode * rendered_images
        else:
            inv_masks_erode = (torch.ones_like(masks_erode) - masks_erode).float()
            if avg_BG:
                contentsum = torch.sum(torch.sum(masks_erode * rendered_images, 3), 2)
                sumsum = torch.sum(torch.sum(masks_erode, 3), 2)
                contentsum[contentsum == 0] = 0.5
                sumsum[sumsum == 0] = 1
                masked_sum = contentsum / sumsum
                masked_BG = masked_sum.unsqueeze(2).unsqueeze(3).expand(rendered_images.size())
            else:
                masked_BG = 0.5
            rendered_images_erode = masks_erode * rendered_images + inv_masks_erode * masked_BG
```
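The `avg_BG` branch above computes the mean foreground colour per image and per channel, then paints it over the pixels removed by the eroded mask. A minimal NumPy sketch of the same arithmetic (function and variable names here are illustrative, not from the repo; it assumes a binary mask of shape batch × 1 × h × w broadcast over 3 channels):

```python
import numpy as np

def fill_background_with_mean(images, masks):
    """Replace masked-out pixels with the per-image, per-channel mean
    of the remaining foreground (mirrors the avg_BG branch above)."""
    # images: (B, 3, H, W); masks: (B, 1, H, W) with values in {0, 1}
    contentsum = (masks * images).sum(axis=(2, 3))   # (B, 3) summed foreground colour
    sumsum = masks.sum(axis=(2, 3))                  # (B, 1) foreground pixel count
    sumsum[sumsum == 0] = 1                          # avoid division by zero, as in the snippet
    mean_fg = contentsum / sumsum                    # (B, 3) mean foreground colour
    background = mean_fg[:, :, None, None]           # broadcast to (B, 3, H, W)
    return masks * images + (1 - masks) * background

# Toy example: uniform 0.8 foreground -> background is filled with 0.8 too.
images = np.ones((1, 3, 4, 4)) * 0.8
masks = np.zeros((1, 1, 4, 4))
masks[0, 0, 1:3, 1:3] = 1
out = fill_background_with_mean(images, masks)
```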
In `renderer.py` the renderer ends with:

```python
images = nr.rasterize(
    faces, textures, self.image_size, self.anti_aliasing,
    self.near, self.far, self.rasterizer_eps, self.background_color)
return images
```

I am waiting for your reply! Thanks a lot.
I have run into the same issue. Have you solved it?
Same problem here. Has it been solved?
```shell
pip uninstall neural_renderer
pip install git+https://github.com/Oh-JunYoung/neural_renderer.git@at_assert_fix
```
Thanks for sharing the project! I ran into some problems with your code:

```
(py37) ty@ty-HP-Z8-G4-Workstation:~/文档/人脸识别/Rotate-and-Render-master$ bash experiments/v100_test.sh
----------------- Options ---------------
                     align: True        [default: False]
              aspect_ratio: 1.0
       cache_filelist_read: False
      cache_filelist_write: False
           checkpoints_dir: ./checkpoints
                chunk_size: [1]         [default: None]
    contain_dontcare_label: False
                 crop_size: 256
                   dataset: example     [default: ms1m,casia]
              dataset_mode: allface
              device_count: 1           [default: 8]
           display_winsize: 256
              erode_kernel: 21
                   gpu_ids: 0,1         [default: 0]
              heatmap_size: 2.5         [default: 3]
                  how_many: inf
                 init_type: xavier
             init_variance: 0.02
                   isTrain: False       [default: None]
                label_mask: True        [default: False]
                  label_nc: 5
            landmark_align: False
                  list_end: 10          [default: inf]
                  list_num: 0
                list_start: 0
        load_from_opt_file: False
                 load_size: 256
          max_dataset_size: 9223372036854775807
                     model: rotatespade [default: rotate]
                 multi_gpu: True        [default: False]
                  nThreads: 3           [default: 1]
                      name: mesh2face
                     names: rs_model    [default: rs_ijba3]
                       nef: 16
                      netG: rotatespade [default: rotate]
                       ngf: 64
                   no_flip: True
      no_gaussian_landmark: True        [default: False]
               no_instance: True
          no_pairing_check: False
                    norm_D: spectralinstance
                    norm_E: spectralinstance
                    norm_G: spectralsyncbatch [default: spectralinstance]
                 output_nc: 3
                     phase: test
               pitch_poses: None
               posesrandom: False
           preprocess_mode: scale_width_and_crop
             render_thread: 1           [default: 2]
resnet_initial_kernel_size: 7
        resnet_kernel_size: 3
           resnet_n_blocks: 9
       resnet_n_downsample: 4
               results_dir: ./results/
                 save_path: ./results/
            serial_batches: True
                   trainer: rotate
               which_epoch: latest
                 yaw_poses: [0.0, 30.0] [default: None]
----------------- End -------------------
dataset [AllFaceDataset] of size 8 was created
Testing gpu [0]
Network [RotateSPADEGenerator] was created. Total number of parameters: 225.1 million. To see the architecture, do print(network).
start prefetching data...
Process Process-1:
Traceback (most recent call last):
  File "/home/ty/anaconda3/envs/py37/lib/python3.7/multiprocessing/process.py", line 297, in _bootstrap
    self.run()
  File "/home/ty/anaconda3/envs/py37/lib/python3.7/multiprocessing/process.py", line 99, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ty/文档/人脸识别/Rotate-and-Render-master/data/data_utils.py", line 146, in prefetch_data
    prefetcher = data_prefetcher(dataloader, opt, render_layer)
  File "/home/ty/文档/人脸识别/Rotate-and-Render-master/data/data_utils.py", line 99, in __init__
    self.preload()
  File "/home/ty/文档/人脸识别/Rotate-and-Render-master/data/data_utils.py", line 124, in preload
    self.next_input = get_multipose_test_input(data, self.render_layer, self.opt.yaw_poses, self.opt.pitch_poses)
  File "/home/ty/文档/人脸识别/Rotate-and-Render-master/data/data_utils.py", line 65, in get_multipose_test_input
    = render.rotate_render(data['param_path'], real_image, data['M'], yaw_pose=pose)
  File "/home/ty/文档/人脸识别/Rotate-and-Render-master/models/networks/rotate_render.py", line 80, in rotate_render
    rendered_images, depths, masks, = self.renderer(vertices_ori_normal, self.faces_use, texs)  # rendered_images: batch * 3 * h * w, masks: batch * h * w
ValueError: not enough values to unpack (expected 3, got 1)
```
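The `ValueError` means the installed renderer returned a single object where `rotate_render` expects the three-tuple `(images, depths, masks)`, so the real fix is installing a renderer build that returns all three (as suggested above). Until the versions are reconciled, the unpacking itself can be guarded. A hedged sketch in plain Python (`unpack_render_result` is a hypothetical helper, not part of the repo):

```python
def unpack_render_result(result):
    """Accept either renderer signature: some neural_renderer builds
    return only the rendered images, while the patched fork returns
    (images, depths, masks)."""
    if isinstance(result, tuple) and len(result) == 3:
        return result              # (images, depths, masks)
    return result, None, None      # images-only build: no depths/masks available

# Stubbed illustration -- strings stand in for tensors:
images, depths, masks = unpack_render_result("imgs")
triple = unpack_render_result(("imgs", "depths", "masks"))
```

Note that downstream code such as `generate_erode_mask(masks, ...)` still needs real masks, so this guard only turns the crash into an explicit `None` to handle; it is not a substitute for installing the fixed renderer.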