Arthur151 / ROMP

Monocular, One-stage, Regression of Multiple 3D People and their 3D positions & trajectories in camera & global coordinates. ROMP[ICCV21], BEV[CVPR22], TRACE[CVPR2023]
https://www.yusun.work/
Apache License 2.0

image visualization issue #116

Closed mahsaep closed 2 years ago

mahsaep commented 2 years ago

Thanks so much for sharing the code!

I am trying to test ROMP on the demo images and save the predicted mesh rendered on each image with "CUDA_VISIBLE_DEVICES=0 python -u -m romp.predict.image --configs_yml='configs/image.yml'"

However, it terminates without processing any of the images. One bug I found is that in romp/predict/image.py, line 49, results_dict['mesh_rendering_orgimgs']['figs'] should be changed to results_dict['org_img']['figs'] (a minimal sketch of this workaround is at the end of this comment), but even with that change I still face the problem:

CUDA_VISIBLE_DEVICES=0 python -u -m romp.predict.image --configs_yml='configs/image.yml'
yaml_timestamp /home/mahsa/Downloads/ROMP-master/active_configs/active_context_2021-12-13_13_25_29.yaml
Loading the configurations from configs/image.yml
INFO:root:{'tab': 'hrnet_cm64_process_images', 'configs_yml': 'configs/image.yml', 'inputs': 'demo/images', 'output_dir': 'demo/image_results', 'interactive_vis': False, 'show_largest_person_only': False, 'show_mesh_stand_on_image': False, 'soi_camera': 'far', 'make_tracking': False, 'temporal_optimization': False, 'save_dict_results': True, 'save_visualization_on_img': True, 'fps_save': 24, 'character': 'smpl', 'renderer': 'pyrender', 'f': None, 'model_return_loss': False, 'model_version': 1, 'multi_person': True, 'new_training': False, 'perspective_proj': False, 'FOV': 60, 'focal_length': 443.4, 'lr': 0.0003, 'adjust_lr_factor': 0.1, 'weight_decay': 1e-06, 'epoch': 120, 'fine_tune': True, 'GPUS': 0, 'batch_size': 64, 'input_size': 512, 'master_batch_size': -1, 'nw': 4, 'optimizer_type': 'Adam', 'pretrain': 'simplebaseline', 'fix_backbone_training_scratch': False, 'backbone': 'hrnet', 'model_precision': 'fp32', 'deconv_num': 0, 'head_block_num': 2, 'merge_smpl_camera_head': False, 'use_coordmaps': True, 'hrnet_pretrain': '/home/mahsa/Downloads/ROMP-master/trained_models/pretrain_hrnet.pkl', 'resnet_pretrain': '/home/mahsa/Downloads/ROMP-master/trained_models/pretrain_resnet.pkl', 'loss_thresh': 1000, 'max_supervise_num': -1, 'supervise_cam_params': False, 'match_preds_to_gts_for_supervision': False, 'matching_mode': 'all', 'supervise_global_rot': False, 'HMloss_type': 'MSE', 'eval': False, 'eval_datasets': 'pw3d', 'val_batch_size': 4, 'test_interval': 2000, 'fast_eval_iter': -1, 'top_n_error_vis': 6, 'eval_2dpose': False, 'calc_pck': False, 'PCK_thresh': 150, 'calc_PVE_error': False, 'centermap_size': 64, 'centermap_conf_thresh': 0.25, 'collision_aware_centermap': False, 'collision_factor': 0.2, 'center_def_kp': True, 'local_rank': 0, 'distributed_training': False, 'distillation_learning': False, 'teacher_model_path': '/export/home/suny/CenterMesh/trained_models/3dpw_88_57.8.pkl', 'print_freq': 50, 'model_path': 'trained_models/ROMP_HRNet32_V1.pkl', 'log_path': '/home/mahsa/Downloads/log/', 'learn_2dpose': False, 'learn_AE': False, 'learn_kp2doffset': False, 'shuffle_crop_mode': False, 'shuffle_crop_ratio_3d': 0.9, 'shuffle_crop_ratio_2d': 0.1, 'Synthetic_occlusion_ratio': 0, 'color_jittering_ratio': 0.2, 'rotate_prob': 0.2, 'dataset_rootdir': '/home/mahsa/Downloads/dataset/', 'dataset': 'h36m,mpii,coco,aich,up,ochuman,lsp,movi', 'voc_dir': '/home/mahsa/Downloads/dataset/VOCdevkit/VOC2012/', 'max_person': 64, 'homogenize_pose_space': False, 'use_eft': True, 'smpl_mesh_root_align': False, 'Rot_type': '6D', 'rot_dim': 6, 'cam_dim': 3, 'beta_dim': 10, 'smpl_joint_num': 22, 'smpl_model_path': '/home/mahsa/Downloads/ROMP-master/model_data/parameters', 'smpl_J_reg_h37m_path': '/home/mahsa/Downloads/ROMP-master/model_data/parameters/J_regressor_h36m.npy', 'smpl_J_reg_extra_path': '/home/mahsa/Downloads/ROMP-master/model_data/parameters/J_regressor_extra.npy', 'smpl_uvmap': '/home/mahsa/Downloads/ROMP-master/model_data/parameters/smpl_vt_ft.npz', 'wardrobe': '/home/mahsa/Downloads/ROMP-master/model_data/wardrobe', 'mesh_cloth': 'ghostwhite', 'nvxia_model_path': '/home/mahsa/Downloads/ROMP-master/model_data/characters/nvxia', 'track_memory_usage': False, 'adjust_lr_epoch': [], 'kernel_sizes': [5], 'collect_subdirs': False, 'save_mesh': True, 'save_centermap': False}
INFO:root:------------------------------------------------------------------
INFO:root:start building model.
Using ROMP v1
Confidence: 0.25
INFO:root:using fine_tune model: trained_models/ROMP_HRNet32_V1.pkl
WARNING:root:model trained_models/ROMP_HRNet32_V1.pkl not exist!
INFO:root:Train all layers, except: ['_result_parser.params_map_parser.smpl_model.betas']
Initialization finished!
Processing demo/images, saving to demo/image_results
INFO:root:gathering datasets
Loading 3 images to process
/home/mahsa/anaconda3/envs/rmp/lib/python3.8/site-packages/torch/nn/functional.py:718: UserWarning: Named tensors and all their associated APIs are an experimental feature and subject to change. Please do not use them for anything important until they are released as stable. (Triggered internally at /opt/conda/conda-bld/pytorch_1623448234945/work/c10/core/TensorImpl.h:1156.)
  return torch.max_pool2d(input, kernel_size, stride, padding, dilation, ceil_mode)
/home/mahsa/anaconda3/envs/rmp/lib/python3.8/site-packages/torch/_tensor.py:575: UserWarning: floor_divide is deprecated, and will be removed in a future version of pytorch. It currently rounds toward 0 (like the 'trunc' function NOT 'floor'). This results in incorrect rounding for negative values. To keep the current behavior, use torch.div(a, b, rounding_mode='trunc'), or for actual floor division, use torch.div(a, b, rounding_mode='floor'). (Triggered internally at /opt/conda/conda-bld/pytorch_1623448234945/work/aten/src/ATen/native/BinaryOps.cpp:467.)
  return torch.floor_divide(self, other)
Processed 0 / 3 images

I would appreciate your help!
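For reference, here is a minimal sketch of the workaround I mentioned above: it just picks whichever rendering key this build of ROMP actually fills in. The key names are taken from romp/predict/image.py and the log output, and may differ between versions.

```python
# Hedged sketch only: return the rendered figures from whichever key this
# ROMP version produces. The key names ('mesh_rendering_orgimgs', 'org_img',
# 'figs') come from the issue above and may not match every release.
def get_rendered_figs(results_dict):
    for key in ('mesh_rendering_orgimgs', 'org_img'):
        entry = results_dict.get(key)
        if isinstance(entry, dict) and 'figs' in entry:
            return entry['figs']
    raise KeyError('no rendered figures found in results_dict')
```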

Arthur151 commented 2 years ago

The actual bug is reported in the warning: WARNING:root:model trained_models/ROMP_HRNet32_V1.pkl not exist! The results_dict['mesh_rendering_orgimgs']['figs'] line is not the bug. Please download the trained model from here.
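As a quick sanity check (just a sketch, not part of the ROMP code; adjust the path to wherever you place the file), you can confirm the checkpoint is found before rerunning the demo. The path below is the model_path shown in your log.

```python
# Sketch: verify the checkpoint referenced by configs/image.yml exists before
# launching romp.predict.image. Adjust model_path if you keep the file elsewhere.
import os

model_path = 'trained_models/ROMP_HRNet32_V1.pkl'
if not os.path.isfile(model_path):
    raise FileNotFoundError(
        f'{model_path} is missing; download ROMP_HRNet32_V1.pkl and place it '
        'under trained_models/ before running the demo')
print(f'Found checkpoint: {model_path} ({os.path.getsize(model_path)} bytes)')
```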

mahsaep commented 2 years ago

Thank you very much!