Closed Dipankar1997161 closed 1 year ago
Have you solved this problem? Could I ask what your operating system is, and how you did the rebuild?
My system is Arch Linux. Just install SpConv from the official repository first, matching your cuDNN version. That's all.
If minor errors arise after that, comment back and I will help you.
The thing is, I have the steps written down on my laptop, but I am away from it right now, so I can't access them. Share any future errors and we will solve them.
@Dipankar1997161 Hi,
Did you succeed in running the Neural Body approach on your custom monocular video?
Actually, my project involves several other factors. I do not have a monocular video, hence I am using multi-view videos.
To generate the SMPL parameters for your monocular dataset, use ROMP / VIBE. They will generate the necessary parameters for the video: Rh, Th, poses, joints, and more.
Then just follow the remaining preprocessing steps in this repo.
I did not find VideoAvatars very useful; I ran into problems while installing it, and since I did not actually have a monocular video, I didn't dive much into it.
But ROMP and VIBE can solve your issue. Let me know how it goes.
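If it helps, here is a minimal sketch of pulling those parameters out of a ROMP/VIBE-style .npz. The key names ("poses", "trans", "betas") are assumptions and differ between versions, so inspect `data.files` on your own output first:

```python
import numpy as np

# Hedged sketch: key names below are assumptions -- check data.files on
# your own ROMP/VIBE output, since they vary between versions.
def load_smpl_params(npz_path):
    data = np.load(npz_path, allow_pickle=True)
    poses = np.asarray(data["poses"], dtype=np.float32)  # (N, 72) axis-angle per frame
    trans = np.asarray(data["trans"], dtype=np.float32)  # (N, 3) global translation (Th)
    betas = np.asarray(data["betas"], dtype=np.float32)  # (10,) shape coefficients
    # Neural Body takes the first 3 pose values as the global rotation Rh
    Rh = poses[:, :3]
    return {"poses": poses, "trans": trans, "betas": betas, "Rh": Rh}

# Demo with dummy data standing in for a real ROMP output file
np.savez("demo_smpl.npz",
         poses=np.zeros((5, 72)), trans=np.zeros((5, 3)), betas=np.zeros(10))
params = load_smpl_params("demo_smpl.npz")
print(params["Rh"].shape)  # (5, 3)
```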
Let me tell you what I am intending to accomplish.
I have a video of myself turning in front of the phone camera, just like the woman in https://zju3dv.github.io/neuralbody/, and I want to get the SMPL shape and pose parameters so I can feed them to the SMPL model, get vertices, and measure some body parts, like lengths and several circumferences.
I was looking around and ended up reading the Neural Body paper. My question is: can I accomplish the steps mentioned above with Neural Body, and what are the exact steps? I followed INSTALL.md without any problems. I'm a bit new to 3D computer vision...
I also don't have a monocular dataset; I intend to find a model pretrained on monocular videos and run inference directly...
I understand what you are trying to accomplish here. Let me clarify one point at a time.
This is for the SMPL parameters. A similar option is https://github.com/mkocabas/VIBE.git
Yes, I have the scripts ready for the SMPL output in order to measure these body parts. As I understood from you, Neural Body does not touch the SMPL parameters at any stage? I mean, I cannot recover SMPL parameters from Neural Body's 3D reconstruction?
I will take a look at VIBE and ROMP. Do you think they can perform well on a monocular video where the person is turning in front of the camera?
As I understand it, you should read some video-to-SMPL papers. Neural Body is a 3D reconstruction work which needs the SMPL parameters first. You may be mixing up cause and effect :)
You misunderstood my point. Neural Body, like any other 3D reconstruction method of this kind, requires SMPL parameters. These parameters are generated by other methods and then fed to the 3D reconstruction network for rendering.
VIBE and ROMP will only give you the necessary SMPL parameters. Taking those values, you can then train Neural Body on your custom dataset.
Hello, could you tell me how to get Th from ROMP? I don't know which param in the .npz file from ROMP is Th. The code for Th in Neural Body is as follows:
# transform smpl from the world coordinate to the smpl coordinate
Rh = self.params['pose'][i][:3]
R = cv2.Rodrigues(Rh)[0].astype(np.float32)
Th = self.params['trans'][i].astype(np.float32)
xyz = np.dot(xyz - Th, R)
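For reference, a self-contained numpy sketch of that same world-to-SMPL transform (Rodrigues written out by hand so cv2 is not needed), checked with a round trip on dummy data:

```python
import numpy as np

# Self-contained sketch of the world->SMPL transform quoted above;
# Rodrigues is implemented by hand so cv2 is not required here.

def rodrigues(rvec):
    """Axis-angle (3,) -> rotation matrix (3, 3)."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-8:
        return np.eye(3, dtype=np.float32)
    axis = rvec / theta
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    R = np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)
    return R.astype(np.float32)

def world_to_smpl(xyz, Rh, Th):
    """xyz: (N, 3) world vertices; Rh: (3,) axis-angle; Th: (3,) translation."""
    R = rodrigues(np.asarray(Rh, dtype=np.float32))
    # For row vectors, (x - Th) @ R is the same as R.T @ (x - Th)
    return np.dot(xyz - Th, R)

# Round-trip check on dummy data: smpl -> world -> smpl
xyz_smpl = np.random.rand(10, 3).astype(np.float32)
Rh = np.array([0.1, -0.2, 0.3], dtype=np.float32)
Th = np.array([0.5, 1.0, -0.3], dtype=np.float32)
R = rodrigues(Rh)
xyz_world = xyz_smpl @ R.T + Th
print(np.allclose(world_to_smpl(xyz_world, Rh, Th), xyz_smpl, atol=1e-5))  # True
```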
I believe you have something called either "trans" or "cam_trans" in ROMP. Just make sure it is a 3×1 vector.
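A small defensive sketch for that, assuming the value may come out as (3,), (3, 1), or (1, 3) depending on the ROMP version:

```python
import numpy as np

# Hedged sketch: whatever the key is called in your .npz ("trans" or
# "cam_trans"), Th should end up as a flat 3-vector per frame.
def as_th(trans):
    t = np.asarray(trans, dtype=np.float32).reshape(-1)
    assert t.size == 3, f"expected 3 values for Th, got {t.size}"
    return t

print(as_th(np.ones((3, 1))))  # [1. 1. 1.]
```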
I use "cam_trans", but I get bad results:
Part of the hand is not rendered.
I can successfully render my own datasets in HumanNeRF, but not in Relighting4D (which follows Neural Body).
Try PARE for Neural Body; otherwise the best option would be EasyMocap for monocular motion capture.
Could I use EasyMocap on images?
Yes, you can.
But I only have one camera.
Use the script "mocap.py" and set --work to internet.
You can find all the tutorials on the EasyMocap HTML docs page (check the section "Demo for motion capture").
OK, Thank you very much!
Thank you, but I'm still very confused: why is using ROMP wrong?
It's not entirely wrong. As I said, in HumanNeRF the SMPL was not perfectly correct either; it depends on how the rendering pipeline handles it.
I get it, thank you! In EasyMocap, I use mediapipe_wrapper.py to get keypoints, but I get an error:
-> [Loading config/data/multivideo.yml]: 1.7s
Traceback (most recent call last):
  File "apps/fit/fit.py", line 33, in <module>
    dataset = load_object(cfg_data.module, cfg_data.args)
  File "/home/shengbo/EasyMocap-master/easymocap/config/baseconfig.py", line 67, in load_object
    obj = getattr(module, name)(**extra_args, **module_args)
  File "/home/shengbo/EasyMocap-master/easymocap/datasets/base.py", line 518, in __init__
    super().__init__(**kwargs)
  File "/home/shengbo/EasyMocap-master/easymocap/datasets/base.py", line 403, in __init__
    super().__init__(**kwargs)
  File "/home/shengbo/EasyMocap-master/easymocap/datasets/base.py", line 240, in __init__
    cameras = read_cameras(camera)
  File "/home/shengbo/EasyMocap-master/easymocap/mytools/camera_utils.py", line 151, in read_cameras
    cameras = read_camera(join(path, intri), join(path, extri))
  File "/home/shengbo/EasyMocap-master/easymocap/mytools/camera_utils.py", line 109, in read_camera
    assert os.path.exists(intri_name), intri_name

How could I get the intri and extri files?
For extracting keypoints there is a file called "extract_keypoints.py"; just use it with any mode: OpenPose, YOLO+HRNet, or MediaPipe.
You will get the keypoints. Then, when you run mocap.py with the internet mode, it will auto-generate the intri and extri files for you.
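For context, the intri/extri files simply hold per-camera intrinsics K and extrinsics R, T. A toy sketch (with made-up placeholder values) of how such parameters are consumed to project a 3D point:

```python
import numpy as np

# Illustrative only: K, R, T values below are made-up placeholders for
# what intri (K) and extri (R, T) encode per camera.
K = np.array([[1000., 0., 512.],
              [0., 1000., 512.],
              [0., 0., 1.]])
R = np.eye(3)
T = np.zeros((3, 1))

def project(xyz):
    """World point (3,) -> pixel (2,) via x = K (R X + T)."""
    cam = R @ xyz.reshape(3, 1) + T
    uv = K @ cam
    return (uv[:2] / uv[2]).ravel()

print(project(np.array([0., 0., 2.])))  # [512. 512.]
```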
Thank you. I have successfully generated the intri and extri files. When I run mocap.py, I get an error:

Traceback (most recent call last):
  File "apps/fit/fit.py", line 38, in <module>
    fitter.fit(body_model, dataset)
  File "/home/shengbo/EasyMocap-master/easymocap/multistage/base.py", line 309, in fit
    dataset.write(body_model, body_params, data, camera)
  File "/home/shengbo/EasyMocap-master/easymocap/datasets/base.py", line 390, in write
    vis_mesh = self.vis_body(body_model, params, img, camera, scale=self.writer.render.scale, mode=self.writer.render.mode)
  File "/home/shengbo/EasyMocap-master/easymocap/datasets/base.py", line 346, in vis_body
    ret = plot_meshes(vis, meshes, K, camera.R, camera.T, mode=mode)
  File "/home/shengbo/EasyMocap-master/easymocap/visualize/pyrender_wrapper.py", line 135, in plot_meshes
    renderer = Renderer()
  File "/home/shengbo/EasyMocap-master/easymocap/visualize/pyrender_wrapper.py", line 36, in __init__
    self.renderer = pyrender.OffscreenRenderer(1024, 1024)
  File "/home/shengbo/anaconda3/envs/relighting4d/lib/python3.7/site-packages/pyrender/offscreen.py", line 28, in __init__
    self._create()
  File "/home/shengbo/anaconda3/envs/relighting4d/lib/python3.7/site-packages/pyrender/offscreen.py", line 101, in _create
    self._platform.init_context()
  File "/home/shengbo/anaconda3/envs/relighting4d/lib/python3.7/site-packages/pyrender/platforms.py", line 101, in init_context
    self._egl_display = eglGetDisplay(EGL_DEFAULT_DISPLAY)
  File "/home/shengbo/anaconda3/envs/relighting4d/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 415, in __call__
    return self( *args, **named )
  File "src/errorchecker.pyx", line 58, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError
OpenGL.raw.EGL._errors.EGLError: EGLError( err = EGL_BAD_PARAMETER, baseOperation = eglGetDisplay, cArguments = ( <OpenGL._opaque.EGLNativeDisplayType_pointer object at 0x7f22971874d0>, ), result = <OpenGL._opaque.EGLDisplay_pointer object at 0x7f229713d440> )
Can you share the command line you used for running mocap.py, i.e. which arguments you passed? Also, does your display support OpenGL? Check whether it is installed.
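One common workaround when the server has no display is to pick pyrender's offscreen platform explicitly before importing it. A sketch; whether "egl" or "osmesa" works depends on the machine's drivers:

```python
import os

# Select pyrender's offscreen platform *before* pyrender is imported.
# "egl" needs working GPU drivers; "osmesa" is software-only rendering.
os.environ.setdefault("PYOPENGL_PLATFORM", "egl")  # or "osmesa"

# import pyrender  # must come after the environment variable is set
print(os.environ["PYOPENGL_PLATFORM"])  # egl
```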
I changed the server, and I get another error.
Got it. So, if you open your camera files, there will be a number assigned to each camera, e.g. "01".
Make sure the images and annotations are in folders with matching names: images/01/ (all your images) and annots/01/ (all your keypoints).
Basically, the code is unable to locate your files, so get the file structure correct. Try this.
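A tiny sketch of checking that layout programmatically (the folder names here are hypothetical):

```python
from pathlib import Path

# Hypothetical layout check: sub-folder names like "01" must match
# between images/ and annots/, as described above.
root = Path("dataset_demo")
for sub in ["images/01", "annots/01"]:
    (root / sub).mkdir(parents=True, exist_ok=True)

cams_img = {p.name for p in (root / "images").iterdir() if p.is_dir()}
cams_ann = {p.name for p in (root / "annots").iterdir() if p.is_dir()}
print(cams_img == cams_ann)  # True
```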
Thank you! I have solved it. The pyrender code installed via pip is different from the pyrender code installed via setup.py.
Thank you. I have successfully get intri and extri files. When I run mocap.py, I get an error: Traceback (most recent call last): File "apps/fit/fit.py", line 38, in fitter.fit(body_model, dataset) File "/home/shengbo/EasyMocap-master/easymocap/multistage/base.py", line 309, in fit dataset.write(body_model, body_params, data, camera) File "/home/shengbo/EasyMocap-master/easymocap/datasets/base.py", line 390, in write vis_mesh = self.vis_body(body_model, params, img, camera, scale=self.writer.render.scale, mode=self.writer.render.mode) File "/home/shengbo/EasyMocap-master/easymocap/datasets/base.py", line 346, in vis_body ret = plot_meshes(vis, meshes, K, camera.R, camera.T, mode=mode) File "/home/shengbo/EasyMocap-master/easymocap/visualize/pyrender_wrapper.py", line 135, in plot_meshes renderer = Renderer() File "/home/shengbo/EasyMocap-master/easymocap/visualize/pyrender_wrapper.py", line 36, in init self.renderer = pyrender.OffscreenRenderer(1024, 1024) File "/home/shengbo/anaconda3/envs/relighting4d/lib/python3.7/site-packages/pyrender/offscreen.py", line 28, in init self._create() File "/home/shengbo/anaconda3/envs/relighting4d/lib/python3.7/site-packages/pyrender/offscreen.py", line 101, in _create self._platform.init_context() File "/home/shengbo/anaconda3/envs/relighting4d/lib/python3.7/site-packages/pyrender/platforms.py", line 101, in init_context self._egl_display = eglGetDisplay(EGL_DEFAULT_DISPLAY) File "/home/shengbo/anaconda3/envs/relighting4d/lib/python3.7/site-packages/OpenGL/platform/baseplatform.py", line 415, in call return self( *args, **named ) File "src/errorchecker.pyx", line 58, in OpenGL_accelerate.errorchecker._ErrorChecker.glCheckError OpenGL.raw.EGL._errors.EGLError: EGLError( err = EGL_BAD_PARAMETER, baseOperation = eglGetDisplay, cArguments = ( <OpenGL._opaque.EGLNativeDisplayType_pointer object at 0x7f22971874d0>, ), result = <OpenGL._opaque.EGLDisplay_pointer object at 0x7f229713d440> )
Can you share the command line you used to run mocap.py, i.e. which arguments you passed? Also, does your display support OpenGL? Check whether it is installed.
I changed the server and got another error:
Got it. So, if you open your camera files, there will be a number assigned to each camera, e.g. `01`. Make sure the images and annotations sit in folders with matching names:

```
images/01/  (all your images)
annots/01/  (all your keypoints)
```

Basically, the code is unable to locate your files, so get the file structure right. Try this.
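The folder pairing above can be checked automatically. Below is a small sketch (the helper name `check_easymocap_layout` is mine, not part of EasyMocap) that lists the camera folders under `images/` and reports any that lack a matching folder under `annots/`:

```python
from pathlib import Path

def check_easymocap_layout(root):
    """Return (camera folder names, cameras missing a matching annots/ folder)."""
    root = Path(root)
    # Camera folders are the subdirectories of images/, e.g. "01", "02", ...
    cams = sorted(p.name for p in (root / "images").iterdir() if p.is_dir())
    # Each camera must have an annots/<cam> folder with the same name.
    missing = [c for c in cams if not (root / "annots" / c).is_dir()]
    return cams, missing
```

If `missing` is non-empty, the data loader will fail to find those keypoint files.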
Thank you! I have solved it. The code for pyrender downloaded through pip install is different from the code for pyrender downloaded through setup.py.
Takes time. But the process is working
Hello, thank you for your help. I have a question: in Neural Body, `Rh` is taken from the pose,

```python
Rh = self.params['pose'][i][:3]
R = cv2.Rodrigues(Rh)[0].astype(np.float32)
Th = self.params['trans'][i].astype(np.float32)
xyz = np.dot(xyz - Th, R)
```

In EasyMocap, I have both `Rh` and `pose` in the .json file. However, `Rh != self.params['pose'][i][:3]`. Which one should I use?
Oh! The pose only has (69, 3) params.
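For checking which rotation to use, Neural Body's transform can be reproduced without OpenCV. The pure-NumPy Rodrigues below matches `cv2.Rodrigues` for a `(3,)` axis-angle vector, so you can apply it to your EasyMocap `Rh` and compare the result against using `pose[:3]` (a sketch for debugging, not the repo's code):

```python
import numpy as np

def rodrigues(rvec):
    """Axis-angle vector (3,) -> rotation matrix (3, 3), cv2.Rodrigues convention."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-8:
        return np.eye(3, dtype=np.float32)
    k = rvec / theta  # unit rotation axis
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return R.astype(np.float32)
```

With `R = rodrigues(Rh)`, the line `xyz = np.dot(xyz - Th, R)` from the snippet above maps world-space points into the SMPL canonical frame (multiplying row vectors by `R` applies `R.T` to each point).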
Hello @pengsida,
I checked the Install.md file and found a longer installation procedure for spconv; however, when building, it was unable to find cuDNN even though my system is up to date.
So I installed spconv directly with `pip install spconv-cu117` from the traveller59 GitHub page, and it installed successfully.
Will this create any issues, or do I have to follow your method of building it from the .whl file?
Thank you in advance.
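One quick sanity check when mixing the prebuilt wheels with a local toolkit: the `cuXYZ` suffix in the wheel name should match the CUDA version your environment actually uses. A tiny illustration (the helper `spconv_wheel_matches` is hypothetical, not part of spconv):

```python
def spconv_wheel_matches(cuda_version, wheel_name):
    """True if a wheel name like 'spconv-cu117' matches a CUDA version like '11.7'."""
    # '11.7' -> 'cu117', '12.0' -> 'cu120'
    tag = "cu" + cuda_version.replace(".", "")
    return tag in wheel_name
```

If the suffix and your runtime CUDA disagree, spconv may import but fail at kernel launch time.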