RCHI-Lab / BodyMAP

Official implementation of "BodyMAP - Jointly Predicting Body Mesh and 3D Applied Pressure Map for People in Bed", CVPR 2024
https://bodymap3d.github.io/

Request for pretrained weights for depth-only model #5

Open DavidTu21 opened 3 weeks ago

DavidTu21 commented 3 weeks ago

Dear author,

Thank you for your work on the BodyMAP project! We are currently working on a project that aims to infer an SMPL model from depth images of in-bed human poses. Unfortunately, we don't have access to 2D pressure map data, so we're focused on using depth information alone as the input.

We noticed that in your ablation study, the “depth-only” model performed well in predicting 3D body shape and pose. Would it be possible to access any pretrained weights specifically for the depth-only model? These would be highly valuable for our analysis and would allow us to benchmark our results more effectively.

If pretrained weights aren't available, any guidance on training the model from scratch would be greatly appreciated.

Thank you for your work and for considering our request!

David

Tandon-A commented 3 weeks ago

Hi @DavidTu21 ,

We have released the depth-only model. You can download it from link. These model weights are released for non-commercial purposes only; please check the license file for details.

The BodyMAP model predicts both the SMPL model for 3D human pose and the 3D pressure map. For your task, it may benefit you to train a BodyMAP model for 3D human pose only.

To train a model that predicts only 3D human pose from the depth modality, you would need to modify the config file, roughly as sketched below:
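(The key names in this sketch are illustrative placeholders, not the actual option names; use the corresponding entries in the config files shipped with the repo.)

```python
# Illustrative placeholders only -- check the repo's config files
# for the exact option names.
config = {
    "modality": "depth",        # train on the depth input only
    "use_pmap": False,          # disable the 3D pressure map head
    "pmap_loss_weight": 0.0,    # remove the pressure map loss term
}
```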

Hope this helps your work.

DavidTu21 commented 3 weeks ago

Hi @Tandon-A

Thank you so much for your detailed explanation and for uploading the depth-only model. I really appreciate it. I am wondering if I would be able to run inference on my own depth image using this uploaded depth-only model?

I saw in the save_inference.py code the line batch_mesh_pred, batch_pmap_pred, _, _ = model.infer(batch_depth_images, batch_pressure_images, batch_labels[:, 157:159]). Can I replace the batch_depth_images input with my own depth image? If so, are there any requirements on the input depth format? My current depth data is a depth map in .npy format.

I will of course also start training a BodyMAP model for 3D human pose only.

Kind regards, David

DavidTu21 commented 3 weeks ago

Hi @Tandon-A ,

Sorry for sending another comment; I think it is related to the previous request.

I have finished training the depth-only model using your suggested config (thank you for that again!) and obtained model weights trained for 100 epochs. When I ran inference with python save_inference.py --model_path xxx --opts_path xxx --save_path xxx, I hit the issue below:

```
Traceback (most recent call last):
  File "save_inference.py", line 70, in <module>
    batch_pmap_pred *= MAX_PMAP_REAL
TypeError: unsupported operand type(s) for *=: 'NoneType' and 'float'
```

I suspect this is because I did not change anything in the inference script to take only depth as input. Do you have any suggestions on how I could modify the inference code to use only depth as input? Thank you very much again for your valuable time and detailed suggestions!

Kind regards, David

Tandon-A commented 2 weeks ago

Hi @DavidTu21 ,

```
batch_pmap_pred *= MAX_PMAP_REAL
```

You can set batch_pmap_pred to a zeros tensor before this line for your model:

```python
batch_pmap_pred = torch.zeros(batch_mesh_pred['out_joint_pos'].shape[0], 6890).to(DEVICE)
```

This will set the pmap predictions to zeros.
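In context, the patched section of save_inference.py would look roughly like this (the None guard is a suggested way to keep the script working for both model types, not the shipped code):

```python
batch_mesh_pred, batch_pmap_pred, _, _ = model.infer(
    batch_depth_images, batch_pressure_images, batch_labels[:, 157:159])

# Depth-only models return no pressure map prediction, so substitute a
# zeros tensor (one value per SMPL vertex) before the rescaling step.
if batch_pmap_pred is None:
    batch_pmap_pred = torch.zeros(
        batch_mesh_pred['out_joint_pos'].shape[0], 6890).to(DEVICE)

batch_pmap_pred *= MAX_PMAP_REAL
```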

Tandon-A commented 2 weeks ago

> I am wondering if I would be able to run inference on my own depth image using this uploaded depth-only model?

You can certainly run inference on your own depth image. Please convert your depth image to the format used by SLP, and then process it similarly to what is done in the SLPDataset file, roughly as sketched below.
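(The normalization constant and shapes in this sketch are placeholders; match them to what the SLPDataset code actually does.)

```python
import numpy as np
import torch

# Placeholder constant -- match this to the SLPDataset preprocessing.
MAX_DEPTH = 2575.0  # hypothetical depth normalization value, in mm

def prepare_depth(npy_path, device="cuda"):
    """Load a raw .npy depth map and shape it into a (1, 1, H, W) batch."""
    depth = np.load(npy_path).astype(np.float32)
    # Resize/crop here to the training resolution if your camera differs.
    depth = np.clip(depth, 0, MAX_DEPTH) / MAX_DEPTH  # normalize to [0, 1]
    batch = torch.from_numpy(depth)[None, None]       # add batch + channel dims
    return batch.to(device)
```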

DavidTu21 commented 1 week ago

Hi @Tandon-A

Thank you for your reply. I tried adding batch_pmap_pred = torch.zeros(batch_mesh_pred['out_joint_pos'].shape[0], 6890).to(DEVICE) and it indeed solves the error. However, I am wondering whether I can also pass dummy numbers or zeros for batch_pressure_images when feeding the depth-only trained model, since I don't have pressure map information at inference time. Thank you!

Kind regards, David

Tandon-A commented 2 days ago

Hi David,

Glad that it worked for you. The depth-only model uses only the depth map and ignores whatever is passed for the pressure images, so you could pass random values or zeros, and I think even None would work. The model first calls the _prep_input function, which selects the input based on the modality used (link); a simplified sketch of that gating is below.
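(A simplified sketch of the kind of gating _prep_input performs, not the repo's exact code.)

```python
import torch

def prep_input(modality, depth_images, pressure_images):
    # Simplified sketch of modality gating -- not the repo's exact code.
    if modality == "depth":       # depth-only: pressure input is ignored
        return depth_images
    if modality == "pressure":    # pressure-only: depth input is ignored
        return pressure_images
    # Both modalities: stack along the channel dimension.
    return torch.cat([depth_images, pressure_images], dim=1)
```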

Best, Abhishek