
SMPL Sequences and Scene Visualization Tool

SMPL-Scene-Viewer is an open-source GUI tool for quickly and accurately visualizing and comparing SMPL sequences in large scenes in real time. Built on Open3D, it runs across platforms and on CPU-only machines. If you need more advanced rendering, we recommend Blender, Unity, or similar software.

Features

(Logo and feature demo images are shown in the repository.)

Requirements

  1. Open3D 0.15.0+
  2. Download the SMPL models basicModel_neutral_lbs_10_207_0_v1.0.0.pkl, basicModel_f_lbs_10_207_0_v1.0.0.pkl, basicModel_m_lbs_10_207_0_v1.0.0.pkl and J_regressor_extra.npy from http://smpl.is.tue.mpg.de and put them in the smpl directory (a quick file check is sketched after this list).
  3. (Optional) ffmpeg for video processing
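
Before launching the tool, you can quickly confirm that the model files from step 2 are in place. This is a minimal sketch using only the Python standard library; the smpl/ directory name follows the requirement above:

    # Minimal sketch: check that the required SMPL files are in the smpl/ directory.
    from pathlib import Path

    SMPL_DIR = Path("smpl")
    REQUIRED = [
        "basicModel_neutral_lbs_10_207_0_v1.0.0.pkl",
        "basicModel_f_lbs_10_207_0_v1.0.0.pkl",
        "basicModel_m_lbs_10_207_0_v1.0.0.pkl",
        "J_regressor_extra.npy",
    ]

    missing = [name for name in REQUIRED if not (SMPL_DIR / name).is_file()]
    print("Missing:", ", ".join(missing) if missing else "none - all SMPL files found")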

Installation

  1. Clone the repository:

    git clone https://github.com/climbingdaily/SMPL-Viewer.git
  2. Install the required packages:

    conda create --name sviewer python==3.9 -y
    conda activate sviewer
    pip install numpy open3d matplotlib scipy opencv-python torch paramiko chumpy lzf 
    • If your NumPy version is newer than 1.23.0, it conflicts with chumpy. You can comment out line 11 in chumpy/__init__.py (# from numpy import bool, int, float, complex, object, unicode, str, nan, inf), or use the compatibility shim sketched after these steps.
  3. Run

    python GUI_Tool.py
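
If you would rather not edit chumpy itself, an alternative is to restore the removed NumPy aliases before chumpy is imported. This is a hedged sketch of such a shim, not part of the tool; it assumes the conflict is the `from numpy import bool, ...` line mentioned above:

    # Sketch of a compatibility shim (assumption, not part of this repository):
    # re-create the aliases removed in NumPy 1.24+ so that chumpy imports cleanly.
    # Run this before any module that imports chumpy.
    import numpy as np

    for _alias, _type in {"bool": bool, "int": int, "float": float,
                          "complex": complex, "object": object,
                          "str": str, "unicode": str}.items():
        if not hasattr(np, _alias):
            setattr(np, _alias, _type)

    import chumpy  # should now import without the alias error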

Usage

| # | Function |
|---|----------|
| 1 | `.pcd` `.ply` `.obj` `.xyz` `.xyzn` visualization |
| 2 | `.pcd` sequence visualization |
| 3 | SMPL sequence (`.pkl`) visualization; the expected data structure is detailed in a separate readme (see the inspection sketch below) |
| 4 | Geometry material editing |
| 5 | Camera load/save |

(The corresponding toolbar buttons are shown as screenshots in the repository.)
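
As background, the point-cloud formats in the table are ones Open3D (which this tool is built on) can read directly. Below is a minimal sketch of loading and viewing such a file outside the GUI; the file path is a placeholder:

    # Minimal sketch: open a supported point-cloud file directly with Open3D.
    # "scene.pcd" is a placeholder path; .ply/.xyz/.xyzn also work here.
    import open3d as o3d

    pcd = o3d.io.read_point_cloud("scene.pcd")
    print(pcd)                                # e.g. "PointCloud with N points."
    o3d.visualization.draw_geometries([pcd])  # opens a basic viewer window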
<!-- 6 Rendering and generating the video (with camera automatically saved).
- Start: Toggle on the Render img
- End: Click the Save video
- The video will automatically be saved when the play bar meets the end.
-->
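
The exact `.pkl` layout for SMPL sequences is documented in the data readme referenced in the table. As a hedged sketch (the file path is a placeholder and no particular schema is assumed), you can peek at a sequence file's top-level structure before loading it in the viewer:

    # Minimal sketch: inspect the top-level structure of an SMPL sequence .pkl.
    # "your_sequence.pkl" is a placeholder path.
    import pickle

    with open("your_sequence.pkl", "rb") as f:
        data = pickle.load(f)

    if isinstance(data, dict):
        for key, value in data.items():
            shape = getattr(value, "shape", None)
            print(key, type(value).__name__, shape if shape is not None else "")
    else:
        print(type(data).__name__)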

Todos

Contributing

Contributions are welcome and encouraged! If you find a bug or have an idea for a new feature, please open an issue or submit a pull request.

License

The codebase is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. You must attribute the work in the manner specified by the authors; you may not use this work for commercial purposes; and if you alter, transform, or build upon this work, you may distribute the resulting work only under the same license.

Contact

If you have any questions or comments about the project, please create an issue.

Acknowledgments

We would like to thank the following individuals for their contributions to this project:

We are also grateful to the broader open-source community for creating and sharing tools and knowledge that make projects like this possible.

Citation

This project is driven by the need for LiDAR-based human and scene motion capture. If you find this tool useful in your own work, please consider citing the papers that inspired this project:

@InProceedings{Dai_2022_CVPR,
    author    = {Dai, Yudi and Lin, Yitai and Wen, Chenglu and Shen, Siqi and Xu, Lan and Yu, Jingyi and Ma, Yuexin and Wang, Cheng},
    title     = {HSC4D: Human-Centered 4D Scene Capture in Large-Scale Indoor-Outdoor Space Using Wearable IMUs and LiDAR},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {6792-6802}
}

@inproceedings{dai2023sloper4d,
    title     = {SLOPER4D: A Scene-Aware Dataset for Global 4D Human Pose Estimation in Urban Environments},
    author    = {Dai, Yudi and Lin, YiTai and Lin, XiPing and Wen, Chenglu and Xu, Lan and Yi, Hongwei and Shen, Siqi and Ma, Yuexin and Wang, Cheng},
    booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2023}
}