1Huazhong University of Science and Technology
2S-Lab, Nanyang Technological University
3Great Bay University
4Shanghai AI Laboratory
> **TL;DR**: WildAvatar is a large-scale dataset curated from YouTube with 10,000+ human subjects, designed to address the limitations of existing laboratory-collected datasets for avatar creation.
## 🔨 Environments
```bash
conda create -n wildavatar python=3.9
conda activate wildavatar
pip install -r requirements.txt
pip install pyopengl==3.1.4
```
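As an optional sanity check (not part of the toolbox), you can confirm the activated environment exposes the pinned Python version before installing the requirements:

```shell
# Print the active interpreter's major.minor version; it should read 3.9
# inside the freshly created "wildavatar" conda environment.
python -c 'import sys; print("%d.%d" % sys.version_info[:2])'
```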
## 📦 Prepare Dataset
1. Download [WildAvatar.zip](#)
2. Put the **WildAvatar.zip** under [./data/WildAvatar/](./data/WildAvatar/).
3. Unzip **WildAvatar.zip**
4. Install [yt-dlp](https://github.com/yt-dlp/yt-dlp)
5. Run the following scripts
+ **If you need key frames** (RGB+MASK+SMPL, needed for [SMPL Visualization](https://github.com/wildavatar/WildAvatar_Toolbox/tree/main?tab=readme-ov-file#-smpl-visualization) and [Creating Wild Avatars](https://github.com/wildavatar/WildAvatar_Toolbox/tree/main?tab=readme-ov-file#-creating-wild-avatars) below),
+ please download and extract the images from YouTube on your own by running
```bash
python prepare_data.py --ytdl ${PATH_TO_YTDLP}
```
then you will find the downloaded images in [./data/WildAvatar/xxx/images](./data/WildAvatar/xxx/images).
+ **If you need video clips**,
+ please download the video clips from YouTube on your own by running
```bash
python download_video.py --ytdl ${PATH_TO_YTDLP} --output_root "./data/WildAvatar-videos"
```
then you will find the video clips in [./data/WildAvatar-videos](./data/WildAvatar-videos).
+ **If you need raw videos** (the original user-uploaded videos),
+ please download the raw videos from YouTube on your own by running
```bash
python download_video.py --ytdl ${PATH_TO_YTDLP} --output_root "./data/WildAvatar-videos-raw" --raw
```
then you will find the raw videos in [./data/WildAvatar-videos-raw](./data/WildAvatar-videos-raw).
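Whichever download mode you choose, each subject ends up in its own folder named after its YouTube ID. Below is a minimal Python sketch (a hypothetical helper, not shipped with the toolbox; the dataset root path comes from the steps above) for enumerating the downloaded subject IDs:

```python
import os

def list_subjects(root="./data/WildAvatar"):
    """Return the sorted subject (YouTube ID) folder names under the dataset root."""
    if not os.path.isdir(root):
        return []
    return sorted(
        name for name in os.listdir(root)
        if os.path.isdir(os.path.join(root, name))
    )

if __name__ == "__main__":
    print("\n".join(list_subjects()))
```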
## 📊 SMPL Visualization
1. Put the [SMPL_NEUTRAL.pkl](https://smpl.is.tue.mpg.de/) under [./assets/](./assets/).
2. Run the following script to visualize the SMPL overlay for the human subject ${youtube_ID}:
```bash
python vis_smpl.py --subject "${youtube_ID}"
```
3. The SMPL masks and overlay visualizations can be found in [data/WildAvatar/\${youtube_ID}/smpl](data/WildAvatar/${youtube_ID}/smpl) and [data/WildAvatar/\${youtube_ID}/smpl_masks](data/WildAvatar/${youtube_ID}/smpl_masks).
For example, if you run
```bash
python vis_smpl.py --subject "__-ChmS-8m8"
```
then the SMPL masks and overlay visualizations can be found in [data/WildAvatar/__-ChmS-8m8/smpl](data/WildAvatar/__-ChmS-8m8/smpl) and [data/WildAvatar/__-ChmS-8m8/smpl_masks](data/WildAvatar/__-ChmS-8m8/smpl_masks).
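To render overlays for every subject instead of one at a time, you could wrap `vis_smpl.py` in a small batch driver. This is a hypothetical sketch: only `vis_smpl.py --subject` comes from the toolbox, while the helper name and the dataset-root layout are assumptions:

```python
import os
import subprocess

def smpl_vis_commands(root="./data/WildAvatar"):
    """Build one `vis_smpl.py` invocation per subject folder under the dataset root."""
    return [
        ["python", "vis_smpl.py", "--subject", name]
        for name in sorted(os.listdir(root))
        if os.path.isdir(os.path.join(root, name))
    ]

if __name__ == "__main__":
    for cmd in smpl_vis_commands():
        subprocess.run(cmd, check=True)  # stops on the first failing subject
```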
## 🎯 Creating Wild Avatars
For training and testing on WildAvatar, we currently provide the adapted code for [HumanNeRF](./lib/humannerf) and [GauHuman](./lib/gauhuman).
## 📝 Citation
If you find our work useful for your research, please cite our paper:
```bibtex
@article{huang2024wildavatar,
title={WildAvatar: Web-scale In-the-wild Video Dataset for 3D Avatar Creation},
author={Huang, Zihao and Hu, ShouKang and Wang, Guangcong and Liu, Tianqi and Zang, Yuhang and Cao, Zhiguo and Li, Wei and Liu, Ziwei},
journal={arXiv preprint arXiv:2407.02165},
year={2024}
}
```
## 😃 Acknowledgement
This project is built on source code shared by [GauHuman](https://github.com/skhu101/GauHuman), [HumanNeRF](https://github.com/chungyiweng/humannerf), and [CLIFF](https://github.com/haofanwang/CLIFF). Many thanks for their excellent contributions!
## 📧 Contact
If you have any questions, please feel free to contact Zihao Huang
(zihaohuang at hust.edu.cn).