athn-nik / teach

Official PyTorch implementation of the paper "TEACH: Temporal Action Compositions for 3D Humans"
https://teach.is.tue.mpg.de

Dataset process #8

Closed HappyPiepie closed 1 year ago

HappyPiepie commented 1 year ago

Hi, I don't know which kind of dataset should be downloaded from AMASS (SMPL+H G, SMPL+X G, or Render), and how do I get it "in a pickle format"?

athn-nik commented 1 year ago

For the SMPL model, refer here. For AMASS, you can download the corresponding datasets in SMPLH format (SMPLH-G) from the AMASS website.

athn-nik commented 1 year ago

It is in a pickle format by default. If you follow the instructions in the data section, you don't have to do anything special: just download SMPLH-G and then run my scripts. For the SMPLH model, follow the instructions in the link in my previous comment.
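As a side note, a quick way to confirm that a downloaded body-model file really is a readable pickle is to try loading it. This is a minimal sketch, not part of the repo: the helper name `is_readable_pickle` is mine, and `encoding='latin1'` is an assumption that commonly applies to SMPL-family pickles written under Python 2.

```python
import pickle

def is_readable_pickle(path):
    """Return True if `path` deserializes as a pickle, False otherwise."""
    try:
        with open(path, 'rb') as fp:
            # latin1 is commonly needed for pickles written under Python 2,
            # which applies to the SMPL-family model files.
            pickle.load(fp, encoding='latin1')
        return True
    except (pickle.UnpicklingError, EOFError, FileNotFoundError):
        return False

if __name__ == '__main__':
    # Demo against a throwaway pickle, since no model file ships with this sketch.
    import os
    import tempfile
    tmp = tempfile.NamedTemporaryFile(suffix='.pkl', delete=False)
    pickle.dump({'betas': [0.0] * 10}, tmp)
    tmp.close()
    print(is_readable_pickle(tmp.name))       # a valid pickle -> True
    print(is_readable_pickle('missing.pkl'))  # no such file -> False
    os.unlink(tmp.name)
```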

HappyPiepie commented 1 year ago

Thank you very much, I will try.

HappyPiepie commented 1 year ago

There is another problem. I get the following error and don't know how to address it:

```
(teach-env) (base) beibeigh@beibeigh-System-Product-Name:~/Research/Projects/teach$ python scripts/process_amass.py --input-path /path/to/data --output-path path/of/choice/defaultis/babel/babel-smplh-30fps-male --model-type smplh --use-betas --gender male
usage: process_amass.py [-h] --input-path INPUT_PATH --output-path OUTPUT_PATH [--use-betas] --gender {male,female,neutral,amass}
process_amass.py: error: unrecognized arguments: --model-type smplh
```

athn-nik commented 1 year ago

Sorry, yes, I have corrected the instructions; the `model-type` argument should be removed. Use:

```
python scripts/process_amass.py --input-path /path/to/amass/data --output-path path/of/choice/default/is/data/babel/babel-smplh-30fps-male --use-betas --gender male
```
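For context, the "unrecognized arguments" message is standard argparse behavior: any flag not declared on the parser aborts `parse_args()`. A toy parser mirroring the usage string printed above illustrates it (a sketch only, not the actual `process_amass.py`):

```python
import argparse

# Toy parser mirroring the usage string printed by process_amass.py.
parser = argparse.ArgumentParser(prog='process_amass.py')
parser.add_argument('--input-path', required=True)
parser.add_argument('--output-path', required=True)
parser.add_argument('--use-betas', action='store_true')
parser.add_argument('--gender', required=True,
                    choices=['male', 'female', 'neutral', 'amass'])

argv = ['--input-path', 'in', '--output-path', 'out',
        '--use-betas', '--gender', 'male', '--model-type', 'smplh']

# parser.parse_args(argv) would exit with
# "error: unrecognized arguments: --model-type smplh";
# parse_known_args shows exactly which tokens were left over.
args, unknown = parser.parse_known_args(argv)
print(unknown)  # ['--model-type', 'smplh']
```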

HappyPiepie commented 1 year ago

I have another problem when I run `python train.py experiment=baseline logger=none`:

```
(teach-env) (base) beibeigh@beibeigh-System-Product-Name:~/Research/Projects/teach$ HYDRA_FULL_ERROR=1 python train.py experiment=baseline logger=none
[12/10/22 17:35:27][__main__][INFO] - Training script. The outputs will be stored in:
[12/10/22 17:35:27][__main__][INFO] - The working directory is: /home/beibeigh/Research/Projects/teach/teach/babel-amass/baseline/19qj42vz
[12/10/22 17:35:27][__main__][INFO] - Loading libraries
[12/10/22 17:35:28][__main__][INFO] - Libraries loaded
[12/10/22 17:35:28][__main__][INFO] - Set the seed to 42
[12/10/22 17:35:28][pytorch_lightning.utilities.seed][INFO] - Global seed set to 42
[12/10/22 17:35:28][__main__][INFO] - Loading data module
[12/10/22 17:35:28][__main__][INFO] - Data module 'babel-amass' loaded
[12/10/22 17:35:28][__main__][INFO] - Loading model
[12/10/22 17:35:28][torch.distributed.nn.jit.instantiator][INFO] - Created a temporary directory at /tmp/tmpt0eyfzcj
[12/10/22 17:35:28][torch.distributed.nn.jit.instantiator][INFO] - Writing /tmp/tmpt0eyfzcj/_remote_module_non_sriptable.py
[12/10/22 17:35:29][__main__][INFO] - Model 'teach' loaded
[12/10/22 17:35:29][__main__][INFO] - Loading logger
[12/10/22 17:35:29][__main__][INFO] - Logger 'none' ready
[12/10/22 17:35:29][__main__][INFO] - Loading callbacks
[12/10/22 17:35:29][OpenGL.acceleratesupport][INFO] - No OpenGL_accelerate module loaded: No module named 'OpenGL_accelerate'
[12/10/22 17:35:29][__main__][INFO] - Callbacks initialized
[12/10/22 17:35:29][__main__][INFO] - Loading trainer
[12/10/22 17:35:29][py.warnings][WARNING] - /home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/torch/cuda/__init__.py:145: UserWarning:
NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/

  warnings.warn(incompatible_device_warn.format(device_name, capability, " ".join(arch_list), device_name))

[12/10/22 17:35:29][pytorch_lightning.utilities.distributed][INFO] - GPU available: True, used: True
[12/10/22 17:35:29][pytorch_lightning.utilities.distributed][INFO] - TPU available: False, using: 0 TPU cores
[12/10/22 17:35:29][pytorch_lightning.utilities.distributed][INFO] - IPU available: False, using: 0 IPUs
[12/10/22 17:35:29][__main__][INFO] - Trainer initialized
[12/10/22 17:35:29][__main__][INFO] - Fitting the model..
Loading BABEL train: 100%|██████████| 113/113 [00:00<00:00, 563.06it/s]
[12/10/22 17:35:30][teach.data.babel][INFO] - Processed 113 sequences and found 57 invalid cases based on the datatype.
[12/10/22 17:35:30][teach.data.babel][INFO] - 122 sequences -- datatype:separate_pairs.
[12/10/22 17:35:30][teach.data.babel][INFO] - 23.27% of the sequences which are rejected by the sampler in total.
[12/10/22 17:35:30][teach.data.babel][INFO] - 0.0% of the sequence which are rejected by the sampler, because of the excluded actions.
[12/10/22 17:35:30][teach.data.babel][INFO] - 23.27% of the sequence which are rejected by the sampler, because they are too short(<0.5 secs) or too long(>25.0 secs).
[12/10/22 17:35:30][teach.data.babel][INFO] - Discard from BML: 0
[12/10/22 17:35:30][teach.data.babel][INFO] - Discard not KIT: 0
Error executing job with overrides: ['experiment=baseline', 'logger=none']
Traceback (most recent call last):
  File "/home/beibeigh/Research/Projects/teach/train.py", line 140, in <module>
    _train()
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/hydra/main.py", line 48, in decorated_main
    _run_hydra(
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/hydra/_internal/utils.py", line 377, in _run_hydra
    run_and_report(
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/hydra/_internal/utils.py", line 214, in run_and_report
    raise ex
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/hydra/_internal/utils.py", line 211, in run_and_report
    return func()
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/hydra/_internal/utils.py", line 378, in <lambda>
    lambda: hydra.run(
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/hydra/_internal/hydra.py", line 111, in run
    _ = ret.return_value
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/hydra/core/utils.py", line 233, in return_value
    raise self._return_value
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/hydra/core/utils.py", line 160, in run_job
    ret.return_value = task_function(task_cfg)
  File "/home/beibeigh/Research/Projects/teach/train.py", line 47, in _train
    return train(cfg, ckpt_ft)
  File "/home/beibeigh/Research/Projects/teach/train.py", line 129, in train
    trainer.fit(model, datamodule=data_module, ckpt_path=ckpt_ft)
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 740, in fit
    self._call_and_handle_interrupt(
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
    return trainer_fn(*args, **kwargs)
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
    self._run(model, ckpt_path=ckpt_path)
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1138, in _run
    self._call_setup_hook()  # allow user to setup lightning_module in accelerator environment
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/pytorch_lightning/trainer/trainer.py", line 1438, in _call_setup_hook
    self.datamodule.setup(stage=fn)
  File "/home/beibeigh/Research/Projects/teach/teach-env/lib/python3.9/site-packages/pytorch_lightning/core/datamodule.py", line 474, in wrapped_fn
    fn(*args, **kwargs)
  File "/home/beibeigh/Research/Projects/teach/teach/data/base.py", line 83, in setup
    _ = self.val_dataset
  File "/home/beibeigh/Research/Projects/teach/teach/data/base.py", line 64, in val_dataset
    self._val_dataset = self.Dataset(split="val", **self.hparams)
  File "/home/beibeigh/Research/Projects/teach/teach/data/babel.py", line 378, in __init__
    self.babel_annots = read_json(Path(datapath) / f'./babel_v2.1/{split}.json')
  File "/home/beibeigh/Research/Projects/teach/teach/utils/file_io.py", line 80, in read_json
    json_contents = json.load(fp)
  File "/home/beibeigh/anaconda3/envs/teach/lib/python3.9/json/__init__.py", line 293, in load
    return loads(fp.read(),
  File "/home/beibeigh/anaconda3/envs/teach/lib/python3.9/json/__init__.py", line 346, in loads
    return _default_decoder.decode(s)
  File "/home/beibeigh/anaconda3/envs/teach/lib/python3.9/json/decoder.py", line 337, in decode
    obj, end = self.raw_decode(s, idx=_w(s, 0).end())
  File "/home/beibeigh/anaconda3/envs/teach/lib/python3.9/json/decoder.py", line 355, in raw_decode
    raise JSONDecodeError("Expecting value", s, err.value) from None
json.decoder.JSONDecodeError: Expecting value: line 1 column 1 (char 0)
(teach-env) (base) beibeigh@beibeigh-System-Product-Name:~/Research/Projects/teach$
```

HappyPiepie commented 1 year ago

Besides, I don't know how to get "train.json" and "val.json", and what is the meaning of "babel_v2.1"? (If they are meant to be created by running `python scripts/amass_splits_babel.py`, note that I only get "train.pth.tar", "val.pth.tar", "train_tiny.pth.tar" and "val_tiny.pth.tar".) All in all, here is what I suggest you do: first, delete `model_type = args.model_type` in scripts/process_amass.py, and delete `model_type` and `model_type=body_model_type` in the read_data function; second, add `model_type` to the get_body_model function. Also, is `eidx - sidx` the batch size, and how large is it?

Lastly, I suggest you try your project yourself to make sure it works with your instructions.

athn-nik commented 1 year ago

First: the error is right there and it is clear:

NVIDIA GeForce RTX 3090 with CUDA capability sm_86 is not compatible with the current PyTorch installation.
The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
If you want to use the NVIDIA GeForce RTX 3090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/ 
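In other words, the installed wheel ships GPU kernels only for compute capabilities up to sm_70, while the RTX 3090 is capability 8.6 (sm_86). Conceptually, the check behind that warning boils down to something like the following (a simplified illustration with names of my own choosing, not PyTorch's actual compatibility logic):

```python
def is_supported(device_capability, arch_list):
    """Rough check: is the GPU's compute capability within the range the wheel was built for?

    device_capability: (major, minor) tuple, e.g. (8, 6) for an RTX 3090.
    arch_list: capabilities baked into the wheel, e.g. ['sm_37', ..., 'sm_70'].
    This is a simplified assumption, not PyTorch's real implementation.
    """
    cap = device_capability[0] * 10 + device_capability[1]
    compiled = sorted(int(a.split('_')[1]) for a in arch_list)
    return compiled[0] <= cap <= compiled[-1]

wheel_archs = ['sm_37', 'sm_50', 'sm_60', 'sm_70']  # from the warning above

print(is_supported((7, 0), wheel_archs))  # sm_70 is covered -> True
print(is_supported((8, 6), wheel_archs))  # sm_86 exceeds sm_70 -> False
```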

You should try my code with another version of PyTorch that supports your GPU, or with another GPU. Second: since you have already run the AMASS preprocessing, the instructions are clear:

Download the data from the TEACH website after signing in. (...) Finally, download the male SMPLH body model from the SMPLX website in pickle format. Then run this script, changing your paths accordingly inside it, to extract the different BABEL splits from AMASS: python scripts/amass_splits_babel.py

Changing the paths is straightforward and depends on where your data are.
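Regarding the `Expecting value: line 1 column 1 (char 0)` in the earlier traceback: it means `read_json` opened a file that exists but is empty or not JSON at all. A small pre-flight check along these lines can catch that before a training run (a sketch only; the split names follow the traceback, and the helper `check_babel_splits` is mine, not part of the repo):

```python
import json
from pathlib import Path

def check_babel_splits(babel_dir, splits=('train', 'val')):
    """Report, per split, whether <babel_dir>/<split>.json exists and parses as JSON."""
    results = {}
    for split in splits:
        path = Path(babel_dir) / f'{split}.json'
        if not path.is_file():
            results[split] = 'missing'
            continue
        try:
            with open(path) as fp:
                json.load(fp)
            results[split] = 'ok'
        except json.JSONDecodeError:
            # Exactly the failure mode in the traceback: file exists but is not JSON.
            results[split] = 'not valid JSON'
    return results

if __name__ == '__main__':
    # Demo against a throwaway directory instead of a real babel_v2.1 folder.
    import tempfile
    d = Path(tempfile.mkdtemp())
    (d / 'train.json').write_text('{"seq_0": {}}')  # valid JSON
    (d / 'val.json').write_text('')                 # empty file
    print(check_babel_splits(d))  # {'train': 'ok', 'val': 'not valid JSON'}
```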

HappyPiepie commented 1 year ago

I have another problem. When I run `python sample_seq.py folder=/path/to/experiment align=full slerp_ws=8`, I can't find `test_set_seqs_nowalk` and `test_set_seqs_walk` in `teach.utils.inference`:

```
(teach-env) (base) beibeigh@beibeigh-System-Product-Name:~/Research/Projects/teach$ python sample_seq.py folder=/path/to/experiment align=full slerp_ws=8
Traceback (most recent call last):
  File "/home/beibeigh/Research/Projects/teach/sample_seq.py", line 29, in <module>
    from teach.utils.inference import test_set_seqs_nowalk, test_set_seqs_walk
ImportError: cannot import name 'test_set_seqs_nowalk' from 'teach.utils.inference' (/home/beibeigh/Research/Projects/teach/teach/utils/inference.py)
(teach-env) (base) beibeigh@beibeigh-System-Product-Name:~/Research/Projects/teach$
```

athn-nik commented 1 year ago

Sorry, I was going to release instructions about running sampling and evaluation, but I have made a commit that fixes this; you can just delete those imports.

HappyPiepie commented 1 year ago

Thanks for your reply. But there is another problem when I run `python sample_seq.py folder=/path/to/experiment align=full slerp_ws=8`:

```
(teach-env) (base) beibeigh@beibeigh-System-Product-Name:~/Research/Projects/teach$ python sample_seq.py folder=/path/to/experiment align=full slerp_ws=8
Traceback (most recent call last):
  File "/home/beibeigh/Research/Projects/teach/sample_seq.py", line 30, in <module>
    labels = read_json('deps/inference/labels.json')
  File "/home/beibeigh/Research/Projects/teach/teach/utils/file_io.py", line 79, in read_json
    with open(p, 'r') as fp:
FileNotFoundError: [Errno 2] No such file or directory: 'deps/inference/labels.json'
(teach-env) (base) beibeigh@beibeigh-System-Product-Name:~/Research/Projects/teach$
```