pabloruizponce / in2IN

[CVPRW 2024] Official Implementation of "in2IN: Leveraging individual Information to Generate Human INteractions".
https://pabloruizponce.github.io/in2IN/

The evaluation and training code #1

Open RunqiWang77 opened 4 months ago

RunqiWang77 commented 4 months ago

The evaluation and training code you published seem to have some issues: they both rely on the same file named 'infer', and I'm unable to run either the evaluation or the training code successfully. Could you please take a look?

pabloruizponce commented 4 months ago

Could you provide more details so I can reproduce the error? What command did you execute, and what was the output?

RunqiWang77 commented 4 months ago

(screenshot of the error message)

The error message is roughly like this. I'm not sure if I misunderstood something. For evaluation, following the instructions on the GitHub page, I executed:

  python in2in/scripts/infer.py \
      --model configs/models/in2IN.yaml \
      --evaluator configs/eval.yaml \
      --mode [individual, interaction, dual] \
      --out results \
      --device 0 \
      --mode interaction \

pabloruizponce commented 4 months ago

Ey, there was a typo in the commands in the README 😓. They are now updated:

  python in2in/scripts/eval/interhuman.py \
      --model configs/models/in2IN.yaml \
      --evaluator configs/eval.yaml \
      --mode [individual, interaction, dual] \
      --out results \
      --device 0 \
      --mode interaction

  python in2in/scripts/eval/DualMDM.py \
      --model configs/models/DualMDM.yaml \
      --evaluator configs/eval.yaml \
      --device 0

Test it and let me know if everything is working as expected.

RunqiWang77 commented 3 months ago

> Ey, there was a typo in the commands in the README 😓. They are now updated: […]
>
> Test it and let me know if everything is working as expected.

Okay, I will continue to try.

RunqiWang77 commented 3 months ago

(screenshot of the error)

I encountered an error when executing the following command:

  python in2in/scripts/eval/DualMDM.py \
      --model configs/models/DualMDM.yaml \
      --evaluator configs/eval.yaml \
      --device 0

The error message says: "No such file or directory: './data/HumanML3D/mean_interhuman.npy'". How can I obtain the mean_interhuman.npy file? Is this file the same as the global_mean.npy file from the dataset link?

RunqiWang77 commented 3 months ago

(screenshot of the error)

When I execute the following command:

  python in2in/scripts/eval/interhuman.py \
      --model configs/models/in2IN.yaml \
      --evaluator configs/eval.yaml \
      --mode [individual, interaction, dual] \
      --out results \
      --device 0 \
      --mode interaction

there are still some issues. Is there something I'm misunderstanding? Thank you.

pabloruizponce commented 3 months ago

> I encountered an error when executing the following command: python in2in/scripts/eval/DualMDM.py --model configs/models/DualMDM.yaml --evaluator configs/eval.yaml --device 0. The error message says: "No such file or directory: './data/HumanML3D/mean_interhuman.npy'". How can I obtain the mean_interhuman.npy file? Is this file the same as the global_mean.npy file from the dataset link?

The mean and std for all the normalizers have been updated in commit 96c38cb728ba61975374c59b0d357c1324470973. They can now be found in the in2in/utils/ folder, so no additional files need to be downloaded.
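
For reference, here is a minimal sketch of how such statistics could be loaded and applied. The file names below are illustrative assumptions, not necessarily the exact names shipped in in2in/utils/:

    import numpy as np

    # Illustrative file names; check in2in/utils/ for the actual ones
    mean = np.load("in2in/utils/mean_interhuman.npy")
    std = np.load("in2in/utils/std_interhuman.npy")

    def normalize(motion):
        # Standardize a motion array with the dataset statistics
        return (motion - mean) / (std + 1e-8)

    def denormalize(motion):
        # Invert the normalization for visualization or metric computation
        return motion * (std + 1e-8) + mean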

pabloruizponce commented 3 months ago

> The error message is roughly like this. I'm not sure if I misunderstood something. For evaluation, following the instructions on the GitHub page, I executed the command: python in2in/scripts/infer.py --model configs/models/in2IN.yaml --evaluator configs/eval.yaml --mode [individual, interaction, dual] --out results --device 0 --mode interaction

In the --mode argument, you must select one of the possible modes (interaction or dual); the bracketed list in the README is only a placeholder for the allowed values (see the sketch after the commands below). If you want to evaluate in2IN on the InterHuman dataset, run:

  python in2in/scripts/eval/interhuman.py \
      --model configs/models/in2IN.yaml \
      --evaluator configs/eval.yaml \
      --mode interaction \
      --out results \
      --device 0

On the other hand, if you want to evaluate DualMDM on the InterHuman dataset, run:

  python in2in/scripts/eval/interhuman.py \
      --model configs/models/DualMDM.yaml \
      --evaluator configs/eval.yaml \
      --mode dual \
      --out results \
      --device 0
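
As a side note on why the bracketed form fails: the mode flag is typically restricted to a fixed set of values. Below is a minimal sketch of how such an argument could be parsed, assuming a standard argparse setup with choices; this is not necessarily the repository's exact code:

    import argparse

    parser = argparse.ArgumentParser(description="Evaluation entry point (illustrative)")
    # Only one of the listed values is accepted; passing the literal string
    # "[individual, interaction, dual]" from the README placeholder is rejected.
    parser.add_argument("--mode", choices=["individual", "interaction", "dual"], required=True)
    parser.add_argument("--model", required=True)
    parser.add_argument("--evaluator", required=True)
    parser.add_argument("--out", default="results")
    parser.add_argument("--device", type=int, default=0)
    args = parser.parse_args()
    print(f"Evaluating mode={args.mode} with model={args.model} on device {args.device}")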