huangyangyi / TeCH

[3DV 2024] Official repo of "TeCH: Text-guided Reconstruction of Lifelike Clothed Humans"
https://huangyangyi.github.io/TeCH/
MIT License

Question about model #1

Closed fatbao55 closed 1 year ago

fatbao55 commented 1 year ago

Dear authors,

Thanks for the great work! I would like to check whether the approach generates results from text prompts only (text-to-3D, similar to DreamFusion), or whether it also takes an image as input (image-to-3D guided by text prompts, similar to Zero123/Magic123)?

YuliangXiu commented 1 year ago

Thanks for asking! TeCH takes both an image and a prompt for reconstruction, and the prompt is derived from the input image via a VQA model. It's closer to Magic123.
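
For reference, here is a minimal sketch of that prompt-derivation step (not the official TeCH code): it assumes a BLIP VQA model from Hugging Face `transformers` as a stand-in, with hypothetical attribute questions and input path, and folds the answers into a text prompt.

```python
# Minimal sketch: derive a text prompt from an input image with a VQA model.
# Assumes BLIP VQA via Hugging Face transformers; not the official TeCH pipeline.
from PIL import Image
from transformers import BlipProcessor, BlipForQuestionAnswering

processor = BlipProcessor.from_pretrained("Salesforce/blip-vqa-base")
model = BlipForQuestionAnswering.from_pretrained("Salesforce/blip-vqa-base")

image = Image.open("input_person.jpg").convert("RGB")  # hypothetical input path

# Ask attribute questions about the subject and collect the answers.
questions = [
    "What is the person wearing?",
    "What color is the clothing?",
    "What is the hairstyle of the person?",
]
answers = []
for q in questions:
    inputs = processor(image, q, return_tensors="pt")
    out = model.generate(**inputs)
    answers.append(processor.decode(out[0], skip_special_tokens=True))

# Fold the answers into a single descriptive prompt for the generation stage.
prompt = "a photo of a person, " + ", ".join(answers)
print(prompt)
```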

fatbao55 commented 1 year ago

Thanks so much for the clarification! Is there an estimated date for the code release?

huangyangyi commented 1 year ago

> Thanks so much for the clarification! Is there an estimated date for the code release?

Thanks for your interest! We plan to release the code in October.

wagnerponciano commented 1 year ago

> Thanks so much for the clarification! Is there an estimated date for the code release?
>
> Thanks for your interest! We plan to release the code in October.

Can we test it on Colab?