lalalune opened 1 year ago
Hi @lalalune ,
Makes perfect sense.
Would FBX be general enough for most use cases? Seems like a good target (single file, pretty standard).
It's definitely very addable, but I don't know if any of the existing integrations support that. (We don't support diffusers yet)
Tagging @mishig25 for his view on the front end part of such widgets.
GLB (glTF) would probably be best, since it is a container for all of the embedded content of the 3D model itself.
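To make the "single container" point concrete, here is a minimal sketch of the GLB binary layout: a 12-byte header followed by a JSON chunk and an optional binary chunk, all in one file. It uses only the Python standard library; the magic numbers are the chunk-type constants defined in the glTF 2.0 specification.

```python
import json
import struct

GLB_MAGIC = 0x46546C67   # "glTF" in little-endian
CHUNK_JSON = 0x4E4F534A  # "JSON"
CHUNK_BIN = 0x004E4942   # "BIN\0"

def build_glb(gltf_json: dict, bin_data: bytes = b"") -> bytes:
    """Pack a glTF JSON document (and optional binary buffer) into one GLB blob."""
    payload = json.dumps(gltf_json).encode()
    payload += b" " * (-len(payload) % 4)  # JSON chunk is space-padded to 4 bytes
    chunks = struct.pack("<II", len(payload), CHUNK_JSON) + payload
    if bin_data:
        bin_data += b"\x00" * (-len(bin_data) % 4)  # BIN chunk is zero-padded
        chunks += struct.pack("<II", len(bin_data), CHUNK_BIN) + bin_data
    total_length = 12 + len(chunks)  # 12-byte header + all chunks
    return struct.pack("<III", GLB_MAGIC, 2, total_length) + chunks

def parse_glb(blob: bytes):
    """Return (version, {chunk_type: chunk_bytes}) from a GLB blob."""
    magic, version, length = struct.unpack_from("<III", blob, 0)
    assert magic == GLB_MAGIC, "not a GLB file"
    offset, chunks = 12, {}
    while offset < length:
        chunk_len, chunk_type = struct.unpack_from("<II", blob, offset)
        chunks[chunk_type] = blob[offset + 8 : offset + 8 + chunk_len]
        offset += 8 + chunk_len
    return version, chunks

# Round-trip a minimal glTF 2.0 asset through the container format.
glb = build_glb({"asset": {"version": "2.0"}})
version, chunks = parse_glb(glb)
```

Because mesh, textures, and animations all travel inside the BIN chunk, a widget or inference API only ever has to move a single file around.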
Is your feature request related to a problem? Please describe.
Many models are coming online from several directions which enable users to generate meshes unconditionally, from text guidance, or from an image prior. These projects are harder to coordinate on because they are not well represented in HuggingFace's model hub or inference API, and that affects downstream work like Microsoft's MII inference pipeline, which is tightly integrated with HuggingFace.
The goal of this feature request is, looking toward the future, to consider adding 3D mesh tasks as a standard task type.
Example of Img2Mesh https://github.com/monniert/unicorn
Example of Text2Mesh https://github.com/ashawkey/stable-dreamfusion
Example of Unconditional Mesh Generation https://nv-tlabs.github.io/GET3D
Example of text-guided animation with motion diffusion https://github.com/GuyTevet/motion-diffusion-model
Describe the solution you'd like
Add support for 3D mesh responses. This is similar to images, but in some formats the mesh and texture are stored as separate files, so this will need to be considered. Some meshes may also have multiple parts or images, although in practice no model has done this yet.
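The separated mesh-plus-texture case could be modeled along these lines. `MeshResponse` and its fields are hypothetical names for illustration only, not an existing HuggingFace API:

```python
from dataclasses import dataclass, field

@dataclass
class MeshResponse:
    """Hypothetical inference-API response for a 3D mesh task (illustration only)."""
    mesh: bytes                  # serialized mesh payload
    mesh_format: str             # e.g. "glb", "obj", "ply", "fbx"
    # filename -> image bytes; empty when textures are embedded in the mesh file
    textures: dict[str, bytes] = field(default_factory=dict)
    # sub-meshes for multi-part outputs; rarely needed in practice
    parts: list["MeshResponse"] = field(default_factory=list)

    def is_single_file(self) -> bool:
        # GLB-style responses embed everything; OBJ+MTL-style ones do not.
        return not self.textures and not self.parts

# A self-contained GLB response vs. an OBJ with a separate texture image.
glb_response = MeshResponse(mesh=b"<glb bytes>", mesh_format="glb")
obj_response = MeshResponse(
    mesh=b"<obj text>",
    mesh_format="obj",
    textures={"albedo.png": b"<png bytes>"},
)
```

A schema like this lets the widget layer branch on `is_single_file()`: single-file responses can be streamed straight to a viewer, while separated ones need a small manifest of accompanying files.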
The popular formats for this are the following: