MountaintopLotus / braintrust

A Dockerized platform for running Stable Diffusion, on AWS (for now)

DMT Meshes #48

Open JohnTigue opened 1 year ago

JohnTigue commented 1 year ago

DMT Meshes: OpenAI's open-sourced Point-E has now been turned into a 3D mesh generator for Blender.

Note that DMT Meshes is licensed under GPL-3.0.

Via Reddit:

Hey guys, DMT Meshes: A 3D generation plugin for Blender

With the current trend of generative models, we wrote a Blender add-on to generate realistic 3D models from simple text or image inputs.

Text/image-to-3D tools are out there, but they can be intimidating to try because of technical barriers. We hope that this add-on gives everyone easy access to the latest advances in 3D generation, and that it serves as a foundation for people to share their discoveries and build improvements on top of what we built, accelerating the process.

On the back end, the add-on uses Point-E for text/image-to-point-cloud generation, and either OpenAI's original method or DMTet for mesh generation.

We would like to integrate more models and features in the future, and we welcome support and contributions from the community. If you have any ideas or would like to integrate something new, feel free to open an issue or create a PR.
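For anyone who wants to poke at the underlying pipeline outside Blender, the point-e repo ships example notebooks that do the text-to-point-cloud step directly. A minimal sketch along the lines of its text2pointcloud example (assuming the point_e package is installed and a CUDA GPU is available; the prompt is just a placeholder):

```python
import torch
from tqdm.auto import tqdm

from point_e.diffusion.configs import DIFFUSION_CONFIGS, diffusion_from_config
from point_e.diffusion.sampler import PointCloudSampler
from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# Base text-conditioned model plus the upsampler that densifies its output.
base_name = 'base40M-textvec'
base_model = model_from_config(MODEL_CONFIGS[base_name], device)
base_model.eval()
base_diffusion = diffusion_from_config(DIFFUSION_CONFIGS[base_name])

upsampler_model = model_from_config(MODEL_CONFIGS['upsample'], device)
upsampler_model.eval()
upsampler_diffusion = diffusion_from_config(DIFFUSION_CONFIGS['upsample'])

# Checkpoints are downloaded on first use.
base_model.load_state_dict(load_checkpoint(base_name, device))
upsampler_model.load_state_dict(load_checkpoint('upsample', device))

sampler = PointCloudSampler(
    device=device,
    models=[base_model, upsampler_model],
    diffusions=[base_diffusion, upsampler_diffusion],
    num_points=[1024, 4096 - 1024],          # coarse cloud, then upsample to 4096 points
    aux_channels=['R', 'G', 'B'],
    guidance_scale=[3.0, 0.0],
    model_kwargs_key_filter=('texts', ''),   # only the base model sees the prompt
)

# Sample a point cloud conditioned on a text prompt (placeholder prompt).
samples = None
for x in tqdm(sampler.sample_batch_progressive(batch_size=1,
                                               model_kwargs=dict(texts=['a red motorcycle']))):
    samples = x

pc = sampler.output_to_point_clouds(samples)[0]   # point_e.util.point_cloud.PointCloud
```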

JohnTigue commented 1 year ago

The DMT Meshes code on GitHub is licensed under GPL-3.0.

JohnTigue commented 1 year ago

Point-E resources:

JohnTigue commented 1 year ago

On TechCrunch:

Point-E doesn’t create 3D objects in the traditional sense. Rather, it generates point clouds, or discrete sets of data points in space that represent a 3D shape — hence the cheeky abbreviation. (The “E” in Point-E is short for “efficiency,” because it’s ostensibly faster than previous 3D object generation approaches.) Point clouds are easier to synthesize from a computational standpoint, but they don’t capture an object’s fine-grained shape or texture — a key limitation of Point-E currently.

To get around this limitation, the Point-E team trained an additional AI system to convert Point-E’s point clouds to meshes. (Meshes — the collections of vertices, edges and faces that define an object — are commonly used in 3D modeling and design.) But they note in the paper that the model can sometimes miss certain parts of objects, resulting in blocky or distorted shapes.
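That point-cloud-to-mesh step is also exposed directly in the point-e repo. The sketch below follows its pointcloud2mesh example, where the additional SDF model described above is evaluated over a voxel grid and marching cubes extracts the surface (it assumes a PointCloud `pc` such as the one produced in the earlier sketch, or one loaded from an .npz file; the path is a placeholder):

```python
import torch

from point_e.models.configs import MODEL_CONFIGS, model_from_config
from point_e.models.download import load_checkpoint
from point_e.util.pc_to_mesh import marching_cubes_mesh
from point_e.util.point_cloud import PointCloud

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

# The auxiliary model mentioned in the article: it predicts a signed distance
# field from the point cloud, which marching cubes then turns into a mesh.
sdf_model = model_from_config(MODEL_CONFIGS['sdf'], device)
sdf_model.eval()
sdf_model.load_state_dict(load_checkpoint('sdf', device))

# Either reuse `pc` from the text-to-point-cloud sketch above, or load one
# that was saved earlier (placeholder path from the repo's example data).
pc = PointCloud.load('example_data/pc_corgi.npz')

mesh = marching_cubes_mesh(
    pc=pc,
    model=sdf_model,
    batch_size=4096,
    grid_size=32,      # the example notes a higher grid size was used for evals
    progress=True,
)

# Write a PLY that Blender (or the DMT Meshes add-on) can import.
with open('mesh.ply', 'wb') as f:
    mesh.write_ply(f)
```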

JohnTigue commented 1 year ago

On Engadget:

[Point-E] can produce 3D point clouds directly from text prompts. Whereas existing systems like Google's DreamFusion typically require multiple hours — and GPUs — to generate their images, Point-E only needs one GPU and a minute or two.

JohnTigue commented 1 year ago

While I'm in Blender-land, might as well give this a pop.