Official code for the paper "LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes".
😴 LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes 😴
[![Project](https://img.shields.io/badge/Project-LucidDreamer-green)](https://luciddreamer-cvlab.github.io/)
[![ArXiv](https://img.shields.io/badge/Arxiv-2311.13384-red)](https://arxiv.org/abs/2311.13384)
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/camenduru/LucidDreamer-Gaussian-colab/blob/main/LucidDreamer_Gaussian_colab.ipynb)
[![Open in Spaces](https://huggingface.co/datasets/huggingface/badges/raw/main/open-in-hf-spaces-sm.svg)](https://huggingface.co/spaces/ironjr/LucidDreamer-mini)
Replace the image and text placeholders with the paths to your image and text files.
Other options
--image (-img): Specify the path to the input image for scene generation.
--text (-t): Path to the text file containing the prompt that guides the scene generation.
--neg_text (-nt): Optional. A negative text prompt to refine and constrain the scene generation.
--campath_gen (-cg): Choose a camera path for scene generation (options: lookdown, lookaround, rotate360).
--campath_render (-cr): Select a camera path for video rendering (options: back_and_forth, llff, headbanging).
--model_name: Optional. Name of the inpainting model used for dreaming. Leave blank for the default (Stable Diffusion 1.5).
--seed: Set a seed value for reproducibility in the inpainting process.
--diff_steps: Number of steps to perform in the inpainting process.
--save_dir (-s): Directory to save the generated scenes and videos. Specify to organize outputs.
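A hypothetical invocation combining these options might look as follows. Note that the script name `run.py` and all file paths here are assumptions for illustration, not taken from the repository; substitute your own.

```shell
# Sketch only: run.py and the example paths are assumptions, not repo facts.
python run.py \
    --image examples/room.png \
    --text examples/room_prompt.txt \
    --neg_text "photo frame, watermark, text" \
    --campath_gen lookdown \
    --campath_render back_and_forth \
    --seed 1 \
    --diff_steps 50 \
    --save_dir outputs/room
```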
Prompting guidelines / Troubleshooting
General guides
If your image shows a specific indoor scene (possibly with a character in it), simply put the simplest description of the scene first, like a cozy living room for Christmas, or a dark garage, etc. Avoid prompts like 1girl, because they will generate a human for each inpainting task.
If you want to start from an image that is already heavily prompt-engineered, e.g., one generated by a Stable Diffusion model, or a photo taken from another source, try using the WD14 tagger (Hugging Face demo) to extract Danbooru tags from the image. Be sure to remove any comma-separated tags you do not want to appear multiple times. This includes human-related tags, e.g., 1girl, white shirt, boots, smiling face, red eyes, etc. Keep only the tags for objects you want multiple instances of.
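As a quick sketch of that tag-cleaning step (the tag list below is made up for illustration), human-related tags can be stripped from a comma-separated WD14 output with standard shell tools:

```shell
# Illustrative only: filter unwanted human-related tags from a
# comma-separated WD14/Danbooru tag list before using it as a prompt.
echo "1girl, white shirt, boots, cozy living room, bookshelf" \
  | tr ',' '\n' \
  | sed 's/^ *//' \
  | grep -vx -e '1girl' -e 'white shirt' -e 'boots' \
  | paste -sd ',' - \
  | sed 's/,/, /g'
# prints: cozy living room, bookshelf
```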
Q. I generate unwanted objects everywhere, e.g., photo frames.
Use negative prompts to set harder constraints on the offending object. Try adding tags like twitter thumbnail, profile image, instagram image, watermark, text to the negative prompt. In fact, negative prompts are the most effective tool for keeping unwanted objects out of the resulting image.
Try other custom checkpoint models, which use a different pipeline: LaMa inpainting -> ControlNet-inpaint-guided image inpainting.
Visualize .ply files
There are multiple available viewers / editors for Gaussian splatting .ply files.
@splinetool's web-based viewer for Gaussian splatting. This is the viewer we used for the demo on our project page.
🚩 Updates
✅ December 12, 2023: We have precompiled wheels for the CUDA-based submodules and put them in submodules/wheels. The Windows installation guide is revised accordingly!
✅ December 11, 2023: We have updated installation guides for Windows. Thank you @Maoku for your great contribution!
✅ December 8, 2023: HuggingFace Space demo is out. We deeply thank all the HF team for their support!
✅ December 7, 2023: Colab implementation is now available thanks to @camenduru!
✅ December 6, 2023: Code release!
✅ November 22, 2023: We released our paper, LucidDreamer, on arXiv.
🌏 Citation
Please cite us if you find our project useful!
@article{chung2023luciddreamer,
  title={LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes},
  author={Chung, Jaeyoung and Lee, Suyoung and Nam, Hyeongjin and Lee, Jaerin and Lee, Kyoung Mu},
  journal={arXiv preprint arXiv:2311.13384},
  year={2023}
}
🤗 Acknowledgement
We deeply appreciate ZoeDepth, Stability AI, and Runway for their models.
📧 Contact
If you have any questions, please email robot0321@snu.ac.kr, esw0116@snu.ac.kr, jarin.lee@gmail.com.