luciddreamer-cvlab / LucidDreamer

Official code for the paper "LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes".

prompt #16

Closed Tiandishihua closed 11 months ago

Tiandishihua commented 11 months ago

What kind of text prompts and negative prompts should I input to generate the expected results when I provide a scene photo I took?

ironjr commented 11 months ago

Thanks for your interest! I recommend first trying the examples in the main app.py, since we demonstrate several different types of text prompts there.

In my experience, as a rule of thumb, you can try the following procedure.

  1. If your image shows an indoor scene with a specific setting (and possibly a character in it), you can start with the simplest representation of the scene, such as a cozy livingroom for christmas or a dark garage. Please avoid prompts like 1girl, because they will generate a new human in each inpainting step, producing many humans across the scene.
  2. If you want to start from an already prompt-engineered image, e.g., one generated by a StableDiffusion model, or a photo taken from another source, you can use a WD14 tagger to extract danbooru tags from the image. Be sure to remove the comma-separated tags for anything you don't want to appear multiple times. These include human-related tags, e.g., 1girl, white shirt, boots, smiling face, red eyes, etc. Conversely, keep the tags for objects you do want to appear multiple times.
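The tag-filtering step above can be sketched as a small helper. This is a minimal illustration, not part of the LucidDreamer codebase; the blocklist below is a hypothetical example of human-related tags you might want to drop from a WD14 tagger output:

```python
# Hypothetical sketch: strip unwanted comma-separated tags from a
# WD14-style tag string before using it as a LucidDreamer prompt.

# Assumption: human-related tags get duplicated by repeated inpainting,
# so we drop them; edit this set to match your own image.
UNWANTED_TAGS = {"1girl", "white shirt", "boots", "smiling face", "red eyes"}

def filter_tags(tag_string: str) -> str:
    """Keep only the comma-separated tags not listed in UNWANTED_TAGS."""
    tags = [t.strip() for t in tag_string.split(",")]
    kept = [t for t in tags if t and t not in UNWANTED_TAGS]
    return ", ".join(kept)

prompt = filter_tags("1girl, white shirt, forest, sunlight, red eyes, dirt path")
print(prompt)  # forest, sunlight, dirt path
```

Tags for objects you want repeated (trees, clouds, furniture, etc.) simply stay out of the blocklist.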

These are the main strategies I have demonstrated on my Twitter account and in the Hugging Face demo. I hope you find this guide useful. I will also update our README file to address your question.