Closed Colezwhy closed 6 months ago
Hi, thanks for your interest.
Although it can generate 3D content for unseen text, the text still cannot deviate far from the training set. If you use a prompt outside the training-set domain, it cannot generate semantically consistent content. Actually, we have also attempted to train BrightDreamer on a richer prompt set.
I think that if you can prepare a high-quality, diverse prompt set, it can generalize to other domains.
Thank you for your reply! So for more general scenarios, it still needs a bigger dataset to achieve generalizability. BTW, do you plan to release a model trained on a bigger dataset? It might help reduce the time cost of 3D generation. Good job! Congrats!
Sorry, due to other work I am currently doing, I am unable to provide an exact date, but we will attempt it in the next few months.
Thank you, I will close this.
Hi, thank you for your great work, which is very inspiring to me. I was wondering: if we use prompts outside the training-set domain (i.e., vehicle, animal, daily life), what will BrightDreamer produce? Will it generalize to other domains at all?