yuanxion opened this issue 1 year ago
I came across a wonderful application: Designs.ai Webinar.
And I have tried it to generate a video for "小蝌蚪找妈妈": https://github.com/yuanxion/Text2Video-Zero/assets/96522341/6f5aeeb0-bf3e-4091-8228-079f467d50e8
So maybe we can also use our t2v model to generate a short video for each sentence, and then combine these short videos into one long video. Although the long video will not be temporally consistent across all of its frames, each short video should be temporally consistent within itself, which is enough for our current stage.
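A minimal sketch of this per-sentence pipeline, assuming the diffusers `TextToVideoZeroPipeline` backend; the model id, the example story sentences, and the fps value are placeholders:

```python
# Sketch: generate one short clip per sentence, then concatenate into a long video.
import torch
import imageio
import numpy as np
from diffusers import TextToVideoZeroPipeline

pipe = TextToVideoZeroPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# One prompt per sentence of the story (hypothetical example sentences).
sentences = [
    "A group of tadpoles swims in a pond, cartoon style",
    "The tadpoles ask a goldfish where their mother is, cartoon style",
    "The tadpoles finally find their mother, a green frog, cartoon style",
]

all_frames = []
for sentence in sentences:
    # Each call is temporally consistent within its own short clip.
    frames = pipe(prompt=sentence, video_length=8).images  # list of HxWx3 floats in [0, 1]
    all_frames += [(frame * 255).astype(np.uint8) for frame in frames]

# Concatenate the short clips into one long video.
imageio.mimsave("long_video.mp4", all_frames, fps=4)
```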
"Make a blockbuster from a single sentence, is this the end for directors? Runway releases its text-to-video model Gen-2, handling sci-fi, Japanese-style, and anime looks alike": https://www.8btc.com/article/6810288
"Runway Gen-2's sci-fi-level features": https://hub.baai.ac.cn/view/24989
Mode 01: Text to Video
Synthesize videos in any style you can imagine, using nothing but a text prompt. If you can say it, you can see it.
Mode 02: Text + Image to Video
Generate a video using a driving image and a text prompt.
Mode 03: Image to Video
Generate a video using just a driving image (Variations mode).
Mode 04: Stylization
Transfer the style of any image or prompt to every frame of your video.
Mode 05: Storyboard
Turn mockups into fully stylized and animated renders.
Mode 06: Mask
Isolate subjects in your video and modify them with simple text prompts.
Mode 07: Render
Turn untextured renders into realistic outputs by applying an input image or prompt.
Mode 08: Customization
Unleash the full power of Gen-2 by customizing the model for even higher fidelity results.
Intel China Hackathon 2023 innovation idea: t2v + TTS + subtitles (a rough muxing sketch follows below).
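A minimal sketch of muxing a generated clip with TTS narration and a subtitle overlay, assuming moviepy 1.x; `clip.mp4`, `narration.wav`, and the subtitle text are placeholders, and the TTS step itself (with whichever engine we pick) is left out here:

```python
# Sketch: combine a t2v clip, a TTS audio track, and a burned-in subtitle line.
from moviepy.editor import (
    VideoFileClip, AudioFileClip, TextClip, CompositeVideoClip
)

video = VideoFileClip("clip.mp4")
narration = AudioFileClip("narration.wav")

# Burn a subtitle line at the bottom of the frame (TextClip needs ImageMagick installed).
subtitle = (
    TextClip("The tadpoles set off to find their mother.", fontsize=32, color="white")
    .set_position(("center", "bottom"))
    .set_duration(video.duration)
)

final = CompositeVideoClip([video, subtitle]).set_audio(narration)
final.write_videofile("clip_with_audio_and_subs.mp4", fps=video.fps)
```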
AnimateDiff text-to-video model: https://animatediff.github.io/
Generative Image Dynamics https://generative-dynamics.github.io/#demo
Video Outpainting https://huggingface.co/papers/2309.02119
Rerender A Video (temporal consistency across video frames): https://github.com/williamyang1991/Rerender_A_Video
kabachuha/sd-webui-text2video: https://github.com/kabachuha/sd-webui-text2video
The Intel China Hackathon 2023 is coming soon, maybe next week. Please think about what we can do to make our PoC suitable for participating in the Hackathon.
Welcome to discuss the ideas here.