Open fallbernana123456 opened 1 month ago
In opensora/serve/gradio_web_server.py the following is referenced:
text_encoder = MT5EncoderModel.from_pretrained("/storage/ongoing/new/Open-Sora-Plan/cache_dir/mt5-xxl", cache_dir=args.cache_dir, low_cpu_mem_usage=True, torch_dtype=weight_dtype)
tokenizer = AutoTokenizer.from_pretrained("/storage/ongoing/new/Open-Sora-Plan/cache_dir/mt5-xxl", cache_dir=args.cache_dir)
Where can I get this /storage/ongoing/new/Open-Sora-Plan/cache_dir/mt5-xxl? Does it refer to google/mt5-xxl?
Yes.
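Since the hard-coded path is just a local snapshot of google/mt5-xxl, one way to obtain it is to download the Hub repo into a directory matching that layout. This is a hedged sketch, not the repo's own code: the `mt5_local_dir` helper is my illustration, and the actual download call is left commented out because the repo is roughly 50 GB.

```python
# Sketch: fetch google/mt5-xxl locally so the hard-coded path can point at it.
# Assumes huggingface_hub is installed; the ~50 GB download is commented out.

def mt5_local_dir(root="./cache_dir"):
    # Target directory matching the layout gradio_web_server.py expects
    # (i.e. <cache_dir>/mt5-xxl).
    return f"{root}/mt5-xxl"

# from huggingface_hub import snapshot_download
# snapshot_download(repo_id="google/mt5-xxl", local_dir=mt5_local_dir())
```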
Yes. But google/mt5-xxl is about 50 GB and my GPU has only 32 GB. What is the minimum GPU memory needed for v1.2? Can T5-XXL be used instead?
40 GB; no, only mt5-xxl works.
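The loading arguments quoted above are the main memory lever here: `low_cpu_mem_usage` avoids materializing a full fp32 copy while loading, and a half-precision `torch_dtype` roughly halves the ~50 GB fp32 footprint. A minimal sketch of those kwargs, with my assumptions (the `build_loader_kwargs` helper and the string dtype are illustrations, not the repo's code; the real load is commented out because of the download size):

```python
# Sketch: build the from_pretrained kwargs used in gradio_web_server.py.
# Assumption: recent transformers accepts a string dtype name like "float16".

def build_loader_kwargs(cache_dir="./cache_dir", dtype="float16"):
    # low_cpu_mem_usage streams weights instead of building a full fp32
    # copy on CPU first; the half-precision dtype shrinks the GPU footprint.
    return {"cache_dir": cache_dir, "low_cpu_mem_usage": True, "torch_dtype": dtype}

# Usage (commented out: google/mt5-xxl is a ~50 GB download):
# from transformers import MT5EncoderModel, AutoTokenizer
# kwargs = build_loader_kwargs()
# text_encoder = MT5EncoderModel.from_pretrained("google/mt5-xxl", **kwargs)
# tokenizer = AutoTokenizer.from_pretrained("google/mt5-xxl", cache_dir=kwargs["cache_dir"])
```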