Closed: LoserLus closed this issue 3 weeks ago
Could you tell us how to deal with the issue of using an offline model for inference? Thanks!
You can modify the model-path settings in the config you are using. For example, you can change the configuration in `Open-Sora/configs/opensora-v1-1/inference/sample.py` to the following:
```python
# Define model
model = dict(
    type="STDiT2-XL/2",
    from_pretrained="<path_to_your_model>/OpenSora-STDiT-v2-stage3",
    input_sq_size=512,
    qk_norm=True,
    qk_norm_legacy=True,
    enable_flash_attn=True,
    enable_layernorm_kernel=True,
)
vae = dict(
    type="VideoAutoencoderKL",
    from_pretrained="<path_to_your_model>/sd-vae-ft-ema",
    cache_dir=None,  # "/mnt/hdd/cached_models"
    micro_batch_size=4,
)
text_encoder = dict(
    type="t5",
    from_pretrained="<path_to_your_model>/t5-v1_1-xxl",
    cache_dir=None,  # "/mnt/hdd/cached_models"
    model_max_length=200,
)
```
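If something in the stack still tries to reach the Hugging Face Hub despite the local paths, it may also help to force offline mode before running inference. A minimal sketch, assuming the text encoder and VAE are loaded through `huggingface_hub`/`transformers`:

```bash
# Tell huggingface_hub / transformers to resolve everything from local files only
export HF_HUB_OFFLINE=1
export TRANSFORMERS_OFFLINE=1
python scripts/inference.py configs/opensora-v1-1/inference/sample.py \
    --prompt "A beautiful sunset over the city"
```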
For these config changes you can also refer to issue #218. However, in my testing I found that for `OpenSora-STDiT-v2-stage3` you need to create a subdirectory named `model` inside the directory pointed to by `from_pretrained="<path_to_your_model>/OpenSora-STDiT-v2-stage3"`, and then move all of the downloaded files into that `model` folder (a small shell sketch of this follows the tree below). The resulting directory structure for the models looks like this:
```
tree models/
models/
|-- OpenSora-STDiT-v2-stage3
|   `-- model
|       |-- README.md
|       |-- config.json
|       |-- configuration_stdit2.py
|       |-- gitattributes
|       |-- layers.py
|       |-- model.safetensors
|       |-- modeling_stdit2.py
|       `-- utils.py
|-- sd-vae-ft-ema
|   |-- README.md
|   |-- config.json
|   |-- diffusion_pytorch_model.bin
|   |-- diffusion_pytorch_model.safetensors
|   `-- gitattributes
`-- t5-v1_1-xxl
    |-- config.json
    |-- gitattributes
    |-- pytorch_model-00001-of-00002.bin
    |-- pytorch_model-00002-of-00002.bin
    |-- pytorch_model.bin.index.json
    |-- special_tokens_map.json
    |-- spiece.model
    `-- tokenizer_config.json
```
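A minimal shell sketch of the move described above, assuming the STDiT weights were downloaded directly into `models/OpenSora-STDiT-v2-stage3/`:

```bash
cd models/OpenSora-STDiT-v2-stage3
mkdir -p model
# Move every downloaded file into model/; -type f skips the new model/ directory itself
find . -maxdepth 1 -type f -exec mv {} model/ \;
```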
You can also refer to the [潞晨云 deployment video tutorial] and use the `ln -s` symlink command to link already-downloaded model weights into the `.cache` directory.
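For example, a hedged sketch of that symlink approach; the source path is a hypothetical download location, and the exact target under `~/.cache` depends on how your `huggingface_hub` version lays out its cache, so adjust it to whatever path the loader reports looking for:

```bash
# /data/models/t5-v1_1-xxl is a hypothetical local download location
mkdir -p ~/.cache/huggingface
ln -s /data/models/t5-v1_1-xxl ~/.cache/huggingface/t5-v1_1-xxl
```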
Got it, thank you very much!
Due to certain reasons, my machine cannot access the internet, so I cannot download the pre-trained model weights online and have to use pre-trained weights stored locally. I referenced issue #218, modified the pre-trained model locations in `configs/opensora-v1-1/inference/sample.py` to point to a local folder, and commented out the `assert` statements in the relevant files, but that still did not resolve my issue. My `configs/opensora-v1-1/inference/sample.py` now looks like:

When I try to run inference with the command `python scripts/inference.py configs/opensora-v1-1/inference/sample.py --ckpt-path CKPT_PATH --prompt "A beautiful sunset over the city" --num-frames 16 --image-size 480 854`, I get the following output: