Open Rocky77JHxu opened 4 months ago
It seems you are using Vicuna-v1.5-13B. However, our version was trained with Vicuna-v0-7B as the foundation model. You need to follow the instructions here to download the 7B-delta weights and apply them to LLaMA to obtain the Vicuna-7B model.
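The delta-application step described above can be done with FastChat's `apply_delta` tool. A hedged sketch follows: the local paths are placeholders, `lmsys/vicuna-7b-delta-v0` is the delta repository assumed here, and the flag names are taken from the FastChat README (verify against your installed FastChat version).

```
# Install FastChat, then merge the v0 delta onto the base LLaMA weights
# (base weights must already be in Hugging Face format).
pip install fschat

python3 -m fastchat.model.apply_delta \
    --base-model-path /path/to/llama-7b-hf \
    --target-model-path /root/autodl-tmp/ModaVerse/Model/7b_v0 \
    --delta-path lmsys/vicuna-7b-delta-v0
```

The `--target-model-path` above matches the `7b_v0` directory that appears later in the log; adjust it to wherever your config expects the Vicuna weights.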
/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/transformers/utils/hub.py:124: FutureWarning: Using TRANSFORMERS_CACHE is deprecated and will be removed in v5 of Transformers. Use HF_HOME instead.
warnings.warn(
Setting ds_accelerator to cuda (auto detect)
/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/torchvision/transforms/_functional_video.py:6: UserWarning: The 'torchvision.transforms._functional_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms.functional' module instead.
warnings.warn(
/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/torchvision/transforms/_transforms_video.py:22: UserWarning: The 'torchvision.transforms._transforms_video' module is deprecated since 0.12 and will be removed in the future. Please use the 'torchvision.transforms' module instead.
warnings.warn(
Using pretrained model path: /root/autodl-tmp/ModaVerse/Model/ModaVerse-7b
Initializing ModaVerseAPI with model_path: /root/autodl-tmp/ModaVerse/Model/ModaVerse-7b
Configs being passed to ModaVerse: <pjtools.configurator.configurator.PyConfigurator object at 0x7ff7c95b3a00>
Type of llm_path: <class 'str'>, value: /root/autodl-tmp/ModaVerse/Model/7b_v0
Type of sp_model_path: <class 'str'>, value: /root/autodl-tmp/ModaVerse/Model/7b_v0/tokenizer.model
Contents of the model path (/root/autodl-tmp/ModaVerse/Model/7b_v0): ['tokenizer_config.json', 'special_tokens_map.json', 'tokenizer.model', 'tokenizer.json', 'config.json', 'generation_config.json', 'model-00001-of-00003.safetensors', 'model-00002-of-00003.safetensors', 'model-00003-of-00003.safetensors', 'model.safetensors.index.json']
Loading LlamaTokenizer from path: /root/autodl-tmp/ModaVerse/Model/7b_v0 and SentencePiece model from path: /root/autodl-tmp/ModaVerse/Model/7b_v0/tokenizer.model
LlamaTokenizer loaded successfully.
Loading checkpoint shards: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████| 3/3 [00:03<00:00, 1.02s/it]
Forcing training on layers:
base_model.model.model.embed_tokens.weight
base_model.model.lm_head.weight
trainable params: 295,747,584 || all params: 6,772,019,200 || trainable%: 4.3672
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 7/7 [00:01<00:00, 6.19it/s]
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 5/5 [00:00<00:00, 8.82it/s]
It seems like you have activated model offloading by calling enable_model_cpu_offload, but are now manually moving the pipeline to GPU. It is strongly recommended against doing so as memory gains from offloading are likely to be lost. Offloading automatically takes care of moving the individual components vae, text_encoder, tokenizer, unet, scheduler to GPU when needed. To make sure offloading works as expected, you should consider moving the pipeline back to CPU: pipeline.to('cpu') or removing the move altogether if you use offloading.
unet/diffusion_pytorch_model.safetensors not found
Loading pipeline components...: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████| 6/6 [00:01<00:00, 3.97it/s]
ModaVerseAPI initialized successfully.
Launching ModaVerse Demo...
Running on local URL: http://127.0.0.1:7860
IMPORTANT: You are using gradio version 3.50.2, however version 4.29.0 is available, please upgrade.
Running on public URL: https://bb1cb0dabf89a83aa3.gradio.live
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run gradio deploy from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Traceback (most recent call last):
File "/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/gradio/routes.py", line 534, in predict
output = await route_utils.call_process_api(
File "/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/gradio/route_utils.py", line 226, in call_process_api
output = await app.get_blocks().process_api(
File "/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/gradio/blocks.py", line 1550, in process_api
result = await self.call_function(
File "/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/gradio/blocks.py", line 1185, in call_function
prediction = await anyio.to_thread.run_sync(
File "/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/anyio/to_thread.py", line 56, in run_sync
return await get_async_backend().run_sync_in_worker_thread(
File "/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 2177, in run_sync_in_worker_thread
return await future
File "/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/anyio/_backends/_asyncio.py", line 859, in run
result = context.run(func, *args)
File "/root/miniconda3/envs/modaverse/lib/python3.9/site-packages/gradio/utils.py", line 661, in wrapper
response = f(*args, **kwargs)
File "/root/autodl-tmp/ModaVerse/demo.py", line 43, in process_input
meta_response, final_responses = ModaVerse(instruction, media)
NameError: name 'ModaVerse' is not defined
I get NameError: name 'ModaVerse' is not defined when I test something in the Gradio demo. Initially the import itself failed:
(modaverse) root@autodl-container-433d4a8b09-28f47cc9:~/autodl-tmp/ModaVerse# python demo.py
Traceback (most recent call last):
  File "/root/autodl-tmp/ModaVerse/demo.py", line 6, in <module>
    from modaverse import ModaVerse
ImportError: cannot import name 'ModaVerse' from 'modaverse' (/root/autodl-tmp/ModaVerse/modaverse/__init__.py)
Running demo.py a second time then produces the same startup log and traceback shown above, ending in NameError: name 'ModaVerse' is not defined.
Did you follow the installation steps to run pip install -e .?
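The ImportError/NameError pattern in the log usually means the `modaverse` package is not importable in the active environment. A minimal, hypothetical sketch for checking this before launching the demo; the only assumption is the package name `modaverse` from the traceback:

```python
import importlib.util


def check_package(name: str) -> bool:
    """Return True if `name` can be imported in the current environment."""
    return importlib.util.find_spec(name) is not None


if __name__ == "__main__":
    # In the failing environment from the log, this would print False
    # until `pip install -e .` has been run from the repository root.
    print(check_package("modaverse"))
```

If this prints False, re-run pip install -e . from the directory containing the project's setup file, and confirm it installs into the same conda environment (modaverse) that runs demo.py.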
Is this it? I have definitely run this.
Another question, please: this code is for vicuna-7b-delta-v1.1, so which code is for vicuna-7b-delta-v0? And which LLaMA should I install: llama-7b or llama-2-7b? I would greatly appreciate an answer!
I followed this link (https://huggingface.co/lmsys/vicuna-13b-v1.5-16k) to download the corresponding model weights, but they seem to be inconsistent with the directory structure shown in the README, and I encountered the following error: