modelscope / facechain

FaceChain is a deep-learning toolchain for generating your Digital-Twin.
Apache License 2.0

Couldn't find a dataset script at cv_portrait_model/rb-meimei_labeled/rb-meimei_labeled.py or any data file in the same directory. #502

Closed · billxiang2012 closed this issue 3 months ago

billxiang2012 commented 7 months ago

The GPU has enough memory (显存足够). Full log from launching the app and starting training:

```
This share link expires in 72 hours. For free permanent hosting and GPU upgrades, run `gradio deploy` from Terminal to deploy to Spaces (https://huggingface.co/spaces)
Setting base model to SD1.5
--------uuid: qw
----------work_dir: /workspace/facechain/worker_data/qw/ly261666/cv_portrait_model/rb-meimei
2024-01-15 10:52:21,362 - modelscope - INFO - Use user-specified model revision: v1.0.0
/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py:65: UserWarning: Specified provider 'CUDAExecutionProvider' is not in available provider names. Available providers: 'CPUExecutionProvider'
  warnings.warn(
2024-01-15 10:52:22,289 - modelscope - INFO - Use user-specified model revision: v1.0.0
2024-01-15 10:52:23,096 - modelscope - INFO - Use user-specified model revision: v1.0.0
2024-01-15 10:52:23,833 - modelscope - INFO - Use user-specified model revision: v1.0.0
2024-01-15 10:52:30,497 - modelscope - INFO - PyTorch version 2.1.2+cu118 Found.
2024-01-15 10:52:30,499 - modelscope - INFO - Loading ast index from /home/bill/.cache/modelscope/ast_indexer
2024-01-15 10:52:30,558 - modelscope - INFO - Loading done! Current index file version is 1.10.0, with md5 407d4f9c9ea2e6d66553d789ca5ec7f4 and a total number of 946 components indexed
/workspace/facechain/app.py:1275: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
  output_images = gr.Gallery(label='Output', show_label=False).style(columns=3, rows=2, height=600,
[['/workspace/facechain/resources/inpaint_template/1.jpg'], ['/workspace/facechain/resources/inpaint_template/2.jpg'], ['/workspace/facechain/resources/inpaint_template/3.jpg'], ['/workspace/facechain/resources/inpaint_template/4.jpg'], ['/workspace/facechain/resources/inpaint_template/5.jpg']]
/workspace/facechain/app.py:1378: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
  output_images = gr.Gallery(
[['resources/tryon_garment/garment1.png'], ['resources/tryon_garment/garment2.png'], ['resources/tryon_garment/garment3.png'], ['resources/tryon_garment/garment4.png']]
/workspace/facechain/app.py:1529: GradioDeprecationWarning: The style method is deprecated. Please set these arguments in the constructor instead.
  output_images = gr.Gallery(
2024-01-15 10:52:36,647 - modelscope - INFO - Use user-specified model revision: v4.0
2024-01-15 10:52:38,714 - modelscope - INFO - Use user-specified model revision: v1.0.1
Process Process-1:
Traceback (most recent call last):
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 450, in _get_module
    requires(module_name_full, requirements)
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 353, in requires
    raise ImportError(''.join(failed))
ImportError: modelscope.models.nlp.chatglm2.tokenization requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones that match your environment.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/workspace/facechain/facechain/inference.py", line 25, in _data_process_fn_process
    Blipv2()(input_img_dir)
  File "/workspace/facechain/facechain/data_process/preprocessing.py", line 205, in __init__
    self.skin_retouching = pipeline('skin-retouching-torch', model='damo/cv_unet_skin_retouching_torch', model_revision='v1.0.1')
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/modelscope/pipelines/builder.py", line 163, in pipeline
    clear_llm_info(kwargs)
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/modelscope/pipelines/builder.py", line 227, in clear_llm_info
    from .nlp.llm_pipeline import ModelTypeHelper
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/modelscope/pipelines/nlp/llm_pipeline.py", line 15, in <module>
    from modelscope.models.nlp import ChatGLM2Tokenizer, Llama2Tokenizer
  File "<frozen importlib._bootstrap>", line 1075, in _handle_fromlist
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 435, in __getattr__
    value = getattr(module, name)
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 434, in __getattr__
    module = self._get_module(self._class_to_module[name])
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/modelscope/utils/import_utils.py", line 453, in _get_module
    raise RuntimeError(
RuntimeError: Failed to import modelscope.models.nlp.chatglm2.tokenization because of the following error (look up to see its traceback):
modelscope.models.nlp.chatglm2.tokenization requires the SentencePiece library but it was not found in your environment. Checkout the instructions on the installation page of its repo: https://github.com/google/sentencepiece#installation and follow the ones that match your environment.
```
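The Gradio app keeps running after this, but the data-preprocessing subprocess (Process-1 above, the `Blipv2` step) has already died on the missing SentencePiece dependency. A minimal sketch of the likely fix, assuming the conda env named `facechain` that appears in the log paths; `sentencepiece` and `onnxruntime-gpu` are the standard PyPI packages, not anything FaceChain-specific:

```bash
conda activate facechain

# modelscope's LLM pipeline helper imports the ChatGLM2 tokenizer, which
# requires the SentencePiece Python package.
pip install sentencepiece

# Optional: the log also shows onnxruntime falling back to CPUExecutionProvider.
# The GPU build restores 'CUDAExecutionProvider' for the ONNX preprocessing models.
pip uninstall -y onnxruntime
pip install onnxruntime-gpu
```

After installing, restarting `app.py` and re-running the training tab should let the preprocessing step finish and write out the labeled dataset. The LoRA training launch in the next log block fails as a knock-on effect of this crash: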

```
instance_data_dir /workspace/facechain/worker_data/qw/training_data/ly261666/cv_portrait_model/rb-meimei
project dir: /workspace/facechain
params: >base_model_path:ly261666/cv_portrait_model, >revision:v2.0, >sub_path:film/film, >output_img_dir:/workspace/facechain/worker_data/qw/training_data/ly261666/cv_portrait_model/rb-meimei, >work_dir:/workspace/facechain/worker_data/qw/ly261666/cv_portrait_model/rb-meimei, >lora_r:4, >lora_alpha:32
The following values were not passed to `accelerate launch` and had defaults used instead:
        --num_processes was set to a value of 1
        --num_machines was set to a value of 1
        --mixed_precision was set to a value of 'no'
        --dynamo_backend was set to a value of 'no'
To avoid this warning pass in values for each of the problematic parameters or run `accelerate config`.
2024-01-15 10:52:49,990 - modelscope - INFO - PyTorch version 2.1.2+cu118 Found.
2024-01-15 10:52:49,994 - modelscope - INFO - Loading ast index from /home/bill/.cache/modelscope/ast_indexer
2024-01-15 10:52:50,053 - modelscope - INFO - Loading done! Current index file version is 1.10.0, with md5 407d4f9c9ea2e6d66553d789ca5ec7f4 and a total number of 946 components indexed
/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/accelerate/accelerator.py:393: UserWarning: log_with=tensorboard was passed but no supported trackers are currently installed.
  warnings.warn(f"log_with={log_with} was passed but no supported trackers are currently installed.")
01/15/2024 10:52:52 - INFO - __main__ - Distributed environment: NO
Num processes: 1
Process index: 0
Local process index: 0
Device: cuda

Mixed precision type: no

2024-01-15 10:52:52,608 - modelscope - INFO - Use user-specified model revision: v2.0
{'dynamic_thresholding_ratio', 'clip_sample_range', 'thresholding', 'sample_max_value', 'rescale_betas_zero_snr', 'variance_type'} was not found in config. Values will be initialized to default values.
{'force_upcast'} was not found in config. Values will be initialized to default values.
{'dropout', 'reverse_transformer_layers_per_block', 'attention_type'} was not found in config. Values will be initialized to default values.
Traceback (most recent call last):
  File "/workspace/facechain/facechain/train_text_to_image_lora.py", line 1222, in <module>
    main()
  File "/workspace/facechain/facechain/train_text_to_image_lora.py", line 791, in main
    dataset = load_dataset(
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/datasets/load.py", line 2523, in load_dataset
    builder_instance = load_dataset_builder(
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/datasets/load.py", line 2195, in load_dataset_builder
    dataset_module = dataset_module_factory(
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/datasets/load.py", line 1848, in dataset_module_factory
    raise FileNotFoundError(
FileNotFoundError: Couldn't find a dataset script at /workspace/facechain/worker_data/qw/training_data/ly261666/cv_portrait_model/rb-meimei_labeled/rb-meimei_labeled.py or any data file in the same directory.
Traceback (most recent call last):
  File "/home/bill/miniconda3/envs/facechain/bin/accelerate", line 8, in <module>
    sys.exit(main())
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/accelerate/commands/accelerate_cli.py", line 47, in main
    args.func(args)
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/accelerate/commands/launch.py", line 1023, in launch_command
    simple_launcher(args)
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/accelerate/commands/launch.py", line 643, in simple_launcher
    raise subprocess.CalledProcessError(returncode=process.returncode, cmd=cmd)
subprocess.CalledProcessError: Command '['/home/bill/miniconda3/envs/facechain/bin/python3.10', '/workspace/facechain/facechain/train_text_to_image_lora.py', '--pretrained_model_name_or_path=ly261666/cv_portrait_model', '--revision=v2.0', '--sub_path=film/film', '--output_dataset_name=/workspace/facechain/worker_data/qw/training_data/ly261666/cv_portrait_model/rb-meimei', '--caption_column=text', '--resolution=512', '--random_flip', '--train_batch_size=1', '--num_train_epochs=200', '--checkpointing_steps=5000', '--learning_rate=1.5e-04', '--lr_scheduler=cosine', '--lr_warmup_steps=0', '--seed=42', '--output_dir=/workspace/facechain/worker_data/qw/ly261666/cv_portrait_model/rb-meimei', '--lora_r=4', '--lora_alpha=32', '--lora_text_encoder_r=32', '--lora_text_encoder_alpha=32', '--resume_from_checkpoint=fromfacecommon']' returned non-zero exit status 1.
Traceback (most recent call last):
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/gradio/queueing.py", line 407, in call_prediction
    output = await route_utils.call_process_api(
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/gradio/route_utils.py", line 226, in call_process_api
    output = await app.get_blocks().process_api(
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/gradio/blocks.py", line 1550, in process_api
    result = await self.call_function(
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/gradio/blocks.py", line 1185, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/anyio/to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 2134, in run_sync_in_worker_thread
    return await future
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/anyio/_backends/_asyncio.py", line 851, in run
    result = context.run(func, *args)
  File "/home/bill/miniconda3/envs/facechain/lib/python3.10/site-packages/gradio/utils.py", line 661, in wrapper
    response = f(*args, **kwargs)
  File "/workspace/facechain/app.py", line 803, in run
    train_lora_fn(base_model_path=base_model_path,
  File "/workspace/facechain/app.py", line 207, in train_lora_fn
    raise gr.Error("训练失败 (Training failed)")
gradio.exceptions.Error: '训练失败 (Training failed)'
```
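The FileNotFoundError here looks like a downstream symptom rather than a separate bug: because the preprocessing subprocess crashed on the SentencePiece import, the `rb-meimei_labeled` directory that `train_text_to_image_lora.py` tries to read with `load_dataset` was presumably never written. A quick way to confirm, using the path copied from the traceback (the `LABELED_DIR` variable name is just for this sketch):

```bash
# Path copied from the FileNotFoundError above.
LABELED_DIR=/workspace/facechain/worker_data/qw/training_data/ly261666/cv_portrait_model/rb-meimei_labeled

# If this directory is missing or empty, the preprocessing crash is the root
# cause and the training failure is just fallout from it.
ls -la "$LABELED_DIR"
```

If that is the case, fixing the SentencePiece install and re-running training from the UI should regenerate the labeled data before the LoRA script is launched again.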

How should I handle this error? Has anyone else run into it? This is a conda environment.

thangnn1010 commented 3 months ago

Have you solved it yet? Could you give me some suggestions? Thanks.

sunbaigui commented 3 months ago

Please try out the newest train-free version, facechain-fact, which does inference in about 10 seconds.