THUDM / CogCoM


Hugging Face Model #4

Open · Kikzter opened this issue 6 months ago

Kikzter commented 6 months ago

I am not able to find the model on Hugging Face. Could you help me locate it? Also, the Gradio demo is not working for me.

qijimrc commented 6 months ago

> I am not able to find the model on Hugging Face. Could you help me locate it? Also, the Gradio demo is not working for me.

Hi, please use the updated links to our Hugging Face models in the README. For the Gradio demo, as I answered in this Issue, follow requirements.txt to install the proper versions of pydantic and gradio (you can use --no-deps to ignore conflicts).
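For reference, a minimal sketch of the dependency fix described above (the exact pinned versions live in the repo's requirements.txt; the commands assume you run them from the repo root):

```shell
# Install the demo dependencies exactly as pinned in requirements.txt.
pip install -r requirements.txt

# If pip's resolver still reports a gradio/pydantic version conflict,
# force the pinned versions through without dependency resolution:
pip install --no-deps -r requirements.txt
```

Note that `--no-deps` skips pip's dependency checks entirely, so it should only be used when you trust the pinned versions in requirements.txt to be mutually compatible.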

Kikzter commented 6 months ago

> I am not able to find the model on Hugging Face. Could you help me locate it? Also, the Gradio demo is not working for me.

> Hi, please use the updated links to our Hugging Face models in the README. For the Gradio demo, as I answered in this Issue, follow requirements.txt to install the proper versions of pydantic and gradio (you can use --no-deps to ignore conflicts).

I checked the README, where I found this command: `python cli_demo_hf.py --from_pretrained THUDM/cogcom-base-17b-hf --bf16 --local_tokenizer path/to/tokenizer --english`. I believe this is for inference. Is there a way to finetune the model in INT4, e.g. by loading it from the Hugging Face Hub? When I paste that model ID, I cannot find the model card on Hugging Face. It would be helpful if you could guide me on this.

qijimrc commented 6 months ago

> I am not able to find the model on Hugging Face. Could you help me locate it? Also, the Gradio demo is not working for me.

> Hi, please use the updated links to our Hugging Face models in the README. For the Gradio demo, as I answered in this Issue, follow requirements.txt to install the proper versions of pydantic and gradio (you can use --no-deps to ignore conflicts).

> I checked the README, where I found this command: `python cli_demo_hf.py --from_pretrained THUDM/cogcom-base-17b-hf --bf16 --local_tokenizer path/to/tokenizer --english`. I believe this is for inference. Is there a way to finetune the model in INT4, e.g. by loading it from the Hugging Face Hub? When I paste that model ID, I cannot find the model card on Hugging Face. It would be helpful if you could guide me on this.

Hi, the trained models can be downloaded manually using these links, and you can finetune or run inference with them by specifying the argument `--from_pretrained=/path/to/unzipped_model_folder`. Currently we don't support automatic model loading through the Transformers library, but we'll work on supporting it soon. You can easily use SAT (SwissArmyTransformer) for model training, and all configurations (model parallelism, DeepSpeed optimization, LoRA, INT2/INT4 quantization for inference, etc.) can be set via arguments (try the finetune.sh I have prepared in our repo).
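As a concrete sketch of that workflow (the archive filename, unzip destination, and tokenizer path below are placeholders, not the actual names used in the release):

```shell
# 1. Download a checkpoint archive via the links in the README,
#    then unzip it into a local checkpoints directory.
unzip cogcom-base-17b.zip -d ./checkpoints/cogcom-base-17b

# 2. Run inference against the local, unzipped folder instead of a Hub ID.
python cli_demo_hf.py \
    --from_pretrained ./checkpoints/cogcom-base-17b \
    --local_tokenizer path/to/tokenizer \
    --bf16 --english

# 3. Finetune with SAT; model parallelism, DeepSpeed, LoRA, and
#    quantization options are all set as arguments inside the script.
bash finetune.sh
```

The key point is step 2: `--from_pretrained` accepts a local directory path, so nothing needs to resolve against the Hugging Face Hub.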