minhduc01168 opened 1 month ago
Convert the model to gguf following the instructions in README.md
@1694439208 Could you explain this more clearly? I haven't figured out exactly what I need to do. I would be very grateful.
Step 1. Convert to gguf

Please install python 3.10 and llama.cpp first:

```bash
conda create -n llama.cpp python=3.10 -y
conda activate llama.cpp
git clone https://github.com/ggerganov/llama.cpp.git
pip install -r llama.cpp/requirements.txt
pip uninstall gguf
cd llama.cpp/gguf-py
pip install --editable .
cd [to another folder]
git clone https://github.com/1694439208/GOT-OCR-Inference.git
pip install tiktoken
python GOT-OCR-Inference/convert_hf_to_gguf.py --model module-GOT-OCR2_0_modelscope.cn --outfile got_ocr2_16.gguf --outtype f16
```
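As a quick sanity check that the conversion worked, the gguf-py package installed above can open the output file and list its metadata and tensors. This is only a rough sketch and assumes the output file name got_ocr2_16.gguf from the command above:

```python
# Rough sanity check for the converted file
# (uses the gguf package installed from llama.cpp/gguf-py above)
from gguf import GGUFReader

reader = GGUFReader("got_ocr2_16.gguf")  # file produced by convert_hf_to_gguf.py

# List the metadata keys the converter wrote (architecture, tokenizer info, etc.)
print("metadata keys:")
for key in reader.fields:
    print("  ", key)

# Show a few tensor names/shapes to confirm the weights were actually exported
print("first tensors:")
for tensor in reader.tensors[:10]:
    print("  ", tensor.name, tuple(int(d) for d in tensor.shape))
```

If this loads and prints sensible keys and tensor names, the gguf file itself is fine and any later problem is on the inference side.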
Step 2. Try to use the model on CPU

```bash
pip install torch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 --index-url https://download.pytorch.org/whl/cu118
pip install llama-cpp-python
```

After installing all the packages, enter GOT-OCR-Inference and run:

```bash
python llama2.py
```
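For reference, if llama2.py from GOT-OCR-Inference needs adapting, the CPU-side loading of the converted file with llama-cpp-python looks roughly like the sketch below. It is only an illustration under assumptions: the prompt format and, in particular, the image preprocessing are handled by that repo's own script, not by this plain text call.

```python
from llama_cpp import Llama

# Load the converted model entirely on CPU (n_gpu_layers=0 keeps all layers off the GPU).
llm = Llama(
    model_path="got_ocr2_16.gguf",  # file produced in Step 1
    n_ctx=4096,
    n_gpu_layers=0,
    n_threads=8,      # adjust to the number of physical cores available
    verbose=False,
)

# Plain text completion only; feeding images to GOT-OCR is done by llama2.py's
# own preprocessing, which this sketch does not reproduce.
out = llm("Describe what this model does:", max_tokens=128)
print(out["choices"][0]["text"])
```

The key CPU-specific setting is n_gpu_layers=0; everything else is standard llama-cpp-python usage.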
Hope these steps help you.
@joeqi0370 Do you know how I can give the model an image in png or jpg format as input? I would be very grateful.
After fine-tuning the GOT-OCR 2.0 model on Vietnamese, I have a model deployed on GPU. Can you tell me where to start with the steps to run it on CPU? Thank you very much.