AI4Finance-Foundation / FinGPT

FinGPT: Open-Source Financial Large Language Models! Revolutionize 🔥 We release the trained model on HuggingFace.
https://ai4finance.org
MIT License

An ImportError when I run "FinGPT_Training_LoRA_with_ChatGLM2_6B_for_Beginners.ipynb" #165

Open YRookieBoy opened 4 months ago

YRookieBoy commented 4 months ago

Hi, when I try to run "FinGPT_Training_LoRA_with_ChatGLM2_6B_for_Beginners.ipynb" in Google Colab, I came across a problem. The code is

```python
model_name = "THUDM/chatglm2-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name,
    quantization_config=q_config,
    trust_remote_code=True,
    device='cuda'
)
```

and, the error is

```
ImportError: Using `load_in_8bit=True` requires Accelerate: `pip install accelerate` and the latest version of bitsandbytes: `pip install -i https://test.pypi.org/simple/ bitsandbytes` or `pip install bitsandbytes`
```

raised at the line

```python
model = prepare_model_for_int8_training(model, use_gradient_checkpointing=True)
```

Lastly, I run the code in Google Colab Pro and I am sure both packages are installed. Please help me solve this problem, thank you so much!

llk010502 commented 4 months ago

Hi, based on my experience, you can try reinstalling these two packages when this error shows up, then restart your kernel and rerun your code. Hope this works.
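The reinstall-then-restart advice above can be sketched as a couple of notebook commands (package names as on PyPI; the `--force-reinstall` flag is one way to ensure a clean reinstall):

```shell
# Force-reinstall both packages inside the Colab notebook, then restart
# the kernel (Runtime > Restart runtime) before re-running the cells
pip install --force-reinstall --no-cache-dir accelerate
pip install --force-reinstall --no-cache-dir bitsandbytes
```

The restart matters because already-imported modules stay cached in the running Python process, so a reinstall alone is not picked up.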

YRookieBoy commented 4 months ago

Thank you very much! I have already run the code successfully.

Siddharth-Latthe-07 commented 1 week ago

The error indicates that the necessary packages for 8-bit training, specifically accelerate and bitsandbytes, are either not installed correctly or not recognized by the environment. Here's how you can troubleshoot and resolve the issue:

  1. Ensure correct installation: reinstall `accelerate` and `bitsandbytes` if needed.
  2. Restart the runtime: after installing new packages, you often need to restart the runtime for the changes to take effect.
  3. Check the versions and import the packages before defining the model. Sample snippet:
    
    # Install the necessary packages
    !pip install accelerate
    !pip install -i https://test.pypi.org/simple/ bitsandbytes
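If the error persists after installing, it helps to check whether the runtime actually sees the packages. A minimal sketch using only the standard library:

```python
import importlib.util

# Verify that the 8-bit training dependencies are importable in the
# current runtime; if either is missing, the ImportError above appears.
for pkg in ("accelerate", "bitsandbytes"):
    if importlib.util.find_spec(pkg) is None:
        print(f"{pkg} is NOT installed - install it and restart the runtime")
    else:
        print(f"{pkg} is available")
```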

```python
# Restart the runtime after installing the packages
# (manual step in the Colab interface: Runtime > Restart runtime)

# Import the required libraries
from transformers import AutoTokenizer, AutoModel, BitsAndBytesConfig
from accelerate import Accelerator

# Ensure the runtime is using a GPU
import torch
device = 'cuda' if torch.cuda.is_available() else 'cpu'

# 8-bit quantization config (q_config was not defined in the original
# snippet; this is one plausible definition)
q_config = BitsAndBytesConfig(load_in_8bit=True)

# Load the model with the necessary configuration
model_name = "THUDM/chatglm2-6b"
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_name,
    quantization_config=q_config,
    trust_remote_code=True,
    device=device
)

# Prepare the model for 8-bit training
# (note: prepare_model_for_int8_training comes from peft, not transformers)
from peft import prepare_model_for_int8_training
model = prepare_model_for_int8_training(model, use_gradient_checkpointing=True)
```
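To confirm the environment actually picked up the reinstall, you can also print the installed versions as a quick sanity check (package names assumed as on PyPI):

```python
import importlib.metadata as md

# Print installed versions so you can confirm the runtime sees the
# reinstalled packages; a missing package reports "not installed".
for pkg in ("transformers", "accelerate", "bitsandbytes", "peft"):
    try:
        print(pkg, md.version(pkg))
    except md.PackageNotFoundError:
        print(pkg, "not installed")
```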


Also check the GPU settings (in Colab: Runtime > Change runtime type > GPU).

Hope this helps; let me know of any further updates.
Thanks