ggerganov / ggml

Tensor library for machine learning
MIT License

Instruct GPT-J #50

Open · loretoparisi opened this issue 1 year ago

loretoparisi commented 1 year ago

Someone fine-tuned GPT-J on the Alpaca instruction dataset using PEFT:

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

peft_model_id = "crumb/Instruct-GPT-J"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base GPT-J model in 8-bit, plus the matching tokenizer
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, return_dict=True, load_in_8bit=True, device_map='auto', revision='sharded')
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(model, peft_model_id)

# This example is in the Alpaca training set
batch = tokenizer("Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: How can we reduce air pollution? ### Response:", return_tensors='pt')
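
To actually get a completion from that prompt, a minimal generation step could look roughly like this (my addition, not part of the quoted snippet; the max_new_tokens value is an arbitrary placeholder):

import torch

# Move the tokenized prompt to the model's device and sample a completion
batch = {k: v.to(model.device) for k, v in batch.items()}
with torch.no_grad():
    output = model.generate(**batch, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))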

I recently ran the plain GPT-J model on GGML successfully, using the converted binary that is provided, so I assume Instruct-GPT-J should work off the shelf after converting the checkpoint and then quantizing it.

Model adapter is here
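
One practical detail (my assumption, not stated above): since crumb/Instruct-GPT-J is only a LoRA adapter, the adapter weights would presumably need to be merged back into the base GPT-J weights and saved as a regular Hugging Face checkpoint before the GGML conversion script can be run on it. A rough sketch, assuming peft's merge_and_unload() is available:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

peft_model_id = "crumb/Instruct-GPT-J"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in fp16 (merging does not work on 8-bit weights)
base = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, torch_dtype=torch.float16)
merged = PeftModel.from_pretrained(base, peft_model_id)
merged = merged.merge_and_unload()  # fold the LoRA deltas into the base weights

# Save a plain checkpoint that the GGML GPT-J convert script can then read
merged.save_pretrained("instruct-gpt-j-merged")
AutoTokenizer.from_pretrained(config.base_model_name_or_path).save_pretrained("instruct-gpt-j-merged")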

ggerganov commented 1 year ago

Yes, I just merged the quantization stuff into master so one can try using it