Closed blevlabs closed 1 year ago
Can you share your yml config file?
@winglian
```yaml
base_model: Blevlabs/alpaca-7b
model_type: LlamaForCausalLM
tokenizer_type: LlamaTokenizer
load_in_8bit: true
datasets:
  - path: data/searchQA.jsonl
    type: alpaca
dataset_prepared_path: last_run_prepared
val_set_size: 0.04
adapter:
lora_model_dir:
sequence_len: 2048
lora_r:
lora_alpha:
lora_dropout:
lora_target_modules:
lora_fan_in_fan_out:
wandb_project:
wandb_watch:
wandb_run_id:
wandb_log_model: checkpoint
output_dir: ./alpaca-search
batch_size: 4
micro_batch_size: 2
num_epochs: 3
learning_rate: 0.00001
train_on_inputs: false
group_by_length: false
bf16: true
tf32: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
```
I am very new to this repo, so this may not be the best setup for training an Alpaca-7B model with axolotl. Advice on the config would be appreciated as well; I am trying to fine-tune the model on a set of 50k question-answering examples.
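One thing worth noting about the config above: `load_in_8bit: true` is normally paired with a LoRA adapter in axolotl, while here `adapter` and all the `lora_*` fields are left blank, which effectively asks for a full fine-tune in 8-bit. A minimal LoRA variant of that part of the config might look like the sketch below; the specific values (`lora_r: 8`, `lora_alpha: 16`, the target modules) are illustrative defaults commonly seen in axolotl examples, not values tuned for this dataset:

```yaml
adapter: lora
lora_model_dir:
sequence_len: 2048
lora_r: 8
lora_alpha: 16
lora_dropout: 0.05
lora_target_modules:
  - q_proj
  - v_proj
lora_fan_in_fan_out: false
```

With an adapter set, only the LoRA weights are trained, which also reduces memory pressure compared to a full fine-tune.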
@blevlabs, may I ask if you installed with pip following the README? Would it be possible to create a new environment with Python 3.9, or to try the same in the Docker image?
@NanoCode012 Hello, yes, I followed the LambdaLabs setup in the README exactly. I made sure it was running Python 3.9 and still encountered the issue. I can try a Docker instance to see if that helps.
@blevlabs Hello, may I ask if you managed to solve this?
I am trying to run a fine-tuning script for an Alpaca-7B model and am getting the following:
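Since the config above loads `data/searchQA.jsonl` with `type: alpaca`, one quick way to rule out dataset problems before digging into the trainer is to sanity-check the JSONL file. This is an illustrative sketch (not part of axolotl); it assumes the standard Alpaca schema of `instruction`/`input`/`output` keys:

```python
import json

# Keys expected by the standard Alpaca prompt format.
# "input" may be empty, but the other two should be present.
REQUIRED_KEYS = {"instruction", "output"}
OPTIONAL_KEYS = {"input"}

def check_alpaca_jsonl(path):
    """Return a list of (line_number, problem) tuples for a JSONL dataset."""
    problems = []
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            line = line.strip()
            if not line:
                continue  # skip blank lines
            try:
                record = json.loads(line)
            except json.JSONDecodeError as exc:
                problems.append((lineno, f"invalid JSON: {exc}"))
                continue
            missing = REQUIRED_KEYS - record.keys()
            if missing:
                problems.append((lineno, f"missing keys: {sorted(missing)}"))
            unknown = record.keys() - REQUIRED_KEYS - OPTIONAL_KEYS
            if unknown:
                problems.append((lineno, f"unexpected keys: {sorted(unknown)}"))
    return problems
```

Running `check_alpaca_jsonl("data/searchQA.jsonl")` and fixing anything it reports at least separates data-format errors from environment or dependency issues.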