You should have a config.yml generated; could you share it, please?
This is what's generated:
backend: local
base_model: microsoft/Phi-3-mini-128k-instruct
data:
  column_mapping:
    text: text
  path: data/formatted_train.csv
  train_split: train
  valid_split: null
hub:
  push_to_hub: true
  token: ${{HF_TOKEN}}
  username: ${{HF_USERNAME}}
log: tensorboard
params:
  batch_size: 2
  block_size: 1024
  chat_template: none
  epochs: 3
  gradient_accumulation: 4
  lr: 3.0e-05
  mixed_precision: fp16
  model_max_length: 2048
  optimizer: adamw_torch
  peft: true
  scheduler: linear
  target_modules: all-linear
  unsloth: false
project_name: Slottify-Phi-3-mini-128k-instruct
task: llm
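Note that in the generated config, chat_template ends up under the params: section, not data:. A quick way to confirm where the key lives (a minimal sketch, assuming PyYAML is installed and the file is named config.yml, as in the error log further down):

```python
# Sanity-check sketch (assumes PyYAML and the generated config.yml above):
# print which section of the config actually holds the chat_template key.
import yaml

with open("config.yml") as f:
    config = yaml.safe_load(f)

print("in data:  ", "chat_template" in config.get("data", {}))    # False
print("in params:", "chat_template" in config.get("params", {}))  # True
```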
Same here
backend: local
base_model: openai-community/gpt2
data:
  column_mapping:
    text: text
  path: data/train.csv
  train_split: train.csv
  valid_split: null
hub:
  push_to_hub: true
  token: ${{HF_TOKEN}}
  username: ${{HF_USERNAME}}
log: tensorboard
params:
  batch_size: 2
  block_size: 1024
  chat_template: none
  epochs: 3
  gradient_accumulation: 4
  lr: 3.0e-05
  mixed_precision: fp16
  model_max_length: 2048
  optimizer: adamw_torch
  peft: true
  scheduler: linear
  target_modules: all-linear
  unsloth: false
project_name: autotrain-f954p-3o4yn
task: llm
And these are the first two observations in train.csv
Backend
Colab
Interface Used
UI
CLI Command
No response
UI Screenshots & Parameters
{ "block_size": 1024, "model_max_length": 2048, "padding": "right", "use_flash_attention_2": false, "disable_gradient_checkpointing": false, "logging_steps": -1, "evaluation_strategy": "epoch", "save_total_limit": 1, "auto_find_batch_size": false, "mixed_precision": "fp16", "lr": 3e-05, "epochs": 3, "batch_size": 2, "warmup_ratio": 0.1, "gradient_accumulation": 4, "optimizer": "adamw_torch", "scheduler": "linear", "weight_decay": 0.0, "max_grad_norm": 1.0, "seed": 42, "chat_template": "none", "quantization": "int4", "target_modules": "all-linear", "merge_adapter": false, "peft": true, "lora_r": 16, "lora_alpha": 32, "lora_dropout": 0.05, "unsloth": false, "push_to_hub": true }
Error Logs
INFO | 2024-06-13 20:31:54 | autotrain.cli.autotrain:main:56 - Using AutoTrain configuration: config.yml
INFO | 2024-06-13 20:31:54 | autotrain.parser:__post_init__:124 - Running task: lm_training
INFO | 2024-06-13 20:31:54 | autotrain.parser:__post_init__:125 - Using backend: local
Traceback (most recent call last):
  File "/usr/local/bin/autotrain", line 8, in <module>
    sys.exit(main())
  File "/usr/local/lib/python3.10/dist-packages/autotrain/cli/autotrain.py", line 57, in main
    cp = AutoTrainConfigParser(args.config)
  File "<string>", line 4, in __init__
  File "/usr/local/lib/python3.10/dist-packages/autotrain/parser.py", line 127, in __post_init__
    self.parsed_config = self._parse_config()
  File "/usr/local/lib/python3.10/dist-packages/autotrain/parser.py", line 142, in _parse_config
    params["chat_template"] = self.config["data"]["chat_template"]
KeyError: 'chat_template'
An error occurred while starting training!
Error: Training failed with exit code: 1
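The traceback points at the root cause: _parse_config in parser.py reads the key from the data section (self.config["data"]["chat_template"]), while both generated configs above store it under params. Until that is addressed in autotrain itself, one possible workaround (a minimal sketch, not an official fix; assumes PyYAML and the config.yml shared above) is to move the key into the section the parser actually reads:

```python
# Workaround sketch (not the upstream fix): relocate chat_template from
# params: to data:, which is where parser.py line 142 looks for it.
import yaml

with open("config.yml") as f:
    config = yaml.safe_load(f)

# Move the key into the section the parser indexes; default to "none",
# the value used in the configs above.
config.setdefault("data", {})["chat_template"] = config.get("params", {}).pop("chat_template", "none")

with open("config.yml", "w") as f:
    yaml.safe_dump(config, f, sort_keys=False)
```

After rewriting the file, rerun the same `autotrain --config config.yml` command.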
Additional Information
I tried all of the different chat_template values and am still getting the same error above.
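That is expected given the traceback: the KeyError is raised while looking the key up in the wrong section, so the chat_template value is never read at all. A minimal repro of the failure mode (hypothetical dict standing in for the parsed config):

```python
# Hypothetical stand-in for the parsed config: any chat_template value kept
# under params: is invisible to the config["data"] lookup in parser.py,
# so changing the value cannot avoid the KeyError.
config = {
    "data": {"path": "data/train.csv"},     # no chat_template key here
    "params": {"chat_template": "zephyr"},  # value is irrelevant to the bug
}
config["data"]["chat_template"]  # raises KeyError: 'chat_template'
```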