axolotl-ai-cloud / axolotl

Go ahead and axolotl questions
https://axolotl-ai-cloud.github.io/axolotl/
Apache License 2.0

Missing YAML mlflow #1307

Open l3utterfly opened 8 months ago

l3utterfly commented 8 months ago

Please check that this issue hasn't been reported before.

Expected Behavior

Training starts and runs without errors.

Current behaviour

After installing per the instructions in the README.md, I get this error when starting training:

mlflow.exceptions.MissingConfigException: Yaml file '/home/layla/src/axolotl/mlruns/0/meta.yaml' does not exist.
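For context, mlflow's file-based tracking store expects a `meta.yaml` inside each `mlruns/<experiment_id>/` directory, and raises `MissingConfigException` when an experiment directory exists without one. A hypothetical cleanup helper (not part of axolotl or mlflow; the directory layout is the only assumption) that removes such stale experiment directories could look like:

```python
import shutil
from pathlib import Path


def clean_stale_mlruns(root: Path) -> list[str]:
    """Remove experiment dirs under mlruns/ that lack a meta.yaml.

    Hypothetical workaround: mlflow's file store expects
    mlruns/<experiment_id>/meta.yaml, so a directory left behind
    without one triggers MissingConfigException on the next run.
    """
    removed = []
    if not root.is_dir():
        return removed
    for exp_dir in root.iterdir():
        if exp_dir.is_dir() and not (exp_dir / "meta.yaml").exists():
            shutil.rmtree(exp_dir)          # drop the stale experiment dir
            removed.append(exp_dir.name)
    return removed
```

Deleting (or cleaning) the stale `mlruns/` directory and letting mlflow recreate it is a common workaround for this class of error, though it has not been confirmed for this particular setup.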

Steps to reproduce

  1. Install from source
  2. Start training with any of the example configs

Config yaml

No response

Possible solution

No response

Which Operating Systems are you using?

Python Version

3.11

axolotl branch-commit

main

Acknowledgements

NanoCode012 commented 8 months ago

Can you share your config?

l3utterfly commented 8 months ago

Here is the config:

base_model: /home/layla/src/text-generation-webui/models/Mistral-7B-v0.1
base_model_config: /home/layla/src/text-generation-webui/models/Mistral-7B-v0.1
model_type: MistralForCausalLM
tokenizer_type: LlamaTokenizer
is_mistral_derived_model: true

load_in_8bit: false
load_in_4bit: false
strict: false

datasets:
  - path: /data.jsonl
    ds_type: json # see other options below
    type: sharegpt
    conversation: vicuna_v1.1

dataset_prepared_path: last_run_prepared
val_set_size: 0.002
output_dir: ./out

sequence_len: 8192
sample_packing: true
pad_to_sequence_len: true

wandb_project:
wandb_entity:
wandb_watch:
wandb_run_id:
wandb_log_model:

gradient_accumulation_steps: 4
micro_batch_size: 2
num_epochs: 1
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
learning_rate: 0.0000005

train_on_inputs: false
group_by_length: false
bf16: true
fp16: false
tf32: false

gradient_checkpointing: true
early_stopping_patience:
resume_from_checkpoint:
local_rank:
logging_steps: 1
xformers_attention:
flash_attention: true

warmup_ratio: 0.05
eval_steps: 0.1
eval_table_size:
eval_table_max_new_tokens: 128
eval_sample_packing: false
save_steps: 100
debug:
deepspeed: zero3_cpuoffload.json # multi-gpu only
weight_decay: 0.0
fsdp:
fsdp_config:
special_tokens:
  bos_token: "<s>"
  eos_token: "</s>"
  unk_token: "<unk>"
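A config like the one above can be sanity-checked before launching training by parsing it with PyYAML and verifying a few top-level keys are present. This is a minimal sketch, not axolotl's actual validator, and the required-key set is an assumption for illustration:

```python
import yaml

# Assumed minimal key set for illustration; axolotl's real
# validation covers far more fields.
REQUIRED_KEYS = {"base_model", "datasets", "output_dir"}


def check_config(text: str) -> list[str]:
    """Parse the YAML text and return any missing top-level keys, sorted."""
    cfg = yaml.safe_load(text)
    return sorted(REQUIRED_KEYS - cfg.keys())


sample = """
base_model: /models/Mistral-7B-v0.1
datasets:
  - path: /data.jsonl
    type: sharegpt
"""
print(check_config(sample))  # reports the missing keys
```

Catching a malformed or incomplete config this way is cheaper than discovering it mid-launch, though it would not catch the mlflow state issue reported here.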