prashantnayak-edu opened 3 months ago
I'm getting this error when running a local fine-tune using mlx. It's unclear where to set these parameters.
```
Training beginning:
Adaptor will be saved as: /Users/pnayak/.transformerlab/workspace/plugins/mlx_lora_trainer/my-adaptor-1.npz
Writing logs to: /Users/pnayak/.transformerlab/workspace/tensorboards/job5/20240404-181024
Loading pretrained model
Fetching 7 files:   0%|          | 0/7 [00:00<?, ?it/s]
Fetching 7 files: 100%|██████████| 7/7 [00:00<00:00, 84368.18it/s]
Traceback (most recent call last):
  File "/Users/pnayak/.transformerlab/workspace/plugins/mlx_lora_trainer/mlx-examples/lora/lora.py", line 321, in <module>
    model, tokenizer, _ = lora_utils.load(args.model)
                          ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/Users/pnayak/.transformerlab/workspace/plugins/mlx_lora_trainer/mlx-examples/lora/utils.py", line 171, in load
    model.load_weights(list(weights.items()))
  File "/Users/pnayak/.transformerlab/envs/transformerlab/lib/python3.11/site-packages/mlx/nn/layers/base.py", line 206, in load_weights
    raise ValueError(f"Missing parameters: {missing}.")
ValueError: Missing parameters: lm_head.biases lm_head.scales.
Finished training.
```
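For context: `lm_head.scales` and `lm_head.biases` are the extra parameters MLX stores for quantized layers, so this error usually means the model was instantiated as quantized while the downloaded checkpoint only has plain `lm_head.weight` (or vice versa). As a minimal sketch of one way to diagnose this, here is a plain-Python helper that inspects checkpoint key names for quantization parameters; the function name is hypothetical and not part of MLX:

```python
# Hypothetical diagnostic helper (not MLX API): MLX quantized layers carry
# "<layer>.scales" and "<layer>.biases" alongside "<layer>.weight", so we can
# check which layers in a checkpoint look quantized before load_weights() runs.

def find_quantized_layers(weight_keys):
    """Return sorted layer names that carry quantization scales/biases."""
    quantized = set()
    for key in weight_keys:
        if key.endswith(".scales") or key.endswith(".biases"):
            quantized.add(key.rsplit(".", 1)[0])
    return sorted(quantized)

# Example: keys from a checkpoint where only lm_head is quantized.
keys = [
    "model.embed_tokens.weight",
    "lm_head.weight",
    "lm_head.scales",
    "lm_head.biases",
]
print(find_quantized_layers(keys))  # → ['lm_head']
```

If the checkpoint has no `.scales`/`.biases` keys but the model class expects them, the model config and weights disagree about quantization; re-downloading the matching (quantized or unquantized) variant of the model is the usual fix.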
What model are you training against? If you post the config from your train run, I can give it a shot. I can also check whether this is a known issue on the MLX side, or possibly one fixed in a newer release.