AustinScola opened 1 day ago
I think I may have found a solution with https://github.com/ml-explore/mlx/discussions/1507#discussioncomment-11039570 for converting from MLX to PEFT.
I've never tried an MLX adapter. Could you please provide one for testing?
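For context, the linked discussion is about rewriting the MLX adapter into PEFT's layout so that convert_lora_to_gguf.py can read it. Below is a minimal sketch of that idea, under assumptions: the MLX key pattern (model.layers.N....lora_a / lora_b), the PEFT target names (base_model.model....lora_A.weight / lora_B.weight), and the transposition of the matrices relative to PyTorch are all assumptions to verify against your own adapter; mlx_key_to_peft and convert_weights are hypothetical helper names, not part of any library.

```python
import numpy as np

def mlx_key_to_peft(key: str) -> str:
    """Map an assumed MLX adapter key to the PEFT naming convention."""
    # e.g. "model.layers.0.self_attn.q_proj.lora_a"
    #   -> "base_model.model.model.layers.0.self_attn.q_proj.lora_A.weight"
    key = key.replace(".lora_a", ".lora_A.weight").replace(".lora_b", ".lora_B.weight")
    return "base_model.model." + key

def convert_weights(mlx_weights: dict) -> dict:
    """Rename keys and transpose each matrix.

    Assumes MLX stores LoRA matrices transposed relative to PyTorch/PEFT;
    check the shapes of your own adapter before relying on this.
    """
    return {mlx_key_to_peft(k): np.asarray(v).T for k, v in mlx_weights.items()}

# Dummy example: a rank-2 adapter on a 4-wide projection.
mlx = {
    "model.layers.0.self_attn.q_proj.lora_a": np.zeros((4, 2)),  # (in, rank)
    "model.layers.0.self_attn.q_proj.lora_b": np.zeros((2, 4)),  # (rank, out)
}
peft = convert_weights(mlx)
for name, w in sorted(peft.items()):
    print(name, w.shape)
```

In practice the weights would come from MLX's adapters.safetensors and be written back out (together with an adapter_config.json) before handing the directory to the converter.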
Name and Version
version: 4179 (25669aa9) built with Apple clang version 15.0.0 (clang-1500.3.9.4) for arm64-apple-darwin23.4.0
Operating systems
Mac
Which llama.cpp modules do you know to be affected?
No response
Problem description & steps to reproduce
I'm trying to convert a LoRA adapter created with MLX to GGUF using
convert_lora_to_gguf.py
but I'm running into a problem.

First Bad Commit
No response
Relevant log output
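For anyone trying to reproduce this, the converter invocation would look something like the following sketch. The paths are placeholders, and the --base / --outfile flags are per the script's --help in a recent llama.cpp checkout; the command is only printed here since the placeholder directories don't exist.

```shell
# Hypothetical paths; substitute your own adapter (in PEFT layout) and base model.
LORA_DIR=./adapter_in_peft_layout
BASE_DIR=./base_model

# Build the command rather than running it, since the paths are placeholders.
CMD="python convert_lora_to_gguf.py $LORA_DIR --base $BASE_DIR --outfile adapter.gguf"
echo "$CMD"
```

Run the printed command from the root of a llama.cpp checkout, where convert_lora_to_gguf.py lives.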