FinetuneLLMs (Work in Progress, actively developed! 🔥)

Finetune an LLM within a few clicks! Easy and fast LLM finetuning on GPU or CPU.


How to run

The recommended way to run FinetuneLLMs is via Docker:

docker compose up -d

Then visit http://localhost:8080 in your browser.
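
If the page does not come up, you can check the container status and tail the logs with standard Docker Compose commands (the actual service names depend on the project's compose file):

# list the containers started by the compose file and their state
docker compose ps

# follow the logs of all services to spot startup errors
docker compose logs -f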

For a detailed explanation of the project structure, see BUILD.md.

🔥Goal & Roadmap🔥

The main objective of this project is to lower the barrier to training large language models, especially for startups that already have hardware on hand.

Roadmap

Contributing

Thank you very much for your contributions! You can follow the contribution guide to get started.