Open ulhaqi12 opened 3 months ago
@winglian Any thoughts on this?
I think that, similarly to vLLM providing multiple OpenAI-compatible inference endpoints, it would be a useful target to emulate the OpenAI fine-tuning endpoint with axolotl backing it.
That would get you much closer to full coverage of their major API endpoints with open-source alternatives.
Just my $0.02.
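For context, OpenAI's fine-tuning API creates jobs via `POST /v1/fine_tuning/jobs`. An axolotl-backed emulation would accept a payload along these lines (field names follow OpenAI's documented schema; the values are illustrative, and in axolotl's case `model` would presumably map to a local base model):

```json
{
  "model": "gpt-3.5-turbo",
  "training_file": "file-abc123",
  "hyperparameters": {
    "n_epochs": 3
  }
}
```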
⚠️ Please check that this feature request hasn't been suggested before.
Feature description
My proposal is to make Axolotl more production-friendly by adding a RESTful web server layer. This server would act as a gateway, allowing users to interact with Axolotl through endpoints such as `https://axolotl-server/train` and `https://axolotl-server/finetune`. Users would send configurations as simple JSON payloads, specifying exactly what they need Axolotl to do. This makes it straightforward to communicate with the system and also offers flexible options for saving trained models, either locally or on cloud storage (though the details of cloud storage can be worked out later).
Solution
The goal is to make Axolotl ready for real-world use, whether on cloud platforms like Kubernetes or on remote machines with GPUs. By setting up a RESTful server, teams could integrate Axolotl into their own systems and start training or fine-tuning models with a few web requests. This setup removes technical hurdles and lets anyone manage machine-learning tasks over HTTP. For this, I am open to using any web framework, whether Django, FastAPI, or Flask.
Alternatives
I haven't seen an alternative to axolotl, so building a web-server wrapper on top of axolotl seems to be the only way. I am open to suggestions.
Additional Context
I would be happy to contribute to this enhancement.
Acknowledgements