This project showcases an LLMOps pipeline that fine-tunes a small LLM as a fallback for outages of the service LLM.
scripts to fine-tune and batch inference on dstack #12
Closed
deep-diver closed 6 months ago
@sayakpaul
You don't have to be involved in this PR. Just FYI, I tagged you since dstack is a somewhat interesting platform.