Closed — brainlid closed this 4 months ago
Zephyr-7b Beta does NOT support function calling. It doesn't understand how to do it and has not been trained for it.
There are alternate models that have fine-tuned Zephyr for function calling, but those have licensing problems. They trained the model on OpenAI output, which is a violation of OpenAI's terms of use.
@brainlid
Have you thought about any ways to support functions by rolling a custom dispatch? My thought is to use something like Instructor to coerce the LLM into categorizing the task that's being asked into one of a fixed set of functions. You add each task description as one of the possible outputs, then map every task to its respective function. Hugging Face has a diagram that roughly shows what I'm referring to here. You could coerce the function parameters using Instructor as well.
I haven't tried this yet, but just wanted to throw the idea out there.
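To make the idea concrete, here is a minimal sketch of that custom dispatching pattern in Python. All names here (`classify`, `DISPATCH`, the two task functions) are hypothetical; the `classify` body is a keyword stand-in for what would really be an LLM call constrained to return one label from the set.

```python
# Sketch of "custom dispatching": instead of native function calling,
# ask the model to pick one label from a fixed set of task descriptions,
# then map that label to an ordinary function.

def get_weather(query: str) -> str:
    return f"weather lookup for: {query}"

def search_docs(query: str) -> str:
    return f"doc search for: {query}"

# Map of task labels to the functions that implement them.
DISPATCH = {
    "get_weather": get_weather,
    "search_docs": search_docs,
}

def classify(user_input: str) -> str:
    """Stand-in for the LLM call. A real version would prompt the model
    (e.g. via Instructor) to return exactly one label from DISPATCH,
    plus a 'none' option for when no task applies."""
    if "weather" in user_input.lower():
        return "get_weather"
    return "search_docs"

def handle(user_input: str) -> str:
    label = classify(user_input)
    fn = DISPATCH.get(label)
    if fn is None:
        return "no matching task"
    return fn(user_input)
```

The key design point is that the model only ever has to emit a single label, which is much easier to coerce than a full structured function call.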
@acalejos Yes! As you probably know by now, I interviewed Thomas Millar about InstructorEx in the episode that came out today.
The challenge is that Instructor doesn't work with Bumblebee yet, and relies on llama.cpp's ability to restrict the output grammar, forcing the model into a compliant JSON structure.
I'm very interested in the work going on there and this direction. It's very cool.
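Without grammar-constrained decoding, the usual fallback is a validate-and-retry loop: ask for JSON, parse it, and feed any error back into the prompt. A minimal sketch of that pattern follows; `call_llm` is a hypothetical placeholder for the actual model invocation, and the required keys are just illustrative.

```python
import json

# Fallback pattern when a grammar-constrained backend (like llama.cpp's)
# is unavailable: validate the model's JSON output and retry on failure,
# appending the error so the model can self-correct.

REQUIRED_KEYS = {"function", "arguments"}

def call_llm(prompt: str) -> str:
    # Placeholder; a real version would invoke the model with `prompt`.
    return '{"function": "get_weather", "arguments": {"city": "Lisbon"}}'

def coerce_json(prompt: str, max_retries: int = 3) -> dict:
    last_error = ""
    for _ in range(max_retries):
        raw = call_llm(prompt + last_error)
        try:
            parsed = json.loads(raw)
        except json.JSONDecodeError as exc:
            last_error = f"\nPrevious output was not valid JSON: {exc}"
            continue
        if REQUIRED_KEYS <= parsed.keys():
            return parsed
        last_error = f"\nMissing keys: {REQUIRED_KEYS - parsed.keys()}"
    raise ValueError("model never produced compliant JSON")
```

Grammar-constrained decoding makes this loop unnecessary by construction, which is why llama.cpp's restriction feature matters here.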
This is for running the model directly on hardware using Nx and Bumblebee.
The Zephyr 7B beta LLM doesn't have all the capabilities of ChatGPT, nor the safeguards.
What works:
What doesn't work:
Closes #26