Open irthomasthomas opened 10 months ago
Instead of prompting an LLM to pick one word from a list, I could use logprobs or logit_bias.
Logprobs can be used to return a value between 0 and 1. Our router would consist of a number of, let's call them, binary-filter prompts: one yes/no question per label.
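A sketch of one such binary-filter prompt (the prompt wording and helper names here are my own, and the logprob values are illustrative): ask a yes/no question about a single label, request logprobs on the answer token, and convert the top token's log-probability into a 0-1 score.

```python
import math

# Hypothetical binary-filter prompt for one label ("Bash").
FILTER_PROMPT = (
    "Does the following request involve Bash? "
    "Answer with exactly one word: Yes or No.\n\nRequest: {request}"
)

def yes_probability(top_token: str, logprob: float) -> float:
    """Convert the model's top answer token and its logprob into P(label applies).

    If the model answered "Yes" with logprob lp, P(yes) ~= exp(lp);
    if it answered "No", P(yes) ~= 1 - exp(lp).
    """
    p = math.exp(logprob)  # logprob -> probability in [0, 1]
    return p if top_token.strip().lower() == "yes" else 1.0 - p

# e.g. the model said "Yes" with logprob -0.105 (about 0.90 probability)
score = yes_probability("Yes", -0.105)
```

The same helper works for the "No" case, so each filter call collapses to a single number per label.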
Then we can hit the API in parallel for all the labels or classes we want probabilities for.
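Fanning the filter calls out in parallel could look like the sketch below. `score_label` stands in for the real API call (it is stubbed here with a keyword check so the example is self-contained); a thread pool issues one request per label concurrently and the label with the highest score wins.

```python
from concurrent.futures import ThreadPoolExecutor

LABELS = ["Terminal", "Bash", "Web", "Github"]

def score_label(label: str, request: str) -> float:
    """Stub for one binary-filter API call.

    In practice this would send the label's yes/no prompt with
    logprobs enabled and return the yes-probability.
    """
    return 0.9 if label.lower() in request.lower() else 0.1

def score_all(request: str) -> dict:
    # One call per label, issued concurrently.
    with ThreadPoolExecutor(max_workers=len(LABELS)) as pool:
        scores = pool.map(lambda lb: score_label(lb, request), LABELS)
        return dict(zip(LABELS, scores))

probs = score_all("write a bash script to list files")
best = max(probs, key=probs.get)  # label with the highest probability
```

Threads are fine here because the work is I/O-bound; an async client would do the same job.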
I want to experiment with a reliable way to use a simple mixture of experts, of a sort. I am thinking of a pipeline that starts with classification: some way to put the request into a definite category, 5-10 categories at most. Write a classifier prompt that demands a single word from a list, with or without function calling. Possible algorithm:
Prompt: Here is an incoming request that requires classification. It has been truncated to 200 words. System: Valid responses: Terminal, Bash, Web, Github, ...
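The classifier step above might be sketched as follows; the label set is the illustrative subset from the prompt, and the helper names are my own. Truncating the request to 200 words and validating the reply against the allowed list keeps the output usable even when the model strays from the single-word format.

```python
VALID_LABELS = {"Terminal", "Bash", "Web", "Github"}  # illustrative subset

def truncate_words(text: str, limit: int = 200) -> str:
    """Truncate the incoming request to `limit` words before classification."""
    return " ".join(text.split()[:limit])

def parse_label(raw_response: str):
    """Validate the model's reply: accept only a single word from the list."""
    word = raw_response.strip().strip(".").capitalize()
    return word if word in VALID_LABELS else None

label = parse_label("bash")                    # accepted, normalized to "Bash"
rejected = parse_label("I think it is Bash")   # rejected: not a single word
```

A rejected reply could be retried, or the logprob/logit_bias route above could be used instead to force a valid token.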