Closed: steve8708 closed this issue 2 months ago
added it - it's pretty dumb/unreliable. highly recommend gpt-4o instead
Curious - what were you seeing that was dumb/unreliable?
Did you try a variety of models? I'm seeing a lot of specialized models for code, like codellama, starcoder2, or codegemma.
This line indicates maybe not.
I only tried llama3 and phi, models I've had good success with in the past or heard the most good things about, but I'd definitely be game for help testing other alternatives
the main "dumb" thing was no matter what, it woudl always output stuff likei this
function parse(str: string) {
  const parsed = ts.parse(str);
  // rest of code here
}
it would never output the full code and would only output parts of it with "rest of code here" type comments throughout
@steve8708 if your hardware is good enough you can get some neat results. For example, mixtral:8x22b with Ollama can be used to override the openai endpoints locally.
endpoint=http://localhost:11434/v1/
model=mixtral:8x22b
key=ollama
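For reference, that config just points an OpenAI-compatible client at Ollama's local /v1/ endpoint. A rough sketch of the equivalent call with the openai npm client (illustrative only, not micro-agent's internals):

import OpenAI from "openai";

// Ollama exposes an OpenAI-compatible API at /v1/; the key can be any non-empty string
const client = new OpenAI({
  baseURL: "http://localhost:11434/v1/",
  apiKey: "ollama",
});

const completion = await client.chat.completions.create({
  model: "mixtral:8x22b",
  messages: [{ role: "user", content: "Write a TypeScript function that parses a date string." }],
});

console.log(completion.choices[0].message.content);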
Alternatively, Groq's mixtral-8x7b-32768 has amazing speeds and a huge context window.
endpoint=https://api.groq.com/openai/v1
model=mixtral-8x7b-32768
key=gsk_KEY
My code generated perfectly but failed at GET /openai/v1/assistants. 👎
ah! just pushed a fix for that. try version 0.1.1 and use micro-agent config to change USE_ASSISTANT to false
perhaps in a future version we should just automatically turn assistants off if a custom endpoint is used 🤔
yeah i'll just turn assistant off anytime an endpoint is used ✅
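roughly, the check would look something like this (just a sketch, not the actual source):

// hypothetical helper: the Assistants API is skipped whenever a custom endpoint is configured,
// since local/alternative providers like Ollama don't implement /v1/assistants
function shouldUseAssistant(config: { USE_ASSISTANT?: boolean; endpoint?: string }): boolean {
  if (config.endpoint) {
    return false;
  }
  return config.USE_ASSISTANT ?? true;
}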
@steve8708 you're moving at lightning speeds. It's working now and thank you for this project.
Hey @steve8708 have you already updated this behavior in the package?
Right now config set USE_ASSISTANT=false works only when running the micro-agent project locally, but that config isn't recognized when using the micro-agent package.
ah, i may have not released recently enough! I'll push a release now
release done, try 0.1.3 (e.g. via running micro-agent update). still see the issue with that?
The issue is fixed, thanks!
integrate with ollama-js for fully local development
note: in the code there is a USE_ASSISTANT flag, just treat that as always off (false) for ollama as it has no assistants API (also fwiw i don't think the assistants direction is necessary or essential anyway). in short, you would make sure any call to openai for chat completions goes to ollama-js instead
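for reference, a rough sketch of the ollama-js side (assuming the ollama npm package; model name and prompt are placeholders, not micro-agent's actual code):

import ollama from "ollama";

// same shape as an openai chat completions call, but served entirely locally by Ollama
const response = await ollama.chat({
  model: "llama3",
  messages: [{ role: "user", content: "Write a TypeScript function that reverses a string." }],
});

console.log(response.message.content);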