Hi @royerloic ,
I'm just curious whether you would be interested in a pull request to make Omega work with models from the GitHub Marketplace (GPT-4o, Llama 3.1 405B, Phi-3.5, Mistral, Command R+, ...). These can be used for free, but are quite limited: most models allow 8k input and 4k output tokens. Does this make sense, or is that context window too small anyway?
If you like the idea, I would modify this code to make it work in Omega and send a PR.
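For context, a minimal sketch of what such an integration could look like: GitHub Marketplace models are served through an OpenAI-compatible chat-completions endpoint that authenticates with a GitHub personal access token. The endpoint URL, the `GITHUB_TOKEN` environment variable, and the `gpt-4o` model id below are my assumptions about that service, not anything from the Omega codebase; the PR would wire this into Omega's existing model-selection code.

```python
# Sketch (stdlib only): building a request against the OpenAI-compatible
# chat-completions endpoint used by GitHub Marketplace models.
# Assumptions: the endpoint URL, that a GitHub personal access token in
# GITHUB_TOKEN is accepted as the bearer token, and the "gpt-4o" model id.
import json
import os
import urllib.request

ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"

def build_request(prompt: str, model: str = "gpt-4o") -> urllib.request.Request:
    # Most Marketplace models cap output at roughly 4k tokens, hence max_tokens.
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 4000,
    }).encode()
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ.get('GITHUB_TOKEN', '')}",
        },
    )

# Sending it would just be: urllib.request.urlopen(build_request("Hello"))
```

In practice Omega could likely reuse its existing OpenAI client and only swap the base URL and API key, which is what makes the change small enough for a PR.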
Cheers, Robert