Closed SeungyounShin closed 11 months ago
Hey there, @SeungyounShin!
Have you tried loading your model into Open Interpreter to see how it performs?
Since Open Interpreter now integrates LiteLLM, you can use the --model
argument to point it at your model and experiment with it.
Example:
interpreter --model "Seungyoun/codellama-7b-instruct-pad"
Always looking for capable models to recommend to folks.
Going to close this one for now, but please feel encouraged to reopen it if you want to test it out and discuss this further.
Is your feature request related to a problem? Please describe.
I've observed that certain challenges users have faced may not be solely attributable to open-interpreter itself, as referenced in:
Issue #308, Issue #166
Moreover, I noticed potential alignment issues in the Llama model's approach to code interpretation.
Describe the solution you'd like
I suggest a collaboration to integrate the model I've developed, Llama2-Code-Interpreter (available on HuggingFace), tailored specifically for code generation, execution, and debugging trajectories. The model is fine-tuned on GPT-4-generated trajectories and uses specialized tokens to guide its behavior. Here are some essential insights to consider.
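To make the specialized-token idea concrete, here is a minimal, hypothetical sketch of how control tokens could delimit the generate → execute → debug trajectory in a decoded model output. The token names (CODE_START, RESULT_START, etc.) are illustrative assumptions, not the model's actual vocabulary:

```python
# Hypothetical control tokens; the real model's special tokens may differ.
CODE_START, CODE_END = "<CODE_START>", "<CODE_END>"
RESULT_START, RESULT_END = "<RESULT_START>", "<RESULT_END>"

def split_trajectory(decoded: str):
    """Split a decoded model output into (kind, text) segments,
    where kind is 'text', 'code', or 'result'."""
    markers = [(CODE_START, CODE_END, "code"),
               (RESULT_START, RESULT_END, "result")]
    segments, pos = [], 0
    while pos < len(decoded):
        # Find the nearest opening marker at or after `pos`.
        nxt = min(
            ((decoded.find(s, pos), s, e, kind)
             for s, e, kind in markers if decoded.find(s, pos) != -1),
            default=None,
            key=lambda t: t[0],
        )
        if nxt is None:
            # No more markers: the rest is plain text.
            if decoded[pos:].strip():
                segments.append(("text", decoded[pos:].strip()))
            break
        start, s, e, kind = nxt
        if decoded[pos:start].strip():
            segments.append(("text", decoded[pos:start].strip()))
        end = decoded.find(e, start + len(s))
        body = decoded[start + len(s):end if end != -1 else len(decoded)]
        segments.append((kind, body.strip()))
        pos = (end + len(e)) if end != -1 else len(decoded)
    return segments

output = ("Let me compute that. <CODE_START>print(2 + 2)<CODE_END>"
          "<RESULT_START>4<RESULT_END> The answer is 4.")
for kind, text in split_trajectory(output):
    print(kind, "->", text)
```

In a real integration, the interpreter loop would route "code" segments to an executor and feed the execution result back into the prompt wrapped in the result tokens, so the model can continue (or debug) from there.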
Describe alternatives you've considered
I tried many approaches (prefix tuning, prompting, etc.), but methods that don't fine-tune the weights don't work.
Additional context
I'm eager to discuss potential avenues of collaboration. If this proposal aligns with the project's vision, perhaps we could arrange a Zoom meeting to delve deeper into the specifics.