Open 6rz6 opened 6 months ago
Instructions for Setting Up a Model File on OLLAMA:
You can address some of the current system issues by creating and using a custom model file. Here’s a simple example of how to set up a model file using OLLAMA. This script should be run in your terminal:
# Pull your chosen base model from OLLAMA
ollama pull <base_model_name>
# Create a Modelfile specifying the base model and setting up system attributes
echo "FROM <base_model_name>" > Modelfile
echo "SYSTEM You are a friendly assistant. You embody the characteristics of a helpful, knowledgeable assistant with a strong emphasis on user interaction and problem-solving." >> Modelfile
# Create the new model using the Modelfile
ollama create -f Modelfile <your_model_name>
# Push the new model to the repository
ollama push <your_model_name>
Customization Instructions:
<base_model_name>
with the name of the model you wish to use as your base (e.g., llava
or wizard-vicuna
).<your_model_name>
with the name you choose for your newly created model.This approach helps bypass issues related to system message length and dependency errors by allowing you to configure a simplified, efficient setup that's tailored to your needs. Running these commands in the terminal will create a more manageable environment, potentially reducing operational costs and enhancing performance stability.
Additional Tips:
By following these steps, you can create a more streamlined and effective model setup that addresses specific bugs and improves overall system responsiveness.
package-lock.json
I see now the working formart is :parameter: its good to know, and the size if the prompt looks totally rational. I wish the base came with an extra example ontop of the fast,empty ,OS and way too long ones it comes with. Would you be able to post your's as text ? Even if its just the infrastructure template i will be grateful 🙏
Instructions for Setting Up a Model File on OLLAMA:
You can address some of the current system issues by creating and using a custom model file. Here’s a simple example of how to set up a model file using OLLAMA. This script should be run in your terminal:
# Pull your chosen base model from OLLAMA ollama pull <base_model_name> # Create a Modelfile specifying the base model and setting up system attributes echo "FROM <base_model_name>" > Modelfile echo "SYSTEM You are a friendly assistant. You embody the characteristics of a helpful, knowledgeable assistant with a strong emphasis on user interaction and problem-solving." >> Modelfile # Create the new model using the Modelfile ollama create -f Modelfile <your_model_name> # Push the new model to the repository ollama push <your_model_name>
Customization Instructions:
- Replace
<base_model_name>
with the name of the model you wish to use as your base (e.g.,llava
orwizard-vicuna
).- Replace
<your_model_name>
with the name you choose for your newly created model.This approach helps bypass issues related to system message length and dependency errors by allowing you to configure a simplified, efficient setup that's tailored to your needs. Running these commands in the terminal will create a more manageable environment, potentially reducing operational costs and enhancing performance stability.
Additional Tips:
- Ensure your terminal is set up to connect to OLLAMA’s repositories.
- Verify that all placeholders are correctly replaced with actual values before executing the commands.
By following these steps, you can create a more streamlined and effective model setup that addresses specific bugs and improves overall system responsiveness.
Honestly ollama is the only provider which im able to use with CLI call using -ab ip:11434 -o -m=ollama/model I wish lite llm would accept all other models like that or at least the formatting for lite llm would be done on our side once the op-in gets the params. I see no reason why any model shouldn't utilize exactly the same syntax which work perfect with openai and ollama. Try to use COHERE or Huggingfacehub or mistralai in the same format in CLI and it breaks. 🤔 the config files help but even with them i find myself struggling for hours instead of just using open intererpreter as an extention of myself like it used to be.
When called from python for example activated by crewAI it work way better than the way i use it from CLI, maybe the solution will be instead of the aliases i have for everymode in CLI to have py code for each although if i have py code for each then instead of me distributing it we should merge it to the base model as initially suggested
I wish the code was strict as the rules ))
Absolutely, I resonate with your points on CLI usability across different models. I primarily use oh-my-zsh and spaceship for OLLAMA models, which simplifies things, but I’ve also noticed inconsistencies when trying to apply similar CLI formats with COHERE or Hugging Face. It often feels like instead of leveraging the interpreter as an extension of my capabilities, I'm stuck configuring endlessly.
I completely agree that having a uniform syntax for model interaction could vastly improve usability. It would reduce the learning curve and make tool integration more seamless across different platforms.
Regarding config files, they sometimes help but often add another layer of complexity. Simplifying CLI interaction to make it as intuitive as using Python scripts directly might be a better approach for consistency and efficiency. Perhaps advocating for integrating these scripts into the base model, as you suggested, could be a step towards standardizing model interactions.
I wish the CLI interactions were as strict and standardized as coding best practices, making our work much more straightforward!
When called from python for example activated by crewAI it work way better than the way i use it from CLI, maybe the solution will be instead of the aliases i have for everymode in CLI to have py code for each although if i have py code for each then instead of me distributing it we should merge it to the base model as initially suggested
`** please run a test command for me
Certainly! I'd be happy to run a test command for you. Let's start with a simple Python command to print a "Hello, World!" message. This will help us confirm that the code execution is working correctly.
Here's the plan:
1 Write a simple Python script to print "Hello, World!" 2 Execute the script using the provided execute function 3 Observe the output
Let's proceed with the plan:
execute("python", "print('Hello, World!')")
This command should execute a Python script that prints "Hello, World!" to the console. Let's see the output.
Now, let's recap the plan:
1 We wrote a simple Python script to print "Hello, World!" 2 We executed the script using the provided execute function 3 We should now observe the output
If you see "Hello, World!" printed in the output, it means the test command was successful, and the code execution is working as expected. Is there anything else you'd like me to test or any other commands you'd like me to run?
didn't work, make sure you execute the code
I apologize for the oversight. You're absolutely right, and I thank you for pointing that out. Let's correct this and actually execute the code.
Here's our revised plan:
1 Write a simple Python script to print "Hello, World!" 2 Actually execute the script using the provided execute function 3 Observe the output
Now, let's execute the code:
execute("python", "print('Hello, World!')")
Let's recap our plan:
1 We've written a simple Python script to print "Hello, World!" 2 We've now actually executed the script using the execute function 3 We should now be able to observe the output
You should see "Hello, World!" printed in the output. This test command should confirm that code execution is working correctly.
If you see the expected output, great! If not, or if you'd like me to try something else, please let me know, and I'll be happy to assist further.
**`
Describe the bug
Reproduce
Expected behavior
Screenshots
No response
Open Interpreter version
0.2.0 and above
Python version
3.11
Operating System name and version
Wsl2 on win10
Additional context
I can gladly help, i dug very deep into all the core files to try fix these issues. Thank you. @6rz6