OpenInterpreter / open-interpreter

A natural language interface for computers
http://openinterpreter.com/
GNU Affero General Public License v3.0
50.58k stars 4.41k forks source link

Adding the 'Computer' destroyed open intererpreter which was the best product i used #1255

Open 6rz6 opened 2 months ago

6rz6 commented 2 months ago

Describe the bug

  1. The default system msg is irrationally long and no model except gpt4 preview can even load it. It cost a $1 per minute of use sending this 10,000 token msg each time.
  2. After a full install using complete still when doing first thing --profiles there is a dependency errror related to opening files.
  3. Complete inability to integrate any model without litellm 'crying' about the format it expects using the CLI with the (model=huggingface.. ) even when i do it 1 to 1 as it wants using hugging face hub instead of hugging face , it does recognize the hub key, not in the env and not using -ak in the CLI.
  4. The rapid change to the format of the config files frorm version to version makes it neary impossible to adapt previous version system,profile any ymal which was working perfectly fine before the update and worst of all it deletes the content of the config while sarcastically claiming the content will not be effected during the migration to the new format.

Reproduce

  1. Install on a fresh machine a complete install and run --profiles. Bug.
  2. Try to use a default.yaml from version 0.21.0 and see it deleted instead of migrated to the new 0.25.0 version.
  3. Try to use an ollama,huggingface HUB model and see litellm unhandled errors which also not handeled by open-interpreter (its a simple string manipulation, handle it dont raise a kernel error)
  4. Use a model like gpt 3.5 turbo and see how from being a super capable tool in v0.19.0 it become a lazy bordering stupid and un reliable with zero memory which keeps saying task is done after every command while task is not even stated, just even after it recap and change directory as step 1 he already forgets (context windows full) the second item in it recap.

Expected behavior

  1. Fully operational system and profile default files. Not exceed 1000 to 2000 tokens.
  2. Add Option 4 in the install fully adapted to local / custom non openai models.
  3. Fix the yaml migration of the config files to really migrate not deleting and without backing up the old version overwriting(!) It. Which is un restoreable.

Screenshots

No response

Open Interpreter version

0.2.0 and above

Python version

3.11

Operating System name and version

Wsl2 on win10

Additional context

I can gladly help, i dug very deep into all the core files to try fix these issues. Thank you. @6rz6

i-hodl commented 1 month ago

Instructions for Setting Up a Model File on OLLAMA:

You can address some of the current system issues by creating and using a custom model file. Here’s a simple example of how to set up a model file using OLLAMA. This script should be run in your terminal:

# Pull your chosen base model from OLLAMA
ollama pull <base_model_name>

# Create a Modelfile specifying the base model and setting up system attributes
echo "FROM <base_model_name>" > Modelfile
echo "SYSTEM You are a friendly assistant. You embody the characteristics of a helpful, knowledgeable assistant with a strong emphasis on user interaction and problem-solving." >> Modelfile

# Create the new model using the Modelfile
ollama create -f Modelfile <your_model_name>

# Push the new model to the repository
ollama push <your_model_name>

Customization Instructions:

This approach helps bypass issues related to system message length and dependency errors by allowing you to configure a simplified, efficient setup that's tailored to your needs. Running these commands in the terminal will create a more manageable environment, potentially reducing operational costs and enhancing performance stability.

Additional Tips:

By following these steps, you can create a more streamlined and effective model setup that addresses specific bugs and improves overall system responsiveness.

hossain666 commented 1 month ago

https://github.com/github-linguist/linguist/issues/6811#issue-2279273576

hossain666 commented 1 month ago

hossain666 commented 1 month ago

package-lock.json

6rz6 commented 1 month ago

comparison.gif

I see now the working formart is :parameter: its good to know, and the size if the prompt looks totally rational. I wish the base came with an extra example ontop of the fast,empty ,OS and way too long ones it comes with. Would you be able to post your's as text ? Even if its just the infrastructure template i will be grateful 🙏

6rz6 commented 1 month ago

Instructions for Setting Up a Model File on OLLAMA:

You can address some of the current system issues by creating and using a custom model file. Here’s a simple example of how to set up a model file using OLLAMA. This script should be run in your terminal:

# Pull your chosen base model from OLLAMA
ollama pull <base_model_name>

# Create a Modelfile specifying the base model and setting up system attributes
echo "FROM <base_model_name>" > Modelfile
echo "SYSTEM You are a friendly assistant. You embody the characteristics of a helpful, knowledgeable assistant with a strong emphasis on user interaction and problem-solving." >> Modelfile

# Create the new model using the Modelfile
ollama create -f Modelfile <your_model_name>

# Push the new model to the repository
ollama push <your_model_name>

Customization Instructions:

  • Replace <base_model_name> with the name of the model you wish to use as your base (e.g., llava or wizard-vicuna).
  • Replace <your_model_name> with the name you choose for your newly created model.

This approach helps bypass issues related to system message length and dependency errors by allowing you to configure a simplified, efficient setup that's tailored to your needs. Running these commands in the terminal will create a more manageable environment, potentially reducing operational costs and enhancing performance stability.

Additional Tips:

  • Ensure your terminal is set up to connect to OLLAMA’s repositories.
  • Verify that all placeholders are correctly replaced with actual values before executing the commands.

By following these steps, you can create a more streamlined and effective model setup that addresses specific bugs and improves overall system responsiveness.

Honestly ollama is the only provider which im able to use with CLI call using -ab ip:11434 -o -m=ollama/model I wish lite llm would accept all other models like that or at least the formatting for lite llm would be done on our side once the op-in gets the params. I see no reason why any model shouldn't utilize exactly the same syntax which work perfect with openai and ollama. Try to use COHERE or Huggingfacehub or mistralai in the same format in CLI and it breaks. 🤔 the config files help but even with them i find myself struggling for hours instead of just using open intererpreter as an extention of myself like it used to be.

6rz6 commented 1 month ago

When called from python for example activated by crewAI it work way better than the way i use it from CLI, maybe the solution will be instead of the aliases i have for everymode in CLI to have py code for each although if i have py code for each then instead of me distributing it we should merge it to the base model as initially suggested

6rz6 commented 1 month ago

github-linguist/linguist#6811 (comment)

I wish the code was strict as the rules ))

i-hodl commented 1 month ago

Absolutely, I resonate with your points on CLI usability across different models. I primarily use oh-my-zsh and spaceship for OLLAMA models, which simplifies things, but I’ve also noticed inconsistencies when trying to apply similar CLI formats with COHERE or Hugging Face. It often feels like instead of leveraging the interpreter as an extension of my capabilities, I'm stuck configuring endlessly.

I completely agree that having a uniform syntax for model interaction could vastly improve usability. It would reduce the learning curve and make tool integration more seamless across different platforms.

Regarding config files, they sometimes help but often add another layer of complexity. Simplifying CLI interaction to make it as intuitive as using Python scripts directly might be a better approach for consistency and efficiency. Perhaps advocating for integrating these scripts into the base model, as you suggested, could be a step towards standardizing model interactions.

I wish the CLI interactions were as strict and standardized as coding best practices, making our work much more straightforward!

When called from python for example activated by crewAI it work way better than the way i use it from CLI, maybe the solution will be instead of the aliases i have for everymode in CLI to have py code for each although if i have py code for each then instead of me distributing it we should merge it to the base model as initially suggested

hossain666 commented 1 week ago

`** please run a test command for me

Certainly! I'd be happy to run a test command for you. Let's start with a simple Python command to print a "Hello, World!" message. This will help us confirm that the code execution is working correctly.

Here's the plan:

1 Write a simple Python script to print "Hello, World!" 2 Execute the script using the provided execute function 3 Observe the output

Let's proceed with the plan:

execute("python", "print('Hello, World!')")

This command should execute a Python script that prints "Hello, World!" to the console. Let's see the output.

Now, let's recap the plan:

1 We wrote a simple Python script to print "Hello, World!" 2 We executed the script using the provided execute function 3 We should now observe the output

If you see "Hello, World!" printed in the output, it means the test command was successful, and the code execution is working as expected. Is there anything else you'd like me to test or any other commands you'd like me to run?

didn't work, make sure you execute the code

I apologize for the oversight. You're absolutely right, and I thank you for pointing that out. Let's correct this and actually execute the code.

Here's our revised plan:

1 Write a simple Python script to print "Hello, World!" 2 Actually execute the script using the provided execute function 3 Observe the output

Now, let's execute the code:

execute("python", "print('Hello, World!')")

Let's recap our plan:

1 We've written a simple Python script to print "Hello, World!" 2 We've now actually executed the script using the execute function 3 We should now be able to observe the output

You should see "Hello, World!" printed in the output. This test command should confirm that code execution is working correctly.

If you see the expected output, great! If not, or if you'd like me to try something else, please let me know, and I'll be happy to assist further.

**`