OpenInterpreter / open-interpreter

A natural language interface for computers
http://openinterpreter.com/
GNU Affero General Public License v3.0

Cannot be used in venv python when moved. #271

Closed: jackfood closed this issue 8 months ago

jackfood commented 1 year ago

Describe the bug

I am using a Python virtual environment. I've successfully installed all the required packages and have it working in an internet-connected environment on my laptop. However, when I copied the models (CodeLlama 7B and 13B) along with the entire Python virtual environment to my non-internet-connected desktop using a portable HDD, I noticed that it still performs an online check. Since that check fails without a connection, it results in a blank screen.

I would like to clarify whether this setup requires an online handshake to function, or whether there are any other files besides the model stored in the local AppData folder (specifically the 'Open Interpreter' folder). I'm trying to figure out what I might have missed.

I really want to make it work on my non-internet-connected PC. Note that my non-internet-connected PC does not allow installing any software.

thanks

Reproduce

  1. Set up a virtual environment in Python.
  2. Download a ZIP file from GitHub and place it into the 'Scripts' folder within your virtual environment.
  3. Open your command prompt, activate the virtual environment, and run the command 'pip install open-interpreter'.
  4. Install 'llama-cpp-python' using 'pip', ensuring it installs successfully without any errors.
  5. Copy the entire Python folder along with the locally stored model files.
  6. Paste these files onto a non-internet-connected desktop, making sure the model files are correctly placed.
  7. When you run the interpreter, you get a blank black screen and a host-resolution error that retries the connection five times (a minimal reproduction outside Open Interpreter is sketched below).
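For context, the failure can be reproduced without Open Interpreter at all. This is a minimal sketch, assuming the offline machine has no previously cached tiktoken data:

```python
# On the offline machine, with an empty tiktoken cache, importing litellm
# (which open-interpreter does on startup) effectively runs this at import time:
import tiktoken

# tiktoken tries to fetch cl100k_base.tiktoken from
# openaipublic.blob.core.windows.net and raises a ProxyError / connection
# error when there is no internet access, which matches the traceback
# in "Additional context" below.
encoding = tiktoken.get_encoding("cl100k_base")
```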

Expected behavior

It should work using the local Llama model.

Screenshots

No response

Open Interpreter version

0.1.3

Python version

3.10

Operating System name and version

Windows 11

Additional context

Traceback (most recent call last):
  File "D:\Python\lib\runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "D:\Python\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "D:\Python\CodeInterpreter\Scripts\interpreter.exe\__main__.py", line 4, in <module>
  File "D:\Python\CodeInterpreter\lib\site-packages\interpreter\__init__.py", line 1, in <module>
    from .interpreter import Interpreter
  File "D:\Python\CodeInterpreter\lib\site-packages\interpreter\interpreter.py", line 35, in <module>
    import litellm
  File "D:\Python\CodeInterpreter\lib\site-packages\litellm\__init__.py", line 40, in <module>
    from .budget_manager import BudgetManager
  File "D:\Python\CodeInterpreter\lib\site-packages\litellm\budget_manager.py", line 2, in <module>
    from litellm.utils import ModelResponse
  File "D:\Python\CodeInterpreter\lib\site-packages\litellm\utils.py", line 11, in <module>
    encoding = tiktoken.get_encoding("cl100k_base")
  File "D:\Python\CodeInterpreter\lib\site-packages\tiktoken\registry.py", line 63, in get_encoding
    enc = Encoding(**constructor())
  File "D:\Python\CodeInterpreter\lib\site-packages\tiktoken_ext\openai_public.py", line 64, in cl100k_base
    mergeable_ranks = load_tiktoken_bpe(
  File "D:\Python\CodeInterpreter\lib\site-packages\tiktoken\load.py", line 116, in load_tiktoken_bpe
    contents = read_file_cached(tiktoken_bpe_file)
  File "D:\Python\CodeInterpreter\lib\site-packages\tiktoken\load.py", line 48, in read_file_cached
    contents = read_file(blobpath)
  File "D:\Python\CodeInterpreter\lib\site-packages\tiktoken\load.py", line 24, in read_file
    resp = requests.get(blobpath)
  File "D:\Python\CodeInterpreter\lib\site-packages\requests\api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "D:\Python\CodeInterpreter\lib\site-packages\requests\api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\Python\CodeInterpreter\lib\site-packages\requests\sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\Python\CodeInterpreter\lib\site-packages\requests\sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "D:\Python\CodeInterpreter\lib\site-packages\requests\adapters.py", line 513, in send
    raise ProxyError(e, request=request)
requests.exceptions.ProxyError: HTTPSConnectionPool(host='openaipublic.blob.core.windows.net', port=443): Max retries exceeded with url: /encodings/cl100k_base.tiktoken (Caused by ProxyError('Unable to connect to proxy', NameResolutionError("<urllib3.connection.HTTPSConnection object at 0x00000299B1F37D30>: Failed to resolve 'proxyx01.(masked).com.net' ([Errno 11001] getaddrinfo failed)")))
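The last frames show that importing litellm makes tiktoken download the cl100k_base encoding from openaipublic.blob.core.windows.net; that is the online check failing behind the offline proxy. One possible workaround is to prime tiktoken's on-disk cache while still on the internet-connected laptop and carry that folder over on the same HDD. A minimal sketch, assuming tiktoken honours the TIKTOKEN_CACHE_DIR environment variable (recent versions read it in tiktoken/load.py); the path D:\tiktoken_cache is a hypothetical example:

```python
# Run once on the internet-connected laptop to pre-download the encoding.
import os

# Hypothetical cache location; any writable folder works.
os.environ["TIKTOKEN_CACHE_DIR"] = r"D:\tiktoken_cache"

import tiktoken

# This call downloads cl100k_base.tiktoken into TIKTOKEN_CACHE_DIR
# (stored under a hashed file name) instead of a temp directory.
tiktoken.get_encoding("cl100k_base")
print("cached:", os.listdir(r"D:\tiktoken_cache"))
```

Then copy D:\tiktoken_cache to the offline desktop along with everything else, and set the same environment variable there before launching the interpreter.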

jordanbtucker commented 1 year ago

Thanks for reporting.

Related to #281

jackfood commented 1 year ago

Is there any workaround, while waiting for a fix, to skip the check for the Hugging Face LLM, assuming I already have the local GGUF model saved? Can someone help me hardcode the model path/directory and skip the online check? Greatly appreciated.
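Along those lines, one untested sketch for the offline PC is to point tiktoken at the cache folder copied from the laptop before the interpreter package (and therefore litellm) is imported, then switch to local mode. The cache path is hypothetical, and `interpreter.local` is an assumption based on the 0.1.x Python API, so check the docs for your exact version; this does not change where the GGUF model itself is looked up.

```python
# Offline PC: make tiktoken read its encoding from the copied cache folder.
import os

os.environ["TIKTOKEN_CACHE_DIR"] = r"D:\tiktoken_cache"  # hypothetical path

# Import after the environment variable is set, because importing the
# interpreter package imports litellm, which calls tiktoken at import time.
import interpreter

interpreter.local = True  # assumed 0.1.x switch for using the local model
interpreter.chat()
```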

MikeBirdTech commented 8 months ago

Closing this stale issue. Offline mode should work successfully now. Please create a new issue if the problem is not resolved or explained in the documentation. Thanks!