continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0

Issue running codellama/phind-codellama based model with FastChat OpenAI API #491

Closed brandonbiggs closed 1 year ago

brandonbiggs commented 1 year ago

Describe the bug When attempting to follow the FastChat OpenAI API setup instructions, I get the following error: openai.error.APIConnectionError: Error communicating with OpenAI

To Reproduce

  1. Setup a codellama model with fastchat
  2. Follow the fastchat-api instructions
  3. Change your configuration per the documentation
  4. See error
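For reference, a typical FastChat OpenAI-compatible server setup looks roughly like the following (the model path and port match the config below; exact flags may differ between FastChat versions):

```shell
# Start the FastChat controller
python3 -m fastchat.serve.controller

# In a second terminal: load the model into a worker
python3 -m fastchat.serve.model_worker \
    --model-path codellama/CodeLlama-7b-Instruct-hf

# In a third terminal: expose the OpenAI-compatible REST API on port 8000
python3 -m fastchat.serve.openai_api_server --host localhost --port 8000
```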

Environment

Config Changes Per Documentation

config = ContinueConfig(
    ...
    models=Models(
        default=OpenAI(
            model="CodeLlama-7b-Instruct-hf",
            openai_server_info={'api_base': 'http://localhost:8000/v1'},
            context_length=4096,
            timeout=5000,
            prompt_templates={},
            api_key=""
        ),
    ...

Logs

Traceback (most recent call last):
  File "aiohttp/connector.py", line 980, in _wrap_create_connection
  File "asyncio/base_events.py", line 1112, in create_connection
  File "asyncio/base_events.py", line 1145, in _create_connection_transport
  File "asyncio/futures.py", line 287, in __await__
  File "asyncio/tasks.py", line 339, in __wakeup
  File "asyncio/futures.py", line 203, in result
  File "asyncio/sslproto.py", line 575, in _on_handshake_complete
  File "asyncio/sslproto.py", line 557, in _do_handshake
  File "ssl.py", line 979, in do_handshake
ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1002)
The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "openai/api_requestor.py", line 588, in arequest_raw
  File "aiohttp/client.py", line 536, in _request
  File "aiohttp/connector.py", line 540, in connect
  File "aiohttp/connector.py", line 901, in _create_connection
  File "aiohttp/connector.py", line 1209, in _create_direct_connection
  File "aiohttp/connector.py", line 1178, in _create_direct_connection
  File "aiohttp/connector.py", line 982, in _wrap_create_connection
aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host api.openai.com:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: unable to get local issuer certificate (_ssl.c:1002)')]
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
  File "continuedev/src/continuedev/core/autopilot.py", line 387, in _run_singular_step
    observation = await step(self.continue_sdk)
                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "continuedev/src/continuedev/core/main.py", line 368, in __call__
    return await self.run(sdk)
           ^^^^^^^^^^^^^^^^^^^
  File "continuedev/src/continuedev/plugins/steps/chat.py", line 106, in run
    async for chunk in generator:
  File "continuedev/src/continuedev/libs/llm/__init__.py", line 322, in stream_chat
    async for chunk in self._stream_chat(messages=messages, options=options):
  File "continuedev/src/continuedev/libs/llm/openai.py", line 130, in _stream_chat
    async for chunk in await openai.ChatCompletion.acreate(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "openai/api_resources/chat_completion.py", line 45, in acreate
  File "openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
  File "openai/api_requestor.py", line 300, in arequest
  File "openai/api_requestor.py", line 605, in arequest_raw
openai.error.APIConnectionError: Error communicating with OpenAI
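Note what the traceback reveals: the client is negotiating TLS with api.openai.com:443 rather than talking to localhost:8000, which suggests the `openai_server_info` kwarg was silently ignored and the library fell back to its default base URL. A minimal stdlib sketch of that fallback (`DEFAULT_API_BASE` and `resolve_host` are illustrative names here, not Continue or openai-python internals):

```python
from urllib.parse import urlparse

# openai-python v0.x falls back to this base URL when api_base is not set
DEFAULT_API_BASE = "https://api.openai.com/v1"

def resolve_host(api_base=None):
    """Return the host the client would actually connect to."""
    return urlparse(api_base or DEFAULT_API_BASE).hostname

# An ignored/missing override means every request hits api.openai.com:443,
# which is exactly the host shown in the SSL error above
print(resolve_host())                            # api.openai.com
print(resolve_host("http://localhost:8000/v1"))  # localhost
```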
sestinj commented 1 year ago

Looks like our documentation is out of date, sorry about that! The new format for your config should look like this:

config = ContinueConfig(
    ...
    models=Models(
        default=OpenAI(
            model="CodeLlama-7b-Instruct-hf",
            api_base='http://localhost:8000/v1',
            context_length=4096,
            timeout=5000,
            prompt_templates={},
            api_key=""
        ),
    ...

Also, just out of curiosity, what led you to choose FastChat as the model provider? Had you already been using it for other purposes? If you're not tied to it, there are other options that tend to have easier setup; for example, Ollama and LM Studio will both work on Mac.

brandonbiggs commented 1 year ago

I had just barely figured that out from a different issue and was coming to add that. Thank you! That format seems to work great.

> Also, just out of curiosity, what led you to choose FastChat as the model provider? Had you already been using it for other purposes? If you're not tied to it, there are other options that tend to have easier setup; for example, Ollama and LM Studio will both work on Mac.

Good question. My setup is actually a little more complex. I'm running my LLMs on a Linux system with FastChat, and I'm setting up port forwarding from my Mac to the Linux system that is running FastChat. So while Continue is running on my Mac, my "local" LLM is actually running on a different server. I'm able to point the Continue config at localhost because of the port forwarding.
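For anyone replicating this setup, the forwarding step can be as simple as an SSH local port forward (the user and hostname here are placeholders):

```shell
# Forward local port 8000 to port 8000 on the machine running FastChat;
# -N skips running a remote command, so this just holds the tunnel open
ssh -N -L 8000:localhost:8000 user@llm-server
```

With the tunnel up, requests to http://localhost:8000/v1 on the Mac are carried to the FastChat server, which is why the config above can keep pointing at localhost.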

sestinj commented 1 year ago

Nice! I'm just updating the documentation, and then it seems this is resolved, but I'll keep the issue open for a couple of days in case anything else pops up.

Makes sense. Were there any alternatives you considered before settling on FastChat as the best? Good to know in case anyone asks about a similar scenario.

brandonbiggs commented 1 year ago

I looked at a couple of other tools, but I've had the best luck with FastChat. It seems to scale pretty well and has been flexible for what I've needed. Overall I've been super happy with it. But there are so many tools, and everything is changing so fast, it's hard to keep up.

sestinj commented 1 year ago

Nice, makes a lot of sense. Definitely a dynamic space!

sestinj commented 1 year ago

Closing since nothing new came up, but please stay in touch if anything goes wrong or if you have questions! Always welcome to ask in our Discord if it's easier