continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0

SSL self signed cert failure #453

Closed: tonydavis629 closed this issue 1 year ago

tonydavis629 commented 1 year ago

Describe the bug After an update approximately two weeks ago, I can't access the Continue server from a corporate network. I have the correct certs on my machine; I only need to know how to allow Continue to use them.

To Reproduce Steps to reproduce the behavior:

  1. Connect to Continue through a corporate network with SSL

Environment

Logs

[2023-09-05 14:08:46,045] [ERROR] Error while running step: 
Traceback (most recent call last):

  File "aiohttp/connector.py", line 980, in _wrap_create_connection

  File "asyncio/base_events.py", line 1112, in create_connection

  File "asyncio/base_events.py", line 1145, in _create_connection_transport

  File "asyncio/futures.py", line 287, in __await__

  File "asyncio/tasks.py", line 339, in __wakeup

  File "asyncio/futures.py", line 203, in result

  File "asyncio/sslproto.py", line 575, in _on_handshake_complete

  File "asyncio/sslproto.py", line 557, in _do_handshake

  File "ssl.py", line 979, in do_handshake

ssl.SSLCertVerificationError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)

The above exception was the direct cause of the following exception:

Traceback (most recent call last):

  File "continuedev/src/continuedev/libs/util/create_async_task.py", line 21, in callback
    future.result()

  File "asyncio/futures.py", line 203, in result

  File "asyncio/tasks.py", line 269, in __step

  File "continuedev/src/continuedev/core/autopilot.py", line 524, in create_title
    title = await self.continue_sdk.models.medium.complete(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "continuedev/src/continuedev/libs/llm/__init__.py", line 209, in complete
    completion = await self._complete(prompt=prompt, options=options)
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "continuedev/src/continuedev/libs/llm/maybe_proxy_openai.py", line 34, in _complete
    return await self.llm._complete(prompt, options)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

  File "continuedev/src/continuedev/libs/llm/proxy_server.py", line 53, in _complete
    async with self._client_session.post(

  File "aiohttp/client.py", line 1141, in __aenter__

  File "aiohttp/client.py", line 536, in _request

  File "aiohttp/connector.py", line 540, in connect

  File "aiohttp/connector.py", line 901, in _create_connection

  File "aiohttp/connector.py", line 1209, in _create_direct_connection

  File "aiohttp/connector.py", line 1178, in _create_direct_connection

  File "aiohttp/connector.py", line 982, in _wrap_create_connection

aiohttp.client_exceptions.ClientConnectorCertificateError: Cannot connect to host proxy-server-l6vsfbzhba-uw.a.run.app:443 ssl:True [SSLCertVerificationError: (1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self-signed certificate in certificate chain (_ssl.c:1002)')]
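The failure above can be reproduced outside the extension with a few lines of stdlib Python, which is a quick way to check whether a given CA bundle can validate the proxy host at all. This is a diagnostic sketch, not part of Continue; the host and bundle path below are placeholders:

```python
import socket
import ssl

def check_handshake(host, cafile=None, port=443):
    """Attempt a TLS handshake against host; return the negotiated TLS
    version on success, or the verification error message on failure."""
    ctx = ssl.create_default_context(cafile=cafile)
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                return tls.version()
    except ssl.SSLCertVerificationError as e:
        return "verify failed: " + e.verify_message

# Example (requires network; placeholder values):
#   check_handshake("proxy-server-l6vsfbzhba-uw.a.run.app")
#   check_handshake("proxy-server-l6vsfbzhba-uw.a.run.app",
#                   cafile="/path/to/corporate-ca-bundle.pem")
```

Behind a TLS-intercepting corporate proxy, the first call typically fails with the same CERTIFICATE_VERIFY_FAILED error as in the log above, while passing the corporate bundle as cafile should succeed if the bundle is correct.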

To get the Continue server logs:

  1. cmd+shift+p (macOS) / ctrl+shift+p (Windows)
  2. Search for and then select "Continue: View Continue Server Logs"
  3. Scroll to the bottom of continue.log and copy the last 100 lines or so

To get the VS Code console logs:

  1. cmd+shift+p (macOS) / ctrl+shift+p (Windows)
  2. Search for and then select "Developer: Toggle Developer Tools"
  3. Select Console
  4. Read the console logs

If the problem is related to LLM prompting:

  1. Hover the problematic response in the Continue UI
  2. Click the "magnifying glass" icon
  3. Copy the contents of the continue_logs.txt file that opens


sestinj commented 1 year ago

Hey Tony,

You can set the ca_bundle_path option for the model in ~/.continue/config.py (type /config to open it). Right now we don't have this parameter for MaybeProxyOpenAI (the class that allows temporary free usage), but if you try this with your own OpenAI key and it works, I'd gladly add the parameter there. Here's what you can do:

from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.core.models import Models
from continuedev.src.continuedev.libs.llm.openai import OpenAI

config = ContinueConfig(
    ...
    models=Models(
        default=OpenAI(
            api_key="<API_KEY>",
            model="gpt-4",
            ca_bundle_path="<PATH_TO_CA_BUNDLE>"
        )
    )
)

tonydavis629 commented 1 year ago

Sorry, but I get the same error after adding ca_bundle_path. The certificates work elsewhere.

sestinj commented 1 year ago

Ok, thanks for letting me know. This is really weird, but it looks like openai has this parameter and simply doesn't pay attention to it (https://github.com/openai/openai-python/blob/1be14ee34a0f8e42d3f9aa5451aa4cb161f1781f/openai/__init__.py#L60).

I have an idea for how to work around this; I'll make an update later today and then share it.

sestinj commented 1 year ago

@tonydavis629 to get around this, you can set up your default model in the config like this:

default=GGML(server_url="https://api.openai.com", api_key="API_KEY", model="gpt-4", ca_bundle_path="...", template_messages=None)

The GGML class makes aiohttp requests directly instead of going through the OpenAI Python library, allowing ca_bundle_path to actually take effect.
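For context, this works because aiohttp accepts a caller-supplied SSLContext, whereas the OpenAI client at the time did not honor one. A minimal sketch of the mechanism, with illustrative names (make_connector and post_completion are not Continue's actual internals):

```python
import ssl
import aiohttp

def make_connector(ca_bundle_path=None):
    # Build an SSLContext that trusts the given CA bundle (or the system
    # defaults when no path is given) and hand it to aiohttp.
    ssl_context = ssl.create_default_context(cafile=ca_bundle_path)
    return aiohttp.TCPConnector(ssl=ssl_context)

async def post_completion(url, payload, ca_bundle_path=None):
    # Every request made through this session verifies the server against
    # the custom bundle, which is what ca_bundle_path enables.
    async with aiohttp.ClientSession(connector=make_connector(ca_bundle_path)) as session:
        async with session.post(url, json=payload) as resp:
            resp.raise_for_status()
            return await resp.json()
```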

sestinj commented 1 year ago

Hi @tonydavis629, just wanted to check in on this. Any luck with GGML? Let me know what else I can do to help if there are other problems!

dan-nadler commented 1 year ago

@sestinj I am having the same issue and tried GGML - I've tried to set it up to hit Azure, but I'm not getting any response, and also no error message, unfortunately.

I'm guessing it's not properly formatting the Azure URLs, but I'm not sure.

sestinj commented 1 year ago

@dan-nadler could you share what your config.py looks like (of course leaving out any private details)?

dan-nadler commented 1 year ago

@sestinj Here you go:


"""
This is the Continue configuration file.

See https://continue.dev/docs/customization for documentation of the available options.
"""
from continuedev.libs.llm.openai import OpenAI
from continuedev.libs.llm.ggml import GGML

from continuedev.core.models import Models
from continuedev.core.config import CustomCommand, SlashCommand, ContinueConfig
from continuedev.libs.llm import OpenAIFreeTrial

from continuedev.plugins.context_providers import (
    DiffContextProvider,
    TerminalContextProvider,
    URLContextProvider,
    GitHubIssuesContextProvider,
)
from continuedev.plugins.steps import (
    ClearHistoryStep,
    CommentCodeStep,
    EditHighlightedCodeStep,
    GenerateShellCommandStep,
    OpenConfigStep,
)
from continuedev.plugins.steps.share_session import ShareSessionStep

config = ContinueConfig(
    allow_anonymous_telemetry=True,
    models=Models(
        default=GGML(
            title='ggml-test',
            server_url="https://****.openai.azure.com", 
            api_key="****",
            api_base="https://****.openai.azure.com/",
            api_type="azure",
            api_version="2023-07-01-preview",
            engine="gpt-4",
            model="gpt-4",
            ca_bundle_path="/path/to/ca-bundle.crt", 
            template_messages=None),
        saved=[
            OpenAIFreeTrial(model="gpt-4", system_message=""),
            OpenAIFreeTrial(model="gpt-3.5-turbo", system_message=""),
        ],
    ),
    system_message="",
    temperature=0.5,
    custom_commands=[
        CustomCommand(
            name="test",
            description="Write unit tests for highlighted code",
            prompt="Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
        )
    ],
    slash_commands=[
        SlashCommand(
            name="edit",
            description="Edit highlighted code",
            step=EditHighlightedCodeStep,
        ),
        SlashCommand(
            name="config",
            description="Customize Continue",
            step=OpenConfigStep,
        ),
        SlashCommand(
            name="comment",
            description="Write comments for the highlighted code",
            step=CommentCodeStep,
        ),
        SlashCommand(
            name="clear",
            description="Clear step history",
            step=ClearHistoryStep,
        ),
        SlashCommand(
            name="share",
            description="Download and share this session",
            step=ShareSessionStep,
        ),
        SlashCommand(
            name="cmd",
            description="Generate a shell command",
            step=GenerateShellCommandStep,
        ),
    ],
    context_providers=[
        # GitHubIssuesContextProvider(
        #     repo_name="<your github username or organization>/<your repo name>",
        #     auth_token="<your github auth token>"
        # ),
        DiffContextProvider(),
        URLContextProvider(
            preset_urls=[
                # Add any common urls you reference here so they appear in autocomplete
            ]
        ),
        TerminalContextProvider(),
    ],
)

For what it's worth - I've used this at home and I think it's a great project 👍

sestinj commented 1 year ago

Thanks! I think I see the problem, but it's on our end: the OpenAI Python client completely ignores the ca_bundle_path parameter even though they expose the option, hence having to go through the GGML class, which makes the requests manually. But GGML doesn't know what to do with the engine param. I'll see if I can get a quick fix out today.
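For anyone following along: the reason engine matters is that Azure OpenAI routes requests to a named deployment in the URL path and requires an explicit api-version query parameter, unlike api.openai.com. A sketch of the URL a client has to construct (the helper name is illustrative and the values are placeholders):

```python
def azure_chat_completions_url(api_base, engine, api_version):
    # Azure OpenAI addresses a deployment (the `engine` here) in the path
    # and takes the API version as a query parameter.
    return (
        api_base.rstrip("/")
        + "/openai/deployments/" + engine
        + "/chat/completions?api-version=" + api_version
    )

# With placeholder values like those in the config above:
print(azure_chat_completions_url(
    "https://myresource.openai.azure.com/", "gpt-4", "2023-07-01-preview"
))
# -> https://myresource.openai.azure.com/openai/deployments/gpt-4/chat/completions?api-version=2023-07-01-preview
```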

dan-nadler commented 1 year ago

That's great, thank you!

sestinj commented 1 year ago

@dan-nadler I just uploaded a new version (a pre-release, v0.1.11) that will let you use GGML for Azure OpenAI. Here's what your config.py could look like, for example (no need for server_url):

GGML(
    title="ggml-test",
    api_key="****",
    api_base="https://****.openai.azure.com/",
    api_type="azure",
    api_version="2023-07-01-preview",
    engine="gpt-4",
    model="gpt-4",
    ca_bundle_path="/path/to/ca-bundle.crt",
    template_messages=None,
)

dan-nadler commented 1 year ago

Success! Thanks a lot! @sestinj

sestinj commented 1 year ago

Fantastic!

sestinj commented 1 year ago

I'm going to close this issue since the problems look to have been solved, but @tonydavis629, if you're still experiencing the issue after using the GGML class, please let me know and feel free to re-open!

surak commented 1 month ago

Is there a way to let insecure connections go through? I'm trying out a server and I have yet to get a certificate for it.