continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0

The previous method no longer works, so I'm asking again: how do I customize the address for https://api.openai.com/? #331

Open aston314 opened 1 year ago

aston314 commented 1 year ago

302

Thanks.

sestinj commented 1 year ago

We've since updated some details of the config file. Here's an example if you want to change the base to Azure OpenAI Service: https://continue.dev/docs/customization#azure-openai-service

But if you're just looking to change the base, you would do something like this:

from continuedev.core.config import ContinueConfig
from continuedev.core.models import Models
from continuedev.libs.llm.openai import OpenAI, OpenAIServerInfo

config = ContinueConfig(
    ...
    models=Models(
        default=OpenAI(model="gpt-4", server_info=OpenAIServerInfo(
            api_base="https://my-api-base.com/"
        ))
    )
)
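If the base still seems to be ignored, one quick sanity check is that the value passed as api_base is a well-formed absolute URL; a malformed value can produce confusing connection errors. A small standalone sketch (nothing here is Continue-specific, and the helper name is made up for illustration):

```python
from urllib.parse import urlparse

def validate_api_base(api_base: str) -> str:
    # Reject values without an http(s) scheme or a hostname, which no
    # HTTP client would be able to connect to.
    parsed = urlparse(api_base)
    if parsed.scheme not in ("http", "https") or not parsed.netloc:
        raise ValueError(f"api_base does not look like a usable URL: {api_base!r}")
    # Trailing slashes are usually harmless but normalizing them makes
    # comparisons against logs easier.
    return api_base.rstrip("/")

print(validate_api_base("https://my-api-base.com/"))
```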
aston314 commented 1 year ago

It doesn't work.

aston314 commented 1 year ago

Traceback (most recent call last):
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/aiohttp/connector.py", line 980, in _wrap_create_connection
    return await self._loop.create_connection(*args, **kwargs)  # type: ignore[return-value]  # noqa
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1112, in create_connection
    transport, protocol = await self._create_connection_transport(
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/base_events.py", line 1145, in _create_connection_transport
    await waiter
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/futures.py", line 287, in __await__
    yield self  # This tells Task to wait for completion.
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/tasks.py", line 339, in __wakeup
    future.result()
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/futures.py", line 203, in result
    raise self._exception.with_traceback(self._exception_tb)
  File "/Library/Frameworks/Python.framework/Versions/3.11/lib/python3.11/asyncio/sslproto.py", line 575, in _on_handshake_complete
    raise handshake_exc
ConnectionResetError

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/openai/api_requestor.py", line 668, in arequest_raw
    result = await session.request(**request_kwargs)
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/aiohttp/client.py", line 536, in _request
    conn = await self._connector.connect(
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/aiohttp/connector.py", line 540, in connect
    proto = await self._create_connection(req, traces, timeout)
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/aiohttp/connector.py", line 901, in _create_connection
    _, proto = await self._create_direct_connection(req, traces, timeout)
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/aiohttp/connector.py", line 1209, in _create_direct_connection
    raise last_exc
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/aiohttp/connector.py", line 1178, in _create_direct_connection
    transp, proto = await self._wrap_create_connection(
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/aiohttp/connector.py", line 988, in _wrap_create_connection
    raise client_error(req.connection_key, exc) from exc
aiohttp.client_exceptions.ClientConnectorError: Cannot connect to host api.openai.com:443 ssl:default [None]

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/continuedev/core/autopilot.py", line 245, in _run_singular_step
    observation = await step(self.continue_sdk)
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/continuedev/core/main.py", line 319, in __call__
    return await self.run(sdk)
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/continuedev/plugins/steps/chat.py", line 35, in run
    async for chunk in generator:
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/continuedev/libs/llm/openai.py", line 125, in stream_chat
    async for chunk in await openai.ChatCompletion.acreate(
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/openai/api_resources/chat_completion.py", line 45, in acreate
    return await super().acreate(*args, **kwargs)
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/openai/api_resources/abstract/engine_api_resource.py", line 217, in acreate
    response, _, api_key = await requestor.arequest(
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/openai/api_requestor.py", line 372, in arequest
    result = await self.arequest_raw(
  File "/Users/aston/.continue/server/env/lib/python3.11/site-packages/openai/api_requestor.py", line 685, in arequest_raw
    raise error.APIConnectionError("Error communicating with OpenAI") from e
openai.error.APIConnectionError: Error communicating with OpenAI

sestinj commented 1 year ago

Try this exact config.py, except replace '<API BASE HERE>', and make sure that OPENAI_API_KEY is set in VS Code settings. If it is not set, I would expect exactly the error you are seeing.

"""
This is the Continue configuration file.

If you aren't getting strong typing on these imports,
be sure to select the Python interpreter in ~/.continue/server/env.
"""

import subprocess

from continuedev.core.main import Step
from continuedev.core.sdk import ContinueSDK
from continuedev.core.models import Models
from continuedev.core.config import CustomCommand, SlashCommand, ContinueConfig
from continuedev.plugins.context_providers.github import GitHubIssuesContextProvider
from continuedev.plugins.context_providers.google import GoogleContextProvider
from continuedev.libs.llm.maybe_proxy_openai import MaybeProxyOpenAI
from continuedev.libs.llm.openai import OpenAI, OpenAIServerInfo
from continuedev.plugins.policies.default import DefaultPolicy

from continuedev.plugins.steps.open_config import OpenConfigStep
from continuedev.plugins.steps.clear_history import ClearHistoryStep
from continuedev.plugins.steps.feedback import FeedbackStep
from continuedev.plugins.steps.comment_code import CommentCodeStep
from continuedev.plugins.steps.main import EditHighlightedCodeStep

class CommitMessageStep(Step):
    """
    This is a Step, the building block of Continue.
    It can be used below as a slash command, so that
    run will be called when you type '/commit'.
    """
    async def run(self, sdk: ContinueSDK):

        # Get the root directory of the workspace
        dir = sdk.ide.workspace_directory

        # Run git diff in that directory
        diff = subprocess.check_output(
            ["git", "diff"], cwd=dir).decode("utf-8")

        # Ask the LLM to write a commit message,
        # and set it as the description of this step
        self.description = await sdk.models.default.complete(
            f"{diff}\n\nWrite a short, specific (less than 50 chars) commit message about the above changes:")

config = ContinueConfig(

    # If set to False, we will not collect any usage data
    # See here to learn what anonymous data we collect: https://continue.dev/docs/telemetry
    allow_anonymous_telemetry=True,

    models=Models(
        default=OpenAI(
            openai_server_info=OpenAIServerInfo(
                api_base="<API BASE HERE>",
            ),
            model="gpt-3.5-turbo"
        ),
    ),

    # Set a system message with information that the LLM should always keep in mind
    # E.g. "Please give concise answers. Always respond in Spanish."
    system_message=None,

    # Set temperature to any value between 0 and 1. Higher values will make the LLM
    # more creative, while lower values will make it more predictable.
    temperature=0.5,

    # Custom commands let you map a prompt to a shortened slash command
    # They are like slash commands, but more easily defined - write just a prompt instead of a Step class
    # Their output will always be in chat form
    custom_commands=[CustomCommand(
        name="test",
        description="This is an example custom command. Use /config to edit it and create more",
        prompt="Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
    )],

    # Slash commands let you run a Step from a slash command
    slash_commands=[
        # SlashCommand(
        #     name="commit",
        #     description="This is an example slash command. Use /config to edit it and create more",
        #     step=CommitMessageStep,
        # )
        SlashCommand(
            name="edit",
            description="Edit code in the current file or the highlighted code",
            step=EditHighlightedCodeStep,
        ),
        SlashCommand(
            name="config",
            description="Open the config file to create new and edit existing slash commands",
            step=OpenConfigStep,
        ),
        SlashCommand(
            name="comment",
            description="Write comments for the current file or highlighted code",
            step=CommentCodeStep,
        ),
        SlashCommand(
            name="feedback",
            description="Send feedback to improve Continue",
            step=FeedbackStep,
        ),
        SlashCommand(
            name="clear",
            description="Clear step history",
            step=ClearHistoryStep,
        )
    ],

    # Context providers let you quickly select context by typing '@'
    # Uncomment the following to
    # - quickly reference GitHub issues
    # - show Google search results to the LLM
    context_providers=[
        # GitHubIssuesContextProvider(
        #     repo_name="<your github username or organization>/<your repo name>",
        #     auth_token="<your github auth token>"
        # ),
        # GoogleContextProvider(
        #     serper_api_key="<your serper.dev api key>"
        # )
    ],

    # Policies hold the main logic that decides which Step to take next
    # You can use them to design agents, or deeply customize Continue
    policy=DefaultPolicy()
)

I tested with this setup and received 404 instead of openai.error.APIConnectionError, so I've not been able to verify that this works.

aston314 commented 1 year ago

base_url="https://ai.fakeopen.com/"
default_model="gpt-3.5-turbo"
apikey: pk-this-is-a-real-free-pool-token-for-everyone

Try it.

sestinj commented 1 year ago

Sorry if this wasn't clear—I tried these exact credentials and received an error.

If there is an error in the config file, this should be displayed. Otherwise, this might happen if your OPENAI_API_KEY is not set in VS Code settings.
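To rule out the missing-key case quickly, here is a tiny check (the helper is hypothetical, not part of Continue) that the environment the server runs in actually carries a non-empty OPENAI_API_KEY:

```python
import os

def has_openai_key(env) -> bool:
    # True only if OPENAI_API_KEY is present and non-empty
    # (whitespace-only values count as missing).
    return bool(env.get("OPENAI_API_KEY", "").strip())

print(has_openai_key(os.environ))
```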

Petemir commented 1 year ago

@sestinj Do you think it would be possible to customize the URL of other model types, too? I'd like to use the VS Code extension in a low-performance network, but point it at a server running the GGML model somewhere else. From what I see, the URL is a constant at the moment.

Sorry, I just saw the next paragraph :)

sestinj commented 1 year ago

@Petemir you shouldn't actually have to wire up your own class for this. As of a couple minutes ago, I've changed that hardcoded URL to be a parameter (server_url: str) in the GGML class (the hardcoding was a bit ridiculous...). This should solve your need.

You can see the commit that changes this here: https://github.com/continuedev/continue/commit/da9caea0a9232e43668dff6ad233183f4dbe740a

The new version will hit the VS Code marketplace in ~10 minutes.
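For anyone landing here later, a sketch of what pointing Continue at a remote GGML server might look like after that change. The import path follows the pattern of the OpenAI class above and the server_url value is a placeholder; check both against the linked commit rather than treating this as verified:

```python
from continuedev.core.config import ContinueConfig
from continuedev.core.models import Models
from continuedev.libs.llm.ggml import GGML

config = ContinueConfig(
    models=Models(
        # server_url was hardcoded before the commit above; it is now
        # a constructor parameter on the GGML class.
        default=GGML(server_url="http://localhost:8000")
    )
)
```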

Petemir commented 1 year ago

> @Petemir you shouldn't actually have to wire up your own class for this. As of a couple minutes ago, I've changed that hardcoded URL to be a parameter (server_url: str) in the GGML class (the hardcoding was a bit ridiculous...). This should solve your need.
>
> You can see the commit that changes this here: da9caea
>
> The new version will hit the vscode marketplace in ~10 minutes

That's great, thank you very much!

I was running into several problems in parallel (the server not starting, the config not being created, an outdated explanation in the 5-minute example, and a wrong import in the docs) and didn't understand the setup, but I finally got it running :) .

sestinj commented 1 year ago

@Petemir Well done working through all of this 😬 , and thanks for the PR! Curious from this experience if there's anything else clarity-wise that we could improve.

Petemir commented 1 year ago

> @Petemir Well done working through all of this 😬 , and thanks for the PR! Curious from this experience if there's anything else clarity-wise that we could improve.

Sorry for the delay. No, I can't think of anything. I have to say that I skimmed the documentation/release notes and went straight to the installation part, which may be why I had a hard time setting everything up (I was oblivious to the architecture). Because of that, I also didn't realize it was a chatbot interface -- I actually expected some kind of autocomplete. So I didn't keep using it for long :(.

sestinj commented 1 year ago

@Petemir quite alright, thanks for the feedback. We'll get to autocomplete eventually, just not quite yet.

sestinj commented 11 months ago

@aston314 I'm sorry there hasn't been anything we could do to solve this error for you yet, but I wanted to check in again. I understand that this is probably quite frustrating and I'd love to find a solution, but unfortunately there isn't enough information yet to know the exact problem.

I've tried the credentials as you shared above - is there anything else that I need to know in order to test this? Please let me know, and I'd love to get Continue working for you!