Closed · tonydavis629 closed this issue 1 year ago
Hey Tony,

You can update the `ca_bundle_path` option for the model in `~/.continue/config.py` (you can type `/config` to open it). Right now we don't have this parameter for `MaybeProxyOpenAI` (the class that allows temporary free usage), but if you try this with your own OpenAI key and it works, I would gladly add that parameter. Here's what you can do:
```python
from continuedev.src.continuedev.libs.llm.openai import OpenAI

config = ContinueConfig(
    ...,
    models=Models(
        default=OpenAI(
            api_key="<API_KEY>",
            model="gpt-4",
            ca_bundle_path="<PATH_TO_CA_BUNDLE>",
        )
    ),
)
```
Sorry, but I have the same error after adding `ca_bundle_path`. The certificates work elsewhere.
Ok, thanks for letting me know. This is really weird, but it looks like openai has this parameter and simply doesn't pay attention to it (https://github.com/openai/openai-python/blob/1be14ee34a0f8e42d3f9aa5451aa4cb161f1781f/openai/__init__.py#L60).

I have an idea for how to work around this; I'll make an update later today and then share.
@tonydavis629 to get around this you can set up your default model in the config like this:

```python
default=GGML(
    server_url="https://api.openai.com",
    api_key="API_KEY",
    model="gpt-4",
    ca_bundle_path="...",
    template_messages=None,
)
```
The GGML class makes its requests manually with aiohttp instead of going through the OpenAI Python library, allowing `ca_bundle_path` to actually take effect.
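For illustration only (this is a hedged sketch using the standard library, not Continue's actual code): a client that builds its own requests can construct an SSL context that trusts a custom CA bundle and pass it to each request, which is exactly what a library that silently ignores the option cannot do.

```python
import ssl


def ssl_context_for(ca_bundle_path=None):
    """Build an SSL context; cafile=None falls back to the system trust store."""
    return ssl.create_default_context(cafile=ca_bundle_path)


# With aiohttp this would be passed per request, e.g.:
#   session.post(url, json=payload, ssl=ssl_context_for(path))
ctx = ssl_context_for()
```

The function name `ssl_context_for` is hypothetical; the point is that the context (and therefore the CA bundle) is under the caller's control.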
Hi @tonydavis629, just wanted to check in on this. Any luck with GGML? Let me know what else I can do to help if there are other problems!
@sestinj I am having the same issue and tried GGML. I've tried to set it up to hit Azure, but I'm not getting any response, and also no error message, unfortunately. I'm guessing it's not properly formatting the Azure URLs, but I'm not sure.
@dan-nadler could you share what your config.py looks like (of course leaving out any private details)?
@sestinj Here you go:
"""
This is the Continue configuration file.
See https://continue.dev/docs/customization to for documentation of the available options.
"""
from continuedev.libs.llm.openai import OpenAI
from continuedev.libs.llm.ggml import GGML
from continuedev.core.models import Models
from continuedev.core.config import CustomCommand, SlashCommand, ContinueConfig
from continuedev.libs.llm import OpenAIFreeTrial
from continuedev.plugins.context_providers import (
DiffContextProvider,
TerminalContextProvider,
URLContextProvider,
GitHubIssuesContextProvider,
)
from continuedev.plugins.steps import (
ClearHistoryStep,
CommentCodeStep,
EditHighlightedCodeStep,
GenerateShellCommandStep,
OpenConfigStep,
)
from continuedev.plugins.steps.share_session import ShareSessionStep
config = ContinueConfig(
allow_anonymous_telemetry=True,
models=Models(
default=GGML(
title='ggml-test',
server_url="https://****.openai.azure.com",
api_key="****",
api_base="https://****.openai.azure.com/",
api_type="azure",
api_version="2023-07-01-preview",
engine="gpt-4",
model="gpt-4",
ca_bundle_path="/path/to/ca-bundle.crt",
template_messages=None),
saved=[
OpenAIFreeTrial(model="gpt-4", system_message=""),
OpenAIFreeTrial(model="gpt-3.5-turbo", system_message=""),
],
),
system_message="",
temperature=0.5,
custom_commands=[
CustomCommand(
name="test",
description="Write unit tests for highlighted code",
prompt="Write a comprehensive set of unit tests for the selected code. It should setup, run tests that check for correctness including important edge cases, and teardown. Ensure that the tests are complete and sophisticated. Give the tests just as chat output, don't edit any file.",
)
],
slash_commands=[
SlashCommand(
name="edit",
description="Edit highlighted code",
step=EditHighlightedCodeStep,
),
SlashCommand(
name="config",
description="Customize Continue",
step=OpenConfigStep,
),
SlashCommand(
name="comment",
description="Write comments for the highlighted code",
step=CommentCodeStep,
),
SlashCommand(
name="clear",
description="Clear step history",
step=ClearHistoryStep,
),
SlashCommand(
name="share",
description="Download and share this session",
step=ShareSessionStep,
),
SlashCommand(
name="cmd",
description="Generate a shell command",
step=GenerateShellCommandStep,
),
],
context_providers=[
# GitHubIssuesContextProvider(
# repo_name="<your github username or organization>/<your repo name>",
# auth_token="<your github auth token>"
# ),
DiffContextProvider(),
URLContextProvider(
preset_urls=[
# Add any common urls you reference here so they appear in autocomplete
]
),
TerminalContextProvider(),
],
)
For what it's worth - I've used this at home and I think it's a great project 👍
Thanks! I think I see the problem, but it's on our end. The OpenAI Python client completely ignores the `ca_bundle_path` parameter even though they offer the option, hence having to go through the GGML class, which makes the requests manually. But GGML doesn't know what to do with the `engine` param. I'll see if I can get a quick fix out today.
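To illustrate why the `engine` param matters, here is a hedged sketch (the function name and structure are hypothetical, not Continue's code) of how the request URL differs between the standard OpenAI API and Azure OpenAI, which routes requests through a deployment ("engine"):

```python
def chat_completions_url(api_base, api_type="openai", engine=None, api_version=None):
    """Build the chat-completions endpoint URL for the given provider type."""
    base = api_base.rstrip("/")
    if api_type == "azure":
        # Azure OpenAI addresses a named deployment and requires an api-version
        return (
            f"{base}/openai/deployments/{engine}/chat/completions"
            f"?api-version={api_version}"
        )
    # Standard OpenAI-compatible servers use the /v1 route
    return f"{base}/v1/chat/completions"


print(chat_completions_url("https://api.openai.com"))
print(
    chat_completions_url(
        "https://example.openai.azure.com/",
        api_type="azure",
        engine="gpt-4",
        api_version="2023-07-01-preview",
    )
)
```

A client that always builds the `/v1/...` form would get no useful response from an Azure endpoint, which matches the "no response, no error" symptom above.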
That's great, thank you!
@dan-nadler I just uploaded a new version (it's a pre-release, version v0.1.11) that will allow you to use GGML for Azure OpenAI. Here's what your config.py could look like, for example (no need for `server_url`):
```python
GGML(
    title="ggml-test",
    api_key="****",
    api_base="https://****.openai.azure.com/",
    api_type="azure",
    api_version="2023-07-01-preview",
    engine="gpt-4",
    model="gpt-4",
    ca_bundle_path="/path/to/ca-bundle.crt",
    template_messages=None,
)
```
Success! Thanks a lot! @sestinj
fantastic!
I'm going to close this issue since it looks like the problems have been verified to be solved, but @tonydavis629 if you are still experiencing the issue after using the GGML class please let me know and feel free to re-open!
Is there a way to let insecure connections go through? I'm trying out a server and I have yet to get the certificate for it...
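Not an answer from the thread, but as a hedged sketch using only the standard library: an SSL context can be configured to skip verification entirely, which is what "allowing insecure connections" amounts to. This removes protection against man-in-the-middle attacks, so it should only be used against trusted test servers.

```python
import ssl

# check_hostname must be disabled before verify_mode can be set to CERT_NONE
insecure_ctx = ssl.create_default_context()
insecure_ctx.check_hostname = False
insecure_ctx.verify_mode = ssl.CERT_NONE

# With aiohttp this would be passed per request, e.g.:
#   session.post(url, json=payload, ssl=insecure_ctx)
```

Whether Continue exposes such a switch in its config is not stated in this thread.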
Describe the bug

After an update approximately 2 weeks ago, I can't access the Continue server from a corporate network. I have the correct certs on my machine; I only need to know how to allow Continue to use them.