gpt-engineer-org / gpt-engineer

Platform to experiment with the AI Software Engineer. Terminal based. NOTE: Very different from https://gptengineer.app
MIT License
52.19k stars 6.8k forks

using with deepseek api problem #1196

Closed · ivan-vilches closed 2 months ago

ivan-vilches commented 2 months ago

Hello, I installed gpt-engineer using pip install, then exported the DeepSeek API key: export DEEPSEEK_API_KEY=sk-mysecretkey

After that I ran:

(venv) ivanvilches@MacBook-Air-de-Ivan gpt engineer rails % gpte --model deepseek/deepseek-coder --improve proyecto

Traceback (most recent call last):
  File "/Users/ivanvilches/proyectos mios/gpt engineer/gpt engineer rails/venv/lib/python3.12/site-packages/gpt_engineer/applications/cli/main.py", line 402, in main
    ai = AI(
        model_name=model,              # model = 'deepseek/deepseek-coder'
        temperature=temperature,       # temperature = 0.1
        azure_endpoint=azure_endpoint, # azure_endpoint = ''
  File "/Users/ivanvilches/proyectos mios/gpt engineer/gpt engineer rails/venv/lib/python3.12/site-packages/gpt_engineer/core/ai.py", line 115, in __init__
    self.llm = self._create_chat_model()
  File "/Users/ivanvilches/proyectos mios/gpt engineer/gpt engineer rails/venv/lib/python3.12/site-packages/gpt_engineer/core/ai.py", line 372, in _create_chat_model
    return ChatOpenAI(
        model=self.model_name,
        temperature=self.temperature,
        streaming=self.streaming,
  File "/Users/ivanvilches/proyectos mios/gpt engineer/gpt engineer rails/venv/lib/python3.12/site-packages/pydantic/v1/main.py", line 341, in __init__
    raise validation_error

ValidationError: 1 validation error for ChatOpenAI
__root__
  Did not find openai_api_key, please add an environment variable OPENAI_API_KEY
  which contains it, or pass openai_api_key as a named parameter. (type=value_error)

Not sure what I'm doing wrong; it keeps asking me for the OpenAI key, but I want to use DeepSeek.

zigabrencic commented 2 months ago

Hey

Try to set:

export OPENAI_API_BASE="https://api.deepseek.com/v1"
export OPENAI_API_KEY="sk-xxx"

You don't need DEEPSEEK_API_KEY: gpte talks to models through the OpenAI API interface, and DeepSeek exposes an OpenAI-compatible endpoint, so the OpenAI-style variables are the ones that actually get read.
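To make the "why" concrete: the traceback shows gpt-engineer constructing a langchain ChatOpenAI client, which only looks for the OpenAI-style settings. A minimal sketch of that lookup behaviour (the resolve_openai_credentials helper is hypothetical, not the actual library code):

```python
def resolve_openai_credentials(env: dict) -> tuple[str, str]:
    """Hypothetical helper mimicking how an OpenAI-compatible client
    resolves its settings: only OPENAI_API_KEY is consulted, so a
    vendor-specific variable like DEEPSEEK_API_KEY is never read."""
    key = env.get("OPENAI_API_KEY")
    if key is None:
        raise ValueError(
            "Did not find openai_api_key, please add an environment "
            "variable OPENAI_API_KEY which contains it"
        )
    # OPENAI_API_BASE redirects requests away from api.openai.com
    base = env.get("OPENAI_API_BASE", "https://api.openai.com/v1")
    return key, base


# DEEPSEEK_API_KEY alone is ignored, reproducing the ValidationError above
try:
    resolve_openai_credentials({"DEEPSEEK_API_KEY": "sk-xxx"})
except ValueError as e:
    print(e)

# Re-using the OpenAI-style variables, pointed at DeepSeek, succeeds
key, base = resolve_openai_credentials({
    "OPENAI_API_KEY": "sk-xxx",
    "OPENAI_API_BASE": "https://api.deepseek.com/v1",
})
print(base)  # https://api.deepseek.com/v1
```

That is why exporting OPENAI_API_KEY with your DeepSeek key, plus OPENAI_API_BASE, is the intended workaround.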

For more details, see the DeepSeek API docs.

Let me know if this resolves the issue.

viborc commented 2 months ago

Hey @ivan-vilches, can you let us know if @zigabrencic's comment solved your issue?

viborc commented 2 months ago

Hey @ivan-vilches, I'll be closing this issue now due to inactivity, but feel free to reopen if needed.