continuedev / continue

⏩ Continue is the leading open-source AI code assistant. You can connect any models and any context to build custom autocomplete and chat experiences inside VS Code and JetBrains.
https://docs.continue.dev/
Apache License 2.0

Change "Async Timeout" for local models #542

Closed lunamidori5 closed 12 months ago

lunamidori5 commented 12 months ago

Before submitting your bug report

Relevant environment info

- OS: windows 11
- Continue: yes (vscode)

Description

I have a 60T model running on a supercomputer, and when a response takes longer than 300s it just flat out crashes Continue. Is there a way to set the timeout in the config.py?

ERROR MESSAGE:

Traceback (most recent call last):

  File "continuedev\src\continuedev\core\autopilot.py", line 432, in _run_singular_step
    observation = await step(self.continue_sdk)

  File "continuedev\src\continuedev\core\main.py", line 376, in __call__
    return await self.run(sdk)

  File "continuedev\src\continuedev\plugins\steps\chat.py", line 106, in run
    async for chunk in generator:

  File "continuedev\src\continuedev\libs\llm\__init__.py", line 363, in stream_chat
    async for chunk in self._stream_chat(messages=messages, options=options):

  File "continuedev\src\continuedev\libs\llm\openai.py", line 132, in _stream_chat
    async for chunk in await openai.ChatCompletion.acreate(

  File "openai\api_resources\abstract\engine_api_resource.py", line 230, in <genexpr>

  File "openai\api_requestor.py", line 319, in wrap_resp

  File "openai\api_requestor.py", line 634, in <genexpr>

  File "openai\api_requestor.py", line 114, in parse_stream_async

  File "aiohttp\streams.py", line 35, in __anext__

  File "aiohttp\streams.py", line 311, in readline

  File "aiohttp\streams.py", line 343, in readuntil

  File "aiohttp\streams.py", line 303, in _wait

  File "aiohttp\helpers.py", line 721, in __exit__

asyncio.exceptions.TimeoutError
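The traceback bottoms out in aiohttp's stream reader when the async read exceeds the client timeout. A minimal, self-contained sketch of how that error surfaces (the 0.1s ceiling here stands in for Continue's 300s default; the function names are illustrative only):

```python
import asyncio

async def slow_model_chunk() -> str:
    # Simulate a local model that takes longer than the client timeout
    await asyncio.sleep(2)
    return "chunk"

async def main() -> str:
    try:
        # A short ceiling stands in for the 300s default request timeout
        return await asyncio.wait_for(slow_model_chunk(), timeout=0.1)
    except asyncio.TimeoutError:
        return "timed out"

print(asyncio.run(main()))  # → timed out
```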

To reproduce

No response

Log output

No response

lunamidori5 commented 12 months ago

Again I did not add the bug label...

sestinj commented 12 months ago

@lunamidori5

Yes! We have a parameter you can use on any LLM class: OpenAI(..., timeout=None) to disable the timeout entirely, or OpenAI(..., timeout=3600) to allow an hour, for example.

Though I just realized it isn't being passed through properly in the OpenAI class, so there's a new pre-release version (0.1.15) that will be ready in ~20 minutes where this will work.
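For anyone landing here later, a rough sketch of what this looks like in config.py. This is a hedged illustration of the pattern from Continue's Python-config era, not copied from the docs; import paths and surrounding field names may differ by version, and only the `timeout=` keyword is the point:

```python
# config.py sketch -- paths and class names are illustrative of
# Continue's Python config of this era; `timeout` is the key part.
from continuedev.src.continuedev.core.config import ContinueConfig
from continuedev.src.continuedev.libs.llm.openai import OpenAI

config = ContinueConfig(
    models=[
        OpenAI(
            model="gpt-4",
            # Disable the 300s default; use e.g. timeout=3600 for one hour
            timeout=None,
        )
    ]
)
```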

lunamidori5 commented 12 months ago

@sestinj thank you! I'll close the issue if this fixes it!

sestinj commented 12 months ago

@lunamidori5 I believe the new version is ready, but it's actually 0.1.14

lunamidori5 commented 12 months ago

Umm... @sestinj, do you mean pre-release? Or is my setup bugged out?

lunamidori5 commented 12 months ago

[screenshot]

lunamidori5 commented 12 months ago

Or I am dumb, that could also be true.

lunamidori5 commented 12 months ago

Cool, I think it's working, thank you!

sestinj commented 12 months ago

Yup, sorry, I meant pre-release. Glad it worked!

I'm curious, do you really mean 70 trillion?

@lunamidori5

lunamidori5 commented 12 months ago

@sestinj well, it's a 62T photo SD model with a token program to turn photos into text, but yeah!

sestinj commented 12 months ago

@lunamidori5 very cool! What kinds of things do you use it for with Continue?

lunamidori5 commented 12 months ago

@sestinj programming, note taking, chatting, etc., just a little ChatGPT of my own <3

sestinj commented 12 months ago

@lunamidori5 nice, just a 'little' supercomputer : )

lunamidori5 commented 12 months ago

@sestinj thank you for all your hard work! Keep it up! (I'll be opening a new issue, as the link you use for LocalAI may be a bit hard on users, but I'll do that in its own issue.)