Open zine999 opened 11 months ago
@zine999 (2) was a quick fix and will be available in 0.6.18. To solve (1) try changing "request_options" to "requestOptions". We recently switched to camelCase for the config file. If there's a documentation page you found this on let me know and I'll make sure that gets updated.
Otherwise, I notice that the 8x7 is a fairly large model, but do you expect such a long wait time? Just want to make sure that isn't a Continue issue.
> To solve (1) try changing "request_options" to "requestOptions".

Did not help. :-\

> If there's a documentation page you found this on let me know and I'll make sure that gets updated.

Yes, I found it here: https://continue.dev/docs/walkthroughs/config-file-migration#request_options

> Otherwise, I notice that the 8x7 is a fairly large model, but do you expect such a long wait time? Just want to make sure that isn't a Continue issue

Yes, I prefer this model even though it is quite large, and don't mind waiting.
Also, I noticed that the model isn't listed here; I hope that's not an issue.
I also have problems with this. I'm running the model via Ollama, and everything works fine when I use a local prompt or UI for it. But my laptop does not have a graphics card the model can utilize, so most models run fairly slowly and often take more than 10 seconds to start answering when given some context. When that happens, the IntelliJ plugin tells me "Error streaming response: Error". For me, however, it does not keep spinning; it becomes ready for the next input.

It does work well enough if I ask something simple that makes it start answering in less than 10 seconds, so it looks fairly certain that 10 seconds is the cutoff between working and not working.
I tried setting the timeout attribute both with `request_options` and with `requestOptions`, but neither seems to take effect. I also tried modifying the config.js file with this:
```javascript
function modifyConfig(config) {
  if (!config.requestOptions) {
    config.requestOptions = {};
  }
  config.requestOptions.timeout = 999999;
  return config;
}

export {
  modifyConfig
};
```
And a corresponding `request_options` version, but neither seems to make any difference.
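For context, the JSON form being attempted looks roughly like this (the model entry and timeout value are illustrative, not the reporter's actual config; per the maintainer's comment below, then-current builds interpret the value in milliseconds):

```json
{
  "models": [
    {
      "title": "Mixtral 8x7B",
      "provider": "ollama",
      "model": "mixtral:8x7b",
      "requestOptions": {
        "timeout": 7200000
      }
    }
  ]
}
```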
@zine999 In pre-release version 0.9.1 I've upgraded the default timeout to 2 hours, from what was previously 5 minutes.
I also found out that Node.js measures timeouts in milliseconds, so until the version that I will push later today, if you are going to use the "timeout" option, make sure to multiply by 1000.
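To illustrate the units point, a minimal config.js sketch (the timeout value is hypothetical, and this assumes the `modifyConfig` hook shown earlier in the thread):

```javascript
// Sketch: keep the timeout in seconds where humans read it, and convert
// once, because Node.js request timeouts are measured in milliseconds.
const TIMEOUT_SECONDS = 600; // 10 minutes -- hypothetical value

function modifyConfig(config) {
  if (!config.requestOptions) {
    config.requestOptions = {};
  }
  config.requestOptions.timeout = TIMEOUT_SECONDS * 1000; // 600000 ms
  return config;
}
```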
As for the error happening in JetBrains: this is separate, but I will be addressing it today as well. Will send another update when I have a fix.
### Before submitting your bug report

### Relevant environment info

### Description
I'm using Ollama with this config:
I then open the Continue prompt, attach a file as the context, and ask:
Explain what this file does.
The computer thinks hard for about 5 minutes and eventually finishes, but in VSCodium I see this error:
And the Continue text input field still shows the rainbow "thinking" animation, which never stops.
So there are two bugs:
### To reproduce
See above.
### Log output
No response