AlexanderHanff opened this issue 2 weeks ago
What model do you use in Bolt? Different models have different context window sizes and different quality, so it may be worth trying another model with a bigger context window, like Claude 3.5.
I use Ollama with deepseek-coder-v2:16b and have now set the context size to 128k using a Modelfile. This seems to have fixed it (my testing has been limited, so I can't confirm for certain until I generate more code), and I have now managed to have several new files added to the source tree with additional prompts.
Previously (when I posted this issue) I was using Ollama's default context, which I believe is only 2048 tokens; now I am using a context of 131072.
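In case it helps anyone else, this is roughly what my Modelfile looks like; the custom model name below is just what I chose, and the important part is the num_ctx parameter:

```
# Modelfile: extend deepseek-coder-v2:16b with a 128k context window
FROM deepseek-coder-v2:16b
PARAMETER num_ctx 131072
```

Then build the new tag and select it in Bolt instead of the stock deepseek-coder-v2:16b:

```
ollama create deepseek-coder-v2-16b-128k -f Modelfile
```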
Any coding done with open-source models will fail and be buggy a lot; only GPT-4 / Sonnet are reliable.
This is, quite frankly, false. I have access to GPT-4 in my enterprise, and I find that deepseek-coder-v2:16b and llama3.1:70b both outperform it with regard to accuracy.
I have also just pulled the qwen2.5 32b and 72b models, which I have yet to test, but they likewise seem to be leading the pack on coding tasks (with a great deal of excitement around the upcoming qwen2.5-coder:32b, slated to be released soon).
You couldn't pay me enough to use any closed model from Anthropic or OpenAI, especially given that I have ongoing legal action against OpenAI. I will only use publicly available models.
As for bugs, anyone using an LLM for anything without checking the results for accuracy shouldn't have access to a device capable of connecting to the Internet, period.
Describe the bug
When I send my initial prompt to set up a project, the code editor opens, the files are created appropriately, and the Node.js server starts in the WebContainer. However, any subsequent prompts simply provide code in the chat window instead of updating the code editor/source tree with new files.
Link to the Bolt URL that caused the error
http://localhost:8788
Steps to reproduce
Expected behavior
Stages 5-7 fail and simply reply with code in the chat window, with no connection to the project set up in the original prompt.
Screen Recording / Screenshot
No response
Platform
Additional context
I would expect Bolt to retain context throughout the entire workflow and add files to the project, instead of losing context and responding with code only in the chat window.