coleam00 / bolt.new-any-llm

Prompt, run, edit, and deploy full-stack web applications using any LLM you want!
https://bolt.new
MIT License

Bolt loses context #202

Open AlexanderHanff opened 2 weeks ago

AlexanderHanff commented 2 weeks ago

Describe the bug

When I send my initial prompt to set up a project, the code editor opens, the files are created appropriately, and the Node.js server starts in the WebContainer (WC). However, any subsequent prompts simply return code in the chat window instead of updating the code editor/source tree with new files.

Link to the Bolt URL that caused the error

http://localhost:8788

Steps to reproduce

  1. "Create a scaffold for a react app using bootstrap"
  2. "Add a user model including username, email, password, TOTP secret."

Expected behavior

  1. Open IDE (OK)
  2. Create the file tree (OK)
  3. Populate the project files (such as package.json, index.jsx etc) (OK)
  4. Launch the project in the WC (OK)
  5. Create a models folder (FAIL)
  6. Add a user model file (FAIL)
  7. Populate the user model file (FAIL)

Stages 5-7 fail: the model simply replies with code in the chat window, with no awareness of the project set up in the original prompt.

Screen Recording / Screenshot

No response

Platform

No response

Additional context

I would expect Bolt to retain context throughout the entire workflow and add files to the project instead of just losing context and responding with code in chat.

milutinke commented 2 weeks ago

What model are you using in Bolt? Different models have different context window sizes and quality levels; maybe try another model with a bigger context window, like Claude 3.5.

AlexanderHanff commented 2 weeks ago

I use Ollama with deepseek-coder-v2:16b and have now set the context size to 128k using a Modelfile. This seems to have fixed it (although my testing has been limited, so I can't confirm for certain until I generate more code), and I have now managed to have several new files added to the source tree with additional prompts.

Previously (when I posted this issue) I was using Ollama's default context, which I believe is only 2048 tokens; now I am using a context of 131072.
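For reference, a minimal sketch of the Modelfile approach (the custom model name below is my own choice; adjust `num_ctx` to what your hardware can handle):

```
# Modelfile: derive a new model from the base with a larger context window
FROM deepseek-coder-v2:16b
PARAMETER num_ctx 131072
```

Then build it with `ollama create deepseek-coder-v2-128k -f Modelfile` and select that model in Bolt.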

syndicate604 commented 2 weeks ago

Any coding done with open-source models will fail and hit bugs a lot; only GPT-4 / Sonnet are reliable.

AlexanderHanff commented 2 weeks ago

Any coding done with open-source models will fail and hit bugs a lot; only GPT-4 / Sonnet are reliable.

This is, quite frankly, false. I have access to GPT-4 in my enterprise, and I find that deepseek-coder-v2:16b and llama3.1:70b both outperform it with regard to accuracy.

I have also just pulled the qwen2.5 32b and 72b models, which I have yet to test, but they likewise seem to be leading the pack on coding tasks (with a great deal of excitement around the upcoming qwen2.5-coder:32b, slated for release soon).

You couldn't pay me enough to use any closed model by Anthropic or OpenAI, especially given that I have ongoing legal action against OpenAI. I will only use models in the public domain.

As for bugs, anyone using an LLM for anything without checking the results for accuracy shouldn't have access to a device capable of connecting to the Internet, period.