coleam00 / bolt.new-any-llm

Prompt, run, edit, and deploy full-stack web applications using any LLM you want!
https://bolt.new
MIT License

No files generated and no preview shown #203

Closed · SouthPenninesDT closed this 3 weeks ago

SouthPenninesDT commented 3 weeks ago

Describe the bug

No code files generated when prompt has finished and no preview of the output is shown.

Link to the Bolt URL that caused the error

http://localhost:5173/chat/24

Steps to reproduce

See video attached

Expected behavior

Expecting the files to be shown in the file explorer on the right, and for the preview to show the generated output.

Screen Recording / Screenshot

https://github.com/user-attachments/assets/2f5eaac7-7fb7-4210-878a-4828bbccb320

Platform

Additional context

No response

coleam00 commented 3 weeks ago

I see you are using Llama 3.2 3b. Unfortunately, from my testing, any model smaller than 7 billion parameters doesn't seem to be able to handle the large Bolt.new prompt. So this isn't an issue with the app; rather, the LLM just isn't capable enough to produce the output needed to work with the webcontainer.

If you want something pretty lightweight that still kicks butt, I would try qwen2.5-coder:7b!

SouthPenninesDT commented 3 weeks ago

Hey @coleam00 thank you. I made this change and increased the context parameter, but I'm still getting the same issue.
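
For reference, with Ollama the usual way to raise the context window is to create a derived model from a Modelfile. The model name matches this thread, but the `num_ctx` value below is only an illustrative choice; the exact value used here isn't stated:

```
# Modelfile — example values, not necessarily what was used in this thread
FROM qwen2.5-coder:7b
PARAMETER num_ctx 32768
```

Then build and select the derived model, e.g. `ollama create qwen2.5-coder-32k -f Modelfile`.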

coleam00 commented 3 weeks ago

Which model are you using @SouthPenninesDT? Smaller models will struggle with this even with the extra context length unfortunately.

SouthPenninesDT commented 3 weeks ago

Hi, I'm using qwen2.5-coder:7b


coleam00 commented 3 weeks ago

Huh, I definitely get good results with Qwen 2.5 Coder! Sometimes local LLMs still hallucinate, though, and it's best to just try another time or two; that honestly usually fixes it for me.

SouthPenninesDT commented 3 weeks ago

No worries @coleam00. I'll try again. Thanks for coming back to me, this is a great project!

coleam00 commented 3 weeks ago

You are welcome, thank you @SouthPenninesDT!