coleam00 / bolt.new-any-llm

Prompt, run, edit, and deploy full-stack web applications using any LLM you want!
https://bolt.new
MIT License

task looping #182

Open bantzclan opened 1 week ago

bantzclan commented 1 week ago

Describe the bug

There comes a point when building complex apps where it tells you what needs to be done next and then completes it, but when you prompt it asking what's next, it loops the same tasks. Every prompt after that gets the same response.

Link to the Bolt URL that caused the error

http://localhost:5173/chat/

Steps to reproduce

Build a complex app with standard prompts.

Expected behavior

I'd expect it to move on to new tasks and not loop.

Screen Recording / Screenshot

No response

Platform

Additional context

No response

xilorez-ux commented 1 week ago

Can you provide more information about the prompt and the AI model you were connected to? Also, which browser were you using specifically? Some people have encountered preview issues with Chrome Canary. And is it still happening if you try the same prompt complexity today?

bantzclan commented 1 week ago

This was happening in both Chrome browsers. The AI models were OpenAI, Claude 3.5, and DeepSeek. It happened randomly with different prompts, mainly when it suggests what to do next and you confirm: it does it correctly for a few prompts, then hits this issue and gets stuck. And yes, it's still happening.

xilorez-ux commented 1 week ago

I think you erased your answer when you tried to respond to my comment.

bantzclan commented 1 week ago

This was happening in both Chrome browsers. The AI models were OpenAI, Claude 3.5, and DeepSeek. It happened randomly with different prompts, mainly when it suggests what to do next and you confirm: it does it correctly for a few prompts, then hits this issue and gets stuck. And yes, it's still happening.

xilorez-ux commented 1 week ago

Okay, thank you. This is probably related to the current capacity of LLMs to handle complex projects... Even when they're fine-tuned and given context at every step, it's just too early for complex prompts, since your prompt is combined with the bolt.new system prompt, which packs the project context into the API request. The best advice would be to lower your prompt complexity; once GPT o1-preview is released, which should be quite soon, you may be able to give more complex prompts.
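
For illustration, here is roughly how a user prompt and the project context could end up folded into a single chat request. This is a hypothetical sketch; the names and shapes are invented and are not the actual bolt.new-any-llm code.

```typescript
// Hypothetical sketch: merging a system prompt, project files, chat history,
// and the new user prompt into one request payload. Illustrative only.

interface ProjectFile {
  path: string;
  contents: string;
}

interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Every project file prepended here consumes part of the model's context
// window, which is why large projects leave less room for new instructions.
function buildMessages(
  systemPrompt: string,
  files: ProjectFile[],
  history: ChatMessage[],
  userPrompt: string,
): ChatMessage[] {
  const projectContext = files
    .map((f) => `// ${f.path}\n${f.contents}`)
    .join('\n\n');

  return [
    { role: 'system', content: `${systemPrompt}\n\nProject files:\n${projectContext}` },
    ...history,
    { role: 'user', content: userPrompt },
  ];
}
```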

bantzclan commented 1 week ago

I fully understand that. However, the thing is, I'd ask what should be done next and it would list the parts that needed to be done next, in order, then ask if I'd like to proceed with the first one. I'd say yes and it would work, but after, say, the 4th or 5th, it would get stuck looping the same task. Even if you prompt it pointing out its mistake, it picks up on that, says it will make a note, then says something along the lines of "I will now move on to ..." and repeats the same thing it was looping. So I don't think it's the complexity, but I could be wrong.

xilorez-ux commented 1 week ago

Oh, okay, thanks for the context! Well, in that case it's probably the project itself that was too complex... If not, I admit I have no clue what could be going wrong; maybe the fine-tuning or the model prompt isn't optimized enough, but I'm only guessing here.

bantzclan commented 1 week ago

I think the way it could work is that, like with many chatbots, you could cancel and refresh the prompt list, etc. This would also help with many of the other errors people are facing.
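
As a rough sketch of what that could look like (hypothetical names, not code from this repo), "refreshing the prompt list" could amount to rewinding the chat history to a chosen message so the looping exchanges are no longer in the model's context.

```typescript
// Hypothetical sketch of a "rewind" control: keep the history up to (and
// including) a chosen message and discard the looping turns after it.
// Illustrative only; not code from this repo.

interface ChatMessage {
  id: string;
  role: 'system' | 'user' | 'assistant';
  content: string;
}

function rewindHistory(history: ChatMessage[], keepUpToId: string): ChatMessage[] {
  const index = history.findIndex((m) => m.id === keepUpToId);
  // If the id isn't found, leave the history untouched.
  return index === -1 ? history : history.slice(0, index + 1);
}
```

The next request would then be built from the rewound history instead of the full transcript.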

xilorez-ux commented 1 week ago

That's actually a good idea for improving the project! I hope someone sees this conversation.