gpt-engineer-org / gpt-engineer

Platform to experiment with the AI Software Engineer. Terminal based. NOTE: Very different from https://gptengineer.app
MIT License

Prompt doesn't seem to pass through #439

Closed. ITTICK closed this issue 1 year ago.

ITTICK commented 1 year ago

Please describe the behavior you are expecting.

Doing a test with the snake game, and it appears the prompt I provide is not being passed through. I expect to be asked clarifying questions regarding the prompt I provided.

What is the current behavior?

Regardless of the prompt, the response I receive after running `python -m gpt_engineer.main` is: Areas that need clarification:

  1. What specific instructions am I not supposed to carry out?
  2. What is the purpose of seeking clarification?
  3. How many clarifying questions am I allowed to ask? etc...

I'm not sure if this is a bug or if I have corrupted my environment somehow. I have tried deleting the entire gpt_engineer directory on my Windows PC and re-cloning, but I get the same issue. I also re-installed VS Code, Git, and Python in case some temp file was causing this.
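One way to rule out gpt-engineer itself is to send the same prompt text straight to the model and see whether it responds to it sensibly. Here is a minimal sketch, assuming the `openai` Python package (the 1.x client), an `OPENAI_API_KEY` in the environment, and an example project path that you would adjust to your own:

```python
# direct_model_check.py -- send the project prompt straight to the model,
# bypassing gpt-engineer, to confirm the key, model, and prompt work on their own.
# Assumes openai>=1.0 and OPENAI_API_KEY set in the environment.
from pathlib import Path

from openai import OpenAI

prompt_path = Path("projects/snake-game/main_prompt")  # example path; adjust to your project
prompt = prompt_path.read_text(encoding="utf-8")
print(f"Sending {len(prompt)} characters from {prompt_path}\n")

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```

If this call answers the snake-game prompt normally, the problem is somewhere between the prompt file and gpt-engineer rather than with the model or the API key.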

MoreGreenForests commented 1 year ago

Maybe add a suggestion here: there should already be a method that checks the prompt exists, but in case that check gives a false positive, there should also be a flag that simply echoes the prompt back, so you can verify that the right path was used and the right prompt was read. What you're describing looks like boilerplate being fed back.
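Something along these lines could serve as that flag. This is only a sketch of the idea, not an existing gpt-engineer option; the `--show-prompt` flag, the `load_prompt` helper, and the `main_prompt` filename are all assumptions made for illustration:

```python
# show_prompt.py -- hypothetical prompt-echo check; not part of gpt-engineer's CLI.
import argparse
from pathlib import Path


def load_prompt(project_dir: Path) -> str:
    """Return the prompt text the tool would see, or raise if it is missing or empty."""
    prompt_file = project_dir / "main_prompt"  # assumed filename
    if not prompt_file.exists():
        raise FileNotFoundError(f"Expected a prompt file at {prompt_file}")
    text = prompt_file.read_text(encoding="utf-8").strip()
    if not text:
        raise ValueError(f"{prompt_file} exists but is empty")
    return text


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Echo the prompt a project would use")
    parser.add_argument("project_dir", type=Path)
    parser.add_argument("--show-prompt", action="store_true",
                        help="print the prompt and exit instead of running the model")
    args = parser.parse_args()

    prompt = load_prompt(args.project_dir)
    if args.show_prompt:
        # Confirms both that the right path was used and that the right prompt was read.
        print(prompt)
```

Echoing the prompt back before any model call would have made the boilerplate response in this issue obvious immediately.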

vVv-Keys commented 1 year ago

Make sure all the module names are listed properly, double-check that every required module/dependency is installed, confirm that all your input formatting is correct, and make sure you are running GPT-3.5 in the correct environment (I shouldn't have to say this, but I'm mentioning it for clarity).

But what you are seeing in these responses is simply a misconfiguration issue with GPT.

The prompt you provided is not being registered and passed through, so instead you get a default GPT response asking for clarification.

The behavior you want your AI to build its reasoning around comes from your main_prompt file, for example:

```
We are doing many different things. From programming in all different languages. To trying to serve users and help organize data at a faster rate.

User: Ask any question you have or provide a topic you'd like to discuss.

Assistant: [System initialization message and setup instructions]

User: What is the meaning of life?

Assistant: [AI-generated response]

User: Can you explain the concept of artificial intelligence?

Assistant: [AI-generated response]

User: How does machine learning work?

Assistant: [AI-generated response]

User: Tell me about the latest advancements in technology.

Assistant: [AI-generated response]

User: What are the ethical implications of AI?

Assistant: [AI-generated response]

User: How can AI be applied in healthcare?

Assistant: [AI-generated response]

User: Do you have any recommendations for further reading on this topic?

Assistant: [AI-generated response]

User: Thank you for your help!

Assistant: You're welcome! If you have any more questions, feel free to ask.
```

That is a basic main_prompt example I use for most starter GPT projects in order to test the DB. The fact that the current model is asking to "clarify" questions unrelated to your prompt indicates that something is going wrong in the interaction with the model, which is most likely a misconfiguration issue. So please check all formatting, API keys, and the DB, and ensure everything is properly put together and formatted correctly. Once everything is verified you should find the "typo" misconfiguration, and your GPT-3.5 should return to its normal behavior instead of the prompt not being passed through correctly.
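For those configuration checks, a small pre-flight script can catch the most common problems before gpt-engineer is run. This is a rough sketch; the exact environment variables, prompt filename, and minimum Python version can differ between gpt-engineer releases:

```python
# config_check.py -- rough pre-flight checklist; exact requirements vary by release.
import os
import sys
from pathlib import Path

project_dir = Path(sys.argv[1]) if len(sys.argv) > 1 else Path("projects/snake-game")
problems = []

if not os.environ.get("OPENAI_API_KEY"):
    problems.append("OPENAI_API_KEY is not set in this environment")

prompt_candidates = [project_dir / "prompt", project_dir / "main_prompt"]
if not any(p.is_file() and p.read_text(encoding="utf-8").strip() for p in prompt_candidates):
    problems.append(f"no non-empty prompt file found in {project_dir}")

if sys.version_info < (3, 10):  # assumed minimum; check the README for your release
    problems.append(
        f"running Python {sys.version_info.major}.{sys.version_info.minor}; "
        "a newer Python 3 is likely required"
    )

if problems:
    print("Possible misconfigurations:")
    for item in problems:
        print(f"  - {item}")
else:
    print("Basic configuration looks OK.")
```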

ITTICK commented 1 year ago

I'm not sure what got corrupted or misconfigured, but it is working now, after uninstalling VS Code, Git, and Python, deleting all related temp files (including temp install and config files), rebooting, and re-installing all of the mentioned programs. Thanks for the help and suggestions; I will definitely be adding a prompt check to ensure the prompt is being seen and passed through properly.