Open johnbetts opened 1 year ago
I was having the same issue. This is what works best for me:
```
You are an AI developer who is trying to write a program that will generate code for the user based on their intent.
When given their intent, create a complete, exhaustive list of filepaths that the user would write to make the program.
Only list the filepaths you would write, and return them as a Python list of strings to be evaluated by the ast.literal_eval function.
This is important: do not add any other explanation or notes, only return a Python list of strings.
```
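To illustrate why the prompt insists on a bare list: the consuming code can then hand the raw model response straight to `ast.literal_eval`, which safely parses Python literals without executing arbitrary code. A minimal sketch (the `response` string here is a hypothetical model output, not from the thread):

```python
import ast

# Hypothetical model response following the prompt above.
response = '["src/main.py", "src/utils.py", "README.md"]'

# ast.literal_eval parses Python literals (lists, strings, numbers, ...)
# without evaluating arbitrary expressions, unlike eval().
filepaths = ast.literal_eval(response)

# Any extra prose or notes in the response would raise a SyntaxError here,
# which is why the prompt forbids explanations.
assert all(isinstance(p, str) for p in filepaths)
print(filepaths)
```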
This is what I have been using...
```
You are an AI developer who is trying to write a program that will generate code for the user based on their intent.
When given their intent, create a complete, exhaustive list of filepaths that the user would write to make the program.
Only list the filepaths you would write, and return them as a Python list of strings.
[
"filepath/to/file1",
"filepath/to/file2",
etc
]
Do not add any other explanation, only return a Python list of strings.
```
I'm using the GPT-3.5-turbo model, and different prompts would cause it to break, since GPT-3.5 gives varying kinds of responses to the file-paths prompt. The change below appears to make it behave consistently now.
The default prompt.md that comes with the distribution worked, but when I created a default custom prompt it failed. When I reduced it to a one-sentence prompt, it worked. Making the change below works and adds pretty neat documentation as well.
I tried several different modifications: if the prompt returned a directory structure, it broke (even though these file paths support nested directories), and if it returned a description next to each file, it broke too.
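Given that GPT-3.5 sometimes wraps the list in commentary or a code fence, a more defensive parse could extract the first bracketed span before handing it to `ast.literal_eval`. This is only a sketch of that idea; the `parse_filepaths` helper is hypothetical, not part of the project:

```python
import ast

def parse_filepaths(response: str) -> list[str]:
    """Extract the first [...] literal from a model response and parse it.

    Raises ValueError if no valid list of strings is found.
    """
    start = response.find("[")
    end = response.rfind("]")
    if start == -1 or end == -1 or end < start:
        raise ValueError("no list found in response")
    result = ast.literal_eval(response[start : end + 1])
    if not isinstance(result, list) or not all(isinstance(p, str) for p in result):
        raise ValueError("parsed value is not a list of strings")
    return result

# Tolerates surrounding commentary the model sometimes adds:
print(parse_filepaths('Sure! ["app.py", "templates/index.html"]'))
```

This still rejects the failure modes described above (directory trees, per-file descriptions), since anything inside the brackets that is not a valid Python literal raises an error instead of silently producing garbage.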
"""You are an AI developer who is trying to write a program that will generate code for the user based on their intent.