mlaugharn opened this issue 1 year ago
Cool idea. As far as I know, though, it increases the number of tokens, which raises cost and reduces the maximum length of code that can be processed. It could be an additional option.
Does vectorizing the Python code make it easier for the LLM to understand the syntax and semantics of the code, and potentially provide better suggestions for error fixes?
Thanks for the tip, I will look into PySnooper!
> Does vectorizing the Python code make it easier for the LLM to understand the syntax and semantics of the code, and potentially provide better suggestions for error fixes?
GPT-4 is pretty good at understanding syntax, loops, etc. It's possible it could help; feel free to experiment and let us know!
I mean, you could simply cut the timestamps out of the trace to reduce the token count. But I would suggest running a test in all three cases to compare the performance.
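A minimal sketch of that preprocessing step (`strip_timestamps` is a hypothetical helper, assuming PySnooper's default trace format where each event line starts with an `HH:MM:SS.ffffff` timestamp):

```python
def strip_timestamps(trace: str) -> str:
    """Drop the leading timestamp column from PySnooper-style trace lines
    to save tokens before sending the trace to the LLM. (Hypothetical
    helper; adjust to the actual trace format you feed the model.)"""
    cleaned = []
    for line in trace.splitlines():
        first, _, rest = line.partition(" ")
        # Event lines begin with an HH:MM:SS.ffffff timestamp; other lines
        # (e.g. "New var:....... x = 1") are kept as-is.
        if first.count(":") == 2 and first.replace(":", "").replace(".", "").isdigit():
            cleaned.append(rest)
        else:
            cleaned.append(line)
    return "\n".join(cleaned)
```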
`pysnooper.snoop()` is a decorator that helps automate printf-style debugging (see the example below). I think this will help focus/improve GPT-4's debugging ability: https://github.com/cool-RR/PySnooper
There's also TorchSnooper for even better snoop insight into PyTorch.
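A minimal sketch of `snoop` in action (the traced function here is just an illustrative example, not from this thread):

```python
import pysnooper

@pysnooper.snoop()  # prints each executed line and variable changes to stderr
def binary_search(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

binary_search([1, 3, 5, 7, 9], 7)
```

The resulting trace (timestamps, executed source lines, and variable values) is exactly the kind of structured context you could paste into the GPT-4 prompt.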