spdustin / ChatGPT-AutoExpert

🚀🧠💬 Supercharged Custom Instructions for ChatGPT (non-coding) and ChatGPT Advanced Data Analysis (coding).

AutoExpert incorrectly defaults to non-standard verbosity level #72

Closed: devWhyqueue closed this issue 1 year ago

devWhyqueue commented 1 year ago


Describe the bug

ChatGPT hallucinates a verbosity level instead of using AutoExpert's default.

To Reproduce

My prompt was: `Introduce stochastic gradient descent in context of MLPs and MSE to me.`

The resulting markdown table showed the following plan:

> As a Machine Learning Engineer and Data Scientist, I'll first define key concepts like MLPs, MSE, and stochastic gradient descent. Then, I'll explain how these concepts interrelate, focusing on how stochastic gradient descent optimizes MLPs using MSE as a loss function. This explanation will be concise, adhering to your verbosity preference (V=2).
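(For context on the subject of the test prompt: the quoted plan refers to stochastic gradient descent optimizing an MLP under an MSE loss. A minimal NumPy sketch of such a training loop, purely illustrative, with made-up layer sizes, learning rate, and toy data, and unrelated to AutoExpert itself, might look like this.)

```python
# Illustrative sketch only: SGD on a one-hidden-layer MLP with MSE loss.
# All shapes, hyperparameters, and the toy data are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sum(x) + noise
X = rng.normal(size=(256, 4))
y = X.sum(axis=1, keepdims=True) + 0.1 * rng.normal(size=(256, 1))

# One hidden layer with tanh activation
W1 = rng.normal(scale=0.5, size=(4, 16)); b1 = np.zeros((1, 16))
W2 = rng.normal(scale=0.5, size=(16, 1)); b2 = np.zeros((1, 1))
lr = 0.05

for epoch in range(200):
    # "Stochastic" part: each update uses one random mini-batch
    idx = rng.choice(len(X), size=32, replace=False)
    xb, yb = X[idx], y[idx]

    # Forward pass
    h = np.tanh(xb @ W1 + b1)
    pred = h @ W2 + b2

    # MSE loss and its gradient w.r.t. the prediction
    err = pred - yb
    loss = np.mean(err ** 2)
    d_pred = 2 * err / len(xb)

    # Backpropagation through both layers
    dW2 = h.T @ d_pred;  db2 = d_pred.sum(axis=0, keepdims=True)
    dh = (d_pred @ W2.T) * (1 - h ** 2)
    dW1 = xb.T @ dh;     db1 = dh.sum(axis=0, keepdims=True)

    # SGD parameter update
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    if epoch % 50 == 0:
        print(f"epoch {epoch}: MSE = {loss:.4f}")
```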

Expected behavior

The default verbosity level V=3 should be assumed.

Shared chat

https://chat.openai.com/share/ef7ded70-846e-42de-9c16-821bbc325ed4

Additional context

This also happened with other prompts, which sometimes assumed verbosity level V=5 instead.

spdustin commented 1 year ago

Oof, yeah. That's one of the things that I'm overhauling due to "All Tools" and its impact on AutoExpert (and other custom instructions). Verbosity stopped being reliable about a week ago (more so with "All Tools") and it's really irksome. The next update handles verbosity differently, and evals have me hopeful it's a little more resistant to attention loss.