Frequent hallucinations: stating “facts” that aren’t true in plausible-sounding ways. Always fact-check its answers.
Suggestible. Avoid leading questions, which frequently trigger hallucinations.
Often tripped up by math on large numbers. Use a calculator instead, or consider giving the GPT access to a programming-language interpreter when it needs to perform reliable logic and calculations (a rough sketch of that pattern follows).
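As a minimal sketch of that delegation pattern (not any particular vendor’s API): the `ask_gpt()` helper below is a hypothetical stand-in for whatever model client you use, stubbed here with a canned response so the snippet runs on its own. The model is asked to return a bare arithmetic expression, and Python, not the model, does the actual evaluation through a small whitelist-based AST walker.

```python
import ast
import operator

# Hypothetical model call -- a stand-in for whatever client you use.
# Stubbed with a canned reply, as a model might answer when prompted:
# "Reply with only a Python arithmetic expression, no prose."
def ask_gpt(prompt: str) -> str:
    return "123456789 * 987654321"

# Whitelist of allowed operators: plain arithmetic only.
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
    ast.Pow: operator.pow,
    ast.USub: operator.neg,
}

def safe_eval(expr: str):
    """Evaluate an arithmetic expression without exec'ing arbitrary model output."""
    def _eval(node):
        if isinstance(node, ast.Expression):
            return _eval(node.body)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](_eval(node.operand))
        raise ValueError(f"Disallowed expression: {ast.dump(node)}")
    return _eval(ast.parse(expr, mode="eval"))

expression = ask_gpt("What is 123456789 * 987654321? Reply with an expression only.")
print(safe_eval(expression))  # 121932631112635269 -- computed by Python, not the model
```

The point is the division of labor: the model translates the question into an expression, and a real interpreter (or calculator) produces the number you actually trust.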
GPTs are great at role playing, but don’t seem to hold strong opinions of their own. They don’t appear to actually care about anything, including giving you grounded, factual, or correct responses.
GPTs often crack jokes and use sarcasm, but not well, and it’s not always obvious they’ve tried unless they tell you explicitly. Sometimes it is obvious, though: “Could a cat pilot a rocket?” GPT-3: “The purr of the engine would be very distracting.”
Of course, also worth mentioning: pretraining means no long-term memory past the training cutoff, and there’s an occasional tendency to paste media that is not original (particularly Codex; there is a filter for this in the Copilot settings).
Credit: Eric Elliott