Closed moritztim closed 5 months ago
The word "query" is used quite a lot; "prompt" is the correct term and is well known to the general public.
This is highly subjective.
In my equally subjective opinion, I strongly disagree. LLMs are well on their way to replacing traditional search engines, and I've all but cut Google search (and StackOverflow) out of my personal workflow, specifically by using this tool.
Managing hallucinations isn't that different from managing misinformation in search engines: you train yourself to be skeptical by default.
Grounding with RAG is also becoming the norm which should further reduce hallucinations and provide real-time information.
LLMs are well on their way to replacing traditional search engines
Yes, but currently they're still not quite there.
Managing hallucinations isn't that different from managing misinformation in search engines, you train yourself to be skeptical by default.
That's a good point, but search engines can generally be expected to manage misinformation, whereas for LLMs that's not the case by default. The first Google result (after the ads) is usually trustworthy, but with an LLM it depends on the prompt and how well the topic is represented in the training data.
I think there's no harm in implementing my proposed wording changes, but there may well be harm in spreading the belief that LLMs are currently as truthful as search engines.
Calling it a search engine also implies that results are directly based on a specific source found on the internet. You wouldn't call your professor a search engine.
I agree with the point made by @jeanlucthumm.
Managing hallucinations isn't that different from managing misinformation in search engines, you train yourself to be skeptical by default.
The reason it was written this way is that I initially aimed to solve the constant "googling" problem for myself when I started developing ShellGPT. The main use case for sgpt is essentially a replacement for "googling", in my opinion.
Closing this issue, as the examples in question appear to have been removed in prior releases.
https://github.com/TheR1D/shell_gpt/blob/7ac1f98a8b7003f08e879743aae67860e47828e0/README.md?plain=1#L16C1-L29C4 https://github.com/TheR1D/shell_gpt#simple-queries
This claim is a misconception. I think it's crucial to clarify that large language models (LLMs) are not search engines. There's a common misconception spreading that LLMs have direct access to real-time information. Relying on an LLM as a replacement for a search engine can lead to the spread of misinformation due to hallucinations, which is why I think it's important to separate the two. This should be rephrased to something like
The examples below are also questionable; they read more like Google searches than LLM prompts:
Math is also not a strength of GPT. The last example is very simple, so that one works, but anything more complicated doesn't.
While these are questions an LLM is typically able to answer, because they are most likely quite common in the training data, they might give users the idea that they should ask sgpt simple questions about any topic rather than googling them (and there are shell clients for search engines).
To be clear, I think these are perfectly fine examples of prompts for an LLM; they just shouldn't be presented without context, or they might spread the misconception that LLMs can replace search engines, which they cannot do yet.