npiv / chatblade

A CLI Swiss Army Knife for ChatGPT
GNU General Public License v3.0

-c 4 indeed does not work #69

Closed CatIIIIIIII closed 10 months ago

CatIIIIIIII commented 10 months ago

When tested on complex questions, chatblade -c 4 behaves the same as plain chatblade, but chatblade -c 4t works.

(base) ➜ ~ chatblade --chat-gpt 3.5 lease estimate roughly how many Fermi questions are being asked everyday user lease estimate roughly how many Fermi questions are being asked everyday assistant. It is difficult to provide an exact estimate as the number of Fermi questions being asked daily can vary greatly depending on various factors such as the context, audience, and platform. However, considering the popularity of Fermi questions as a tool for critical thinking and problem-solving, it is reasonable to assume that a significant number of Fermi questions are being asked every day across different educational, scientific, and creative communities.

(base) ➜ ~ chatblade --chat-gpt 4 what version are you user what version are you assistant As an artificial intelligence, I don't have a specific version. I'm constantly updated and improved by OpenAI.

(base) ➜ ~ chatblade --chat-gpt 4 lease estimate roughly how many Fermi questions are being asked everyday user lease estimate roughly how many Fermi questions are being asked everyday assistant As an AI, I don't have real-time data or the ability to monitor all conversations happening globally. However, considering that Fermi questions are often used in educational settings, science competitions, and casual discussions, it's safe to say that potentially hundreds or even thousands could be asked daily worldwide. This is a rough estimate and the actual number could be higher or lower.

(base) ➜ ~ chatblade --chat-gpt 4t lease estimate roughly how many Fermi questions are being asked everyday user lease estimate roughly how many Fermi questions are being asked everyday assistant A Fermi question is a type of estimation problem that typically requires making educated guesses about quantities that seem impossible to know offhand. These questions are named after physicist Enrico Fermi, who was known for his ability to make good approximate calculations with little or no actual data.

Estimating the number of Fermi questions asked every day is itself a Fermi question. To make this estimation, we would need to consider several factors:

1. Contexts in which Fermi questions are asked: These questions are often used in educational settings, job interviews (especially for consulting, finance, and tech roles), and casual conversations among people interested in problem-solving or trivia.
2. Population involved: We would need to estimate the number of people engaged in activities where Fermi questions might be asked. This includes students, teachers, interviewers, interviewees, and enthusiasts.
3. Frequency of Fermi questions per context: We would need to estimate how often Fermi questions are asked in each context. For example, a teacher might ask a few Fermi questions in a class, or an interviewer might ask one or two during an interview.

Given the lack of specific data, we can make a very rough estimate:

• Assume there are about 1 million people worldwide who are in a position to ask Fermi questions daily (teachers, interviewers, etc.).
• Each of these individuals might ask, on average, one Fermi question per day.

This would lead to an estimate of about 1 million Fermi questions asked per day. However, this number could be significantly higher or lower depending on the actual number of people asking such questions and the frequency with which they do so.

It's important to note that this is a very rough estimate and the actual number could be much different. The purpose of a Fermi estimation is not to arrive at an exact number but to provide a reasonable order-of-magnitude guess.
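The back-of-the-envelope arithmetic above can be sketched in a few lines of Python. The population and frequency figures are the assistant's assumed inputs, not measured data:

```python
# Fermi estimate: Fermi questions asked per day (both inputs are assumptions)
people_asking = 1_000_000    # assumed people positioned to ask Fermi questions daily
questions_per_person = 1     # assumed average questions per person per day

total_per_day = people_asking * questions_per_person
print(f"Rough estimate: {total_per_day:,} Fermi questions per day")
```

Changing either assumption by a factor of ten moves the answer by the same factor, which is why the result is only an order-of-magnitude guess.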

npiv commented 10 months ago

Hi @CatIIIIIIII

I'm not sure this is a very accurate test. You can check exactly what model is sent to OpenAI by adding --debug, so for example:

chatblade --debug -c 4

should tell you 'model': 'gpt-4'


These models are detailed here: https://platform.openai.com/docs/models/gpt-4-and-gpt-4-turbo

If you are not happy with the differences between them, you should take that up in the OpenAI forum; if you think it is a bug in chatblade, then by all means reopen.