Open vishnua2j opened 11 months ago
Based on the provided context, the Mistral 7B model is not consistently producing the expected response in a chatbot application. The user wants the model to reply "I don't know" to questions about recent events (those after its training cutoff), but it only does so some of the time.
Possible reasons for this inconsistency could include:
- **Model fine-tuning:** The model may need further fine-tuning to better understand and respond to the specific types of questions it is supposed to handle.
- **Data quality:** The dataset used to train or fine-tune the model may not adequately cover all relevant scenarios, or may contain noise, leading to inconsistent responses.
- **Prompt design:** The prompt template or input format provided to the model might not be optimal for eliciting the desired responses consistently.
- **Memory limitations:** The memory constraints or history-tracking mechanism within the chatbot architecture could be affecting the model's ability to provide consistent responses.
- **Model architecture:** There could be inherent limitations or biases in the Mistral 7B model architecture that affect its performance in certain scenarios.
To address this issue, the user may need to experiment with different approaches, such as adjusting the fine-tuning process, refining the prompt template, or investigating potential data quality issues. Additionally, seeking assistance from the developer community or consulting with experts in natural language processing could provide valuable insights into resolving the inconsistency in the model's responses.
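As one concrete form the prompt-refinement suggestion could take, a few-shot block that shows the model examples of post-cutoff questions answered with "I don't know" often steers it more reliably than an instruction alone. This is only an illustrative sketch; the example questions and wording are not from the original issue:

```python
# Hypothetical few-shot prompt refinement: concrete examples of refusing
# post-cutoff questions tend to anchor the desired behavior better than
# a bare instruction.
FEW_SHOT_TEMPLATE = """You are a virtual assistant trained on data up to December 2022.
Reply "I don't know" to questions about events after December 2022.

Question: Who won the 2023 men's cricket World Cup?
Answer: I don't know

Question: What is the capital of France?
Answer: Paris

Question: {question}
Answer:"""

# Fill in the user's question just as PromptTemplate would.
prompt = FEW_SHOT_TEMPLATE.format(question="What is the weather today in Delhi?")
```

The filled-in prompt ends at `Answer:`, so the model's continuation is the answer itself.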
@vishnua2j Could you please update the issue with more details?
One more question. Is the bug/issue related to this project?
I have more questions based on which I can answer/debug/help/suggest a solution for your problem.
Thank you, Nitkarsh Chourasia
I am trying to create a chatbot using the Mistral 7B model (mistral-7b-openorca.Q4_K_M.gguf). The model should reply "I don't know" to questions about recent events, such as "What is the weather today in Delhi?" or "Who won the 2023 men's cricket World Cup?". The model gives the expected answer some of the time, but not every time. What could be the reason?
```python
from langchain.chains import LLMChain
from langchain.memory import ConversationBufferWindowMemory
from langchain.prompts import PromptTemplate
from langchain_community.llms import CTransformers

llm = CTransformers(
    model="./mistral-7b-openorca.Q4_K_M.gguf",
    model_type="mistral",
    config={"context_length": 2048},
)
memory = ConversationBufferWindowMemory(
    memory_key="chat_history", input_key="question", k=3
)
template = """You are a virtual assistant trained on data up to December 2022. \
Please answer the following question to the best of your ability in less than \
120 words, and reply "I don't know" if you do not know the answer or if the \
question is about an event happening after December 2022. If required, refer \
to the chat history.

chat_history: {chat_history}
Question: {question}"""
PROMPT = PromptTemplate(input_variables=["chat_history", "question"], template=template)
chain = LLMChain(llm=llm, prompt=PROMPT, verbose=True, memory=memory)
```
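One likely source of run-to-run variation is sampling randomness: ctransformers generates with a nonzero temperature by default, so the same prompt can produce different answers on different runs. A minimal sketch of a more deterministic config follows; the exact supported keys are an assumption about the installed ctransformers version, so verify them against its documentation:

```python
# Sketch: reduce sampling randomness so repeated runs give the same answer.
# 'temperature' and 'top_k' are standard ctransformers generation settings
# (assumption: supported by the installed version).
config = {
    "context_length": 2048,
    "temperature": 0.1,  # near-greedy sampling
    "top_k": 1,          # always pick the most likely token
}

# Passed in place of the original config dict:
# llm = CTransformers(
#     model="./mistral-7b-openorca.Q4_K_M.gguf",
#     model_type="mistral",
#     config=config,
# )
```

This does not guarantee the model will always say "I don't know", but it removes the randomness between runs, making remaining failures reproducible and easier to debug via prompt changes.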