langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License

TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'candidate_count' #15228

Closed: dianz17 closed this issue 8 months ago

dianz17 commented 11 months ago

Issue you'd like to raise.

@dosu-bot I'm currently running into an old bug that was supposed to have been fixed several patches ago.

File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 1455, in wsgi_app
    response = self.full_dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 869, in full_dispatch_request
    rv = self.handle_user_exception(e)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 867, in full_dispatch_request
    rv = self.dispatch_request()
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/flask/app.py", line 852, in dispatch_request
    return self.ensure_sync(self.view_functions[rule.endpoint])(**view_args)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/functions_framework/__init__.py", line 134, in view_func
    return function(request._get_current_object())
  File "/workspace/main.py", line 109, in entry_point_http
    faq_response = chain.invoke(inputs)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/runnables/base.py", line 1510, in invoke
    input = step.invoke(
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 160, in invoke
    self.generate_prompt(
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 491, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 378, in generate
    raise e
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 368, in generate
    self._generate_with_cache(
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 524, in _generate_with_cache
    return self._generate(
  File "/layers/google.python.pip/pip/lib/python3.10/site-packages/langchain/chat_models/vertexai.py", line 187, in _generate
    response = chat.send_message(question.content, **msg_params)
TypeError: _ChatSessionBase.send_message() got an unexpected keyword argument 'candidate_count' 
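
A quick way to confirm whether the installed Vertex AI SDK's ChatSession.send_message accepts candidate_count (a diagnostic sketch, assuming the google-cloud-aiplatform package that provides vertexai.language_models is installed in the same environment):

import inspect

from vertexai.language_models import ChatSession

# If this prints False, the installed SDK predates candidate_count support,
# which would explain the TypeError above.
send_message_params = inspect.signature(ChatSession.send_message).parameters
print("candidate_count accepted:", "candidate_count" in send_message_params)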

My current LangChain version is 0.0.348 and I'm trying to create a Cloud Function. Here is my code:

from google.cloud import bigquery, storage
from langchain.chains.conversation.memory import ConversationBufferWindowMemory
from langchain.chains.query_constructor.base import AttributeInfo
from langchain.chat_models import ChatVertexAI
from langchain.document_loaders import PyPDFLoader
from langchain.document_loaders.csv_loader import CSVLoader
from langchain.memory import ConversationSummaryBufferMemory
from langchain.schema.runnable import RunnablePassthrough, RunnableLambda
from operator import itemgetter
from langchain.schema.output_parser import StrOutputParser
from langchain.callbacks.tracers import ConsoleCallbackHandler
from langchain.embeddings import VertexAIEmbeddings
from langchain.llms import VertexAI
from langchain.prompts import PromptTemplate, ChatPromptTemplate
from langchain.retrievers import BM25Retriever, EnsembleRetriever, ContextualCompressionRetriever
from langchain.retrievers.merger_retriever import MergerRetriever
from langchain.document_transformers import EmbeddingsRedundantFilter
from langchain.retrievers.document_compressors import DocumentCompressorPipeline
from langchain.retrievers.document_compressors import LLMChainExtractor
from langchain.schema import Document
from langchain.text_splitter import RecursiveCharacterTextSplitter
from langchain.vectorstores import Chroma
from langchain.agents.format_scratchpad import format_log_to_str
from langchain.prompts import StringPromptTemplate
from typing import List, Union
from langchain.schema import AgentAction, AgentFinish, OutputParserException
from langchain.vectorstores import MatchingEngine
import re

import io
import ipywidgets as widgets
import json
import langchain
import math
import os
import pandas as pd
import time
import logging

from faq_redpro_prompt import template_faq
from faq_redpro_fewshot import few_shot_faq

PROJECT_ID_ME = os.environ.get("PROJECT_ID_ME")
ME_REGION = os.environ.get("ME_REGION")
ME_BUCKET_FAQ = os.environ.get("ME_BUCKET_FAQ")
ME_INDEX_ID_FAQ = os.environ.get("ME_INDEX_ID_FAQ")
ME_INDEX_ENDPOINT_ID_FAQ = os.environ.get("ME_INDEX_ENDPOINT_ID_FAQ")

def entry_point_http(request):
    request_json = request.get_json()

    # Extract the user input from the parameter sent by Dialogflow CX
    user_query = request_json.get('sessionInfo', {}).get('parameters', {}).get('user_query')

    # Models
    llm = VertexAI(
        model_name = "text-bison",
        temperature = 0.1  # test value
    )

    chat = ChatVertexAI(
        model_name = "chat-bison@001",
        temperature = 0.4,
        top_p = 0.8,
        top_k = 40,
        max_output_tokens = 500
    )

    embeddings = VertexAIEmbeddings(model_name="textembedding-gecko-multilingual@001")

    me_faqs = MatchingEngine.from_components(
        project_id=PROJECT_ID_ME,
        region=ME_REGION,
        gcs_bucket_name=ME_BUCKET_FAQ,
        embedding=embeddings,
        index_id=ME_INDEX_ID_FAQ,
        endpoint_id=ME_INDEX_ENDPOINT_ID_FAQ,
    )

    me_retriever = me_faqs.as_retriever(
        search_type="similarity",
        search_kwargs={
            "k": 2,
        },
    )

    faq_prompt = PromptTemplate(
        template=template_faq,
        input_variables=["context", "question", "few_shot_faq"]
    )

    chain = (
        RunnablePassthrough.assign(
            context=itemgetter("question") | me_retriever,
            question=itemgetter("question"),
            few_shot_faq=itemgetter("few_shot_faq"),
        )
        | faq_prompt
        | chat
        | StrOutputParser()
    )

    inputs = {"question": user_query, "few_shot_faq": few_shot_faq}
    faq_response = chain.invoke(inputs)

    print(f'LangChain response: {faq_response}')
    formatted_results = format_response(faq_response)
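    # NOTE: 'response' and 'headers' are not defined in this snippet; they are presumably
    # built elsewhere in main.py (not shown in the paste above).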
    response["fulfillment_response"]["messages"][0]["text"]["text"][0] = formatted_results

    return (response, 200, headers)

def format_response(results):
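    # NOTE: this expects a dict with 'answer' / 'sources' keys, but the chain above ends
    # with StrOutputParser(), which returns a plain string, so the two may not line up as pasted.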
    answer = results['answer']
    sources = results.get('sources', '')
    if sources != '':
        source_uri = sources
    else:
        source_documents = results.get('source_documents', '')
        if source_documents != '':
            source_uri = results['source_documents'][0].metadata['source']
        else:
            source_uri = 'Não encontrei uma fonte para essa pergunta.'
    formatted_response = f"{answer}\nSources: {source_uri}"

    return formatted_response

Suggestion:

No response

dosubot[bot] commented 11 months ago

🤖

I'm sorry to hear that you're experiencing this issue. The error message suggests that the send_message() method is receiving an unexpected keyword argument candidate_count. This could be due to a change in the method's signature in a recent update or a discrepancy between the version of LangChain you're using and the version in which the bug was fixed.
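
If the discrepancy is between the installed LangChain release and the Vertex AI SDK, one possible remedy is to upgrade google-cloud-aiplatform in the Cloud Function's dependencies so that ChatSession.send_message accepts candidate_count. The pin below is only a sketch; the minimum SDK release that added candidate_count is an assumption here and should be confirmed in the SDK release notes:

# requirements.txt for the Cloud Function (illustrative; the version floor below is an
# assumption, verify the release that added candidate_count before deploying)
langchain==0.0.348
google-cloud-aiplatform>=1.38.0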

To help me investigate this further, could you please provide the following additional information:

  1. The exact version of LangChain in which the bug was supposed to be fixed.
  2. Any changes you made to your code or environment prior to encountering this issue.
  3. The full traceback of the error, if available.

This information will help me understand the context better and provide a more accurate solution. Thank you for your cooperation.
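
As part of that, the snippet below can report which versions are actually installed in the Cloud Function environment (a sketch; the distribution names are the usual PyPI ones and are an assumption about what is deployed):

# Print installed versions of the packages involved (distribution names assumed).
from importlib.metadata import PackageNotFoundError, version

for pkg in ("langchain", "langchain-core", "google-cloud-aiplatform"):
    try:
        print(pkg, version(pkg))
    except PackageNotFoundError:
        print(pkg, "is not installed")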


This response is meant to be useful and save you time. It is not meant to be a precise solution, but rather a starting point for your own research.

Help me be more useful! Please leave a 👍 if this is helpful and 👎 if it is irrelevant.

If you want to continue the conversation, start your reply with @dosu-bot.