langchain-ai / langchain

🦜🔗 Build context-aware reasoning applications
https://python.langchain.com
MIT License
88.85k stars 13.98k forks

LangChain AzureOpenAIEmbeddings issue when passing list of ints vs [str] #23268

Open DanielV1992 opened 2 weeks ago

DanielV1992 commented 2 weeks ago

Checked other resources

Example Code

import os
from configparser import ConfigParser
from typing import Mapping

import httpx
from dotenv import load_dotenv
from langchain_openai import AzureOpenAIEmbeddings


def set_embed(self, query: str) -> None:
    # Load model configurations
    load_dotenv(self.model_conf)

    # Load app configurations
    config = ConfigParser(interpolation=None) 
    config.read('app.ini')
    apim_params = config['apim']

    # Set env variables
    os.environ["OPENAI_API_KEY"] = apim_params['OPENAI_API_KEY']

    # Set emb_model name variables
    #embed_model = os.getenv('EMBEDDING_MODEL_TYPE')
    embed_model = os.getenv('MODEL_TYPE_EMBEDDING')
    print(embed_model)

    # Set apim request parameters
    params: Mapping[str, str] = {
        'api-version': os.getenv('OPENAI_API_VERSION')
    }   
    headers: Mapping[str, str] = {
        'Content-Type': apim_params['CONTENT_TYPE'],
        'Ocp-Apim-Subscription-Key': os.getenv('OCP-APIM-SUBSCRIPTION-KEY')
    }
    client = httpx.Client(
        base_url = os.getenv('AZURE_OPENAI_ENDPOINT'),
        params = params,
        headers = headers,
        verify = apim_params['CERT_PATH']
    )

    print(client.params)
    print(client.headers)
    try:
        # Load embedding model
        self.embed = AzureOpenAIEmbeddings(
            model='text-embedding-ada-002',
            azure_deployment=embed_model,
            chunk_size=2048,
            http_client=client,
        )
        print(self.embed)
        result = self.embed.embed_query(query)

        print(f'{embed_model} model initialized')

    except Exception as e:
        raise Exception(f'ApimUtils-set_embed : Error while initializing embedding model - {e}')

Error Message and Stack Trace (if applicable)

ApimUtils-set_embed : Error while initializing embedding model - Error code: 400 - {'statusCode': 400, 'message': "Unable to parse and estimate tokens from incoming request. Please ensure incoming request is of one of the following types: 'Chat Completion', 'Completion', 'Embeddings' and works with current prompt estimation mode of 'Auto'."}

Description

When using the AzureOpenAIEmbeddings class with our Azure APIM instance in front of our Azure OpenAI services, requests break inside our APIM policy that captures/calculates prompt/completion tokens from the request. We believe this is because the AzureOpenAIEmbeddings class sends the input as a list of token IDs, e.g. b'{"input": [[3923, 374, 279, 4611, 96462, 46295, 58917, 30]], "model": "text-embedding-ada-002", "encoding_format": "base64"}', rather than as a list of strings ([str]) built from the query text.
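To make the shape difference concrete, here is a minimal stdlib-only sketch (the token-ID payload is copied from the request body above; the string query text and the classifier helper are hypothetical, written to mimic what a gateway-side token estimator would have to distinguish):

```python
import json

# Two shapes the "input" field of an /embeddings request body can take.
# LangChain tokenizes the text client-side and sends token IDs (inner lists
# of ints); a plain string payload sends the raw text instead.
token_id_payload = {
    "input": [[3923, 374, 279, 4611, 96462, 46295, 58917, 30]],
    "model": "text-embedding-ada-002",
    "encoding_format": "base64",
}
string_payload = {
    "input": ["What is the pythagorean theorem?"],  # hypothetical query text
    "model": "text-embedding-ada-002",
}

def input_kind(payload: dict) -> str:
    """Classify the 'input' field the way a token estimator might."""
    first = payload["input"][0]
    if isinstance(first, str):
        return "list-of-strings"
    if isinstance(first, list) and all(isinstance(t, int) for t in first):
        return "list-of-token-ids"
    return "unknown"

print(input_kind(token_id_payload))   # → list-of-token-ids
print(input_kind(string_payload))     # → list-of-strings
print(json.dumps(token_id_payload)[:30])
```

A policy that only knows how to count tokens in the string form would reject the token-ID form with exactly the 400 shown above.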

System Info

System Information

OS: Windows OS Version: 10.0.22631 Python Version: 3.11.5 (tags/v3.11.5:cce6ba9, Aug 24 2023, 14:38:34) [MSC v.1936 64 bit (AMD64)]

Package Information

langchain_core: 0.2.5 langchain: 0.2.3 langchain_community: 0.2.4 langsmith: 0.1.75 langchain_openai: 0.1.8 langchain_text_splitters: 0.2.1

nuernber commented 1 week ago

Just ran into this issue as well... I've also set the azure-openai-emit-token-metric APIM policy. When the APIM setting that supports custom metrics is enabled, this error occurs; when I disable the custom metrics, it does not. When I use the openai Python library directly to get embeddings, there's been no error so far. So the workaround is to use the openai Python library directly for embeddings if you want to keep using the azure-openai-emit-token-metric APIM policy.

nuernber commented 1 week ago

Another workaround appears to be simply setting check_embedding_ctx_length=False; see https://api.python.langchain.com/en/latest/embeddings/langchain_openai.embeddings.base.OpenAIEmbeddings.html#langchain_openai.embeddings.base.OpenAIEmbeddings.check_embedding_ctx_length
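A stdlib-only model of what that flag changes, based on the behavior described in the linked API docs (the function and stand-in tokenizer here are hypothetical, for illustration only): when check_embedding_ctx_length is True (the default), texts are tokenized client-side and sent as lists of token IDs; when False, the raw strings are forwarded, which APIM's token estimator can parse.

```python
def build_input(texts, check_embedding_ctx_length=True, tokenize=None):
    """Model the 'input' field LangChain would send, per the flag."""
    if not check_embedding_ctx_length:
        return texts  # raw strings pass through unchanged
    # Stand-in tokenizer (the real library uses tiktoken).
    tokenize = tokenize or (lambda t: [ord(c) for c in t])
    return [tokenize(t) for t in texts]

print(build_input(["hello"], check_embedding_ctx_length=False))  # → ['hello']
print(build_input(["hi"]))  # → [[104, 105]]
```

With the real class, the fix is just passing the keyword argument, e.g. AzureOpenAIEmbeddings(..., check_embedding_ctx_length=False).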

bastiaanvandenbussche commented 22 hours ago

@nuernber - you are a lifesaver! Building a whole AI platform with APIM and token consumption mechanisms, only to have this ambiguous error, was annoying. Yet your solution is easy to implement and has proven to work. Thanks a lot!