NVIDIA / NeMo-Guardrails

NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems.

Error while execution generate_user_intent: Connection error. #628

Closed by AadarshBhalerao 4 months ago

AadarshBhalerao commented 4 months ago

This is my .py file

import os
import nest_asyncio
import asyncio
from config import *
from langchain.chat_models import AzureChatOpenAI
from nemoguardrails import LLMRails, RailsConfig
# import logging

# logging.basicConfig(level=logging.DEBUG)
# Apply nest_asyncio
nest_asyncio.apply()

# Define LLM and parameters to pass to the guardrails configuration
try:
    chat_model = AzureChatOpenAI(
        openai_api_type=OPENAI_API_TYPE,
        openai_api_version=OPENAI_API_VERSION,  
        openai_api_key=OPENAI_API_KEY,
        azure_deployment=AZURE_DEPLOYMENT,
        openai_api_base=OPENAI_API_BASE
    )
    print("Chat model instantiated successfully.")
except Exception as e:
    print(f"Error instantiating chat model: {e}")

# Load configuration
try:
    config = RailsConfig.from_path("/mnt/c/D/Python-dev3/rpa-infra/response_time/execution-eproc/Guardrails/Simple Bot/config")
    print("Configuration loaded successfully.")
except Exception as e:
    print(f"Error loading configuration: {e}")

# Configuration of LLMs is passed
try:
    app = LLMRails(config=config, llm=chat_model)
    print("LLM Rails app instantiated successfully.")
except Exception as e:
    print(f"Error instantiating LLM Rails app: {e}")

# Define an asynchronous function to handle the generation
async def generate_message():
    try:
        new_message = await app.generate_async(messages=[{
            "role": "user",
            "content": "What are life and health insurances?"
        }])
        print(f"new_message: {new_message}")
    except Exception as e:
        print(f"Error generating message: {e}")

# Run the asynchronous function
try:
    asyncio.run(generate_message())
except Exception as e:
    print(f"Error running async function: {e}")

Here is the config.yml

models:
  - type: main
    engine: openai
    model: gpt-3.5-turbo

instructions:
  - type: general
    content: |
      You are an AI assistant that supports employees at an insurance company's customer support center.

But when I run the .py file in debug mode, I get this error:

httpx.ConnectError: [Errno -3] Temporary failure in name resolution
DEBUG:openai._base_client:Raising connection error
WARNING:nemoguardrails.actions.action_dispatcher:Error while execution generate_user_intent: Connection error.
INFO:nemoguardrails.colang.v1_0.runtime.runtime:Processing event: {'type': 'hide_prev_turn'}
INFO:nemoguardrails.colang.v1_0.runtime.runtime:Event :: hide_prev_turn {}
INFO:nemoguardrails.rails.llm.llmrails:--- :: Total processing took 9.25 seconds. LLM Stats: 1 total calls, 0 total time, 0 total tokens, 0 total prompt tokens, 0 total completion tokens, [] as latencies
new_message: {'role': 'assistant', 'content': "I'm sorry, an internal error has occurred."}
Pouyanpi commented 4 months ago

Hello @AadarshBhalerao, it would be greatly appreciated if you could use code blocks when sharing code snippets. This will enhance readability and make it easier for us to understand the issue. For future reference, you can do the same as I did with your comment.

Now, just to make sure: can you use the chat_model in isolation? Something like

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
chat_model.invoke(messages)

as shown in LangChain's documentation.
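If the synchronous call works, it may also be worth exercising the async path, since NeMo Guardrails invokes the model asynchronously (via generate_async). A minimal sketch, reusing the same chat_model and messages from the snippet above:

import asyncio

async def smoke_test():
    # ainvoke exercises the same async client path that generate_async relies on
    result = await chat_model.ainvoke(messages)
    print(result.content)

asyncio.run(smoke_test())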

AadarshBhalerao commented 4 months ago

Thanks for the reply.

Here's the code:

from langchain.chat_models import AzureChatOpenAI
from config import *

chat_model = AzureChatOpenAI(
    openai_api_type=OPENAI_API_TYPE,
    openai_api_version=OPENAI_API_VERSION,
    openai_api_key=OPENAI_API_KEY,
    azure_deployment=AZURE_DEPLOYMENT,
    openai_api_base=OPENAI_API_BASE
)

messages = [
    (
        "system",
        "You are a helpful assistant that translates English to French. Translate the user sentence.",
    ),
    ("human", "I love programming."),
]
chat_model.invoke(messages)

And here's the error message

Traceback (most recent call last):
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 69, in map_httpcore_exceptions
    yield
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 233, in handle_request
    resp = self._pool.handle_request(req)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 216, in handle_request
    raise exc from None
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection_pool.py", line 196, in handle_request
    response = connection.handle_request(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 99, in handle_request
    raise exc
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 76, in handle_request
    stream = self._connect(request)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_sync/connection.py", line 122, in _connect
    stream = self._network_backend.connect_tcp(**kwargs)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_backends/sync.py", line 205, in connect_tcp
    with map_exceptions(exc_map):
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpcore/_exceptions.py", line 14, in map_exceptions
    raise to_exc(exc) from exc
httpcore.ConnectError: [Errno -3] Temporary failure in name resolution

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 978, in _request
    response = self._client.send(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 914, in send
    response = self._send_handling_auth(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 942, in _send_handling_auth
    response = self._send_handling_redirects(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 979, in _send_handling_redirects
    response = self._send_single_request(request)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_client.py", line 1015, in _send_single_request
    response = transport.handle_request(request)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 232, in handle_request
    with map_httpcore_exceptions():
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/contextlib.py", line 153, in __exit__
    self.gen.throw(typ, value, traceback)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/httpx/_transports/default.py", line 86, in map_httpcore_exceptions
    raise mapped_exc(message) from exc
httpx.ConnectError: [Errno -3] Temporary failure in name resolution

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/mnt/c/D/Python-dev3/rpa-infra/response_time/execution-eproc/Guardrails/Simple Bot/config/github.py", line 19, in <module>
    chat_model.invoke(messages)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 158, in invoke
    self.generate_prompt(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 560, in generate_prompt
    return self.generate(prompt_messages, stop=stop, callbacks=callbacks, **kwargs)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 421, in generate
    raise e
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 411, in generate
    self._generate_with_cache(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_core/language_models/chat_models.py", line 632, in _generate_with_cache
    result = self._generate(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 441, in _generate
    response = self.completion_with_retry(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/langchain_community/chat_models/openai.py", line 356, in completion_with_retry
    return self.client.create(**kwargs)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_utils/_utils.py", line 277, in wrapper
    return func(*args, **kwargs)
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/resources/chat/completions.py", line 643, in create
    return self._post(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1266, in post
    return cast(ResponseT, self.request(cast_to, opts, stream=stream, stream_cls=stream_cls))
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 942, in request
    return self._request(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1002, in _request
    return self._retry_request(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1079, in _retry_request
    return self._request(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1002, in _request
    return self._retry_request(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1079, in _retry_request
    return self._request(
  File "/home/aadarshbhalerao/miniconda3/envs/nemo_gr/lib/python3.10/site-packages/openai/_base_client.py", line 1012, in _request
    raise APIConnectionError(request=request) from err
openai.APIConnectionError: Connection error.

I now understand that this may not be a NeMo Guardrails issue. But can you advise how to solve this?

Pouyanpi commented 4 months ago

Thanks for sharing the output 👍🏻 You are right! The best option is to open an issue on LangChain or even OpenAI, but it seems that you have a DNS issue.

Here is OpenAI's suggestion; I can suggest the following:

Use a tool like nslookup to test DNS resolution for api.openai.com:

nslookup api.openai.com

If the DNS resolution fails, that indicates an issue with your DNS configuration or network.
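If nslookup is not available in your environment, the same check can be done with Python's standard library. A minimal sketch (api.openai.com is used as in the command above; for Azure OpenAI you would resolve your own endpoint host instead):

import socket

host = "api.openai.com"  # for Azure OpenAI, check your endpoint host instead
try:
    # getaddrinfo raises socket.gaierror with the same [Errno -3]
    # "Temporary failure in name resolution" seen in the traceback above
    addrs = socket.getaddrinfo(host, 443)
    print(host, "->", sorted({a[4][0] for a in addrs}))
except socket.gaierror as e:
    print(host, "failed:", e)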

Again, if none of these help, the best option is to ask at the relevant GitHub project. Let us know how you finally resolve this problem.

Thanks!