NVIDIA / GenerativeAIExamples

Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
Apache License 2.0

When I run /RetrievalAugmentedGeneration/examples/developer_rag/chains.py #158

Open Suiji12 opened 3 months ago

Suiji12 commented 3 months ago

My settings in rag-app-text-chatbot.yaml are:

```yaml
services:
  jupyter-server:
    container_name: notebook-server
    image: notebook-server:${TAG:-latest}
    build:
      context: ../../
      dockerfile: ./notebooks/Dockerfile.notebooks  # replace with the GPU-enabled Dockerfile ./notebooks/Dockerfile.gpu_notebook
    ports:

networks:
  default:
    name: nvidia-rag
```

What should I do?

Suiji12 commented 3 months ago

I hit an error:

```
C:\Users\jiaojiaxing.conda\envs\localgpt\python.exe E:\jiaojiaxing\GenerativeAIExamples\RetrievalAugmentedGeneration\examples\developer_rag\chains.py
C:\Users\jiaojiaxing.conda\envs\localgpt\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py:172: UserWarning: An API key is required for the hosted NIM. This will become an error in the future.
  warnings.warn(
C:\Users\jiaojiaxing.conda\envs\localgpt\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py:172: UserWarning: An API key is required for the hosted NIM. This will become an error in the future.
  warnings.warn(
Traceback (most recent call last):
  File "E:\jiaojiaxing\GenerativeAIExamples\RetrievalAugmentedGeneration\examples\developer_rag\chains.py", line 40, in <module>
    set_service_context()
  File "E:\jiaojiaxing\GenerativeAIExamples\RetrievalAugmentedGeneration\common\utils.py", line 115, in wrapper
    return func(*args_hashable, **kwargs_hashable)
  File "E:\jiaojiaxing\GenerativeAIExamples\RetrievalAugmentedGeneration\common\utils.py", line 122, in set_service_context
    llm = LangChainLLM(get_llm(**kwargs))
  File "E:\jiaojiaxing\GenerativeAIExamples\RetrievalAugmentedGeneration\common\utils.py", line 115, in wrapper
    return func(*args_hashable, **kwargs_hashable)
  File "E:\jiaojiaxing\GenerativeAIExamples\RetrievalAugmentedGeneration\common\utils.py", line 265, in get_llm
    return ChatNVIDIA(model=settings.llm.model_name,
  File "C:\Users\jiaojiaxing.conda\envs\localgpt\lib\site-packages\langchain_nvidia_ai_endpoints\chat_models.py", line 243, in __init__
    self._client = _NVIDIAClient(
  File "C:\Users\jiaojiaxing.conda\envs\localgpt\lib\site-packages\langchain_nvidia_ai_endpoints\_common.py", line 213, in __init__
    raise ValueError(
ValueError: Model ensemble is unknown, check available_models

Process finished with exit code 1
```
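The traceback shows two separate problems: `ChatNVIDIA` is falling back to the hosted NIM endpoint without an `NVIDIA_API_KEY`, and the configured model name `ensemble` is a local Triton deployment name, not a hosted catalog model. Not part of the original thread, but a minimal pre-flight sketch of both checks; the helper name `resolve_nvidia_config` and the example model `meta/llama3-8b-instruct` are assumptions for illustration:

```python
import os
from typing import Optional

def resolve_nvidia_config(model_name: str, api_key: Optional[str] = None) -> dict:
    """Sketch: validate settings before constructing ChatNVIDIA (assumption).

    Mirrors the two failures in the traceback: a missing API key for the
    hosted endpoint, and the Triton-only model name 'ensemble'.
    """
    key = api_key or os.environ.get("NVIDIA_API_KEY")
    if not key:
        # Without a key, the hosted NIM endpoint warns today and will error later.
        raise ValueError("Set NVIDIA_API_KEY before using the hosted endpoint")
    if model_name == "ensemble":
        # 'ensemble' only exists on a local Triton/NIM server; the hosted
        # endpoint expects a catalog name such as meta/llama3-8b-instruct.
        raise ValueError(
            "'ensemble' is not a hosted model; pass a catalog model name "
            "or point the example at your local inference server"
        )
    return {"model": model_name, "nvidia_api_key": key}
```

Usage: `resolve_nvidia_config("meta/llama3-8b-instruct")` would return the kwargs to forward to `ChatNVIDIA`, while the settings from the traceback would raise immediately with a clearer message.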