Closed: RevyaM closed this issue 5 months ago
Have the same error on a 3080 and Ryzen 5 3600, Win11.
Have the same error on Win11, Intel 10900K, and RTX 3090 FE. I downloaded the .zip, extracted, and installed. The install was successful. The first run looks like the following:
Environment path found: D:\NVIDIA_ChatWithRTX_Demo\env_nvd_rag
App running with config
{
"models": {
"supported": [
{
"name": "Mistral 7B int4",
"installed": true,
"metadata": {
"model_path": "model\\mistral\\mistral7b_int4_engine",
"engine": "llama_float16_tp1_rank0.engine",
"tokenizer_path": "model\\mistral\\mistral7b_hf",
"max_new_tokens": 1024,
"max_input_token": 7168,
"temperature": 0.1
}
},
{
"name": "Llama 2 13B int4",
"installed": true,
"metadata": {
"model_path": "model\\llama\\llama13_int4_engine",
"engine": "llama_float16_tp1_rank0.engine",
"tokenizer_path": "model\\llama\\llama13_hf",
"max_new_tokens": 1024,
"max_input_token": 3900,
"temperature": 0.1
}
}
],
"selected": "Mistral 7B int4"
},
"sample_questions": [
{
"query": "How does NVIDIA ACE generate emotional responses?"
},
{
"query": "What is Portal prelude RTX?"
},
{
"query": "What is important about Half Life 2 RTX?"
},
{
"query": "When is the launch date for Ratchet & Clank: Rift Apart on PC?"
}
],
"dataset": {
"sources": [
"directory",
"youtube",
"nodataset"
],
"selected": "directory",
"path": "dataset",
"isRelative": true
},
"strings": {
"directory": "Folder Path",
"youtube": "YouTube URL",
"nodataset": "AI model default"
}
}
Traceback (most recent call last):
File "D:\NVIDIA_ChatWithRTX_Demo\RAG\trt-llm-rag-windows-main\app.py", line 101, in <module>
llm = TrtLlmAPI(
File "D:\NVIDIA_ChatWithRTX_Demo\RAG\trt-llm-rag-windows-main\trt_llama_api.py", line 106, in __init__
runtime_rank = tensorrt_llm.mpi_rank()
File "D:\NVIDIA_ChatWithRTX_Demo\env_nvd_rag\lib\site-packages\tensorrt_llm\_utils.py", line 221, in mpi_rank
return mpi_comm().Get_rank()
File "D:\NVIDIA_ChatWithRTX_Demo\env_nvd_rag\lib\site-packages\tensorrt_llm\_utils.py", line 216, in mpi_comm
from mpi4py import MPI
ImportError: DLL load failed while importing MPI: The specified procedure could not be found.
Press any key to continue . . .
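For anyone debugging this: the failure can be reproduced without the app at all, since (per the traceback) tensorrt_llm.mpi_rank() does nothing more than "from mpi4py import MPI" before asking for the rank. A minimal sketch (run it with the env_nvd_rag interpreter; the script name is just for illustration):

# check_mpi.py - hypothetical diagnostic, not part of the shipped app.
# Reproduces the failing import in isolation; mpi4py loads the MS-MPI
# runtime DLL (msmpi.dll) at import time.
try:
    from mpi4py import MPI
    print("mpi4py OK, rank:", MPI.COMM_WORLD.Get_rank())
except ImportError as exc:
    # "The specified procedure could not be found" typically means a
    # missing or mismatched msmpi.dll on the PATH.
    print("mpi4py failed to load:", exc)

If this fails the same way, the problem is the MPI runtime, not ChatWithRTX itself.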
The same thing happens to me.
Me too, although the installation was successful. And when I execute pip install tensorrt_llm manually, it returns:
Collecting tensorrt_llm
Downloading tensorrt-llm-0.7.1.tar.gz (6.9 kB)
Installing build dependencies ... done
Getting requirements to build wheel ... error
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> [20 lines of output]
Traceback (most recent call last):
File "D:\Programs\Python311\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 353, in <module>
main()
File "D:\Programs\Python311\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 335, in main
json_out['return_val'] = hook(**hook_input['kwargs'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "D:\Programs\Python311\Lib\site-packages\pip\_vendor\pyproject_hooks\_in_process\_in_process.py", line 118, in get_requires_for_build_wheel
return hook(config_settings)
^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\AppData\Local\Temp\pip-build-env-xxx\overlay\Lib\site-packages\setuptools\build_meta.py", line 325, in get_requires_for_build_wheel
return self._get_build_requires(config_settings, requirements=['wheel'])
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "C:\Users\xxx\AppData\Local\Temp\pip-build-env-xxx\overlay\Lib\site-packages\setuptools\build_meta.py", line 295, in _get_build_requires
self.run_setup()
File "C:\Users\xxx\AppData\Local\Temp\pip-build-env-xxx\overlay\Lib\site-packages\setuptools\build_meta.py", line 480, in run_setup
super().run_setup(setup_script=setup_script)
File "C:\Users\xxx\AppData\Local\Temp\pip-build-env-xxx\overlay\Lib\site-packages\setuptools\build_meta.py", line 311, in run_setup
exec(code, locals())
File "<string>", line 90, in <module>
RuntimeError: Bad params
[end of output]
note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error
× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.
I installed Microsoft MPI, which resolved the "ImportError: DLL load failed while importing MPI" issue for me: https://www.microsoft.com/en-us/download/details.aspx?id=57467. Only install the setup.exe; the SDK doesn't seem to be needed.
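To confirm the runtime actually resolves after installing it, a quick check (a sketch; run with the env_nvd_rag interpreter, assuming a default MS-MPI install):

# Hypothetical post-install check: locate msmpi.dll on the PATH and print
# the MPI library version that mpi4py links against.
import ctypes.util
print("msmpi.dll resolves to:", ctypes.util.find_library("msmpi"))
from mpi4py import MPI
print(MPI.Get_library_version())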
I fixed the pip install tensorrt_llm "RuntimeError: Bad params" failure above by changing my Python version (in my Path variable) from 311 to 310. I happened to have 310 already installed, so I just updated the environment variable.
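If you're unsure which interpreter pip would pick up, a quick sanity check (a sketch; the tensorrt_llm 0.7.x Windows builds target Python 3.10, which is presumably why the 311 -> 310 switch avoids the failing source build above):

# Hypothetical check: print the running interpreter's version and the first
# python on the PATH, i.e. the one "pip install tensorrt_llm" builds against.
import sys, shutil
print("running under:", sys.version)
print("first python on PATH:", shutil.which("python"))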
I installed Microsoft MPI as suggested above and then tried installing ChatWithRTX again... it fails in the same way as before:
Environment path found: C:\Users\A-A-Ron\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag
App running with config
[same config output as in the first comment above, except "Llama 2 13B int4" shows "installed": false]
Traceback (most recent call last):
File "C:\Users\A-A-Ron\AppData\Local\NVIDIA\ChatWithRTX\RAG\trt-llm-rag-windows-main\app.py", line 101, in <module>
llm = TrtLlmAPI(
File "C:\Users\A-A-Ron\AppData\Local\NVIDIA\ChatWithRTX\RAG\trt-llm-rag-windows-main\trt_llama_api.py", line 106, in __init__
runtime_rank = tensorrt_llm.mpi_rank()
File "C:\Users\A-A-Ron\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\lib\site-packages\tensorrt_llm\_utils.py", line 221, in mpi_rank
return mpi_comm().Get_rank()
File "C:\Users\A-A-Ron\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\lib\site-packages\tensorrt_llm\_utils.py", line 216, in mpi_comm
from mpi4py import MPI
ImportError: DLL load failed while importing MPI: The specified procedure could not be found.
Press any key to continue . . .
I installed the latest version and I get this error: ModuleNotFoundError: No module named 'tensorrt_llm'. Here is the full output:
Environment path found: C:\Users\mahab\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag
App running with config
[same config output as in the first comment above]
Traceback (most recent call last):
File "R:\RTX_LLM\ChatWithRTX\RAG\trt-llm-rag-windows-main\app.py", line 28, in <module>
from trt_llama_api import TrtLlmAPI
File "R:\RTX_LLM\ChatWithRTX\RAG\trt-llm-rag-windows-main\trt_llama_api.py", line 42, in <module>
from utils import (DEFAULT_HF_MODEL_DIRS, DEFAULT_PROMPT_TEMPLATES,
File "R:\RTX_LLM\ChatWithRTX\RAG\trt-llm-rag-windows-main\utils.py", line 22, in <module>
import tensorrt_llm
ModuleNotFoundError: No module named 'tensorrt_llm'
Press any key to continue . . .
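A quick way to see whether the TensorRT packages ever made it into the environment (a sketch; run it with the env_nvd_rag interpreter):

# Hypothetical diagnostic: list every installed distribution whose name
# mentions tensorrt. No output means the env exists but the packages were
# never installed into it.
from importlib.metadata import distributions
for dist in distributions():
    name = dist.metadata["Name"] or ""
    if "tensorrt" in name.lower():
        print(name, dist.version)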
Don't know if this might help anyone, but I was having the same issue as OP. The first few installation attempts of Chat with RTX failed; on the third attempt it installed. Running the application resulted in the ModuleNotFoundError for tensorrt (or tensorrt_llm).
I opened Anaconda Navigator and noticed that there were three environments named env_nvd_rag. Two of them had no mention of tensorrt (you can search for it in the Navigator with the environment selected), so I deleted them. The remaining one had the various tensorrt, tensorrt-llm, etc. packages. I tried running the Chat with RTX app again and it worked.
(Windows 11)
Good luck
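The same check can be done without the Navigator; a minimal sketch, assuming conda is on the PATH and the default Windows env layout:

# Hypothetical script: list every conda env named env_nvd_rag and report
# which one actually contains the tensorrt_llm package.
import json, os, subprocess
out = subprocess.run(["conda", "env", "list", "--json"],
                     capture_output=True, text=True, check=True)
for env in json.loads(out.stdout)["envs"]:
    if "env_nvd_rag" in os.path.basename(env):
        pkg = os.path.join(env, "Lib", "site-packages", "tensorrt_llm")
        print(env, "->", "has tensorrt_llm" if os.path.isdir(pkg) else "MISSING tensorrt_llm")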
Following the Anaconda Navigator suggestion above: I installed Anaconda Navigator and set it as the default Python 3.11. I went in, saw only one env_nvd_rag, which DID have tensorrt, and then ran Chat with RTX. For some reason it did work afterwards?
For the Win11 / 10900K / RTX 3090 FE report above: I just used the MPI installer included in ChatWithRTX_installer_3_5.zip\RAG, named msmpisetup.exe, after restarting my PC. Also, I saw that you installed MPI from the Microsoft website (in an earlier comment); maybe uninstalling that and installing the one provided by NVIDIA would help.
I have encountered the same issue with "ImportError: DLL load failed while importing MPI: The specified procedure could not be found." and found a solution. If ChatWithRTX is installed in your home directory and conda is installed in ProgramData, you can run this command in Command Prompt to fix it:
C:\ProgramData\miniconda3\Scripts\conda.exe install -p "%USERPROFILE%\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag" -c intel mpi4py
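To verify the install took effect, you can run the environment's own interpreter directly (assuming the default install path): %USERPROFILE%\AppData\Local\NVIDIA\ChatWithRTX\env_nvd_rag\python.exe -c "from mpi4py import MPI; print(MPI.Get_version())"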
The solution is to install Chat with RTX on C:. I had installed it on D: and got the same error as all of you; now the first installation is loading :D
], "dataset": { "sources": [ "directory", "nodataset" ], "selected": "directory", "path": "dataset", "isRelative": true }, "strings": { "directory": "Folder Path", "nodataset": "AI model default" } } .gitattributes: 100%|█████████████████████████████████████████████████████████████████████| 1.52k/1.52k [00:00<?, ?B/s] 1_Pooling/config.json: 100%|██████████████████████████████████████████████████████████████████| 297/297 [00:00<?, ?B/s] README.md: 100%|██████████████████████████████████████████████████████████████████| 64.2k/64.2k [00:00<00:00, 52.9MB/s] config.json: 100%|████████████████████████████████████████████████████████████████████████████| 655/655 [00:00<?, ?B/s] config_sentence_transformers.json: 100%|███████████████████████████████████████████████| 171/171 [00:00<00:00, 171kB/s] model.safetensors: 100%|██████████████████████████████████████████████████████████| 1.34G/1.34G [00:48<00:00, 27.4MB/s] model.onnx: 38%|████████████████████████▊
Re the duplicate env_nvd_rag environments fix above: I had the same issue, also had a couple of install failures, and this fixed it for me. Thanks!
We just released an updated version, 0.3. Please use that branch and follow the README at https://github.com/NVIDIA/ChatRTX/blob/release/0.3/README.md to set up the application.
Have this error:
Traceback (most recent call last):
File "F:\Programs\RTXChat\RAG\trt-llm-rag-windows-main\app.py", line 28, in <module>
from trt_llama_api import TrtLlmAPI
File "F:\Programs\RTXChat\RAG\trt-llm-rag-windows-main\trt_llama_api.py", line 42, in <module>
from utils import (DEFAULT_HF_MODEL_DIRS, DEFAULT_PROMPT_TEMPLATES,
File "F:\Programs\RTXChat\RAG\trt-llm-rag-windows-main\utils.py", line 22, in <module>
import tensorrt_llm
File "F:\Programs\ChatWithRTX\env_nvd_rag\lib\site-packages\tensorrt_llm\__init__.py", line 15, in <module>
import tensorrt_llm.functional as functional
File "F:\Programs\ChatWithRTX\env_nvd_rag\lib\site-packages\tensorrt_llm\functional.py", line 26, in <module>
import tensorrt as trt
ModuleNotFoundError: No module named 'tensorrt'
Not sure if it's worth mentioning, but the first install failed while building Mistral; this one, however, completed installation successfully, it just won't launch.
Win10, RTX 3060 Ti, i5-12400F, installed through the .exe from the NVIDIA site.