sleepwalker2017 opened this issue 9 months ago
Hi @sleepwalker2017, this is fixed on the main branch (PR #465), but v0.6.1 doesn't include the fix:
https://github.com/NVIDIA/TensorRT-LLM/blob/main/tensorrt_llm/profiler.py#L150.
Please install pynvml>=11.5.0 and psutil to avoid the issue. Thanks.
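The two dependencies mentioned above can be installed in one step (package names and version bound taken from the comment; quoting the specifier keeps the shell from interpreting `>=`):

```shell
pip install "pynvml>=11.5.0" psutil
```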
The problem isn't fully solved; this bug still occurs:
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py", line 74, in _wrap
    fn(i, *args)
  File "/data/weilong.yu/TRT-LLM-0.6/examples/llama/build.py", line 737, in build
    profiler.check_gpt_mem_usage(
  File "/usr/local/lib/python3.10/dist-packages/tensorrt_llm/builder.py", line 48, in decorated
    return f(*args, **kwargs)
  File "/usr/local/lib/python3.10/dist-packages/tensorrt_llm/profiler.py", line 314, in check_gpt_mem_usage
    logger.warning(
TypeError: Logger.warning() takes 2 positional arguments but 3 were given
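The TypeError suggests `profiler.py` calls `logger.warning()` printf-style (a format string plus extra arguments), while TensorRT-LLM's logger accepts only a single message string. A minimal sketch reproducing the mismatch (the `Logger` class here is a stand-in for illustration, not the real tensorrt_llm logger):

```python
# Stand-in for a logger whose warning() takes exactly one message argument.
class Logger:
    def warning(self, message):
        print(f"[WARNING] {message}")

logger = Logger()

# Buggy printf-style call, as the traceback indicates:
# logger.warning("memory usage: %d GiB", 24)
# -> TypeError: Logger.warning() takes 2 positional arguments but 3 were given

# Workaround: format the message before handing it to the logger.
logger.warning("memory usage: %d GiB" % 24)
```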
On the 0.6.1 release branch:
Traceback (most recent call last):
  File "D:\AI\TensorRT-LLM\examples\chatglm\build.py", line 775, in <module>
    run_build()
  File "D:\AI\TensorRT-LLM\examples\chatglm\build.py", line 767, in run_build
    build(0, args)
  File "D:\AI\TensorRT-LLM\examples\chatglm\build.py", line 723, in build
    check_gpt_mem_usage(
  File "C:\Users\hucd\.conda\envs\trllm\lib\site-packages\tensorrt_llm\builder.py", line 48, in decorated
    return f(*args, **kwargs)
  File "C:\Users\hucd\.conda\envs\trllm\lib\site-packages\tensorrt_llm\profiler.py", line 312, in check_gpt_mem_usage
    _, _, total_mem = device_memory_info(torch.cuda.current_device())
TypeError: cannot unpack non-iterable NoneType object
So how do we solve this problem?
GPU: 2×V100
convert command