-
**问题描述 / Problem Description**
After building the image from the Dockerfile, the container starts successfully with the docker command and is accessible.
The docker command executed is as follows:
`docker run -d --gpus all -v /home/chatglm3-6b:/Langchain-Chatchat/chatglm3-6b -p 8501:8501 registry.cn-hangzho…`
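For orientation, here is a minimal sketch of a docker run invocation of this shape; `<image>` is a placeholder for the truncated registry path above, not the actual image name:
```
# Minimal sketch; <image> stands in for the truncated registry path above.
# --gpus all    expose all host GPUs to the container
# -v ...        mount the local chatglm3-6b weights into the container
# -p 8501:8501  publish the web UI port
docker run -d --gpus all \
  -v /home/chatglm3-6b:/Langchain-Chatchat/chatglm3-6b \
  -p 8501:8501 \
  <image>
```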
-
```
torchrun --nproc-per-node=8 --nnodes=1 --node_rank=0 --master_addr 10.255.xxx.xxx --master_port 8109 run.py --data LLaVABench --model llava-internlm2-20b --verbose
```
But I met the following problem b…
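For context, the flags above set up a single-node, eight-process launch. Below is a sketch of what each rendezvous flag controls, plus the equivalent single-process invocation (assuming run.py can also be launched directly with python, as VLMEvalKit's README suggests):
```
# torchrun flags used above:
#   --nproc-per-node=8           spawn 8 worker processes (one per GPU) on this node
#   --nnodes=1                   total number of nodes in the job
#   --node_rank=0                rank of this node
#   --master_addr/--master_port  rendezvous endpoint for the process group
# Sketch: the same evaluation as a single process on one GPU.
python run.py --data LLaVABench --model llava-internlm2-20b --verbose
```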
-
### Checklist
- [ ] 1. I have searched related issues but cannot get the expected help.
- [ ] 2. The bug has not been fixed in the latest version.
### Describe the bug
I ran a performance-bottleneck analysis on InternVL 1.5 and found that at BS…
-
Hi,
Can we use Serper or Bing search?
Also, can we tell the search agent a minimum retrieved-content count, to increase the level of detail?
I always get these errors, and the final summary can't…
-
```
Traceback (most recent call last):
  File "/root/.conda/envs/InternLM2_Huixiangdou/lib/python3.10/site-packages/aiohttp/web_protocol.py", line 452, in _handle_request
    resp = await request_handle…
```
-
### Motivation
Currently, if the model name is passed as a path to lmdeploy:
```
docker run -d --runtime nvidia --gpus '"device=0"' \
-v ~/.cache/huggingface:/root/.cache/huggingface \
--env "HUGGING…
-
### Describe the bug
It fails every time with CUDA out of memory. We are using an NVIDIA A100 GPU (with 24 vCPUs and 220 GiB of memory).
### Environment
-
### Describe the question.
I hope internlm2 can be supported in fastchat.
-