vllm-project / vllm

A high-throughput and memory-efficient inference and serving engine for LLMs
https://docs.vllm.ai
Apache License 2.0

[Bug]: No available block found in 60 second. #7145

Closed blackblue9 closed 1 month ago

blackblue9 commented 1 month ago

Your current environment

Collecting environment information...
PyTorch version: 2.3.1+cu121
Is debug build: False
CUDA used to build PyTorch: 12.1
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.4 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Clang version: Could not collect
CMake version: version 3.30.1
Libc version: glibc-2.31

Python version: 3.10.13 (main, Sep 11 2023, 13:44:35) [GCC 11.2.0] (64-bit runtime)
Python platform: Linux-4.18.0-193.14.2.el8_2.x86_64-x86_64-with-glibc2.31
Is CUDA available: True
CUDA runtime version: 12.2.140
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA A800-SXM4-80GB
GPU 1: NVIDIA A800-SXM4-80GB
GPU 2: NVIDIA A800-SXM4-80GB
GPU 3: NVIDIA A800-SXM4-80GB
GPU 4: NVIDIA A800-SXM4-80GB
GPU 5: NVIDIA A800-SXM4-80GB
GPU 6: NVIDIA A800-SXM4-80GB
GPU 7: NVIDIA A800-SXM4-80GB

Nvidia driver version: 535.129.03
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.4.0
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.4.0
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture:                    x86_64
CPU op-mode(s):                  32-bit, 64-bit
Byte Order:                      Little Endian
Address sizes:                   46 bits physical, 57 bits virtual
CPU(s):                          128
On-line CPU(s) list:             0-127
Thread(s) per core:              2
Core(s) per socket:              32
Socket(s):                       2
NUMA node(s):                    2
Vendor ID:                       GenuineIntel
CPU family:                      6
Model:                           106
Model name:                      Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz
Stepping:                        6
Frequency boost:                 enabled
CPU MHz:                         3500.000
CPU max MHz:                     3500.0000
CPU min MHz:                     800.0000
BogoMIPS:                        6000.00
Virtualization:                  VT-x
L1d cache:                       3 MiB
L1i cache:                       2 MiB
L2 cache:                        80 MiB
L3 cache:                        96 MiB
NUMA node0 CPU(s):               0-31,64-95
NUMA node1 CPU(s):               32-63,96-127
Vulnerability Itlb multihit:     Not affected
Vulnerability L1tf:              Not affected
Vulnerability Mds:               Not affected
Vulnerability Meltdown:          Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1:        Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2:        Vulnerable, IBPB: disabled, STIBP: disabled
Vulnerability Srbds:             Not affected
Vulnerability Tsx async abort:   Not affected
Flags:                           fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf pni pclmulqdq dtes64 ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb cat_l3 invpcid_single ssbd mba ibrs ibpb stibp ibrs_enhanced tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdt_a avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb intel_pt avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local wbnoinvd dtherm ida arat pln pts avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg tme avx512_vpopcntdq la57 rdpid md_clear pconfig flush_l1d arch_capabilities

Versions of relevant libraries:
[pip3] numpy==1.26.4
[pip3] torch==2.3.1
[pip3] torchvision==0.18.1
[pip3] triton==2.3.1
[conda] numpy                     1.26.4                   pypi_0    pypi
[conda] torch                     2.3.1                    pypi_0    pypi
[conda] torchvision               0.18.1                   pypi_0    pypi
[conda] triton                    2.3.1                    pypi_0    pypi
ROCM Version: Could not collect
Neuron SDK Version: N/A
vLLM Version: 0.3.3
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled; Neuron: Disabled
GPU Topology:
GPU0    GPU1    GPU2    GPU3    GPU4    GPU5    GPU6    GPU7    NIC0    NIC1    NIC2    NIC3    NIC4    NIC5    NIC6    CPU Affinity    NUMA Affinity   GPU NUMA ID
GPU0     X      NV8     NV8     NV8     NV8     NV8     NV8     NV8     PXB     PXB     SYS     SYS     SYS     SYS     SYS     0-31,64-95      0               N/A
GPU1    NV8      X      NV8     NV8     NV8     NV8     NV8     NV8     PXB     PXB     SYS     SYS     SYS     SYS     SYS     0-31,64-95      0               N/A
GPU2    NV8     NV8      X      NV8     NV8     NV8     NV8     NV8     SYS     SYS     PXB     SYS     SYS     SYS     SYS     0-31,64-95      0               N/A
GPU3    NV8     NV8     NV8      X      NV8     NV8     NV8     NV8     SYS     SYS     PXB     SYS     SYS     SYS     SYS     0-31,64-95      0               N/A
GPU4    NV8     NV8     NV8     NV8      X      NV8     NV8     NV8     SYS     SYS     SYS     SYS     SYS     PXB     SYS     32-63,96-127    1               N/A
GPU5    NV8     NV8     NV8     NV8     NV8      X      NV8     NV8     SYS     SYS     SYS     SYS     SYS     PXB     SYS     32-63,96-127    1               N/A
GPU6    NV8     NV8     NV8     NV8     NV8     NV8      X      NV8     SYS     SYS     SYS     SYS     SYS     SYS     PXB     32-63,96-127    1               N/A
GPU7    NV8     NV8     NV8     NV8     NV8     NV8     NV8      X      SYS     SYS     SYS     SYS     SYS     SYS     PXB     32-63,96-127    1               N/A
NIC0    PXB     PXB     SYS     SYS     SYS     SYS     SYS     SYS      X      PIX     SYS     SYS     SYS     SYS     SYS
NIC1    PXB     PXB     SYS     SYS     SYS     SYS     SYS     SYS     PIX      X      SYS     SYS     SYS     SYS     SYS
NIC2    SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS     SYS     SYS     SYS
NIC3    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      PIX     SYS     SYS
NIC4    SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     SYS     PIX      X      SYS     SYS
NIC5    SYS     SYS     SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS     SYS     SYS     SYS      X      SYS
NIC6    SYS     SYS     SYS     SYS     SYS     SYS     PXB     PXB     SYS     SYS     SYS     SYS     SYS     SYS      X

Legend:

  X    = Self
  SYS  = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
  NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
  PHB  = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
  PXB  = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
  PIX  = Connection traversing at most a single PCIe bridge
  NV#  = Connection traversing a bonded set of # NVLinks

NIC Legend:

  NIC0: mlx5_0
  NIC1: mlx5_1
  NIC2: mlx5_2
  NIC3: mlx5_3
  NIC4: mlx5_4
  NIC5: mlx5_5
  NIC6: mlx5_6

🐛 Describe the bug

When I run the following code, it reports an error after two or three iterations:

from vllm import LLM, SamplingParams

stop_tokens=[
    "<|im_start|>",
    "<|im_end|>",
    "<|endoftext|>",
    "Assistant",
    "assistant"
]

qwen_stop_token_ids= [
    151643,
    151644,
    151645
]

sampling_params = SamplingParams(temperature=1,
                                 top_p=1,
                                 n=200,
                                 max_tokens=2048,
                                 skip_special_tokens=True,
                                 stop=stop_tokens,
                                 stop_token_ids=qwen_stop_token_ids)

llm = LLM(model="/mnt/model/qwen2_72B_chat/", tensor_parallel_size=8)
prompts = ["<|im_start|>user\n" for _ in range(100)]
pre_query_template="<|im_start|>user\n"
for i in range(1000):
    output = llm.generate(pre_query_template, sampling_params)

The error message is as follows:

(VllmWorkerProcess pid=1464965) INFO 08-05 11:19:30 model_runner.py:1181] Graph capturing finished in 23 secs.
Processed prompts: 100%|████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:57<00:00, 57.39s/it, est. speed input: 0.24 toks/s, output: 1038.28 toks/s]
fininsh : 1
Processed prompts:   0%|                                                                                                       | 0/1 [00:00<?, ?it/s, est. speed input: 0.00 toks/s, output: 0.00 toks/s](VllmWorkerProcess pid=1464965) WARNING 08-05 11:21:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464963) WARNING 08-05 11:21:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464964) WARNING 08-05 11:21:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464966) WARNING 08-05 11:21:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464967) WARNING 08-05 11:21:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464968) WARNING 08-05 11:21:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464969) WARNING 08-05 11:21:52 shm_broadcast.py:404] No available block found in 60 second.
shm_broadcast.py:404] No available block found in 60 second(VllmWorkerProcess pid=1464963) WARNING 08-05 11:22:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464966) WARNING 08-05 11:22:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464965) WARNING 08-05 11:22:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464964) WARNING 08-05 11:22:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464967) WARNING 08-05 11:22:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464968) WARNING 08-05 11:22:52 shm_broadcast.py:404] No available block found in 60 second.
(VllmWorkerProcess pid=1464969) WARNING 08-05 11:22:52 shm_broadcast.py:404] No available block found in 60 second.
^C(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method start_worker_execution_loop: , Traceback (most recent call last):
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method start_worker_execution_loop: , Traceback (most recent call last):
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=1464967) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method start_worker_execution_loop: , Traceback (most recent call last):
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=1464967) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 65, in start_worker_execution_loop
(VllmWorkerProcess pid=1464967) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 65, in start_worker_execution_loop
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     output = self.execute_model(execute_model_req=None)
(VllmWorkerProcess pid=1464967) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     output = self.execute_model(execute_model_req=None)
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 250, in execute_model
(VllmWorkerProcess pid=1464963) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method start_worker_execution_loop: , Traceback (most recent call last):
(VllmWorkerProcess pid=1464967) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     return func(*args, **kwargs)
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 250, in execute_model
(VllmWorkerProcess pid=1464968) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method start_worker_execution_loop: , Traceback (most recent call last):
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     broadcast_data = broadcast_tensor_dict(src=0)
(VllmWorkerProcess pid=1464964) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226] Exception in worker VllmWorkerProcess while processing method start_worker_execution_loop: , Traceback (most recent call last):
(VllmWorkerProcess pid=1464963) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=1464967) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 65, in start_worker_execution_loop
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     broadcast_data = broadcast_tensor_dict(src=0)
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/distributed/communication_op.py", line 32, in broadcast_tensor_dict
(VllmWorkerProcess pid=1464968) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=1464964) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/executor/multiproc_worker_utils.py", line 223, in _run_worker_process
(VllmWorkerProcess pid=1464963) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=1464967) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     output = self.execute_model(execute_model_req=None)
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/distributed/communication_op.py", line 32, in broadcast_tensor_dict
(VllmWorkerProcess pid=1464967) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/worker/worker_base.py", line 250, in execute_model
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     return get_tp_group().broadcast_tensor_dict(tensor_dict, src)
(VllmWorkerProcess pid=1464968) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=1464964) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     output = executor(*args, **kwargs)
(VllmWorkerProcess pid=1464968) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=1464963) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=1464965) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     return get_tp_group().broadcast_tensor_dict(tensor_dict, src)
(VllmWorkerProcess pid=1464967) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     broadcast_data = broadcast_tensor_dict(src=0)
(VllmWorkerProcess pid=1464966) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/vllm/distributed/parallel_state.py", line 529, in broadcast_tensor_dict
(VllmWorkerProcess pid=1464964) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]   File "/usr/local/miniconda3/envs/megpie_qwen/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
(VllmWorkerProcess pid=1464968) ERROR 08-05 11:23:09 multiproc_worker_utils.py:226]     return func(*args, **kwargs)

How should I solve this problem?

youkaichao commented 1 month ago

What would happen when you directly send all prompts to the generate function, instead of calling it 1000 times?

blackblue9 commented 1 month ago

Do you mean to set the parameter n in SamplingParams to a larger value, such as n=2000? I have tried setting n to a larger value, such as 400, but the model produces no output for a long time and GPU utilization stays at 0.
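For what it's worth, the two knobs are different things: n in SamplingParams asks for n completions of the *same* prompt, while the suggestion above is to pass a list of prompts to a single generate() call. A rough sketch of the arithmetic (illustrative only; `total_sequences` is not a vLLM API):

```python
# n controls completions *per prompt*; batching controls prompts *per call*.
# Either way the engine ends up scheduling num_prompts * n sequences.
def total_sequences(num_prompts: int, n: int) -> int:
    return num_prompts * n

# 1 prompt with n=200 and 100 prompts with n=2 both schedule 200 sequences,
# but only the second sends "all prompts" to generate at once.
```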

youkaichao commented 1 month ago

llm = LLM(model="/mnt/model/qwen2_72B_chat/", tensor_parallel_size=8)
prompts = ["<|im_start|>user\n" for _ in range(100)]
pre_query_template="<|im_start|>user\n"
outputs = llm.generate(prompts, sampling_params)

blackblue9 commented 1 month ago

It has been running for more than 10 minutes and seems to be OK. Thank you for your reply.

ccjincong commented 1 month ago

10 minutes is not enough. I hit the same problem after 2 hours of running.

tian969 commented 1 month ago

llm = LLM(model="/mnt/model/qwen2_72B_chat/", tensor_parallel_size=8)
prompts = ["<|im_start|>user\n" for _ in range(100)]
pre_query_template="<|im_start|>user\n"
outputs = llm.generate(prompts, sampling_params)

I'm having the same problem. I'm trying to run inference on 50K pieces of data at a time, but I get this error after 648 inference calls. Do you mean I should put all 50K prompts into the list at once and then call the generate function?

My code:

def main(args):
    total_lines = sum(1 for _ in open(args.input_file))
    stop_tokens = args.stop_tokens.split(",")

    with open(args.input_file, 'r') as input_file:
        with open(args.output_file, 'a') as output_file:
            llm = LLM(model="/home/disk1/LLMs/Meta-Llama-3___1-8B-Instruct", tensor_parallel_size=8)
            for i, line in enumerate(tqdm(input_file, total=total_lines, desc="Extract objects from the description")):
                if i < args.start_line:
                    continue
                if i >= args.end_line:
                    break
                json_obj = json.loads(line)
                image_name = json_obj.get("image")
                description = json_obj.get("description")

                extr_prompt = f"""..."""
                prompt = args.prompt_structure.format(input=extr_prompt)
                sampling_params = SamplingParams(temperature=0.75, top_p=0.95, max_tokens=2048, stop=stop_tokens)
                response = llm.generate(prompt, sampling_params)
                obj_extr = response[0].outputs[0].text

                extr_obj = obj_extr.split(". ")

                output_data = {
                    "image": image_name,
                    "extr_obj_fr_desc": extr_obj,
                    "description": description
                }
                output_file.write(json.dumps(output_data) + '\n')
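The per-line loop above can be restructured to collect every prompt first and issue one batched generate call. A sketch under that assumption (the `build_prompts` helper and its arguments are illustrative, not vLLM API; the generate call itself is left commented since it needs a loaded model):

```python
import json

def build_prompts(jsonl_lines, prompt_structure):
    """Parse each JSONL record and build its prompt up front,
    so generate() can be called once on the whole batch."""
    records, prompts = [], []
    for line in jsonl_lines:
        obj = json.loads(line)
        records.append(obj)
        prompts.append(prompt_structure.format(input=obj.get("description", "")))
    return records, prompts

# Hypothetical batched usage, replacing the per-line generate() calls:
# records, prompts = build_prompts(input_file, args.prompt_structure)
# outputs = llm.generate(prompts, sampling_params)
# for record, out in zip(records, outputs):
#     record["extr_obj_fr_desc"] = out.outputs[0].text.split(". ")
```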
youkaichao commented 1 month ago

I think there might be some scheduling bugs there. You can process 1k prompts each time, if it works for you.
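For a 50K-prompt workload, the 1k-at-a-time suggestion can be sketched with a plain chunking helper (nothing vLLM-specific; the generate call is commented out because it requires a loaded engine):

```python
from itertools import islice

def chunked(items, size):
    """Yield successive lists of at most `size` items from any iterable."""
    it = iter(items)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

# Hypothetical usage: run 50K prompts as batches of 1000, persisting results
# after each batch so a crash mid-run loses at most one batch.
# for batch in chunked(all_prompts, 1000):
#     outputs = llm.generate(batch, sampling_params)
#     write_results(outputs)
```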

tian969 commented 1 month ago

I think there might be some scheduling bugs there. You can process 1k prompts each time, if it works for you.

Thank you. It was solved after switching to batch inference.