InternLM / lmdeploy

LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
https://lmdeploy.readthedocs.io/en/latest/
Apache License 2.0

[Bug] This event loop is already running #2030

Closed cuong-dyania closed 1 month ago

cuong-dyania commented 1 month ago


Describe the bug

I followed the tutorial from the main GitHub repo to run inference for the Llama3-8B model as follows:

```python
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3,4,5,6,7"

import torch
from lmdeploy import pipeline, TurbomindEngineConfig

model_id = "meta-llama/Meta-Llama-3-8B"
backend_config = TurbomindEngineConfig(cache_max_entry_count=0.2, tp=8)

pipe = pipeline(model_id, backend_config=backend_config)
response = pipe(['Hi, pls intro yourself'])
print(response)
```

But I got the following error:
This event loop is already running
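For context, this failure is not specific to lmdeploy: plain asyncio raises the same RuntimeError whenever `run_until_complete` is called on a loop that is already running, which is exactly the situation inside Jupyter/IPython, where the kernel keeps an event loop running. A minimal reproduction of the underlying asyncio behavior:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    try:
        # run_until_complete on an already-running loop raises,
        # which is what AsyncEngine.batch_infer triggers inside Jupyter
        loop.run_until_complete(asyncio.sleep(0))
    except RuntimeError as e:
        return str(e)

error = asyncio.run(main())
print(error)  # This event loop is already running
```

Running the same lmdeploy snippet as a plain `python script.py` (no pre-existing loop) does not hit this check.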

### Reproduction

As mentioned in the description.

### Environment

```Shell
NA
```
### Error traceback

Cell In[1], line 21
     17 backend_config = TurbomindEngineConfig(cache_max_entry_count=0.2, tp = 8)
     19 pipe = pipeline(model_id,
     20                 backend_config=backend_config)
---> 21 response = pipe(['Hi, pls intro yourself'])
     22 print(response)

File /opt/usr/miniforge3/envs/abcdef/lib/python3.9/site-packages/lmdeploy/serve/async_engine.py:304, in AsyncEngine.__call__(self, prompts, gen_config, request_output_len, top_k, top_p, temperature, repetition_penalty, ignore_eos, do_preprocess, adapter_name, use_tqdm, **kwargs)
    296 if gen_config is None:
    297     gen_config = GenerationConfig(
    298         max_new_tokens=request_output_len,
    299         top_k=top_k,
   (...)
    302         repetition_penalty=repetition_penalty,
    303         ignore_eos=ignore_eos)
--> 304 return self.batch_infer(prompts,
    305                         gen_config=gen_config,
    306                         do_preprocess=do_preprocess,
    307                         adapter_name=adapter_name,
    308                         use_tqdm=use_tqdm,
    309                         **kwargs)

File /opt/usr/miniforge3/envs/abcdef/lib/python3.9/site-packages/lmdeploy/serve/async_engine.py:428, in AsyncEngine.batch_infer(self, prompts, gen_config, do_preprocess, adapter_name, use_tqdm, **kwargs)
    424 async def gather():
    425     await asyncio.gather(
    426         *[_inner_call(i, generators[i]) for i in range(len(prompts))])
--> 428 _get_event_loop().run_until_complete(gather())
    429 outputs = outputs[0] if need_list_wrap else outputs
    430 return outputs

File /opt/usr/miniforge3/envs/names_extraction/lib/python3.9/asyncio/base_events.py:623, in BaseEventLoop.run_until_complete(self, future)
    612 """Run until the Future is done.
    613 
    614 If the argument is a coroutine, it is wrapped in a Task.
   (...)
    620 Return the Future's result, or raise its exception.
    621 """
    622 self._check_closed()
--> 623 self._check_running()
    625 new_task = not futures.isfuture(future)
    626 future = tasks.ensure_future(future, loop=self)

File /opt/usr/miniforge3/envs/names_extraction/lib/python3.9/asyncio/base_events.py:583, in BaseEventLoop._check_running(self)
    581 def _check_running(self):
    582     if self.is_running():
--> 583         raise RuntimeError('This event loop is already running')
    584     if events._get_running_loop() is not None:
    585         raise RuntimeError(
    586             'Cannot run the event loop while another loop is running')

RuntimeError: This event loop is already running
zhuraromdev commented 1 month ago

I have had the same issue. Try installing nest_asyncio and adding nest_asyncio.apply() after importing the library; the error disappears. Hope this is useful for you :)
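The suggestion above can be sketched with plain asyncio (assuming `nest_asyncio` is installed via `pip install nest_asyncio`; in the original snippet the same `apply()` call would go right after the imports, before creating the pipeline):

```python
import asyncio

import nest_asyncio

nest_asyncio.apply()  # patch asyncio so a running loop tolerates nested run_until_complete

async def main():
    loop = asyncio.get_running_loop()
    # Without nest_asyncio this nested call raises
    # RuntimeError: This event loop is already running
    return loop.run_until_complete(asyncio.sleep(0, result="done"))

result = asyncio.run(main())
print(result)  # done
```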

AllentDan commented 1 month ago

> I have had the same issue. Try installing nest_asyncio and adding nest_asyncio.apply() after importing the library; the error disappears. Hope this is useful for you :)

Yeah, that would be a solution for an interactive Python environment.

github-actions[bot] commented 1 month ago

This issue is marked as stale because it has been marked as invalid or awaiting response for 7 days without any further response. It will be closed in 5 days if the stale label is not removed or if there is no further response.

github-actions[bot] commented 1 month ago

This issue is closed because it has been stale for 5 days. Please open a new issue if you have similar issues or you have any new updates now.