jina-ai / jina

☁️ Build multimodal AI applications with cloud-native stack
https://docs.jina.ai
Apache License 2.0

How to solve this error? #6158

Closed Demogorgon24242 closed 5 months ago

Demogorgon24242 commented 6 months ago

Problem

I am trying to run the README example on my local system with an Nvidia GPU. However, every time I execute Deployment.block() with the parameters, it throws the following error. What is the cause of this, and how can I solve it?

Error traceback

Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 116, in spawn_main
    exitcode = _main(fd, parent_sentinel)
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 125, in _main
    prepare(preparation_data)
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 236, in prepare
    _fixup_main_from_path(data['init_main_from_path'])
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 287, in _fixup_main_from_path
    main_content = runpy.run_path(main_path,
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 269, in run_path
    return _run_module_code(code, init_globals, run_name,
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 96, in _run_module_code
    _run_code(code, mod_globals, init_globals,
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "c:\Users\91980\OneDrive\Desktop\Harman_genAI_stack\jina\test.py", line 37, in <module>
    with dep:
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\site-packages\jina\orchestrate\orchestrator.py", line 14, in __enter__
    return self.start()
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\site-packages\jina\orchestrate\deployments\__init__.py", line 1146, in start
    self.enter_context(self.shards[shard_id])
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\contextlib.py", line 492, in enter_context
    result = _cm_type.__enter__(cm)
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\site-packages\jina\orchestrate\deployments\__init__.py", line 236, in __enter__
    pod = PodFactory.build_pod(_args).start()
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\site-packages\jina\orchestrate\pods\__init__.py", line 316, in start
    self.worker.start()
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\multiprocessing\process.py", line 121, in start
    self._popen = self._Popen(self)
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\multiprocessing\context.py", line 224, in _Popen
    return _default_context.get_context().Process._Popen(process_obj)
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\multiprocessing\context.py", line 327, in _Popen
    return Popen(process_obj)
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\multiprocessing\popen_spawn_win32.py", line 45, in __init__
    prep_data = spawn.get_preparation_data(process_obj._name)
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 154, in get_preparation_data
    _check_not_importing_main()
  File "C:\Users\91980\AppData\Local\Programs\Python\Python310\lib\multiprocessing\spawn.py", line 134, in _check_not_importing_main
    raise RuntimeError('''
RuntimeError: An attempt has been made to start a new process before the current process has finished its bootstrapping phase.

    This probably means that you are not using fork to start your
    child processes and you have forgotten to use the proper idiom
    in the main module:

        if __name__ == '__main__':
            freeze_support()
            ...

    The "freeze_support()" line can be omitted if the program
    is not going to be frozen to produce an executable.



JoanFM commented 6 months ago

Can you change the code to:

if __name__ == '__main__':
    dep = Deployment(...)
    with dep:
        dep.block()
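
For completeness, here is a minimal, self-contained sketch of a whole script with the guard in place; the Executor below is a stand-in for whatever the README example uses, and the port is arbitrary:

from jina import Deployment, Executor, requests


class MyExec(Executor):
    @requests
    def foo(self, docs, **kwargs):
        # placeholder endpoint; replace with the real logic
        ...


if __name__ == '__main__':
    # On Windows, multiprocessing uses the 'spawn' start method, which
    # re-imports this module in every child process. Anything that starts
    # child processes (Deployment/Flow) must therefore sit under this guard.
    dep = Deployment(uses=MyExec, port=12345)
    with dep:
        dep.block()
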
Demogorgon24242 commented 6 months ago

Thanks for the recommendation, it worked. Now I am just facing gateway issues and problems loading models; I shall try to resolve them.

JoanFM commented 5 months ago

Check the memory available on your system.

Demogorgon24242 commented 5 months ago

Actually it is 16 GB, so I am fairly sure that is not the issue. However, I am now trying to build an Executor on Kaggle or Colab and access it from my local system. There is just one doc covering this, but I believe there should be some examples as well.
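
For reference, the rough client-side pattern I have in mind on the local machine is sketched below; the host is a placeholder for however the Flow on Colab/Kaggle ends up being exposed (e.g. through a tunnel), and it assumes the docarray <= 0.21 API:

from jina import Client
from docarray import Document  # docarray <= 0.21 API

# Placeholder endpoint: replace with the publicly reachable address that
# exposes the Flow running on Colab or Kaggle.
c = Client(host='grpc://<public-host>:<port>')

r = c.post('/', Document())
print(r[0].tags)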

Demogorgon24242 commented 5 months ago

I am trying to use https://colab.research.google.com/github/jina-ai/jina/blob/master/docs/Using_Jina_on_Colab.ipynb#scrollTo=ACoTXhn-Lpz5 (Jina on Colab). However, every time I try to send a request to the Flow via:

r = f.post('/', Document())
print(r[0].tags)

This is the entire error trace:

Exception in thread Thread-16:
Traceback (most recent call last):
  File "/usr/lib/python3.10/threading.py", line 1016, in _bootstrap_inner
    self.run()
  File "/usr/local/lib/python3.10/dist-packages/jina/helper.py", line 1297, in run
    self.result = asyncio.run(func(*args, **kwargs))
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 649, in run_until_complete
    return future.result()
  File "/usr/local/lib/python3.10/dist-packages/jina/clients/mixin.py", line 410, in _get_results
    async for resp in c._get_results(*args, **kwargs):
  File "/usr/local/lib/python3.10/dist-packages/jina/clients/base/grpc.py", line 145, in _get_results
    async for (
  File "/usr/local/lib/python3.10/dist-packages/jina/clients/base/stream_rpc.py", line 58, in stream_rpc_with_retry
    callback_exec(
  File "/usr/local/lib/python3.10/dist-packages/jina/clients/helper.py", line 81, in callback_exec
    raise BadServer(response.header)
jina.excepts.BadServer: request_id: "0e6a0f923ff44b2793bd6a1dbff0895a"
status {
  code: ERROR
  description: "AttributeError(\"'AnyDoc' object has no attribute 'tags'\")"
  exception {
    name: "AttributeError"
    args: "'AnyDoc' object has no attribute 'tags'"
    stacks: "Traceback (most recent call last):\n"
    stacks: "  File \"/usr/local/lib/python3.10/dist-packages/jina/serve/runtimes/worker/request_handling.py\", line 1091, in process_data\n    result = await self.handle(\n"
    stacks: "  File \"/usr/local/lib/python3.10/dist-packages/jina/serve/runtimes/worker/request_handling.py\", line 707, in handle\n    return_data = await self._executor.__acall__(\n"
    stacks: "  File \"/usr/local/lib/python3.10/dist-packages/jina/serve/executors/__init__.py\", line 748, in __acall__\n    return await self.__acall_endpoint__(req_endpoint, **kwargs)\n"
    stacks: "  File \"/usr/local/lib/python3.10/dist-packages/jina/serve/executors/__init__.py\", line 880, in __acall_endpoint__\n    return await exec_func(\n"
    stacks: "  File \"/usr/local/lib/python3.10/dist-packages/jina/serve/executors/__init__.py\", line 838, in exec_func\n    return await get_or_reuse_loop().run_in_executor(\n"
    stacks: "  File \"/usr/lib/python3.10/concurrent/futures/thread.py\", line 58, in run\n    result = self.fn(*self.args, **self.kwargs)\n"
    stacks: "  File \"/usr/local/lib/python3.10/dist-packages/jina/serve/executors/decorators.py\", line 325, in arg_wrapper\n    return fn(executor_instance, *args, **kwargs)\n"
    stacks: "  File \"\", line 7, in foo\n    docs[0].tags['cuda'] = torch.cuda.is_available()\n"
    stacks: "  File \"/usr/local/lib/python3.10/dist-packages/docarray/base_doc/doc.py\", line 260, in __getattr__\n    return super().__getattribute__(item)\n"
    stacks: "AttributeError: 'AnyDoc' object has no attribute 'tags'\n"
    executor: "GPUExec"
  }
}
exec_endpoint: "/"
target_executor: ""


AttributeError                            Traceback (most recent call last)
/usr/local/lib/python3.10/dist-packages/jina/helper.py in run_async(func, *args, **kwargs)
   1310         try:
-> 1311             return thread.result
   1312         except AttributeError:

AttributeError: '_RunThread' object has no attribute 'result'

During handling of the above exception, another exception occurred:

BadClient                                 Traceback (most recent call last)
4 frames
/usr/local/lib/python3.10/dist-packages/jina/helper.py in run_async(func, *args, **kwargs)
   1313         from jina.excepts import BadClient
   1314
-> 1315         raise BadClient(
   1316             'something wrong when running the eventloop, result can not be retrieved'
   1317         )

BadClient: something wrong when running the eventloop, result can not be retrieved

JoanFM commented 5 months ago

I am not sure exactly what code you are running, but you need to add the return_type parameter to the post call so the client knows which return type to expect in the output.
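
For example, with docarray >= 0.30 a request would look roughly like the sketch below; MyDoc, its cuda field, and the address are illustrative placeholders, not taken from the notebook:

from docarray import BaseDoc, DocList
from jina import Client


class MyDoc(BaseDoc):
    cuda: bool = False


# Placeholder address of the running Flow/Deployment.
c = Client(host='grpc://0.0.0.0:54321')

# return_type tells the client which schema to deserialize the response into;
# without it, responses come back as AnyDoc, which has no .tags attribute.
r = c.post('/', MyDoc(), return_type=DocList[MyDoc])
print(r[0].cuda)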

JoanFM commented 5 months ago

Or you need to make sure to use docarray <= 0.21 in this case.

Demogorgon24242 commented 5 months ago

Yes, pinning the version down to docarray==0.21 did work, thank you. However, what was the reason for the error? It would be helpful in future scenarios if you could share some insights. Additionally, I believe that to resolve this issue the Colab notebook should be updated to include: pip install docarray==0.21

JoanFM commented 5 months ago

You can open a PR to update it; that would be very nice.

The reason is that the release of docarray 0.30 introduced a very big change in how Jina and docarray are used.
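
To give a rough idea of the change (the MyDoc schema below is illustrative): docarray <= 0.21 documents are schema-less and carry a free-form .tags dict, while docarray >= 0.30 documents are typed, pydantic-style schemas with no .tags unless you declare such a field yourself.

# docarray <= 0.21 (what the Colab notebook assumes):
#
#   from docarray import Document
#   d = Document()
#   d.tags['cuda'] = True
#
# docarray >= 0.30:
from docarray import BaseDoc


class MyDoc(BaseDoc):
    cuda: bool = False


d = MyDoc(cuda=True)
print(d.cuda)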