royerlab / napari-chatgpt

A napari plugin to process and analyse images with chatGPT!

Issue with connecting through "start conversation with Omega" #49

Open kiraheikes opened 4 months ago

kiraheikes commented 4 months ago

Hi all,

I am very excited to try Omega in napari! I would appreciate your help with getting it working.

I have run into an error when I click "start conversing with Omega." I believe I have done everything properly up to that point, including generating an API key for ChatGPT 3.5. napari shows the traceback below, and the browser window pointing to http://127.0.0.1:9000/ never stops loading, so I am unable to connect to ChatGPT.

Dr. Royer entered the issue into ChatGPT, which determined the error is due to port 9000 already being in use.

I would appreciate anyone's time on this issue. Best, Kira

Kira Heikes, PhD (she/hers) Postdoc – Munjal Lab Cell Biology – Duke University

Traceback error log:

OSError Traceback (most recent call last) File ~\anaconda3\envs\napari-chatgpt-env\lib\site-packages\uvicorn\server.py:162, in Server.startup(self=, sockets=None) 161 try: --> 162 server = await loop.create_server( loop = <_WindowsSelectorEventLoop running=False closed=True debug=False> config = <uvicorn.config.Config object at 0x000001B5C4D50340> config.host = '127.0.0.1' config.port = 9000 config.ssl = None config.backlog = 2048 163 create_protocol, 164 host=config.host, 165 port=config.port, 166 ssl=config.ssl, 167 backlog=config.backlog, 168 ) 169 except OSError as exc:

File ~\anaconda3\envs\napari-chatgpt-env\lib\asyncio\base_events.py:1506, in BaseEventLoop.create_server(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>, protocol_factory=<function Server.startup..create_protocol>, host='127.0.0.1', port=9000, family=<AddressFamily.AF_UNSPEC: 0>, flags=<AddressInfo.AI_PASSIVE: 1>, sock=<socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>, backlog=2048, ssl=None, reuse_address=False, reuse_port=None, ssl_handshake_timeout=None, start_serving=True) 1505 except OSError as err: -> 1506 raise OSError(err.errno, 'error while attempting ' sa = ('127.0.0.1', 9000) 1507 'to bind on address %r: %s' 1508 % (sa, err.strerror.lower())) from None 1509 completed = True

OSError: [Errno 10048] error while attempting to bind on address ('127.0.0.1', 9000): only one usage of each socket address (protocol/network address/port) is normally permitted

During handling of the above exception, another exception occurred:

SystemExit Traceback (most recent call last) File ~\anaconda3\envs\napari-chatgpt-env\lib\threading.py:980, in Thread._bootstrap_inner(self=<Thread(Thread-9, stopped 14252)>) 977 _sys.setprofile(_profile_hook) 979 try: --> 980 self.run() self = <Thread(Thread-9, stopped 14252)> self.run = <bound method IPythonKernel._initialize_thread_hooks..run_closure of <Thread(Thread-9, stopped 14252)>> 981 except: 982 self._invoke_excepthook(self)

File ~\anaconda3\envs\napari-chatgpt-env\lib\site-packages\ipykernel\ipkernel.py:761, in IPythonKernel._initialize_thread_hooks..run_closure(self=<Thread(Thread-9, stopped 14252)>) 759 else: 760 stream._thread_to_parent[self.ident] = parent --> 761 _threading_Thread_run(self) _threading_Thread_run = <function Thread.run at 0x000001B5F85D0CA0> self = <Thread(Thread-9, stopped 14252)>

File ~\anaconda3\envs\napari-chatgpt-env\lib\threading.py:917, in Thread.run(self=<Thread(Thread-9, stopped 14252)>) 915 try: 916 if self._target: --> 917 self._target(*self._args, **self._kwargs) self = <Thread(Thread-9, stopped 14252)> 918 finally: 919 # Avoid a refcycle if the thread is running a function with 920 # an argument that has a member that points to the thread. 921 del self._target, self._args, self._kwargs

File ~\anaconda3\envs\napari-chatgpt-env\lib\site-packages\napari_chatgpt\chat_server\chat_server.py:327, in start_chat_server..server_thread_function() 325 def server_thread_function(): 326 # Start Chat server: --> 327 chat_server.run() chat_server = <napari_chatgpt.chat_server.chat_server.NapariChatServer object at 0x000001B59B3F9D90>

File ~\anaconda3\envs\napari-chatgpt-env\lib\site-packages\napari_chatgpt\chat_server\chat_server.py:252, in NapariChatServer.run(self=) 251 def run(self): --> 252 self._start_uvicorn_server(self.app) self.app = <fastapi.applications.FastAPI object at 0x000001B59B3F98B0> self = <napari_chatgpt.chat_server.chat_server.NapariChatServer object at 0x000001B59B3F9D90>

File ~\anaconda3\envs\napari-chatgpt-env\lib\site-packages\napari_chatgpt\chat_server\chat_server.py:249, in NapariChatServer._start_uvicorn_server(self=, app=) 247 config = Config(app, port=self.port) 248 self.uvicorn_server = Server(config=config) --> 249 self.uvicorn_server.run() self.uvicorn_server = <uvicorn.server.Server object at 0x000001B59B3983A0> self = <napari_chatgpt.chat_server.chat_server.NapariChatServer object at 0x000001B59B3F9D90>

File ~\anaconda3\envs\napari-chatgpt-env\lib\site-packages\uvicorn\server.py:65, in Server.run(self=, sockets=None) 63 def run(self, sockets: list[socket.socket] | None = None) -> None: 64 self.config.setup_event_loop() ---> 65 return asyncio.run(self.serve(sockets=sockets)) self = <uvicorn.server.Server object at 0x000001B59B3983A0> sockets = None

File ~\anaconda3\envs\napari-chatgpt-env\lib\asyncio\runners.py:44, in run(main=, debug=None) 42 if debug is not None: 43 loop.set_debug(debug) ---> 44 return loop.run_until_complete(main) loop = <_WindowsSelectorEventLoop running=False closed=True debug=False> main = <coroutine object Server.serve at 0x000001B59B40DC40> 45 finally: 46 try:

File ~\anaconda3\envs\napari-chatgpt-env\lib\asyncio\base_events.py:634, in BaseEventLoop.run_until_complete(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>, future=<Task finished name='Task-56' coro=<Server.serve...es\uvicorn\server.py:67> exception=SystemExit(1)>) 632 future.add_done_callback(_run_until_complete_cb) 633 try: --> 634 self.run_forever() self = <_WindowsSelectorEventLoop running=False closed=True debug=False> 635 except: 636 if new_task and future.done() and not future.cancelled(): 637 # The coroutine raised a BaseException. Consume the exception 638 # to not log a warning, the caller doesn't have access to the 639 # local task.

File ~\anaconda3\envs\napari-chatgpt-env\lib\asyncio\base_events.py:601, in BaseEventLoop.run_forever(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>) 599 events._set_running_loop(self) 600 while True: --> 601 self._run_once() self = <_WindowsSelectorEventLoop running=False closed=True debug=False> 602 if self._stopping: 603 break

File ~\anaconda3\envs\napari-chatgpt-env\lib\asyncio\base_events.py:1905, in BaseEventLoop._run_once(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>) 1903 self._current_handle = None 1904 else: -> 1905 handle._run() handle = <Handle <TaskWakeupMethWrapper object at 0x000001B59B3EB640>()> 1906 handle = None # Needed to break cycles when an exception occurs.

File ~\anaconda3\envs\napari-chatgpt-env\lib\asyncio\events.py:80, in Handle._run(self=<Handle <TaskWakeupMethWrapper object at 0x000001B59B3EB640>()>) 78 def _run(self): 79 try: ---> 80 self._context.run(self._callback, *self._args) self = <Handle <TaskWakeupMethWrapper object at 0x000001B59B3EB640>()> self._callback = <TaskWakeupMethWrapper object at 0x000001B59B3EB640> self._context = <Context object at 0x000001B5C5777600> self._args = (,) 81 except (SystemExit, KeyboardInterrupt): 82 raise

File ~\anaconda3\envs\napari-chatgpt-env\lib\site-packages\uvicorn\server.py:69, in Server.serve(self=, sockets=None) 67 async def serve(self, sockets: list[socket.socket] | None = None) -> None: 68 with self.capture_signals(): ---> 69 await self._serve(sockets) self = <uvicorn.server.Server object at 0x000001B59B3983A0> sockets = None

File ~\anaconda3\envs\napari-chatgpt-env\lib\site-packages\uvicorn\server.py:84, in Server._serve(self=, sockets=None) 81 color_message = "Started server process [" + click.style("%d", fg="cyan") + "]" 82 logger.info(message, process_id, extra={"color_message": color_message}) ---> 84 await self.startup(sockets=sockets) self = <uvicorn.server.Server object at 0x000001B59B3983A0> sockets = None 85 if self.should_exit: 86 return

File ~\anaconda3\envs\napari-chatgpt-env\lib\site-packages\uvicorn\server.py:172, in Server.startup(self=, sockets=None) 170 logger.error(exc) 171 await self.lifespan.shutdown() --> 172 sys.exit(1) 174 assert server.sockets is not None 175 listeners = server.sockets

SystemExit: 1
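For context, errno 10048 is Windows' equivalent of EADDRINUSE: something else is already bound to 127.0.0.1:9000, so uvicorn cannot start its server. The same failure can be reproduced outside napari-chatgpt with two plain sockets; this is an illustration only, not code from the plugin:

import socket

# First socket takes 127.0.0.1:9000, standing in for whatever already owns the port.
holder = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
holder.bind(("127.0.0.1", 9000))
holder.listen()

# A second bind to the same address then fails exactly like uvicorn does above.
clash = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    clash.bind(("127.0.0.1", 9000))
except OSError as exc:  # [WinError 10048] on Windows, EADDRINUSE elsewhere
    print("port already in use:", exc)
finally:
    clash.close()
    holder.close()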

royerloic commented 4 months ago

Here is ChatGPT's analysis of the bug:

The most likely explanation is indeed that some other program is already using that port. According to ChatGPT, here is a non-exhaustive list of potential culprits:

1.  SonarQube: A platform for continuous inspection of code quality.
2.  Splunk: Specifically, the Splunk web interface.
3.  WSO2 Carbon: The core platform for WSO2 products.
4.  Grok Exporter: Used to collect logs and export them to Prometheus.
5.  Play Framework: A web application framework, typically in development mode.
6.  H2 Database: A Java SQL database often used in embedded mode; its web console may use port 9000.
7.  SIP Communicator (Jitsi): Open-source VoIP, videoconferencing, and instant messaging application.
8.  UrbanCode Deploy (UCD): IBM’s application release automation tool.
9.  Jupyter Notebook: Sometimes configured to run on port 9000.
10. NetBeans IDE: Used for its internal web server during debugging.
11. Kibana: An open-source data visualization dashboard for Elasticsearch.
12. GitBucket: A Git platform powered by Scala.
13. Red5: An open-source media server for live-streaming solutions.
14. Openfire: A real-time collaboration (RTC) server.
15. Zookeeper: A centralized coordination service for configuration information, naming, distributed synchronization, and group services; sometimes configured to run on port 9000.
16. Rundeck: An open-source software job scheduler and runbook automation tool.
17. Celery Flower: A real-time monitoring tool for Celery; it often uses port 9000.

You can try to identify which program is using the port and temporarily close it.
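If it is not obvious which of these is running, a few lines of Python can list the process currently holding the port. This is only a diagnostic sketch; it assumes the third-party psutil package is available in the environment (pip install psutil otherwise):

import psutil  # assumed available; not part of napari-chatgpt

# List every TCP socket bound to port 9000 and the owning process.
for conn in psutil.net_connections(kind="tcp"):
    if conn.laddr and conn.laddr.port == 9000:
        name = psutil.Process(conn.pid).name() if conn.pid else "unknown"
        print(f"port 9000 -> PID {conn.pid} ({name}), status={conn.status}")

On Windows you can get the same information without Python by running netstat -ano | findstr :9000 in a command prompt and looking up the reported PID in Task Manager.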

Another possibility is to manually change the port in the configuration file found at ~/.omega/config.yaml.


Replace this line:

port: 9000

with, for example:

port: 9001
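To double-check the edit before restarting napari, the file can be read back with a YAML parser. PyYAML is assumed here purely for this check; Omega itself may load the file differently:

from pathlib import Path

import yaml  # PyYAML; pip install pyyaml if it is not already in the environment

config_path = Path.home() / ".omega" / "config.yaml"
with open(config_path) as f:
    config = yaml.safe_load(f)

# Should print the new value, e.g. 9001, once the edit has been saved.
print("Omega chat server port:", config.get("port"))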

A better, longer-term solution is to implement an automatic port check and increment mechanism, so that napari-chatgpt searches for the first available port starting at 9000 and users don't have to worry about it. I will do that ASAP. In the meantime, please try the workaround above.
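For reference, a minimal sketch of such a search (not the actual napari-chatgpt implementation) could look like this:

import socket

def first_available_port(start: int = 9000, max_tries: int = 100,
                         host: str = "127.0.0.1") -> int:
    # Walk upwards from `start` and return the first port we can bind to.
    for port in range(start, start + max_tries):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind((host, port))
                return port
            except OSError:
                continue  # port busy, try the next one
    raise RuntimeError(f"no free port found in range {start}-{start + max_tries - 1}")

The chosen port would then be passed to uvicorn's Config(app, port=...) instead of the fixed 9000 seen in the traceback.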

Thanks!

royerloic commented 1 month ago

Implemented an automatic search for an available port; it will be released with other fixes soon.