royerlab / napari-chatgpt

A napari plugin to process and analyse images with chatGPT!
BSD 3-Clause "New" or "Revised" License
232 stars 25 forks

API key docs #1

Closed haesleinhuepf closed 1 year ago

haesleinhuepf commented 1 year ago

Hi Loic @royerloic ,

first of all congrats to this plugin! It looks great.

I'm having some issues getting it to run. I signed up for the OpenAI API and created an API key; that key is a single long string. Omega asks me for the key and a password, and I entered the name of the key as the key and the long string as the password. This seems to be wrong. I receive the following error, and a browser opens like this:

[screenshot]

Is there maybe a step I might have missed?

Any hint is welcome! Thanks!

Best, Robert

(napari-chatgpt) C:\Users\rober>napari
|-> Starting Omega!
C:\Users\rober\miniconda3\envs\napari-chatgpt\lib\site-packages\napari\_qt\qt_event_loop.py:409: UserWarning: A QApplication is already running with 1 event loop. To enter *another* event loop, use `run(max_loop_level=2)`!
INFO:     Started server process [17864]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
ERROR:    [Errno 10048] error while attempting to bind on address ('0.0.0.0', 9000): only one usage of each socket address (protocol/network address/port) is normally permitted
INFO:     Waiting for application shutdown.
INFO:     Application shutdown complete.
---------------------------------------------------------------------------
OSError                                   Traceback (most recent call last)
File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\uvicorn\server.py:161, in Server.startup(self=<uvicorn.server.Server object>, sockets=None)
    160 try:
--> 161     server = await loop.create_server(
        loop = <_WindowsSelectorEventLoop running=False closed=True debug=False>
        config = <uvicorn.config.Config object at 0x00000204C7FBFB80>
        config.host = '0.0.0.0'
        config.port = 9000
        config.ssl = None
        config.backlog = 2048
    162         create_protocol,
    163         host=config.host,
    164         port=config.port,
    165         ssl=config.ssl,
    166         backlog=config.backlog,
    167     )
    168 except OSError as exc:

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\base_events.py:1506, in BaseEventLoop.create_server(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>, protocol_factory=<function Server.startup.<locals>.create_protocol>, host='0.0.0.0', port=9000, family=<AddressFamily.AF_UNSPEC: 0>, flags=<AddressInfo.AI_PASSIVE: 1>, sock=<socket.socket [closed] fd=-1, family=AddressFamily.AF_INET, type=SocketKind.SOCK_STREAM, proto=6>, backlog=2048, ssl=None, reuse_address=False, reuse_port=None, ssl_handshake_timeout=None, start_serving=True)
   1505     except OSError as err:
-> 1506         raise OSError(err.errno, 'error while attempting '
        sa = ('0.0.0.0', 9000)
   1507                       'to bind on address %r: %s'
   1508                       % (sa, err.strerror.lower())) from None
   1509 completed = True

OSError: [Errno 10048] error while attempting to bind on address ('0.0.0.0', 9000): only one usage of each socket address (protocol/network address/port) is normally permitted

During handling of the above exception, another exception occurred:

SystemExit                                Traceback (most recent call last)
File ~\miniconda3\envs\napari-chatgpt\lib\threading.py:980, in Thread._bootstrap_inner(self=<Thread(Thread-2, started 12736)>)
    977     _sys.setprofile(_profile_hook)
    979 try:
--> 980     self.run()
        self = <Thread(Thread-2, started 12736)>
    981 except:
    982     self._invoke_excepthook(self)

File ~\miniconda3\envs\napari-chatgpt\lib\threading.py:917, in Thread.run(self=<Thread(Thread-2, started 12736)>)
    915 try:
    916     if self._target:
--> 917         self._target(*self._args, **self._kwargs)
        self = <Thread(Thread-2, started 12736)>
    918 finally:
    919     # Avoid a refcycle if the thread is running a function with
    920     # an argument that has a member that points to the thread.
    921     del self._target, self._args, self._kwargs

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\napari_chatgpt\chat_server\chat_server.py:170, in start_chat_server.<locals>.server_thread_function()
    168 def server_thread_function():
    169     # Start Chat server:
--> 170     chat_server.run()
        chat_server = <napari_chatgpt.chat_server.chat_server.NapariChatServer object at 0x00000204C7FCDCA0>

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\napari_chatgpt\chat_server\chat_server.py:150, in NapariChatServer.run(self=<napari_chatgpt.chat_server.chat_server.NapariChatServer object>)
    148 def run(self):
    149     import uvicorn
--> 150     uvicorn.run(self.app, host="0.0.0.0", port=9000)
        self.app = <fastapi.applications.FastAPI object at 0x00000204C7FCDD60>
        self = <napari_chatgpt.chat_server.chat_server.NapariChatServer object at 0x00000204C7FCDCA0>

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\uvicorn\main.py:578, in run(app=<fastapi.applications.FastAPI object>, host='0.0.0.0', port=9000, uds=None, fd=None, loop='auto', http='auto', ws='auto', ws_max_size=16777216, ws_ping_interval=20.0, ws_ping_timeout=20.0, ws_per_message_deflate=True, lifespan='auto', interface='auto', reload=False, reload_dirs=None, reload_includes=None, reload_excludes=None, reload_delay=0.25, workers=None, env_file=None, log_config={'disable_existing_loggers': False, 'formatters': {'access': {'()': 'uvicorn.logging.AccessFormatter', 'fmt': '%(levelprefix)s %(client_addr)s - "%(request_line)s" %(status_code)s'}, 'default': {'()': 'uvicorn.logging.DefaultFormatter', 'fmt': '%(levelprefix)s %(message)s', 'use_colors': None}}, 'handlers': {'access': {'class': 'logging.StreamHandler', 'formatter': 'access', 'stream': 'ext://sys.stdout'}, 'default': {'class': 'logging.StreamHandler', 'formatter': 'default', 'stream': 'ext://sys.stderr'}}, 'loggers': {'uvicorn': {'handlers': ['default'], 'level': 'INFO', 'propagate': False}, 'uvicorn.access': {'handlers': ['access'], 'level': 'INFO', 'propagate': False}, 'uvicorn.error': {'level': 'INFO'}}, 'version': 1}, log_level=None, access_log=True, proxy_headers=True, server_header=True, date_header=True, forwarded_allow_ips=None, root_path='', limit_concurrency=None, backlog=2048, limit_max_requests=None, timeout_keep_alive=5, timeout_graceful_shutdown=None, ssl_keyfile=None, ssl_certfile=None, ssl_keyfile_password=None, ssl_version=<_SSLMethod.PROTOCOL_TLS_SERVER: 17>, ssl_cert_reqs=<VerifyMode.CERT_NONE: 0>, ssl_ca_certs=None, ssl_ciphers='TLSv1', headers=None, use_colors=None, app_dir=None, factory=False, h11_max_incomplete_event_size=None)
    576     Multiprocess(config, target=server.run, sockets=[sock]).run()
    577 else:
--> 578     server.run()
        server = <uvicorn.server.Server object at 0x00000204C8099D30>
    579 if config.uds and os.path.exists(config.uds):
    580     os.remove(config.uds)  # pragma: py-win32

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\uvicorn\server.py:61, in Server.run(self=<uvicorn.server.Server object>, sockets=None)
     59 def run(self, sockets: Optional[List[socket.socket]] = None) -> None:
     60     self.config.setup_event_loop()
---> 61     return asyncio.run(self.serve(sockets=sockets))
        self = <uvicorn.server.Server object at 0x00000204C8099D30>
        sockets = None

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\runners.py:44, in run(main=<coroutine object Server.serve>, debug=None)
     42     if debug is not None:
     43         loop.set_debug(debug)
---> 44     return loop.run_until_complete(main)
        loop = <_WindowsSelectorEventLoop running=False closed=True debug=False>
        main = <coroutine object Server.serve at 0x00000204C8072EC0>
     45 finally:
     46     try:

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\base_events.py:634, in BaseEventLoop.run_until_complete(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>, future=<Task finished name='Task-1' coro=<Server.serve(...es\uvicorn\server.py:63> exception=SystemExit(1)>)
    632 future.add_done_callback(_run_until_complete_cb)
    633 try:
--> 634     self.run_forever()
        self = <_WindowsSelectorEventLoop running=False closed=True debug=False>
    635 except:
    636     if new_task and future.done() and not future.cancelled():
    637         # The coroutine raised a BaseException. Consume the exception
    638         # to not log a warning, the caller doesn't have access to the
    639         # local task.

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\base_events.py:601, in BaseEventLoop.run_forever(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>)
    599 events._set_running_loop(self)
    600 while True:
--> 601     self._run_once()
        self = <_WindowsSelectorEventLoop running=False closed=True debug=False>
    602     if self._stopping:
    603         break

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\base_events.py:1905, in BaseEventLoop._run_once(self=<_WindowsSelectorEventLoop running=False closed=True debug=False>)
   1903             self._current_handle = None
   1904     else:
-> 1905         handle._run()
        handle = <Handle <TaskWakeupMethWrapper object at 0x00000204CA065610>(<Future finished result=True>)>
   1906 handle = None

File ~\miniconda3\envs\napari-chatgpt\lib\asyncio\events.py:80, in Handle._run(self=<Handle <TaskWakeupMethWrapper object at 0x00000204CA065610>(<Future finished result=True>)>)
     78 def _run(self):
     79     try:
---> 80         self._context.run(self._callback, *self._args)
        self = <Handle <TaskWakeupMethWrapper object at 0x00000204CA065610>(<Future finished result=True>)>
        self._callback = <TaskWakeupMethWrapper object at 0x00000204CA065610>
        self._context = <Context object at 0x00000204C80A1C40>
        self._args = (<Future finished result=True>,)
     81     except (SystemExit, KeyboardInterrupt):
     82         raise

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\uvicorn\server.py:78, in Server.serve(self=<uvicorn.server.Server object>, sockets=None)
     75 color_message = "Started server process [" + click.style("%d", fg="cyan") + "]"
     76 logger.info(message, process_id, extra={"color_message": color_message})
---> 78 await self.startup(sockets=sockets)
        self = <uvicorn.server.Server object at 0x00000204C8099D30>
        sockets = None
     79 if self.should_exit:
     80     return

File ~\miniconda3\envs\napari-chatgpt\lib\site-packages\uvicorn\server.py:171, in Server.startup(self=<uvicorn.server.Server object>, sockets=None)
    169     logger.error(exc)
    170     await self.lifespan.shutdown()
--> 171     sys.exit(1)
    173 assert server.sockets is not None
    174 listeners = server.sockets

SystemExit: 1
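The key failure in the log above is the `Errno 10048` bind error: something is already listening on port 9000, most likely a previous Omega/napari instance that never shut down. A minimal stdlib sketch (not part of the plugin) to check whether the port is free before launching:

```python
import socket

def port_is_free(host: str = "127.0.0.1", port: int = 9000) -> bool:
    """Return True if nothing is currently listening on (host, port)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        # SO_REUSEADDR avoids false "busy" results from sockets in TIME_WAIT.
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:  # Errno 10048 on Windows, 98 on Linux
            return False

# Usage: if not port_is_free(), find and close the process holding port 9000
# (e.g. `netstat -ano | findstr 9000` on Windows) before starting Omega.
```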
royerloic commented 1 year ago

Hi Robert,

The API key is the OpenAI key (the long string); the password is YOUR OWN PASSWORD, which you choose freely. It is used to secure the API key via encryption, and that password is the only thing you need to remember next time :-)

Hope this helps!

Loic
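To unpack what "secure the API key via encryption" usually means: the password is stretched into an encryption key (e.g. with PBKDF2), and that key encrypts the API key on disk, so only someone who knows the password can recover it. Below is a toy stdlib sketch of that pattern; it is not napari-chatgpt's actual implementation and not safe for real secrets (a real vault would use a vetted library such as `cryptography`):

```python
import hashlib
import hmac
import os

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a pseudo-random byte stream from key + nonce (counter mode)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                        hashlib.sha256).digest()
        counter += 1
    return out[:length]

def encrypt_api_key(api_key: str, password: str) -> bytes:
    salt, nonce = os.urandom(16), os.urandom(16)
    # Stretch the password into a 32-byte key; deliberately slow.
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    data = api_key.encode()
    cipher = bytes(a ^ b for a, b in zip(data, _keystream(key, nonce, len(data))))
    return salt + nonce + cipher  # all three parts are needed to decrypt

def decrypt_api_key(blob: bytes, password: str) -> str:
    salt, nonce, cipher = blob[:16], blob[16:32], blob[32:]
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return bytes(a ^ b for a, b in
                 zip(cipher, _keystream(key, nonce, len(cipher)))).decode()
```

The upshot: the OpenAI key never sits on disk in plain text, and the password is the only secret the user has to remember.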

haesleinhuepf commented 1 year ago

Awesome, thanks for the hint @royerloic ! Assume I have done that wrong once and it now asks me for the password only. How can I reset the API key? Where is it stored?

Thanks again! :-)

haesleinhuepf commented 1 year ago

... I'm just trying on a different computer for now. The error is gone, but Omega still doesn't work: [screenshot]

Might an IP address of 0.0.0.0 be a poor choice? Could one alternatively use 127.0.0.1 (a.k.a. localhost)?
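For context, the traceback above shows the server is started with `uvicorn.run(self.app, host="0.0.0.0", port=9000)`. Binding to 127.0.0.1 instead would restrict the server to connections from the same machine, which is arguably safer for a local chat server. The difference, sketched with a plain socket (the real change would be to the `host=` argument in `chat_server.py`):

```python
import socket

def bind_demo(host: str) -> tuple:
    """Bind a listening TCP socket; port 0 lets the OS pick a free port."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((host, 0))
    s.listen(1)
    addr = s.getsockname()
    s.close()
    return addr

# "127.0.0.1": reachable only from this machine (loopback interface).
# "0.0.0.0":   listens on every interface, reachable from the network.
```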

tdimino commented 1 year ago

> Awesome, thanks for the hint @royerloic ! Assume I have done that wrong once and it now asks me for the password only. How can I reset the API key? Where is it stored?
>
> Thanks again! :-)

Hey Robert, it seems that the API key is stored in the default directory `~/.omega_api_keys`, as per api_key_vault.py. I haven't tried to run this yet, but I will later this weekend!

On another related note: I'm curious how easy it would be to swap out the ChatGPT 3.5 API for another LLM. I suspect we could use Alpaca-LoRA to fine-tune a model on all existing napari documentation, including every plugin on the napari hub. It would be quite awesome if it could recommend a plugin based on a user's request within the Omega chat client.

royerloic commented 1 year ago

@Robert: that's a weird error you are getting; it does seem that the server binds to the address correctly. Did you try another browser?

@tdimino: Great ideas! I think it would be great to have an OpenAI-key-free option, as this would be much more 'relaxing' for the many who would worry about the logistics of handling a key and its cost. Fine-tuning is then the cherry on top. Interfacing with plugins would be great too; not all have nicely documented functional interfaces, but Robert has some very nice code that could be used to enumerate a lot of functions.