approximatelabs / lambdaprompt

λprompt - A functional programming interface for building AI systems
MIT License
374 stars 22 forks

doesn't work with fastapi #4

Closed bryanhpchiang closed 1 year ago

bryanhpchiang commented 1 year ago
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/bryan/venv/gpu/lib/python3.10/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/home/bryan/venv/gpu/lib/python3.10/site-packages/uvicorn/server.py", line 60, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "uvloop/loop.pyx", line 1517, in uvloop.loop.Loop.run_until_complete
  File "/home/bryan/venv/gpu/lib/python3.10/site-packages/uvicorn/server.py", line 67, in serve
    config.load()
  File "/home/bryan/venv/gpu/lib/python3.10/site-packages/uvicorn/config.py", line 477, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/bryan/venv/gpu/lib/python3.10/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/bryan/insight/./main.py", line 4, in <module>
    from lambdaprompt import GPT3Prompt
  File "/home/bryan/venv/gpu/lib/python3.10/site-packages/lambdaprompt/__init__.py", line 12, in <module>
    nest_asyncio.apply()
  File "/home/bryan/venv/gpu/lib/python3.10/site-packages/nest_asyncio.py", line 19, in apply
    _patch_loop(loop)
  File "/home/bryan/venv/gpu/lib/python3.10/site-packages/nest_asyncio.py", line 175, in _patch_loop
    raise ValueError('Can\'t patch loop of type %s' % type(loop))
ValueError: Can't patch loop of type <class 'uvloop.Loop'>
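As background for the traceback above: nest_asyncio can only patch asyncio's pure-Python event loop, while uvloop.Loop is a C-implemented replacement that fails its type check. The sketch below is a stdlib-only illustration of (roughly) the check behind this ValueError, not nest_asyncio's actual code:

```python
import asyncio

def looks_patchable(loop) -> bool:
    # nest_asyncio (roughly) requires asyncio's pure-Python BaseEventLoop;
    # uvloop.Loop implements AbstractEventLoop without subclassing it,
    # which is why apply() raises "Can't patch loop of type uvloop.Loop".
    return isinstance(loop, asyncio.BaseEventLoop)

loop = asyncio.new_event_loop()
print(looks_patchable(loop))  # True: the stdlib loop passes the check
loop.close()
```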
bluecoconut commented 1 year ago

Hmm, it ships with fastapi, and I'm using that as the server in https://github.com/approximatelabs/example-lambdaprompt-server, so I don't think it's a compatibility issue with fastapi.

See these tests in this repo that use fastapi as the server: https://github.com/approximatelabs/lambdaprompt/blob/main/tests/test_server.py

Also, I am even using uvicorn to host the app (here's the dockerfile I am using to host prompts.approx.dev):

FROM python:3.11

WORKDIR /

COPY ./requirements.txt /code/requirements.txt

RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt

# 
COPY hosted-prompts /code

# hosted-prompts.main:app --proxy-headers --reload --host 0.0.0.0 --port 1234
CMD ["uvicorn", "code.main:app","--proxy-headers", "--host", "0.0.0.0", "--port", "1234"]

so i'm hosting using uvicorn + fastapi, which is exactly what your error is showing.

Given both of these things above, I can't reproduce the error you are seeing right now.

Are you running fastapi in a special environment, like on a ray cluster or something?

Assuming you are spinning up uvicorn directly, I found this related thread: https://youtrack.jetbrains.com/issue/PY-57332

There, the same error came from calling uvicorn.run(...) directly, and adding loop='asyncio' to the uvicorn.run() call worked as a temporary fix.

All in all, this sounds like an issue for the upstream package nest_asyncio (https://github.com/erdewit/nest_asyncio) to support uvloop.Loop.

bluecoconut commented 1 year ago

Since this isn't something I can reproduce, and it is for an upstream package (the error is in nest_asyncio), and I have been able to verify that lambdaprompt is currently working with fastapi and uvicorn -- I'm going to close this for now.

I can re-open if you can help with a reproducible example showing fastapi is the issue.

bryanhpchiang commented 1 year ago

thanks for looking into this! here's a simple repro:

from lambdaprompt import GPT3Prompt
from pydantic import BaseModel
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

class Item(BaseModel):
    url: str

@app.post("/upload")
def post(item: Item):
    pass

fastapi 0.88.0, lambdaprompt 0.3.3

starting server with

uvicorn main:app --reload --port 8001

standard python in a venv, no ray cluster

bluecoconut commented 1 year ago

Re-opening while I investigate.

I'm doing almost the exact same thing, so I'm wondering if it's a version difference, and will check that now...

bryanhpchiang commented 1 year ago

awesome, here's a repro on replit as well: https://replit.com/join/glirxyyspa-bryanchiang2

bluecoconut commented 1 year ago

I just made a docker image

FROM python:3.10

RUN echo "from lambdaprompt import GPT3Prompt\n\
from pydantic import BaseModel\n\
from fastapi import FastAPI\n\
from fastapi.middleware.cors import CORSMiddleware \n\
\n\
app = FastAPI()\n\
\n\
class Item(BaseModel):\n\
    url: str\n\
\n\
@app.post(\"/upload\")\n\
def post(item: Item):\n\
    pass" > main.py

RUN pip install fastapi uvicorn pydantic lambdaprompt

CMD ["uvicorn", "main:app", "--reload", "--port", "8001"]

ran docker build -t testing .

then docker run testing

and it worked...

❯ docker run testing
INFO:     Will watch for changes in these directories: ['/']
INFO:     Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)
INFO:     Started reloader process [1] using StatReload
INFO:     Started server process [8]
INFO:     Waiting for application startup.
INFO:     Application startup complete.

I think there's something else going on here... some other event loop somehow is running in the environment you are working in?

bluecoconut commented 1 year ago

In the replit you gave, it also worked for me. 🤔


I ran pip install -U -r requirements.txt and then uvicorn main:app --reload --port 8001

bluecoconut commented 1 year ago

I do see pinned versions in the poetry config you are (maybe) using (if not using requirements.txt):

[tool.poetry.dependencies]
fastapi = "^0.45.0"
python = "^3.8"
uvicorn = "^0.10.8"

which resolves to install... 0.10.9 (from dec 20, 2019) https://pypi.org/project/uvicorn/0.10.9/

So maybe this ultimately depends on how the environment is set up?

In the replit that is working for me, here are the versions that are installed (and verified as working together):

~/Example-FastAPI-uvicorn$ pip freeze | grep "uvicorn\|fastapi\|lambdaprompt"
fastapi==0.89.1
lambdaprompt==0.3.3
uvicorn==0.20.0
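Tangentially, here is a stdlib-only way to collect the same version info in one shot (importlib.metadata is available from Python 3.8; the package names are just the ones relevant to this issue):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(pkgs):
    """Return {package: version-or-None} using stdlib importlib.metadata."""
    out = {}
    for pkg in pkgs:
        try:
            out[pkg] = version(pkg)
        except PackageNotFoundError:
            # uvloop absent vs present changes which loop uvicorn auto-selects
            out[pkg] = None
    return out

print(installed_versions(["fastapi", "uvicorn", "lambdaprompt", "uvloop", "nest_asyncio"]))
```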
bluecoconut commented 1 year ago

Going to close again (hope you don't mind). I'm pretty confident that this is working, since it worked on both the replit you sent and a raw dockerfile (about as lightweight a proof as I can make that it is working).

If you find out what the cause on your end ultimately is, I'm curious, so please report back!

bryanhpchiang commented 1 year ago

might be a python versioning issue. the replit is running some version of 3.8.x

i'm using 3.10.6, all dependencies are at the same version

what exact python version does the docker image use?

bluecoconut commented 1 year ago

Just checked a few more ways:

  1. I ran the dockerfile above with 3.8, which resolved to 3.8.16, and it worked
  2. Ran it with 3.10, which resolves to 3.10.9, and it works
  3. I edited the dockerfile to also use 3.10.6 and it works (included below so you can try it if you'd like)
  4. Here are the tests: https://github.com/approximatelabs/lambdaprompt/blob/main/.github/workflows/publish-to-pypi.yml#L10 -- CI currently checks 3.7 -> 3.11 and all must pass before any builds happen (and the tests install fastapi as well)
FROM python:3.10.6

RUN echo "from lambdaprompt import GPT3Prompt\n\
from pydantic import BaseModel\n\
from fastapi import FastAPI\n\
from fastapi.middleware.cors import CORSMiddleware \n\
\n\
app = FastAPI()\n\
\n\
class Item(BaseModel):\n\
    url: str\n\
\n\
@app.post(\"/upload\")\n\
def post(item: Item):\n\
    pass" > main.py

RUN pip install fastapi uvicorn pydantic lambdaprompt

CMD ["uvicorn", "main:app", "--reload", "--port", "8001"]
bluecoconut commented 1 year ago

Based on the error, I think you can get it to work if you run uvicorn main:app --reload --port 8001 --loop asyncio

My guess is that your environment has uvloop installed, and uvicorn's auto mode selects the event loop automatically, picking the incompatible uvloop one. Just adding this one parameter should get it working for you.

(specifically note the --loop asyncio)
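The selection behavior described here can be sketched roughly as follows (the real logic lives inside uvicorn; this stdlib-only version is just an illustration): with --loop auto (the default), uvloop wins whenever it is importable, while --loop asyncio forces the stdlib loop that nest_asyncio can patch.

```python
import importlib.util

def pick_loop(loop_setting: str = "auto") -> str:
    # Simplified illustration of uvicorn's loop selection, not its real code.
    if loop_setting == "asyncio":
        return "asyncio"          # forced: compatible with nest_asyncio
    if importlib.util.find_spec("uvloop") is not None:
        return "uvloop"           # "auto" prefers uvloop when installed
    return "asyncio"

print(pick_loop("asyncio"))  # asyncio, regardless of what is installed
print(pick_loop("auto"))     # uvloop if installed, else asyncio
```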

bryanhpchiang commented 1 year ago

this was the fix! thanks for looking into this, appreciate it