Hmm, it ships with fastapi, and I'm using that as the server in https://github.com/approximatelabs/example-lambdaprompt-server, so I don't think it's a compatibility issue with fastapi.

See these tests in this repo that use fastapi as the server: https://github.com/approximatelabs/lambdaprompt/blob/main/tests/test_server.py
Also, I am even using uvicorn to host the app (here's the Dockerfile I am using to host prompts.approx.dev):
```dockerfile
FROM python:3.11
WORKDIR /
COPY ./requirements.txt /code/requirements.txt
RUN pip install --no-cache-dir --upgrade -r /code/requirements.txt
#
COPY hosted-prompts /code
# hosted-prompts.main:app --proxy-headers --reload --host 0.0.0.0 --port 1234
CMD ["uvicorn", "code.main:app", "--proxy-headers", "--host", "0.0.0.0", "--port", "1234"]
```
So I'm hosting using uvicorn + fastapi, which is exactly what your error is showing. Given both of these things above, I can't reproduce the error you are seeing right now.

Are you running fastapi in a special environment, like on a ray cluster or something?
Assuming you are spinning up uvicorn directly, I found this thread: https://youtrack.jetbrains.com/issue/PY-57332. It describes the same error when calling `uvicorn.run(...)` directly, and it can be worked around by adding `loop='asyncio'` to `uvicorn.run()` as a temporary fix.
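For reference, a minimal sketch of that workaround when launching programmatically (the module path `main:app`, host, and port here are placeholders, not from this repo):

```python
# Sketch of the temporary fix from the JetBrains thread: force uvicorn
# to use the stock asyncio event loop instead of auto-selecting uvloop.
import uvicorn

if __name__ == "__main__":
    uvicorn.run("main:app", host="0.0.0.0", port=8001, loop="asyncio")
```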
All in all, this sounds like an issue for the upstream package nest_asyncio (found here: https://github.com/erdewit/nest_asyncio) to support uvloop's `Loop`.

Since this isn't something I can reproduce, it's for an upstream package (the error is in nest_asyncio), and I have been able to verify that lambdaprompt is currently working on fastapi with uvicorn -- I'm going to close this for now. I can re-open if you can help with a reproducible example showing fastapi being the issue.
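(For context, here is a minimal sketch of what I believe the upstream incompatibility looks like, assuming the error you're seeing comes from nest_asyncio being asked to patch a uvloop loop:)

```python
# Suspected upstream failure mode: nest_asyncio can only patch loops
# that subclass asyncio.BaseEventLoop, and uvloop's Loop does not,
# so apply() rejects it.
import asyncio

import nest_asyncio
import uvloop

asyncio.set_event_loop_policy(uvloop.EventLoopPolicy())
loop = asyncio.new_event_loop()
# Expected to raise ValueError: Can't patch loop of type <class 'uvloop.Loop'>
nest_asyncio.apply(loop)
```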
Thanks for looking into this! Here's a simple repro:
```python
from lambdaprompt import GPT3Prompt
from pydantic import BaseModel
from fastapi import FastAPI
from fastapi.middleware.cors import CORSMiddleware

app = FastAPI()

class Item(BaseModel):
    url: str

@app.post("/upload")
def post(item: Item):
    pass
```
fastapi 0.88.0, lambdaprompt 0.3.3

Starting the server with:

```
uvicorn main:app --reload --port 8001
```

Standard Python in a venv, no ray cluster.
Re-opening while I investigate.
I'm doing almost the exact same thing, so I'm wondering if it's a version difference; I will check that now...
Awesome, here's a repro on Replit as well: https://replit.com/join/glirxyyspa-bryanchiang2
I just made a Docker image:
```dockerfile
FROM python:3.10
RUN echo "from lambdaprompt import GPT3Prompt\n\
from pydantic import BaseModel\n\
from fastapi import FastAPI\n\
from fastapi.middleware.cors import CORSMiddleware \n\
\n\
app = FastAPI()\n\
\n\
class Item(BaseModel):\n\
    url: str\n\
\n\
@app.post(\"/upload\")\n\
def post(item: Item):\n\
    pass" > main.py
RUN pip install fastapi uvicorn pydantic lambdaprompt
CMD ["uvicorn", "main:app", "--reload", "--port", "8001"]
```
Ran `docker build -t testing .`, then `docker run testing`, and it worked...
```
❯ docker run testing
INFO:     Will watch for changes in these directories: ['/']
INFO:     Uvicorn running on http://127.0.0.1:8001 (Press CTRL+C to quit)
INFO:     Started reloader process [1] using StatReload
INFO:     Started server process [8]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
```
I think there's something else going on here... some other event loop somehow is running in the environment you are working in?
In the replit you gave, it also worked for me. 🤔
I ran `pip install -U -r requirements.txt` and then `uvicorn main:app --reload --port 8001`.

I do see pinned versions in the Poetry config you are (maybe) using (if not using requirements.txt):
```toml
[tool.poetry.dependencies]
fastapi = "^0.45.0"
python = "^3.8"
uvicorn = "^0.10.8"
```
which resolves to installing uvicorn 0.10.9 (from Dec 20, 2019): https://pypi.org/project/uvicorn/0.10.9/

So maybe this ultimately depends on how the environment is set up?
In the Replit that is working for me, here are the versions that are installed (and verified as working together):
```
~/Example-FastAPI-uvicorn$ pip freeze | grep "uvicorn\|fastapi\|lambdaprompt"
fastapi==0.89.1
lambdaprompt==0.3.3
uvicorn==0.20.0
```
Going to close again (hope you don't mind). I'm pretty confident that this is working, since it worked on both the Replit you sent and a raw Dockerfile (as lightweight a proof as I think I can make to show it is working).

If you find out what the cause on your end ultimately is, I'm curious now, so please report back!
Might be a Python versioning issue. The Replit is running some version of 3.8.x; I'm using 3.10.6, with all dependencies at the same versions. What exact Python version does the Docker image use?
Just checked a few more ways:

- `3.8`, which resolved to `3.8.16`: it worked
- `3.10`, which resolves to `3.10.9`: it works
- `3.10.6`: it works (included below so you can try it if you'd like)

```dockerfile
FROM python:3.10.6
RUN echo "from lambdaprompt import GPT3Prompt\n\
from pydantic import BaseModel\n\
from fastapi import FastAPI\n\
from fastapi.middleware.cors import CORSMiddleware \n\
\n\
app = FastAPI()\n\
\n\
class Item(BaseModel):\n\
    url: str\n\
\n\
@app.post(\"/upload\")\n\
def post(item: Item):\n\
    pass" > main.py
RUN pip install fastapi uvicorn pydantic lambdaprompt
CMD ["uvicorn", "main:app", "--reload", "--port", "8001"]
```
Based on the error, I think you can get it to work if you run:

```
uvicorn main:app --reload --port 8001 --loop asyncio
```

My guess is that your environment has uvloop installed, and uvicorn's `auto` mode selects the event loop automatically and is picking the incompatible one. By just adding this one parameter, it should work for you. (Specifically, note the `--loop asyncio`.)
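(If you want to verify which loop implementation your server actually ended up on, here's a quick sanity check; the `/loop` route is my own hypothetical addition, not part of lambdaprompt:)

```python
# Hypothetical diagnostic route: report the running event loop class.
# With uvloop installed and --loop auto, expect uvloop.Loop;
# with --loop asyncio, expect asyncio's selector/proactor loop class.
import asyncio
from fastapi import FastAPI

app = FastAPI()

@app.get("/loop")
async def loop_type():
    loop = asyncio.get_running_loop()
    return {"loop": f"{type(loop).__module__}.{type(loop).__name__}"}
```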
This was the fix! Thanks for looking into this, appreciate it.