Closed sciai-ai closed 1 year ago
Good question. Yes, Gradio launches a FastAPI app under the hood. But you might be able to do what you need using just the Gradio app, as it also exposes a prediction route. Specifically, when you launch the Gradio app, you'll have the following routes:
/
/api/predict
You can see the docs for the prediction API by clicking on "View API" at the bottom of the interface page.
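As a hedged sketch of how a client might call that prediction route (the exact payload shape depends on your Gradio version and interface; conventionally the JSON body carries a `data` list with one entry per input component):

```python
import json

def build_predict_request(base_url: str, text: str):
    """Build the URL and JSON body for Gradio's prediction route.

    Assumes a single-textbox interface; the payload is the conventional
    {"data": [...]} envelope accepted by /api/predict.
    """
    url = base_url.rstrip("/") + "/api/predict"
    body = json.dumps({"data": [text]})
    return url, body

url, body = build_predict_request("http://localhost:7860", "hello")
# Once the app is running, POST `body` to `url` with
# requests.post(url, data=body, headers={"Content-Type": "application/json"}).
```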
Is the other way around also possible, so I can have finer control over the app and routes?
We don't currently expose the FastAPI interface. While we are very happy with FastAPI, it is an implementation detail, and we don't want to bind our API to FastAPI's API, which would add significant maintenance overhead.
What are you trying to do that isn't currently possible with Gradio?
Edit: I'm wrong.

We can access the FastAPI application like this: `app, local_url, share_url = demo.launch()`, or via `demo.app`. You could then add an `/api` endpoint alongside the `local_url/api/predict` endpoint. Not 100% sure, but I think both should be possible.
Here is what I am trying to achieve. I have a prediction endpoint running in a FastAPI app at `/api/predict/`. I want to have an `/api/demo/` endpoint which reuses some logic from `/api/predict/` and adds some more logic to make the Gradio app work; e.g., having both mic and file-upload inputs requires adapting the `/api/predict/` function. I want to have these two endpoints in the same FastAPI app.

Of course I can make two separate Docker containers, each running a separate FastAPI app, but I want tighter coupling and consistency, so that my `/api/predict/` and `/api/demo/` get updated together when there is a new version. This way I can also reduce the overhead of running two separate APIs when they share a lot of underlying code.
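One way to get that coupling (a sketch with made-up names, independent of Gradio itself): factor the model logic into a single core function that both the `/api/predict/` route and the `/api/demo/` adapter call, so the two endpoints can only ever ship together.

```python
# Hypothetical shared core: both endpoints call this one function, so a new
# model version updates /api/predict/ and /api/demo/ at the same time.
def core_predict(audio_path: str) -> str:
    # placeholder for the real model call
    return f"transcript of {audio_path}"

# The /api/predict/ handler calls the core directly.
def api_predict(audio_path: str) -> dict:
    return {"prediction": core_predict(audio_path)}

# The /api/demo/ adapter: the Gradio UI passes both a mic recording and a
# file upload; pick whichever the user provided, then reuse the same core.
def api_demo(mic_path, file_path) -> dict:
    source = mic_path if mic_path is not None else file_path
    return {"prediction": core_predict(source)}
```

Both functions would then be registered as routes on the one FastAPI app.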
Please also see my additional comments in #1612
Hi @sciai-ai, I took a dive into FastAPI and it turns out that it is actually possible to serve your Gradio app within another FastAPI app quite easily, using FastAPI's `mount()` method. Here's a little demo I put together:
"""
How to launch your Gradio app within another FastAPI app.
Run this from the terminal as you would normally start a FastAPI app: `uvicorn run:app`
and navigate to http://localhost:8000/gradio in your browser to see the Gradio app.
"""
from fastapi import FastAPI
import gradio as gr
CUSTOM_PATH = "/gradio"
app = FastAPI()
@app.get("/")
def read_main():
return {"message": "This is your main app"}
io = gr.Interface(lambda x: "Hello, " + x + "!", "textbox", "textbox")
gradio_app = gr.routes.App.create_app(io)
app.mount(CUSTOM_PATH, gradio_app)
This serves the main app on `localhost:8000`, which could be an API endpoint, and it serves the Gradio app on `http://localhost:8000/gradio`, allowing you to create a Gradio app within another FastAPI app. This should let you do what you're looking for, so I'll go ahead and (finally!) close this issue. If not, feel free to reopen with more details.
What if it's a `gr.Blocks()` object instead of `gr.Interface()`?
> What if it's a `gr.Blocks()` object instead of `gr.Interface()`?

Same question here...
It should work the same, @willprincehearts.
It does not work when we use Gradio Blocks, especially when I use `demo.queue().launch()` with the `gr.Chatbot()` approach. See the example below; if you can give any advice, I would appreciate it.
```python
from fastapi import FastAPI
import gradio as gr

CUSTOM_PATH = "/gradio"

app = FastAPI()

@app.get("/")
def read_main():
    return {"message": "This is your main app"}

def flip_text(x):
    return x[::-1]

with gr.Blocks as demo:
    gr.Markdown(
        """
        # Flip Text!
        Start typing below to see the output.
        """
    )
    input = gr.Textbox(placeholder="Flip this text")
    output = gr.Textbox()
    input.change(fn=flip_text, inputs=input, outputs=output)

gradio_app = gr.routes.App.create_app(demo)
app.mount(CUSTOM_PATH, gradio_app)
```
I am having the same issue as @rainmaker712
Hi @cpatrickalves, @rainmaker712, are you both using the latest version of Gradio? Please check, and if you still see the bug, please create a new issue.
Yes, I have the same bug with the latest version of Gradio:
```
Successfully installed gradio-3.35.2 gradio-client-0.2.7

Traceback (most recent call last):
  File "/usr/lib/python3.9/multiprocessing/process.py", line 315, in _bootstrap
    self.run()
  File "/usr/lib/python3.9/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/home/ryan/venv_taas/lib/python3.9/site-packages/uvicorn/_subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/home/ryan/venv_taas/lib/python3.9/site-packages/uvicorn/server.py", line 61, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib/python3.9/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.9/asyncio/base_events.py", line 647, in run_until_complete
    return future.result()
  File "/home/ryan/venv_taas/lib/python3.9/site-packages/uvicorn/server.py", line 68, in serve
    config.load()
  File "/home/ryan/venv_taas/lib/python3.9/site-packages/uvicorn/config.py", line 473, in load
    self.loaded_app = import_from_string(self.app)
  File "/home/ryan/venv_taas/lib/python3.9/site-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.9/importlib/__init__.py", line 127, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1030, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 850, in exec_module
  File "<frozen importlib._bootstrap>", line 228, in _call_with_frames_removed
  File "/home/ryan/work/Toolbox/main.py", line 16, in <module>
    with gr.Blocks as demo:
AttributeError: __enter__
```
Hi @rainmaker712!

There are two issues with your code snippet:

1. `gr.Blocks` must be instantiated: `with gr.Blocks() as demo:`
2. You need to use `gr.mount_gradio_app` for the submount to be configured correctly. There are some custom Gradio things that FastAPI does not know about that need to happen.

Using this code works:
```python
from fastapi import FastAPI
import gradio as gr

CUSTOM_PATH = "/gradio"

app = FastAPI()

@app.get("/")
def read_main():
    return {"message": "This is your main app"}

def flip_text(x):
    return x[::-1]

with gr.Blocks() as demo:
    gr.Markdown(
        """
        # Flip Text!
        Start typing below to see the output.
        """
    )
    input = gr.Textbox(placeholder="Flip this text")
    output = gr.Textbox()
    input.change(fn=flip_text, inputs=input, outputs=output)

gr.mount_gradio_app(app, demo, path=CUSTOM_PATH)
```
@freddyaboulton Thank you for your help!
Actually, I am having trouble using FastAPI with the Gradio Chatbot. I want to serve the chatbot through my own internal API (self-hosted, not the local IP or the public share links). Here is the chatbot example from the official Gradio tutorial:
```python
import gradio as gr
import random
import time

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.ClearButton([msg, chatbot])

    def user(user_message, history):
        return gr.update(value="", interactive=False), history + [[user_message, None]]

    def bot(history):
        bot_message = random.choice(["How are you?", "I love you", "I'm very hungry"])
        history[-1][1] = ""
        for character in bot_message:
            history[-1][1] += character
            time.sleep(0.05)
            yield history

    response = msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot, chatbot, chatbot
    )
    response.then(lambda: gr.update(interactive=True), None, [msg], queue=False)

demo.queue()
demo.launch()
```
I have no idea how to combine Blocks & launch with:

```python
CUSTOM_PATH = "/gradio"

app = FastAPI()

@app.get("/")
def read_main():
    return {"message": "This is your main app"}

gr.mount_gradio_app(app, demo, path=CUSTOM_PATH)
```
Thanks for the help.
Hi @rainmaker712!

You can use the queue by calling `queue` before `mount_gradio_app`:
```python
import gradio as gr
import random
import time
from fastapi import FastAPI

with gr.Blocks() as demo:
    chatbot = gr.Chatbot()
    msg = gr.Textbox()
    clear = gr.ClearButton([msg, chatbot])

    def user(user_message, history):
        return gr.update(value="", interactive=False), history + [[user_message, None]]

    def bot(history):
        bot_message = random.choice(["How are you?", "I love you", "I'm very hungry"])
        history[-1][1] = ""
        for character in bot_message:
            history[-1][1] += character
            time.sleep(0.05)
            yield history

    response = msg.submit(user, [msg, chatbot], [msg, chatbot], queue=False).then(
        bot, chatbot, chatbot
    )
    response.then(lambda: gr.update(interactive=True), None, [msg], queue=False)

demo.queue()

CUSTOM_PATH = "/gradio"

app = FastAPI()

@app.get("/")
def read_main():
    return {"message": "This is your main app"}

app = gr.mount_gradio_app(app, demo, path=CUSTOM_PATH)
```
You cannot call launch, so setting `share=True` is not supported in this case.
@freddyaboulton Thank you so much for your advice; it works very well!
Hi, I have one FastAPI endpoint which serves prediction requests, and it runs successfully in Docker. I have another Docker application which is just a Gradio interface that sends user requests to the FastAPI endpoint URL. But when I launch the Gradio frontend container, it is unable to hit the FastAPI prediction endpoint. I get the following error:
```
Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 200, in _new_conn
    sock = connection.create_connection(
  File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 85, in create_connection
    raise err
  File "/usr/local/lib/python3.8/site-packages/urllib3/util/connection.py", line 73, in create_connection
    sock.connect(sa)
ConnectionRefusedError: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 790, in urlopen
    response = self._make_request(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 496, in _make_request
    conn.request(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 388, in request
    self.endheaders()
  File "/usr/local/lib/python3.8/http/client.py", line 1251, in endheaders
    self._send_output(message_body, encode_chunked=encode_chunked)
  File "/usr/local/lib/python3.8/http/client.py", line 1011, in _send_output
    self.send(msg)
  File "/usr/local/lib/python3.8/http/client.py", line 951, in send
    self.connect()
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 236, in connect
    self.sock = self._new_conn()
  File "/usr/local/lib/python3.8/site-packages/urllib3/connection.py", line 215, in _new_conn
    raise NewConnectionError(
urllib3.exceptions.NewConnectionError: <urllib3.connection.HTTPConnection object at 0x7effd2516610>: Failed to establish a new connection: [Errno 111] Connection refused

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 486, in send
    resp = conn.urlopen(
  File "/usr/local/lib/python3.8/site-packages/urllib3/connectionpool.py", line 844, in urlopen
    retries = retries.increment(
  File "/usr/local/lib/python3.8/site-packages/urllib3/util/retry.py", line 515, in increment
    raise MaxRetryError(_pool, url, reason) from reason  # type: ignore[arg-type]
urllib3.exceptions.MaxRetryError: HTTPConnectionPool(host='localhost', port=8888): Max retries exceeded with url: /app/v1/UCES (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7effd2516610>: Failed to establish a new connection: [Errno 111] Connection refused'))

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/lib/python3.8/site-packages/gradio/routes.py", line 437, in run_predict
    output = await app.get_blocks().process_api(
  File "/usr/local/lib/python3.8/site-packages/gradio/blocks.py", line 1352, in process_api
    result = await self.call_function(
  File "/usr/local/lib/python3.8/site-packages/gradio/blocks.py", line 1077, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "/usr/local/lib/python3.8/site-packages/anyio/to_thread.py", line 33, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "/usr/local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 877, in run_sync_in_worker_thread
    return await future
  File "/usr/local/lib/python3.8/site-packages/anyio/_backends/_asyncio.py", line 807, in run
    result = context.run(func, *args)
  File "/UCES_frontend/app.py", line 33, in predict
    response = requests.request("POST", url, headers=headers, data=payload)
  File "/usr/local/lib/python3.8/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/sessions.py", line 703, in send
    r = adapter.send(request, **kwargs)
  File "/usr/local/lib/python3.8/site-packages/requests/adapters.py", line 519, in send
    raise ConnectionError(e, request=request)
requests.exceptions.ConnectionError: HTTPConnectionPool(host='localhost', port=8888): Max retries exceeded with url: /app/v1/UCES (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7effd2516610>: Failed to establish a new connection: [Errno 111] Connection refused'))
```
May I know why it's showing this error?
Steps to reproduce. This is the `app.py` contents for the FastAPI endpoint in Docker container 1:

```python
app = FastAPI()

@app.get("/")
async def root():
    return {"Welcome ."}

@app.post("/app/v1/predict")
async def predict(input):
    response = predict(input)
    return response
```

This is `ui.py` for the Gradio frontend in Docker container 2:

```python
import gradio as gr

def predict(input):
    return input

demo = gr.Interface(fn=predict, inputs="text", outputs="text")
demo.queue().launch(server_name="0.0.0.0", server_port=7000)
```

Without Docker containers, both are working fine.
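A likely cause (an assumption based on the log above, not something confirmed in this thread): inside the frontend container, `localhost:8888` refers to the frontend container itself, not the API container, so the connection is refused. A common fix is to address the backend by its Docker network hostname (e.g. the docker-compose service name) rather than `localhost`, configured via an environment variable. `API_HOST` and the service name `api` below are illustrative names:

```python
import os
from typing import Optional

def backend_url(host: Optional[str] = None) -> str:
    """Build the prediction URL from a configurable backend host.

    Defaults to a (hypothetical) compose service name "api" instead of
    localhost, which inside a container points at the container itself.
    """
    host = host or os.environ.get("API_HOST", "api")
    return f"http://{host}:8888/app/v1/UCES"

# In the Gradio container you would then call, e.g.:
# requests.request("POST", backend_url(), headers=headers, data=payload)
```

With docker-compose, both containers must also be on the same network for the service-name DNS lookup to resolve.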
Sorry to resurrect a closed issue, and thanks for all the guidance in the thread so far. I've managed to achieve a lot of what I needed with `gr.mount_gradio_app`, but there are two more things I need to be able to handle:
1. Accessing the FastAPI session from a Gradio function:

```python
def gate(request: gr.Request):
    print(request.session.get("uid", None))
```

I get the error `'Request' object has no attribute 'session'`.

2. Redirecting from a Gradio event:

```python
def gate(request: gr.Request):
    return RedirectResponse(url="/")

mytab.select(gate)
```

This does nothing when I switch to the selected tab. Is there a way to achieve redirection to a FastAPI page when switching Gradio tabs?
+1
Hi @kells1986!

For 1: is the queue enabled in your app? When the queue is enabled, connections are made via websocket, so it may be a bug with how we copy the session state over to the request object.

For 2: I think you would have to do that with JavaScript. Right now Gradio really only supports returning component updates from functions. Please file an issue and we can see if we can build this directly into Gradio.
Hey @freddyaboulton,
I am also struggling with accessing the FastAPI request information within Gradio. I have been searching for a solution for a few days now, and I would really appreciate any tips on how to overcome this using JS, as you suggested. Happy to hear about other approaches as well :)
Hi @antonionieto! What is your issue? If you want to access the FastAPI request within your Gradio function, you can just type the input argument as `gr.Request`. If you want to redirect to a separate page, you must do so from the parent application or using the `js` parameter, e.g. `js="window.location.pathname='/<path>'"`.
Here is an example that does a redirect to the Gradio application using both raw HTML and a FastAPI `RedirectResponse`. It also accesses the request within the Gradio application.
```python
from fastapi import FastAPI
from fastapi.responses import HTMLResponse, RedirectResponse
import gradio as gr
import uvicorn

app = FastAPI()

def request(req: gr.Request):
    return {k: req.headers[k] for k in req.headers}

print_request = gr.Interface(request, None, "json")

HTML = """
<!DOCTYPE html>
<html>
<h1>Gradio Request Demo</h1>
<p>Click the button to be redirected to the gradio app!</p>
<button onclick="window.location.pathname='/gradio'">Redirect</button>
</html>
"""

@app.get("/")
def read_main():
    return HTMLResponse(HTML)

@app.get("/foo")
def redirect():
    return RedirectResponse("/gradio")

if __name__ == "__main__":
    app = gr.mount_gradio_app(app, print_request, path="/gradio")
    uvicorn.run(app, port=8080)
```
Thanks, @freddyaboulton!! That made the difference.
Btw, for those who are not aware of the event-listener compatibility with JS (like me), you can also use the `_js` parameter to do the redirect from a button component:

```python
button = gr.Button(value="Redirect")
button.click(None, [], [], _js="window.location.pathname='/gradio'")
```
@abidlabs I have a question about how to mount a Gradio app that uses simple auth (e.g. `demo.launch(auth=(username, password))`) to FastAPI. None of the examples above use Gradio auth, but I need to use Gradio auth (not Hugging Face OAuth). How can I do this?
> @abidlabs I have a question about how to mount a Gradio app that uses simple auth (e.g. `demo.launch(auth=(username, password))`) to FastAPI. None of the examples above use Gradio auth, but I need to use Gradio auth (not Hugging Face OAuth). How can I do this?
I am looking for something similar also. It would be nice to have some functionality where you can use the dependency injection already offered by FastAPI, which could look something like this:

```python
@app.api_route("/app1", methods=["GET", "POST"])
async def apps(request: Request, user: Annotated[models.User | None, Depends(authentication.check_authorization_of_request)]):
    return gr.mount_gradio_app(app, gradio_app.gradio_io)  # Bad syntax, but should describe the point
```
@young-hun-jo as a workaround, I think you can set the `auth` attribute manually before mounting. For example:

```python
demo.auth = ...
app = gr.mount_gradio_app(...)
```
> @young-hun-jo as a workaround, I think you can set the `auth` attribute manually before mounting. For example, `demo.auth = ...` then `app = gr.mount_gradio_app(...)`

I tried to run that code and got the following error:

```
AttributeError: 'Blocks' object has no attribute 'auth_message'
```

I solved it by also defining `demo.auth_message`, like this:

```python
demo.auth = ("hello", "world")
demo.auth_message = None
app = gr.mount_gradio_app(...)
```
> Hi @rainmaker712! You can use the queue by calling `queue` before `mount_gradio_app`. [...] You cannot call launch, so setting `share=True` is not supported in this case.

I use that method, but it just keeps running; can anyone help? The server log:
@freddyaboulton @kimmchii Thanks for your replies. The official docs for the `mount_gradio_app` function have been updated: it now has `auth` and `auth_message` parameters, and after applying them to my Gradio app, I successfully set the auth attributes while mounting the Gradio app to FastAPI. Thanks!
I am making a single FastAPI app where one route leads to prediction, and a second route leads to the Gradio interface. My understanding is that Gradio itself launches a FastAPI instance. Is it possible to achieve this?