jina-ai / example-speech-to-image

An example of building a speech to image generation pipeline with Jina, Whisper and StableDiffusion

JINA_MP_START_METHOD=spawn HF_TOKEN=YOUR_HF_TOKEN python flow.py #4

Open · sonicviz opened this issue 1 year ago

sonicviz commented 1 year ago

Hi,

I'm using VS Code on Windows and having a few issues getting this running, both locally and when pushing to the cloud. I get lots of errors when trying to run it after the installs.

Was this tutorial done on Windows, Mac, or Linux?

While the video was interesting as insight into your process, in practice it's hard to replicate locally for various reasons, not least the need to have three GPUs!

I think it would be a great idea, and far more useful, to do a written blog tutorial that outlines the same project built directly in the cloud on Jina.ai.

Do you currently have any tutorials like that?

Thanks.

alaeddine-13 commented 1 year ago

Hey, this tutorial was tried on Ubuntu and should work on both Linux and Mac. Although Jina is supported on Windows, there are a few limitations that come from PyTorch and Windows: on one hand, PyTorch does not support the fork multiprocessing start method, and on the other hand, Windows does not properly support the spawn method. Therefore, I highly recommend using Ubuntu. Here is a tutorial for building the project in the cloud: https://jina.ai/news/speech-to-image-generation/
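For reference, the README's launch command works around this by pinning the start method through an environment variable. Below is a minimal sketch of the same thing done from Python before the Flow is created; the variable names come from the command in the issue title, the token is a placeholder, and this is not code from the repo itself.

```python
# Equivalent of the README command
#   JINA_MP_START_METHOD=spawn HF_TOKEN=<your-hf-token> python flow.py
# expressed as environment variables set before any Jina Flow is built.
import os

# Force Jina to spawn worker processes instead of forking them, working around
# the fork-vs-PyTorch limitation described above.
os.environ.setdefault('JINA_MP_START_METHOD', 'spawn')

# Hugging Face token, presumably used to download the StableDiffusion weights
# (placeholder value; substitute your own token).
os.environ.setdefault('HF_TOKEN', '<your-hf-token>')
```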

sonicviz commented 1 year ago

Thanks for the info.

I think ignoring Windows is going to cost you developer uptake. Advising people to use Ubuntu is fine from a technical perspective, but the reality is that services need to work on all the major development OSes, and Windows is the major dev OS. Sure, it has WSL, but it's easier to use Docker on WSL than to develop in WSL directly, especially if you're running multiple dev projects where it's more efficient to stay in Windows proper for development: tooling, setup, access to other apps, etc.

sonicviz commented 1 year ago

Still not having luck under Ubuntu.

1 Local test

Okay, I tried running it locally in Ubuntu under WSL2 on Windows 11, with a .venv, following the readme.

Executing flow.py, it downloads everything and then crashes in VS Code with the following:

(screenshot of the crash output, 2022-11-23)
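For context on what that local run is starting up, here is an illustrative sketch of the general shape a flow.py for this pipeline takes: a Jina Flow with a speech-to-text stage and a text-to-image stage. The executor references and port below are placeholders, not the repo's actual ones.

```python
# Illustrative outline only; the real flow.py may differ in executor
# implementations, Hub references and configuration.
from jina import Flow

flow = (
    Flow(protocol='grpc', port=54322)
    # Speech-to-text stage (Whisper); downloads model weights on first run.
    .add(name='whisper', uses='jinahub://WhisperExecutor')
    # Text-to-image stage (StableDiffusion); needs HF_TOKEN to fetch weights.
    .add(name='stable_diffusion', uses='jinahub://StableDiffusionExecutor')
)

if __name__ == '__main__':
    with flow:
        flow.block()  # serve until interrupted
```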

2 Remote Deploy test

Since that didn't work, I then tried a remote deployment. I put my hf_token in an .env.local file (correct, no?) and fired it off. Signed in OK to the JC login, started the deploy, it begins deploying, then after a minute or so gets disconnected:

```
🔐 Successfully logged in to Jina AI as sonicviz (username: xxxx)!
(.venv) sonicviz@Desktop:/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image$ jc deploy flow.yml
Traceback (most recent call last):
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/bin/jc", line 10, in <module>
    sys.exit(main())
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/lib/python3.8/site-packages/jcloud/main.py", line 21, in main
    getattr(api, args.jc_cli.replace('-', '_'))(args)
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/lib/python3.8/site-packages/jcloud/api.py", line 19, in wrapper
    return asyncio.run(f(*args, **kwargs))
  File "/usr/lib/python3.8/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.8/asyncio/base_events.py", line 616, in run_until_complete
    return future.result()
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/lib/python3.8/site-packages/jcloud/api.py", line 26, in deploy
    return await CloudFlow(path=args.path).__aenter__()
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/lib/python3.8/site-packages/jcloud/flow.py", line 305, in __aenter__
    await self._deploy()
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/lib/python3.8/site-packages/jcloud/flow.py", line 160, in _deploy
    raise e
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/lib/python3.8/site-packages/jcloud/flow.py", line 135, in _deploy
    async with session.post(
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/lib/python3.8/site-packages/aiohttp/client.py", line 1141, in __aenter__
    self._resp = await self._coro
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/lib/python3.8/site-packages/aiohttp/client.py", line 560, in _request
    await resp.start(conn)
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/lib/python3.8/site-packages/aiohttp/client_reqrep.py", line 899, in start
    message, payload = await protocol.read()  # type: ignore[union-attr]
  File "/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image/.venv/lib/python3.8/site-packages/aiohttp/streams.py", line 616, in read
    await self._waiter
aiohttp.client_exceptions.ServerDisconnectedError: Server disconnected
(.venv) sonicviz@Desktop:/mnt/e/Source Control/AI Stable Diffusion/jina-example-speech-to-image$ jc deploy flow.yml
[second attempt: identical traceback, again ending in aiohttp.client_exceptions.ServerDisconnectedError: Server disconnected]
```
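For anyone triaging this, the sketch below reconstructs what `jc deploy flow.yml` is doing at the point of failure, using only the names visible in the stack trace (jcloud's `CloudFlow` and its `__aenter__`/`_deploy`); it is not verified against the jcloud source.

```python
# Reconstructed from the traceback above rather than from the jcloud source:
# the `jc deploy` CLI ends up awaiting CloudFlow(path=...).__aenter__(), whose
# _deploy() POSTs the Flow via aiohttp; a dropped connection on the JCloud API
# side then surfaces as ServerDisconnectedError.
import asyncio

from aiohttp.client_exceptions import ServerDisconnectedError
from jcloud.flow import CloudFlow


async def deploy(path: str = 'flow.yml'):
    try:
        return await CloudFlow(path=path).__aenter__()
    except ServerDisconnectedError:
        # The API dropped the connection mid-deploy; retrying (or checking the
        # JCloud service status) is about all the client side can do here.
        raise


if __name__ == '__main__':
    asyncio.run(deploy())
```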

sonicviz commented 1 year ago

3 Creating directly on the cloud @ https://cloud.jina.ai/

I tried creating an executor directly in the cloud by uploading it, which went OK, but you can't create flows or apps directly? It seems you still need a local repo and then push it to the cloud. That leaves me stopped dead, as there is no way to create flows in the cloud, and since I get disconnected during the local-to-remote deploy (test #2 above), there doesn't seem much point in proceeding until I resolve that issue.

Some feedback so far:

- Executor names are global, not per user, so they need to be globally unique. This is not apparent until you try to upload one and it already exists on the Executor Hub.
- Creating an executor online lists the steps as they complete, but there is no progress indicator for each step. One would be helpful for knowing where you are in the upload and creation process.

Success for the first one; continuing on, stay tuned...

```
Receiving zip file...
Normalizing the content...
Uploading the zip...
Building image...
STEP 1/7 FROM docker.io/pytorch/pytorch:1.12.1-cuda11.3-cudnn8-runtime@sha256:0bc0971dc8ae319af610d493aced87df46255c9508a8b9e9bc365f11a56e7b75
STEP 2/7 RUN apt-get update && apt-get install --no-install-recommends -y gcc libc6-dev git
STEP 3/7 RUN python3 -m pip install --no-cache-dir jina
STEP 4/7 COPY requirements.txt requirements.txt
STEP 5/7 RUN pip install --default-timeout=1000 --compile -r requirements.txt
STEP 6/7 COPY . /workdir/
STEP 7/7 WORKDIR /workdir
{
  "tag": "637e19d302af6c58d5def964",
  "id": "1p9m2jbt",
  "name": "WhisperExecutorSonicviz",
  "alias": "WhisperExecutorSonicviz",
  "images": [
    "jinahub/1p9m2jbt:637e19d302af6c58d5def967",
    "registry.hubble.jina.ai/executors/1p9m2jbt:637e19d302af6c58d5def967"
  ],
  "secret": "9af41f4e5aa97d4890ac29de1ed9e25b",
  "visibility": "public"
}
Successfully pushed 1p9m2jbt
```
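Once a push like this succeeds, the executor can be pulled into a Flow by the name or id shown in the JSON above. A minimal sketch, assuming Jina's standard `jinahub+docker://` reference scheme; the surrounding Flow is illustrative, and only the executor name and id come from the push output:

```python
# Consuming the freshly pushed Hub executor in a Flow; only the executor
# name/id are taken from the push output above, the rest is illustrative.
from jina import Flow

flow = Flow().add(
    name='whisper',
    # Either reference should resolve for a public executor:
    #   jinahub+docker://WhisperExecutorSonicviz
    #   jinahub+docker://1p9m2jbt
    uses='jinahub+docker://WhisperExecutorSonicviz',
)

with flow:
    flow.block()
```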