mudler / LocalAI

:robot: The free, Open Source alternative to OpenAI, Claude and others. Self-hosted and local-first. Drop-in replacement for OpenAI, running on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. Features: Generate Text, Audio, Video, Images, Voice Cloning, Distributed inference
https://localai.io
MIT License

/talk endpoint via v2.17.1-aio-cpu (Docker Desktop) throwing a 'MediaDevices API not supported!' alert (Chrome on Ubuntu 23.04) #2655

Open semsion opened 3 months ago

semsion commented 3 months ago

LocalAI version: v2.17.1-aio-cpu

Environment, CPU architecture, OS, and Version: Chrome on Ubuntu 23.04, 64-bit

Describe the bug The /talk endpoint via v2.17.1-aio-cpu (Docker Desktop) throws a 'MediaDevices API not supported!' alert, despite gpt-4, whisper-1, and tts-1 being set in the input fields.

To Reproduce Set gpt-4, whisper-1, and tts-1 in the input fields of the /talk endpoint and press the talk button.

Expected behavior The UI should receive the audio input and fill in the prompt.

Logs n/a

Additional context n/a

jtwolfe commented 3 months ago

I recently had a similar issue with a Gradio app. Apparently lots of browsers don't like giving media device access to apps served without SSL. I had success bypassing this with 'Firefox Focus' on my phone and 'Microsoft Edge' on my desktop; for a more permanent solution, you can run 'nginx proxy manager' in a container on your host system to manage the certificate and add some security.

Notes:

- gateway <- forward 80 and 443 to your NPM IP
- localai <- configure it to advertise on a 10k+ port (ie ~23317) and add an API key
- npm <- set your default redirect to the YouTube video of 'Never Gonna Give You Up' by Rick Astley
- ... profit
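For anyone who wants to try this, here is a minimal sketch of the container side of that setup. It assumes Docker on the host; the host port, volume names and the API_KEY variable are illustrative guesses, so double-check them against the LocalAI and Nginx Proxy Manager docs:

```bash
# Nginx Proxy Manager terminates SSL (proxy on 80/443, admin UI on 81).
docker run -d --name npm \
  -p 80:80 -p 443:443 -p 81:81 \
  -v npm-data:/data -v npm-letsencrypt:/etc/letsencrypt \
  jc21/nginx-proxy-manager:latest

# LocalAI on a high, non-default host port (the container listens on 8080).
# API_KEY is an assumption here; verify the exact variable name in the LocalAI docs.
docker run -d --name localai \
  -p 23317:8080 \
  -e API_KEY=change-me \
  -v localai-models:/build/models \
  localai/localai:v2.17.1-aio-cpu

# Then, in the NPM admin UI, add a proxy host with a certificate that
# forwards your chosen hostname to http://<host-ip>:23317.
```

Once the /talk page is served over HTTPS, Chrome should expose the MediaDevices API to it.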

jtwolfe commented 3 months ago

@semsion I have been unable to replicate this issue with 2.17. Can you provide more details on your configuration, i.e. enable DEBUG=True and provide the logs for this failure?

semsion commented 3 months ago

> @semsion I have been unable to replicate this issue with 2.17. Can you provide more details on your configuration, i.e. enable DEBUG=True and provide the logs for this failure?

Thank you for your response @jtwolfe

After running docker run --env DEBUG=true localai/localai:latest-aio-cpu or docker run --env DEBUG=true localai/localai in the terminal, Docker appears to try to download a whole new image, despite the 2.17.1 image already being present. Is this correct for a debug configuration?

jtwolfe commented 3 months ago

I highly recommend using a versioned image (ie localai/localai:v2.17.1-aio-cpu), as the latest image may unexpectedly change while @mudler works on stuff and his CI pipelines keep chugging along. To clarify: when you run latest, Docker checks the sha256 digest of the image to ensure that it really is the 'latest'. A quick look at Docker Hub right now shows that the latest-aio-cpu image was uploaded 9 hrs ago, while the v2.17.1-aio-cpu image was uploaded 10 days ago. I first noticed this when some elements of my cluster were unable to retrieve images; it turned out I had downloaded so many different versions of the latest image that I had used up the number of image pulls free users get from Docker Hub XD
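For example, here is a minimal sketch of getting debug logs without pulling anything new, assuming the v2.17.1-aio-cpu tag is already on disk (the port mapping and models mount are illustrative):

```bash
# Pin the exact tag so Docker reuses the image already on disk instead of
# resolving 'latest' to a newer digest and pulling it again.
docker run -it --rm \
  -p 8080:8080 \
  -e DEBUG=true \
  -v "$PWD/models:/build/models" \
  localai/localai:v2.17.1-aio-cpu
```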

While it is up to @mudler how he wants to work, personally I would configure CI so that latest is actually just the latest release (ie tagged version) of the standard CPU image, and then use a release candidate branch or tag. @semsion if you want to dig a bit deeper into this, here is my favorite article on git branching.

Also, I would recommend configuring the .env file and docker-compose to bring everything up. If you are only using docker run, I expect that the AIO models will also download every time you run the container; you should map a directory explicitly for this or create a Docker volume to store your models. For this, look for the LOCALAI_MODELS_PATH variable in the .env file and amend it accordingly.

PS. Use docker compose up (ie without -d); this will let you easily see the logs. Also, if you pull the image first and just want to export the logs as everything starts up, try docker compose up > localai.log, which will write the output to the noted file.
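To make that concrete, here is a rough sketch of the compose-based workflow, assuming you start from a clone of the LocalAI repo (which ships a docker-compose file and a .env) and that ./models is the host directory you want to persist; adjust names and paths to your setup:

```bash
# Get the repo so the compose file and .env are available.
git clone https://github.com/mudler/LocalAI.git && cd LocalAI

# Create a persistent host directory for models, then point the
# LOCALAI_MODELS_PATH variable in .env at it (eg LOCALAI_MODELS_PATH=./models)
# so the AIO models are not re-downloaded on every run.
mkdir -p models

# Bring everything up in the foreground and capture the startup logs.
docker compose up > localai.log
```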

I hope this helps

SuperPat45 commented 2 months ago

Could you show a better error message about the HTTPS requirement on the talk page of the WebUI when the MediaDevices API is not available and the page is not served over the HTTPS scheme?
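For context, browsers only expose navigator.mediaDevices in secure contexts, so the page can tell the two cases apart. A rough sketch of what such a check could look like (this is not the current LocalAI implementation; the function name and wording are illustrative):

```javascript
// Distinguish "page served over plain HTTP" from "browser genuinely lacks
// MediaDevices support" before requesting the microphone.
function checkMicrophoneSupport() {
  if (!window.isSecureContext) {
    alert("Microphone access requires HTTPS (or http://localhost). " +
          "Serve LocalAI over HTTPS, e.g. behind a reverse proxy.");
    return false;
  }
  if (!navigator.mediaDevices || !navigator.mediaDevices.getUserMedia) {
    alert("MediaDevices API not supported by this browser!");
    return false;
  }
  return true;
}
```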

semsion commented 1 month ago

> I highly recommend using a versioned image (ie localai/localai:v2.17.1-aio-cpu), as the latest image may unexpectedly change while @mudler works on stuff and his CI pipelines keep chugging along. To clarify: when you run latest, Docker checks the sha256 digest of the image to ensure that it really is the 'latest'. ...

Thank you for your response; I will consider these actions.