Closed RevanthRameshkumar closed 1 month ago
Hi @RevanthRameshkumar ! Your application should have 1 microphone (for the bot to speak) and 1 speaker (for the bot to listen) per room. Because of current limitations with microphones, you should do this with processes (instead of threads), for example using Python's multiprocessing library.
If you have 1000 clients, yes, this will take quite some network connections, that's why at those numbers it might be a good idea to scale.
Some people use Flask/Celery, others use FastAPI and multiprocessing, others use Node.js and spawn a Python process. There are many ways to do that.
If you want to build AI agents, you might want to check https://github.com/pipecat-ai/pipecat
Thanks! As a followup question, it seems that celery recommends 1 process per 1 CPU core. So for example if my digital ocean vm has 2 vcpu cores, I can only support 2 users in that vm right?
Hard to say, but I would say you will be able to support more than 2 users. I'd be curious to know if you try it out. 😃
I'll try it out and report back! Closing for now.
I am working on the server component to send custom playback to users depending on their mics. I will have 1 user per room basically and an AI agent that the user talks to. How many mic devices can I instantiate? Can I dynamically just make 1 client and 1 mic per user (like in https://github.com/daily-co/daily-python/blob/main/demos/audio/raw_audio_send.py)? How many users can I support like this? I'm guessing the "my-mic" takes up some sort of socket or virtual socket on the server...does that mean if I have like 1000 sockets open, the mic writes will start to lag?
Related: https://github.com/daily-co/daily-python/issues/6