kenwaytis / faster-SadTalker-API

The API server version of the SadTalker project. Runs in Docker, 10 times faster than the original!
MIT License

Does this scale with multiple GPUs/CPUs #6

Open salahzoubi opened 1 year ago

salahzoubi commented 1 year ago

Hello, I'm wondering if there are ways to further optimize inference speed using your repo?

  1. Was wondering if your repo scales with more GPUs/CPUs? (See the sketch after this list.)

  2. Is there a way to set the resolution to 128x128 for inference? That would certainly reduce inference time too, right?
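
One common pattern for scaling an API server like this across GPUs is to run one container per GPU (pinned via Docker's `--gpus` flag) and spread requests over the instances from the client side. Below is a minimal sketch of that idea; the ports, the `/generate` endpoint, and the form-field names are assumptions for illustration, not the repo's actual API:

```python
import itertools
import requests

# Hypothetical setup: one faster-SadTalker-API container per GPU, e.g. started with
#   docker run --gpus '"device=0"' -p 10364:10364 ...
#   docker run --gpus '"device=1"' -p 10365:10365 ...
# (ports and endpoint below are placeholders, not confirmed from this repo)
SERVERS = [
    "http://localhost:10364",  # container pinned to GPU 0
    "http://localhost:10365",  # container pinned to GPU 1
]
_round_robin = itertools.cycle(SERVERS)

def submit_job(image_path: str, audio_path: str) -> dict:
    """Round-robin a generation request across the per-GPU server instances.

    The endpoint name and form fields are hypothetical; check the repo's
    API docs for the real ones.
    """
    base = next(_round_robin)
    with open(image_path, "rb") as img, open(audio_path, "rb") as aud:
        resp = requests.post(
            f"{base}/generate",  # hypothetical endpoint
            files={"source_image": img, "driven_audio": aud},
            timeout=600,
        )
    resp.raise_for_status()
    return resp.json()
```

This only helps with throughput across requests; whether a single request can use more than one GPU depends on the repo's internals.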

Thanks for all the help, much appreciated!