Closed debjit closed 1 month ago
There is a Docker image; it should be runnable on CPU as well, although some of the models have abysmal generation times on CPU.
Thank you for your quick reply; there is not much Docker coverage in the tutorials. I prefer Docker over a direct install because I test a lot of packages, and Docker helps me keep everything separate.
https://github.com/rsxdalv/tts-generation-webui?tab=readme-ov-file#docker-setup
tts-generation-webui can also be run inside a Docker container. To get started, first build the Docker image while in the root directory:

```
docker build -t rsxdalv/tts-generation-webui .
```

Once the image has been built, it can be started with Docker Compose:

```
docker compose up -d
```

The container will take some time to generate the first output while the models are downloaded in the background. The status of this download can be verified by checking the container logs:

```
docker logs tts-generation-webui
```
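For context, a compose file for this setup might look roughly like the following sketch. The image and container names come from the commands above; the port mapping and volume path are assumptions for illustration, not taken from the repository:

```yaml
# Hypothetical sketch of a docker-compose.yml for the image built above.
# The exposed port and volume path are assumptions; check the repository's
# own docker-compose.yml for the actual values.
services:
  tts-generation-webui:
    image: rsxdalv/tts-generation-webui
    container_name: tts-generation-webui
    ports:
      - "7770:7770"        # assumed UI port
    volumes:
      - ./data:/app/data   # assumed location for downloaded models
```

With a file like this in the root directory, `docker compose up -d` starts the service in the background.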
Thank you. The readme is huge, and I think the changelog should be at the end of it. I searched for the keyword "Docker", and it turned up a changelog entry about pulling an image, so I tried that.
I agree, it's time to restructure the readme file.
I do not mind the long waiting time; I wanted to run it on my R5 2600 with 16 GB of RAM on Ubuntu 22 using Docker. Is that possible?
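If it helps, a CPU-only run should mostly be a matter of not requesting a GPU: Docker Compose only passes GPUs to a container when the service explicitly reserves them via `deploy.resources.reservations.devices`. A hedged sketch (image and container names taken from the README steps above; whether the application itself falls back to CPU gracefully is an assumption):

```yaml
# Hypothetical CPU-only variant: no deploy.resources.reservations.devices
# section, so no GPU is exposed to the container and inference would run on
# the CPU (assuming the application falls back when no GPU is visible).
services:
  tts-generation-webui:
    image: rsxdalv/tts-generation-webui
    container_name: tts-generation-webui
```

On a 16 GB machine it may also be worth watching container memory with `docker stats`, since some of the models are large.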