Open shakenbake15 opened 2 weeks ago
Ooo!
Yeah sure, as long as this doesn't increase the RAM requirements.
Your fix looks pretty straightforward though. If I can find time to test it,
or if you give it the okay from your own testing, I'll slap that edit in.
Also, I'll add it manually and credit you in my readme and the commit if I implement it.
But if you open a pull request with your fix, we'll be able to get your name on the official GitHub contributors list for this repo
:)
Your chapter save method is suboptimal on books with large chapters. I would recommend changing how you combine the wav files. Currently, you're loading the entire "combined" file just to append a smaller file to it. When the combined file starts to get large, this considerably slows down the process: loading a 1-minute wav file to add 10 seconds is not a big deal, but when you are loading an hour-long wav file to add 10 seconds, it can take a while to get to 2 or 3 hours of audio. I hope that explanation makes sense. I would suggest setting a batch limit of 256, then combining the batches for each chapter. This is a minor improvement, but it will speed things up when saving large chapter files.
This is how ChatGPT recommends doing the update. It seems reasonable that this would work, but I'm using the program right now, so I can't test it at the moment.
```python
def combine_wav_files(chapter_files, output_path, batch_size=256):
    # Initialize an empty audio segment
    ...
```
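The snippet above is truncated (its comment hints at pydub's `AudioSegment`, which would also work). Here is a minimal sketch of the batching idea using only the stdlib `wave` module, so nothing extra needs installing; the function and parameter names mirror the snippet, and the 256 batch size is just the suggested default, not something I've benchmarked:

```python
import wave

def combine_wav_files(chapter_files, output_path, batch_size=256):
    """Concatenate WAV files in batches of `batch_size`, appending each
    batch to the output once, instead of re-reading the ever-growing
    combined file for every small chapter segment."""
    if not chapter_files:
        raise ValueError("no input files")
    # Take the audio parameters (channels, sample width, rate) from the
    # first file; all chapters are assumed to share the same format.
    with wave.open(chapter_files[0], "rb") as first:
        params = first.getparams()
    with wave.open(output_path, "wb") as out:
        out.setparams(params)
        for i in range(0, len(chapter_files), batch_size):
            # Read one batch of frames into memory, then write it once.
            frames = []
            for path in chapter_files[i:i + batch_size]:
                with wave.open(path, "rb") as w:
                    frames.append(w.readframes(w.getnframes()))
            out.writeframes(b"".join(frames))
```

Because the output is streamed with `writeframes`, the combined file is never reloaded; memory use is bounded by one batch of raw frames rather than the whole book.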