KlaasYntema closed this issue 4 days ago
LLMs are known for their limited output token size. Long-form podcast generation has become a top-priority issue, since multiple requests have been made for it.
I'll work on it next. No clue what the solution will be, since there isn't any known best practice, but it will be fun to develop a robust solution for it.
I've managed to implement longform podcast generation and would love your feedback! The changes:
- `--longform` flag in CLI, or `longform=True` in the Python API
- `max_num_chunks` and `min_chunk_size` parameters in conversation config
- `word_count` parameter removed from conversation config, as it's no longer used
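The new parameters above might be set roughly like this (a minimal sketch; the file name, comments, and the specific values shown are assumptions for illustration, not confirmed by this thread):

```yaml
# conversation_config.yaml (hypothetical values)
max_num_chunks: 8      # assumed: upper bound on the number of chunks the LLM generates
min_chunk_size: 600    # assumed: minimum size of a chunk before moving to the next one
# word_count: 5000     # removed: no longer used for longform generation
```

From there, the feature would presumably be enabled per run with the `--longform` CLI flag, or by passing `longform=True` through the Python API, as listed above.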
First of all, thank you for this impressive package! I've encountered a possible issue when attempting to create longer audio outputs. Specifically, when I set a high word count (e.g., 5000 words) to generate a podcast of around 10 minutes, the resulting audio file only runs 2 to 3 minutes.
Could you clarify whether I might be configuring something incorrectly, or whether there's currently a limitation on generating longer audio outputs?
Thank you for your help!
This is my configuration: