simon-weber opened 8 years ago
Just throwing my two cents in. I think I run into a similar situation to yokaiichi's.
I disagree with, or maybe misunderstand, how splitting a large playlist into several would help. Assuming the split were complete and playlists were divided by overflow, you'd still have to select which of the resulting playlists to play from. If you pick playlist A with the first 1000 songs and put it on random, you're never going to get any of the 700 overflow songs in B. So why was B generated? Are you ever actually going to click on B and listen? If not, just generate A and stop.
I also have playlists with large resulting pools of songs. Usually massive swaths of genres, maybe trimming out songs I've heard in the past 30 days. I might have a playlist pool with 10k songs to pick from, but a single playlist can only hold 1k; bummer. However, since it has a random sorting and it regenerates every time I open my client, it's always a different set of those songs every time. There's not any song in my 10k pool that doesn't have a chance of being heard day by day. They all come in and out depending on the random seed that day. Maybe within a -single- generation I won't be able to hear 9k/10k of my songs, but that's not a problem unless you literally listen to your entire 1k playlist before a regeneration sync happens.
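To make that mechanic concrete, here's a minimal sketch of the random-sort-plus-regeneration idea described above. The function name and the date-based seed are my assumptions for illustration, not the extension's actual implementation: the point is just that shuffling the full pool with a fresh seed on each sync, then truncating to the 1k limit, gives every song in the pool a chance over time.

```python
import datetime
import random

def regenerate_playlist(pool, limit=1000):
    """Hypothetical sketch: shuffle the whole pool with a seed that
    changes per regeneration (here, today's date), then keep only as
    many songs as a single playlist can hold."""
    seed = datetime.date.today().toordinal()  # different seed each day
    rng = random.Random(seed)
    shuffled = list(pool)
    rng.shuffle(shuffled)
    return shuffled[:limit]

pool = [f"song-{i}" for i in range(10_000)]
playlist = regenerate_playlist(pool)
print(len(playlist))  # 1000 songs drawn from the 10k pool
```

Within a single generation only 1k of the 10k songs are reachable, but since the seed changes on every sync, the reachable subset rotates over time.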
TL;DR: I think this feature would just add unnecessary load on the system.
Yeah, I've started leaning that way too: using random sorting seems like a better overall solution.
The only times I've personally found myself wanting to skip is when debugging.
See https://github.com/simon-weber/Autoplaylists-for-Google-Music/issues/39#issue-131989545. This could be used for splitting big playlists into chunks.