The original idea behind the timer was that it would allow a tiny
bit of time for more data to come in, giving a large batch of requests
a better chance of being processed together. But I think this had a bad
side effect: while processing a batch of data on the channel, it would
start that timer on the first piece of data, and once the timer
expired that select case had an equal chance of being picked and returning
before the channel was fully drained. I saw this in some tests with
races, where it would sometimes not drain all the data.
Using default means we don't get that little delay, but the default
case is never selected when one of the other cases is ready. Thus it
won't exit until the channel is fully drained.
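The default-based version, as a minimal sketch (function name is mine): because select only takes default when no other case is ready, the loop cannot exit while data is still buffered.

```go
package main

import "fmt"

// drain collects everything currently buffered on ch without blocking.
// The default case is only chosen when the receive is not ready, so the
// loop cannot return until the channel is fully drained.
func drain(ch <-chan int) []int {
	var batch []int
	for {
		select {
		case v := <-ch:
			batch = append(batch, v)
		default:
			// No data ready right now: the batch is complete.
			return batch
		}
	}
}

func main() {
	ch := make(chan int, 8)
	for i := 1; i <= 5; i++ {
		ch <- i
	}
	fmt.Println(drain(ch)) // prints [1 2 3 4 5]
}
```

The trade-off is exactly the one described: no grace period for stragglers, but a hard guarantee that nothing buffered is left behind.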
IMO the second case wins, as it has real data behind it (a failing test)
where the first case has only an idea.