Heroku sometimes cuts off outbound connections after we've responded to a request, since web dynos are only meant to service inbound requests. We cache data on S3 after responding to client requests, though, so we don't add extra latency. We need to handle errors that could come up if Heroku cuts us off.
Knox doesn't yet wrap those errors into the `putBuffer` callback for us, so we need to attach handlers to the `error` events on the read and write streams associated with the S3 request.
The chance of seeing these errors seems lower if we kick off the S3 request before sending our response to the client, but that's a performance tradeoff we can evaluate separately: we'd increase our response time on a cache miss, but we'd get a higher cache hit rate.
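The two orderings can be sketched side by side. This is a runnable illustration with stubbed names (`handler`, `buildPayload`, `cacheOnS3`, and the fake `res` object are all hypothetical, not our actual code):

```javascript
// Stubs so the sketch runs standalone; in the app these would be the
// real route handler, payload builder, and S3 cache write.
function buildPayload(cb) { cb(null, '{"ok":true}'); }
function cacheOnS3(path, payload, done) {
  // Pretend S3 write; calls back once the upload would finish.
  process.nextTick(function () { if (done) done(); });
}

function handler(req, res) {
  buildPayload(function (err, payload) {
    if (err) return res.send(500, err.message);
    // Current ordering: respond first, cache after. The client sees
    // no extra latency, but Heroku may sever the S3 connection once
    // the inbound request is done.
    res.send(200, payload);
    cacheOnS3('/cache/data.json', payload);
    // Alternative ordering: cache first, respond after. Slower on a
    // cache miss, but the S3 write completes while the request is
    // still live:
    //   cacheOnS3('/cache/data.json', payload, function () {
    //     res.send(200, payload);
    //   });
  });
}

// Tiny fake response object so the sketch runs outside a server:
handler({}, { send: function (status, body) { console.log(status, body); } });
// → 200 {"ok":true}
```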
/cc @hampelm