Open Stono opened 4 years ago
It seems like `pump` isn't passing through the error. If you add:
```js
async getWatchObjectStream (options) {
  console.log('getting watch stream')
  const jsonStream = new JSONStream()
  const stream = this.http(Object.assign({ stream: true }, options))
  // pass the error through
  stream.on('error', (err) => { jsonStream.emit('error', err) })
  pump(stream, jsonStream)
  return jsonStream
}
```
we're able to catch it in the client.
Hi, we're trying to detect when a `watch` event has timed out. In our use case this is typically because the Kubernetes master has restarted or been upgraded. It is quite easily simulated by setting up a watch, killing your wifi for a couple of minutes, then reconnecting: you'll see the watch doesn't time out and the stream is still readable.

Setting the `timeout` property on `request` will use that setting during a stream to decide if it is dead (see https://github.com/request/request/blob/df346d8531ac4b8c360df301f228d5767d0e374e/request.js#L807). However, setting that property just leads to kubernetes-client silently exiting the watch, with no way to capture the timeout.
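For reference, a hedged sketch of the options involved. The 30s value is an assumption for illustration, and how (or whether) kubernetes-client lets you plumb `timeout` through to `request` depends on the version:

```javascript
// Hypothetical options object, not kubernetes-client's actual API surface.
// `stream` is what kubernetes-client sets for watches; `timeout` is request's
// own option, which (per the request source linked above) is also consulted
// during streaming to decide the connection is dead.
const watchRequestOptions = {
  stream: true,
  timeout: 30000 // ms of socket inactivity before request raises a timeout error
}
```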
I believe the issue is here: https://github.com/godaddy/kubernetes-client/blob/master/backends/request/client.js#L225
If you change that to:
You see the timeout:
I believe this function should probably return a Promise, like the line below it, and throw this error when it happens, so that the consumer can catch it and reconnect?
In other news, `request` is deprecated.