Closed kettanaito closed 3 months ago
We can spy on ClientRequest.prototype.write() and ClientRequest.prototype.end() to set an internal [kRequestStream] readable stream. That way, we can access the stream before the socket emits the connect event.
Since the body stream is the last thing we need in order to call the request listener, we can call the listener even before deciding whether to emit the connect+ready events on the socket instance.
@mikicho, what do you think about this approach?
@mikicho, what do you think about this approach?
Yet another attempt at the Socket-based interceptor
The Nth time's a charm. :)
Since the body stream is the last thing we need in order to call the request listener, we can call the listener even before deciding whether to emit the connect+ready events on the socket instance.
I like the direction you are going!
Would it work in the following case?
const req = http.request(...)
req.write('sync') // OK
await sleep(1000)
req.write('async') // I think it will work, but I'm not sure: my concern is that the ClientRequest would wait for the `connect` event to continue
req.end()
It appears that the process.binding function is going to be removed.
Update: seems like we can use const { HTTPParser } = require('_http_common') (or import { HTTPParser } from 'node:_http_common') instead.
Also, a question: would the socket-based interceptor support fetch as well, or would it still require a different interceptor? If not, we may implement a very similar approach with setGlobalDispatcher.
With this approach, we have to mock the connection to the host every time in order for the ClientRequest (the higher-level consumer) to start streaming the buffered request body through the socket (it won't start that until it receives the connect event).
This means a few different sequences of events depending on the type of the request.
The same thing as the one above.
This is where things get a bit tricky.
This scenario differs from the event order in the unpatched scenario. The initial lookup event would include an error, and mirror itself in the error event on the socket as well. In our case, the lookup/connect/ready events are successful. This is problematic for a few reasons:
Error: write EBADF — an error which I believe indicates an invalid write.
While no direct consumers care about the lookup/connect/ready events, the error event is rather crucial. In either case, keeping the event flow as close to the normal behavior as possible is desirable.
We can spy on the net.Socket, and whenever it's constructed, construct a "shadow" copy of it internally and run the entire request/response on that shadow while keeping the original socket pending.
class SocketController {
  constructor(socket) {
    this.socket = socket

    // Create a shadow socket.
    this.shadow = new net.Socket(...socketArgs)
    // Always mock the connection on the shadow.
    this.mockConnect(this.shadow)

    // Observe the shadow, not the original socket.
    this.shadow.write = proxy
    this.shadow.push = proxy

    this.onShadowHasMock = (response) => {
      // If the shadow socket received a mocked response
      // from the "request" listener in the interceptor,
      // mock the original socket connection (it doesn't matter now)
      // and pipe the response to the original socket.
      this.mockConnect(this.socket)
      pipeResponse(response, this.socket)
    }

    this.onShadowHasNoMock = () => {
      // If the interceptor had no mocked response for this request,
      // continue with the original socket as-is.
      // The original socket has been pending up to this moment.
    }
  }
}
In order for us to still respect the original socket and not implement a dummy Socket class to provision this "waiting" period, we can do something like an infinitely pending promise on the crucial methods of the original socket that we then resolve in the case of onShadowHasNoMock.
this.socket.emit = applyProxyHere((...args) => {
  waitForShadow().then(() => {
    Reflect.apply(...args)
  })
})
If the shadow socket has a mock, the waitForShadow promise will keep pending and won't ever be resolved (alternatively, we can either resolve it with a flag or reject it and provide a .catch() handler). If the shadow socket finds no mocked response, it will resolve the promise, which will trigger the methods on the original socket in the order they were originally called.
This should include at least .emit and .write, but may require adding the promise plug to more methods.
Problems with this solution:
ClientRequest receives the connect event from it. Catch 22.
I tried solving the socket event order issue in https://github.com/mswjs/interceptors/pull/515/commits/f7fded868cf34572424951f019d2cdaeb2286af3. I believe it works great; now I'm writing tests to confirm that.
this.mockConnect(this.shadow)
How does this propagate the connect/ready/lookup events to the ClientRequest so that it will continue to write the data?
Catch 22
Problems with this solution:
Probably not a problem, but I want to make sure I understand: in this option we will emit connect/ready/lookup twice in the case of a non-mocked request, right?
While no direct consumers care about the lookup/connect/ready events
It seems like Axios doesn't use this, but got exposes this data to its users; I'm not sure for what, though...
I opened a feature request for official mocking API like undici has: https://github.com/nodejs/node/issues/51979
It seems like Axios doesn't use this, but got exposes this data to its users, I'm not sure for what tho...
It doesn't expose that data but rather hooks into those socket events to provide timings measurements. That's okay. If you take a look at the current state of the branch, it emits the events correctly for all scenarios we've talked about. There's no duplication.
I opened a feature request for official mocking API like undici has: https://github.com/nodejs/node/issues/51979
This is great! I'd certainly love to hear Node's opinion on this. I wouldn't count on this landing any time soon, so our work on the Socket interceptor is relevant for at least a few years.
Probably not a problem, but I want to make sure I understand, in this option we will emit connect/ready/lookup twice in case of non-mocked request, right?
Not anymore.
Not anymore.
This is impressive! How so?
I've pushed the latest state of the changes that supports both HTTP and HTTPS requests. I've moved things around a bit: mainly made the MockSocket agnostic of the type of messages sent (just giving us the observability), introduced a MockHttpSocket that utilizes the mock socket and pipes its streams through the HTTP parser, and then integrated it with the new _ClientRequestInterceptor that's now Agent-based (we aren't mocking the ClientRequest at all now).
I'm encountering errors when intercepting HTTP/HTTPS requests with a body. Error:
pnpm test:node test/modules/http/intercept/http.request.test.ts
FAIL modules/http/intercept/http.request.test.ts > intercepts a POST request
AbortError: The operation was aborted
❯ Readable.emit ../node:events:513:28
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯
Serialized Error: { code: 'ABORT_ERR' }
Caused by: Error: Premature close
❯ Readable.emit ../node:events:513:28
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯
Serialized Error: { code: 'ERR_STREAM_PREMATURE_CLOSE' }
⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯⎯[1/1]⎯
This indicates either that the MockSocket connection has been closed (not the case, the connection is never established to begin with), or that the MockSocket is still in a writable state when we try to .emit('data') while forwarding data events from the original socket (in the passthrough() method). The latter seems more likely since this.writable is still true. You can also observe the finish/end events on the mock socket to see that they are emitted after the passthrough() starts.
This makes sense because right now we call socket.passthrough() in the interceptor on the request event, which fires as soon as the request headers are sent. There may be some time passing until the request body has been written completely. I think this indicates that we mustn't start the passthrough until the socket is finished or is in the readable state.
I've tried things like
// the interceptor
socket.once('end', () => socket.passthrough())
But they haven't fixed the problem.
I've put up an isolated example of this in https://github.com/kettanaito/node-tls-mock. Strangely, all tests are passing there, even for requests with a body. The socket implementation seems to be the same. I advise inspecting the differences first to see if I've missed something while porting it to this branch.
ERR_INVALID_ARG_TYPE on the listener in JSDOM
The callback argument to any http.* methods throws the following error only in JSDOM:
npx vitest -c test/vitest.config.js test/third-party/axios.test.ts
Error: TypeError [ERR_INVALID_ARG_TYPE]: The "listener" argument must be of type function. Received an instance of Object
at Object.dispatchError (mswjs/interceptors/node_modules/.pnpm/jsdom@16.7.0/node_modules/jsdom/lib/jsdom/living/xhr/xhr-utils.js:63:19)
at EventEmitter.<anonymous> (mswjs/interceptors/node_modules/.pnpm/jsdom@16.7.0/node_modules/jsdom/lib/jsdom/living/xhr/XMLHttpRequest-impl.js:655:18)
at EventEmitter.emit (node:events:526:35)
at mswjs/interceptors/node_modules/.pnpm/jsdom@16.7.0/node_modules/jsdom/lib/jsdom/living/xhr/xhr-utils.js:328:49
at processTicksAndRejections (node:internal/process/task_queues:77:11)
This passes in Node.js, so I suspect JSDOM is meddling with the callback function somehow so that it's no longer a function. This is caused by us tapping into the actual Node.js http module via Reflect.apply(), which runs all the validations, including the callback validation for the request (http.get(options, callback)). So, we are getting errors for behaving more like Node.js in JSDOM.
This is difficult to address because it's not related to Interceptors at all. You can reproduce this error in a clean setup with JSDOM and http.*.
// @vitest-environment jsdom
import http from 'http'
http.get('http://example.com')
This produces the same error. Ideally, we shouldn't do anything about this, but it will break consumers, and so we have to somehow account for this yet again. I also wonder how this is not a bug in JSDOM, given they rely on the http module to implement the XHR polyfill.
JSDOM replaces the global URL constructor, making it incompatible with Node.js. Verified by forcing the URL from node:url to be used as a global constructor. The issue is gone then.
import { URL } from 'node:url'
globalThis.URL = URL
Twofold:
- Use the URL from node:url explicitly, not relying on the globals in the Interceptors, as the globals can be meddled with by the environment.
- If url is not an instance of the URL from node:url, it has been polyfilled and won't be considered a valid URL in Node.js. Cast it to a string in that case.
That resolves the failing tests. Have to solve the interceptors' event deduplication somehow without having to rely on x-request-id anymore. AsyncLocalStorage works great but it's Node-specific. Tried throwing promises to little success. I can almost feel we can do some .emit magic on the BatchInterceptor if each individual interceptor describes how to identify the same events.
@kettanaito isn't this Node only interceptor?
I think I found a small edge case on empty streams:
const got = require('got')
const { ClientRequestInterceptor } = require('@mswjs/interceptors/ClientRequest')
const interceptor = new ClientRequestInterceptor()
interceptor.on('request', function rootListener({ request }) {
request.respondWith(new Response(
new ReadableStream({
start(controller) {controller.close()}
})
))
})
interceptor.apply()
got('http://example.text').then(console.log)
I solved this by pushing an empty string before closing, but I'm not an expert in streams, so maybe there is a more "correct"/straightforward solution.
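The workaround described above might look like this. A sketch only, assuming enqueueing an empty chunk before closing is what resolves the hang:

```javascript
// Enqueue an empty chunk before closing so the consumer performs at least
// one successful read before hitting EOF.
const stream = new ReadableStream({
  start(controller) {
    controller.enqueue(new TextEncoder().encode(''))
    controller.close()
  },
})
```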
HEAD requests with response body:
const got = require('got')
const { ClientRequestInterceptor } = require('@mswjs/interceptors/ClientRequest')
const interceptor = new ClientRequestInterceptor()
interceptor.on('request', function rootListener({ request }) {
request.respondWith(new Response('hello'))
})
interceptor.apply()
got.head('http://example.text').then(console.log)
Throws an error: RequestError: Parse Error: Expected HTTP/
Now, it may make sense because HEAD requests should not have a body, but nothing enforces this, so I'm wondering if we need to throw a friendlier error in createResponse to save some time for a poor developer in the future 😅. According to MDN, it is recommended to ignore this issue. However, it appears that Node throws an error.
@mikicho, not really. The issue is fundamental. In the BatchInterceptor
, you can compose multiple interceptors into a single interface. An interceptor can run in any environment. It's up to the checkEnvironment
method on the interceptor, really. But the issue of one interceptor "falling through" to the other doesn't end with Node.js.
@mikicho, I've fixed the empty ReadableStream issue. I've added a test for the response body for a HEAD request but couldn't reproduce it. May be a got issue. Feel free to look at the test (test/modules/http/compliance/http-head-response-body.test.ts) and see if something is missing there.
Something somewhere isn't closing right.
⎯⎯⎯⎯⎯⎯ Unhandled Error ⎯⎯⎯⎯⎯⎯⎯
Error: Worker terminated due to reaching memory limit: JS heap out of memory
❯ new NodeError ../node:internal/errors:399:5
❯ [kOnExit] ../node:internal/worker:277:26
It seems like there is no authorized property in the https socket.
const http = require('http')
const { ClientRequestInterceptor } = require('@mswjs/interceptors/ClientRequest')
const { Stream } = require('stream')
const interceptor = new ClientRequestInterceptor()
const a = new Stream.PassThrough()
interceptor.on('request', function rootListener({ request }) {
request.respondWith(new Response(a))
setTimeout(() => a.push('test'),3000)
})
interceptor.apply()
http.get('http://example.com', res => {
res.on('data', console.log)
res.on('end', () => console.log(1)) // Never gets called with a stream. It works for new Response('hello')
console.log('hey');
})
Somehow, we skip the end event when the response is a stream. The data event is emitted normally.
To summarize:
request.respondWith(new Response('hello'))
request.respondWith(new Response(a))
@mikicho, are you ending the stream though? It looks like you write to it but never actually close it, thus the end is never emitted.
const a = new Stream.PassThrough()
interceptor.on('request', function rootListener({ request }) {
request.respondWith(new Response(a))
setTimeout(() => {
a.push('test')
+ a.push(null) // indicate that's the end of the stream
}, 3000)
})
I'm not aware of how well Response supports Readable, either.
You are correct. I always forget this null thing!
The HTTP parsers are now destroyed like this:
- The request parser is destroyed on the finish event of the socket. That event indicates that the socket has been ended (.end() was called) and the socket has flushed any pending data (finish is emitted). Once these two happen, you can no longer write to the socket, and so the parser won't ever receive any new request HTTP messages.
- The response parser is destroyed on the close event of the socket. If close is emitted, the socket has been destroyed, and nothing can push to it anymore.
This should work fine memory-wise.
We still need to design a better way of handling this.requestStream and this.responseStream since those are inherent to individual requests/responses, not the entire Socket instance. Basically, we need to add a test for when a single Socket is reused for multiple requests/responses.
@mikicho, I'm going to take a break from this, potentially until the end of the month. Please, feel free to tackle any remaining tasks and report/fix any issues you discover from the Nock's test suite. I believe we are at an extremely solid foundation here and I cannot wait to see this merged.
Full disclosure, I don't see Nock compatibility as the requirement to merge this. As soon as we land all the remaining todos in the pull request's description, I consider this task done. We should merge and release it and then tackle missing bits for Nock separately.
Thank you once again for the help with this. Excited to circle back on this in April!
It seems like the write callback never gets called.
I guess we don't emit the required event on the socket.
const got = require('got')
const http = require('http')
const { ClientRequestInterceptor } = require('@mswjs/interceptors/ClientRequest')
const interceptor = new ClientRequestInterceptor()
interceptor.on('request', async function rootListener({ request }) {
request.respondWith(new Response('OK!'))
})
interceptor.apply()
const r = http.request('http://example.com', res => {
console.log(1);
res.on('data', console.log)
res.on('end', () =>console.log('end'))
})
r.write('dsa', undefined, () => {
console.log(2); // never gets called. It does when I comment out interceptor.apply()
r.end()
})
Update: also the finish callback:
r.on('finish', () => console.log(12))
r.end('mamma mia')
And connect, and maybe secureConnect:
req.on('socket', socket => {
console.log(13);
socket.once('connect', () => {
console.log(1)
req.end()
})
socket.once('secureConnect', () => console.log(222))
})
@mikicho, the connect, secure, secureConnect, and session events should all be emitted on TLS sockets. I have a compliance test that confirms that. If they are not, you are likely treading into a use case we are not covering properly? Need to distill that use case to the minimum and turn it into an automated test 👀
I think we need to add the chunk size on write when the transfer encoding is chunked. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Transfer-Encoding
I think I can open a PR to fix this.
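For illustration, chunked transfer coding frames each write as the chunk size in hexadecimal, CRLF, the payload, CRLF, with a zero-length chunk terminating the body. A minimal sketch (encodeChunk is a made-up helper, not the interceptor's API):

```javascript
// Frame one body chunk per the chunked transfer coding.
function encodeChunk(chunk) {
  const data = Buffer.from(chunk)
  return Buffer.concat([
    Buffer.from(data.length.toString(16) + '\r\n'),
    data,
    Buffer.from('\r\n'),
  ])
}

// A zero-length chunk marks the end of the body.
const CHUNKED_TERMINATOR = Buffer.from('0\r\n\r\n')

const framed = encodeChunk('hello') // "5\r\nhello\r\n"
```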
@mikicho, let's address the Transfer-Encoding issue separately once this merges.
Please, if you have a moment, could you look into why test/third-party/follow-redirect-http.test.ts
is pending forever with the Socket interceptor?
pnpm test:node test/third-party/follow-redirect-http.test.ts
It'd be a great help! 🙏
I suspect the follow-redirects test suffers from recursive net.Socket construction, which leads to a pending test and the worker running out of memory. There's no protection in place for scenarios where a passthrough request creates another Socket that matches that request, which I suspect happens in the follow-redirect scenario. I have not confirmed this, just a wild guess.
It seems like we don't propagate mocked response errors to the client:
it.only('error', async () => {
interceptor.removeAllListeners()
interceptor.on('request', ({ request }) => {
const replyBody = new PassThrough()
request.respondWith(new Response(new ReadableStream({
start(controller) {
replyBody.on('data', (chunk) => controller.enqueue(chunk))
replyBody.on('error', (error) => controller.error(error))
},
})))
replyBody.push('start')
setTimeout(() => { replyBody.emit('error', 'oh no!') }, 50)
});
const responseErrorPromise = new DeferredPromise<Error>()
http.get('http://localhost:3001/resource', response => {
response.on('error', responseErrorPromise.resolve)
})
const responseError = await responseErrorPromise
expect(responseError.message).toBe('oh no!')
})
The error comes from this line, but I couldn't figure out how to propagate the error.
I tried try {...} catch(e) this.emit('error', e)
but for some reason, it didn't work.
@mikicho, that's a great catch! Do you know how that error is handled in a non-mocked scenario? Does it propagate as the error event on the ClientRequest?
I think doing this should work:
await reader.read().catch((error) => /* handle */)
Edit: Opened a fix in https://github.com/mswjs/interceptors/pull/548. Added an automated test. Response stream errors should now be translated to request errors (emit the "error" event), preserving the original stream error.
Do you know how that error is handled in a non-mocked scenario?
Hm.. this is a good question, I couldn't reproduce it with a real request.
According to the docs, IncomingMessage shouldn't emit an error event, so maybe we should cover this only from the mocked request perspective?
The socket.address() returns an empty object in the connect event.
I guess it's because we don't establish a connection for mocked requests. I'm wondering if we should fake the object, just in case some library expects these properties (e.g. socket.address().port.toString()).
const { ClientRequestInterceptor } = require('@mswjs/interceptors/ClientRequest')
const http = require('http')
const interceptor = new ClientRequestInterceptor({
name: 'my-interceptor',
})
interceptor.apply();
const req = http.get('http://example.com')
req.once('socket', socket => {
socket.once('connect', () => {
console.log(socket.address()) // Mocked: {}. Non-mocked: { address: '172.27.36.212', family: 'IPv4', port: 42158 }
})
})
Fixing the failing tests in #548.
Opened a fix to forward TLS Socket properties (authorized, encrypted, etc.): https://github.com/mswjs/interceptors/pull/556
Nock (unintentionally?) covers an edge case of a request that ends in the connect event.
With the current socket-based implementation, the request never ends.
const { ClientRequestInterceptor } = require('@mswjs/interceptors/ClientRequest')
const http = require('http');
const interceptor = new ClientRequestInterceptor({
name: 'my-interceptor',
})
interceptor.apply();
interceptor.on('request', ({ request }) => {
request.respondWith(new Response())
});
const req = http.request('http://example.com')
req.on('socket', socket => {
socket.once('connect', () => {
console.log(1); // never gets called when the interceptor is applied.
req.end()
})
socket.once('secureConnect', console.log)
})
WDYT?
How do we need to handle the Expect: 100-continue request header?
What Node.js does: emits a continue event. (docs)
I wonder if we should emit a continue event before calling the interceptor.
I think I can open a PR when we decide how to handle this.
Nock (unintentionally?) covers an edge case of a request that ends in the connect event.
@mikicho, we cannot cover this. connect depends on req.end(), so it cannot be called in the connect event listener when the interceptor is enabled. Are there any other details around that test case in Nock (is it a regression)?
How do we need to handle Expect: 100-continue request header?
This reads like an http.ClientRequest-level behavior. Is it not happening already? That's odd.
We should support it, but we should also be careful not to support it prematurely. As in, it may not belong on the Socket level, and we'd be ignoring a potential bug that prevents Node.js from calling checkContinue() as it does in a non-interceptor scenario.
Let's add an integration test for this first and then see! Would love to dive deeper to learn what exactly Node.js does step-by-step, and at which step it stops when using the interceptor.
@mikicho, hi! You've mentioned there are also some remaining issues left here. Could you please post them in the comment to this pull request? I'd love to see what's left so I can tackle some of them whenever I have a moment. Thank you!
This reads to be an http.ClientRequest level of behavior. Is it not happening already? That's odd.
I understand the issue. The server is the one who returns this response, and Nock mimics this behavior and emits continue automatically when Expect: 100-continue is present (source).
As far as I understand, mswjs/interceptors should be a platform for building tools like Nock and MSW. So I tried to mimic the writeContinue behavior on Nock's side in the request event, but then I encountered a couple of problems:
- Response does not support the 100 HTTP status code (source).
PS: I don't know why Undici has this limitation, and I couldn't find it in the spec.
Uncaught RangeError: init["status"] must be in the range of 200 to 599, inclusive.
- With the current respondWith API, how can we return two responses (one with 100 and another when we get the request body)?
Having said that, this looks like an esoteric edge case, so if it requires API changes, we might want to automatically writeContinue inside mswjs/interceptors.
We are currently correctly destroying the parsers, specifically:
- The request parser is destroyed once the request emits the finish event, indicating that the request can no longer be modified.
- The response parser is destroyed once the .destroy() method is called on the socket, indicating that the socket is no longer operable.
The .destroy() method is called for both normal request handling and in the case of an exception. We are always freeing the response parser.
I believe this feature is ready for merging! 🎉 The test suite passes, the compatibility looks solid.
There are a few pending tasks for the Nock compatibility, which we've agreed with @mikicho to tackle after this is released. I don't wish to stall this update any longer. It's a huge change and it blocks any bugfixes with having to port them here as well.
Looking forward to releasing the Socket-based interceptor sometime this week!
This has been released in v0.32.0!
Make sure to always update to the latest version (npm i @mswjs/interceptors@latest
) to get the newest features and bug fixes.
Yet another attempt at the Socket-based interceptor. This time, we are tapping into the HTTPParser from Node.js to give us the HTTP message parsing.

Roadmap

- SocketController.respondWith(). The challenge is to convert the Fetch API Response to the HTTP message and push it to the socket instance. We can use https://github.com/har-tools/http-message if nothing else works.
- connect and ready events are OK since those are internal and no consumers really depend on them. The main worry is the duplicate .write() callbacks, since the request body would be written twice in the case of a request listener (body read 1) + a bypassed request to an existing hostname (body read 2). We can always suppress the callbacks and then replay them.
- Push the .mockConnect() event until after the "request" interceptor event has been awaited. ClientRequest won't start writing the request body until it receives the "connect" and "ready" events from the socket.
- The signal Socket constructor option (I presume for aborting the connection). Consider AbortSignal.timeout(n) as well. https://github.com/mswjs/interceptors/pull/559
- Socket.setTimeout() that should time out the connection if it hasn't been established within the given time window. Support these for actual socket connections. https://github.com/mswjs/interceptors/pull/558
- (MockHttpSocket class): tls.connect(), not net.connect().
- Add a compliance test for non-HTTP transfer over Socket (e.g. SMTP, see this). We should do nothing about these; it's out of scope for this feature.
- No net.connect() anymore. We utilize MockSocket to create a custom http.Agent, and it's only applied to the http module.
- this.requestStream and this.responseStream at the right time. Consider that the consumer might've not consumed either of the streams by the time the *End() methods on MockHttpSocket are called (that will cause a premature close error).
- ClientRequest/utils, for it contains a lot of unused code now. https://github.com/mswjs/interceptors/pull/557
- .authorized and .encrypted (see #455). This is related to inheriting other properties like timeout and signal. (https://github.com/mswjs/interceptors/pull/556)
- README.md, since the interception algorithm is fundamentally different now.
- The follow-redirect-http test, it's pending forever (skipped in 230c238ee86806688df57829cb4827a3b8b8f8c0).
- The end event until all response event listeners are done (#591 backport).