PandaWorker opened this issue 1 month ago
```
headers: { 'content-length': '0' }
request:
GET / HTTP/1.1
host: myhost.local
connection: keep-alive
response:
HTTP/1.1 200
content-length: 0
{
  statusCode: 200,
  headers: { 'content-length': '0' },
  trailers: {},
  opaque: null,
  body: BodyReadable {
    _events: {
      close: undefined,
      error: undefined,
      data: undefined,
      end: undefined,
      readable: undefined
    },
    _readableState: ReadableState {
      highWaterMark: 65536,
      buffer: [],
      bufferIndex: 0,
      length: 0,
      pipes: [],
      awaitDrainWriters: null,
      [Symbol(kState)]: 1053452
    },
    _read: [Function: bound resume],
    _maxListeners: undefined,
    [Symbol(shapeMode)]: true,
    [Symbol(kCapture)]: false,
    [Symbol(kAbort)]: [Function: abort],
    [Symbol(kConsume)]: null,
    [Symbol(kBody)]: null,
    [Symbol(kContentType)]: '',
    [Symbol(kContentLength)]: 0,
    [Symbol(kReading)]: false
  },
  context: undefined
}
{ respText3: '' }
```
Sorry, I don't understand. What is the problem?
Why create a Readable object for the response body if there is none? You could just return null. I don't understand why these unnecessary operations are performed for responses without a body.
This is an interesting idea. Could you try this with a browser, as well as with a Node.js IncomingMessage instance? I'd love to see how other existing clients behave.
It may be worth exploring the HTTP spec for `content-length: 0` too, to see whether it specifies that the body should not exist or should be empty.
I noticed that a reader is being created for the response body even though there is no body, as explicitly indicated by `content-length: 0`.
I also think we could skip allocating and parsing the response body chunks via the HTTP parser: once the headers are complete, it could switch to a FixedLengthReader()/ChunkedReader(). Parsing every chunk causes a slowdown, since each chunk requires an allocation, passing its size to WASM, and then receiving the streamed chunk back through callbacks.
Do we need extra operations on each chunk, or could we just attach a reader (a ReadableStream?) directly to the socket?