avanderhoorn opened 7 years ago
Can't you use the EnableRewind functionality on the request for this? That should take care of buffering and potentially spool to a temp file if the body is big enough.
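Something along these lines, if it helps (just a sketch, and assuming ASP.NET Core 2.x where EnableRewind is the extension in Microsoft.AspNetCore.Http.Internal; later releases renamed it to EnableBuffering):

```csharp
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Http.Internal;

public class BodyCaptureMiddleware
{
    private readonly RequestDelegate _next;

    public BodyCaptureMiddleware(RequestDelegate next)
    {
        _next = next;
    }

    public async Task Invoke(HttpContext context)
    {
        // Wraps Request.Body in a buffering stream that supports seeking;
        // small bodies stay in memory, larger ones spool to a temp file.
        context.Request.EnableRewind();

        using (var reader = new StreamReader(
            context.Request.Body, Encoding.UTF8,
            detectEncodingFromByteOrderMarks: false, bufferSize: 1024, leaveOpen: true))
        {
            var body = await reader.ReadToEndAsync();
            // ... capture `body` for diagnostics ...
        }

        // Rewind so the rest of the pipeline can read the body again.
        context.Request.Body.Seek(0, SeekOrigin.Begin);

        await _next(context);
    }
}
```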
Yeah, I thought about that, but EnableRewind has the potential side effect of writing to the filesystem. For our scenario I'm wondering if we can do something similar (by way of how the stream is wrapped) without writing to the filesystem, since we only have to do a partial read if the content is too long... does that make sense?
I was talking to @davidfowl and the side effects of EnableRewind are probably acceptable. Other tools will also use this, so it's going to be fine for now.
If you only do a partial read (less than the threshold), the bytes never touch disk anyway. You only want the buffering part so you can rewind the stream after you're finished reading what you need. :smile:
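For example, a helper along these lines (hypothetical names; it assumes EnableRewind has already been called so the body stream can be seeked back to the start):

```csharp
using System;
using System.IO;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public static class PartialBodyReader
{
    // Reads at most `limit` bytes from an already-rewindable request body and
    // rewinds it afterwards. If `limit` stays below the in-memory buffering
    // threshold, the bytes never hit disk.
    public static async Task<byte[]> ReadPartialAsync(HttpRequest request, int limit)
    {
        var buffer = new byte[limit];
        var total = 0;
        int read;
        while (total < limit &&
               (read = await request.Body.ReadAsync(buffer, total, limit - total)) > 0)
        {
            total += read;
        }

        // Rewind so the next consumer (e.g. model binding) starts at position 0.
        request.Body.Seek(0, SeekOrigin.Begin);

        Array.Resize(ref buffer, total);
        return buffer;
    }
}
```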
Just in case you have any thoughts, here is the update to capture text-based bodies:
Body data can only be read once, hence when we go to read it we will need to wrap the body stream that already exists. This means that from that point onwards, anyone who interacts with the body stream will be using our copy (the Req/Res objects have a setter for the Body, so this should be fine).
Next we need to attempt to read the stream to completion, or until we have hit our trim limit (62k, a limit we define so that we don't end up reading hundreds of MBs). Whatever data we read, we return as part of our payload, but we also need to keep a MemoryStream copy of it so that when someone else attempts to read the Body later we can give them that copy. The only slight trick is that if we only partially read the body previously, we will need to drain our MemoryStream copy first before switching back to the original Body stream.
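A rough sketch of what I mean (all type and method names here are placeholders, and it assumes UTF-8 text for the returned payload):

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public static class BodyCapture
{
    private const int TrimLimit = 62 * 1024; // the 62k trim limit mentioned above

    public static async Task<string> CaptureAsync(HttpRequest request)
    {
        var original = request.Body;
        var buffer = new byte[TrimLimit];
        var total = 0;
        int read;

        // Read until the body ends or we hit the trim limit.
        while (total < TrimLimit &&
               (read = await original.ReadAsync(buffer, total, TrimLimit - total)) > 0)
        {
            total += read;
        }

        // Keep a MemoryStream copy of what we consumed.
        var copy = new MemoryStream(buffer, 0, total, writable: false);

        // Replace the Body: later readers drain our copy first, then fall
        // through to the (possibly only partially read) original stream.
        request.Body = new DrainThenFallthroughStream(copy, original);

        return Encoding.UTF8.GetString(buffer, 0, total);
    }
}

// Read-only, forward-only stream that serves the buffered copy first and then
// continues from the original body stream.
public class DrainThenFallthroughStream : Stream
{
    private readonly Stream _copy;
    private readonly Stream _original;

    public DrainThenFallthroughStream(Stream copy, Stream original)
    {
        _copy = copy;
        _original = original;
    }

    public override int Read(byte[] buffer, int offset, int count)
    {
        var read = _copy.Read(buffer, offset, count);
        if (read == 0)
        {
            read = _original.Read(buffer, offset, count);
        }
        return read;
    }

    public override bool CanRead => true;
    public override bool CanSeek => false;
    public override bool CanWrite => false;
    public override long Length => throw new NotSupportedException();
    public override long Position
    {
        get => throw new NotSupportedException();
        set => throw new NotSupportedException();
    }
    public override void Flush() { }
    public override long Seek(long offset, SeekOrigin origin) => throw new NotSupportedException();
    public override void SetLength(long value) => throw new NotSupportedException();
    public override void Write(byte[] buffer, int offset, int count) => throw new NotSupportedException();
}
```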
Note, we also need to be content-type/encoding aware, as there isn't any point trimming binary content.
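And for the content-type side, something like this (the list of text-like types is illustrative, not exhaustive):

```csharp
using System;

// Only capture/trim bodies whose content type looks text-based; binary
// payloads (images, zips, etc.) are skipped entirely.
public static class ContentTypeSniffer
{
    public static bool IsTextBased(string contentType)
    {
        if (string.IsNullOrEmpty(contentType))
        {
            return false;
        }

        return contentType.StartsWith("text/", StringComparison.OrdinalIgnoreCase)
            || contentType.IndexOf("json", StringComparison.OrdinalIgnoreCase) >= 0
            || contentType.IndexOf("xml", StringComparison.OrdinalIgnoreCase) >= 0
            || contentType.IndexOf("x-www-form-urlencoded", StringComparison.OrdinalIgnoreCase) >= 0;
    }
}
```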