msn-tldr closed this 1 year ago
@msn-tldr what is the motivation for copying the code from http_parser? Why can't we use the library directly?
Could you also share the original problem (with example code) that requires us to drop down to low-level socket handling?
@trnguyencflt Thanks for suggesting the http.client stdlib package. I had looked at the popular Python HTTP libs like urllib3 and aiohttp; both use their own version of HttpResponse, with their own custom chunk-parsing logic, instead of http.client. This led me to believe Python's stdlib doesn't expose "public" modules for low-level HTTP parsing, hence I looked into other open-source low-level HTTP parsers. But I am pleasantly surprised that it does! Using it significantly reduced the LOC of the example.
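For reference, here is a minimal sketch of driving http.client's response parser directly, the kind of low-level use discussed above. The `FakeSocket` shim and the sample bytes are illustrative (not from the PR): `http.client.HTTPResponse` only needs an object with a `makefile("rb")` method, so we can feed it a raw chunked response and let it handle status-line, header, and chunk parsing:

```python
import http.client
import io


class FakeSocket:
    """Minimal socket stand-in: HTTPResponse only calls makefile("rb")."""

    def __init__(self, raw: bytes):
        self._raw = raw

    def makefile(self, mode, *args, **kwargs):
        return io.BytesIO(self._raw)


# A hand-written chunked HTTP response (two chunks, then the terminating 0-chunk).
raw = (
    b"HTTP/1.1 200 OK\r\n"
    b"Transfer-Encoding: chunked\r\n"
    b"Content-Type: application/json\r\n"
    b"\r\n"
    b"5\r\nhello\r\n"
    b"6\r\n world\r\n"
    b"0\r\n\r\n"
)

resp = http.client.HTTPResponse(FakeSocket(raw))
resp.begin()            # parses the status line and headers
body = resp.read()      # de-chunks the body transparently
print(resp.status, body)  # → 200 b'hello world'
```

This is why `http.client` can replace a vendored http_parser here: the chunk de-framing is already implemented and publicly reachable, even when you manage the socket yourself.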
The CI job failure is due to a missing downstream dependency, which is unrelated:

```
[ERROR] Failed to execute goal on project control-center: Could not resolve dependencies for project io.confluent.controlcenter:control-center:jar:7.5.0-99999: Could not find artifact io.confluent:ce-kafka-rest-extensions:jar:7.5.0-99999 in confluent-codeartifact-central (https://confluent-519856050701.dp.confluent.io/maven/maven-public/) -> [Help 1]
```
This will use the HTTP connection in fully duplex mode.
Following is the output of `streaming_produce_v3_main.py`, writing 5 records. It demonstrates the connection being used in full-duplex mode, i.e. the record-receipt for the 1st record is read even before the 2nd record is written to the connection. In contrast, this example (and its output) demonstrates that idiomatic HTTP request-response is half-duplex, i.e. all records must be written to the wire before any record-receipts can be read: https://github.com/confluentinc/kafka-rest/pull/1164#issue-1711773677
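The full-duplex pattern can be sketched in isolation. This is a toy model, not the PR's actual code: the "server" below acknowledges each chunk as soon as it arrives, and the client reads each receipt before writing the next record, so one connection carries writes and reads interleaved. All names (`toy_server`, the receipt format) are illustrative:

```python
import socket
import threading


def toy_server(sock: socket.socket, n_records: int) -> None:
    # Read each HTTP-chunked record and immediately write a receipt for it,
    # without waiting for the request body to finish (full-duplex behavior).
    f = sock.makefile("rb")
    for i in range(n_records):
        size = int(f.readline().strip(), 16)   # chunk-size line (hex)
        f.read(size + 2)                       # chunk data + trailing CRLF
        sock.sendall(b"receipt %d\n" % i)      # ack before the next chunk arrives


client, server = socket.socketpair()
t = threading.Thread(target=toy_server, args=(server, 5))
t.start()

receipts = []
reader = client.makefile("rb")
for i in range(5):
    record = b'{"value":%d}' % i
    # Write one record framed as an HTTP chunk...
    client.sendall(b"%x\r\n%s\r\n" % (len(record), record))
    # ...then read its receipt BEFORE writing the next record: full duplex.
    receipts.append(reader.readline().strip())

t.join()
print(receipts)  # → [b'receipt 0', b'receipt 1', b'receipt 2', b'receipt 3', b'receipt 4']
```

A half-duplex client would instead send all 5 chunks plus the terminating chunk, and only then start reading receipts, which is exactly the difference the two outputs above show.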