awslabs / aws-sdk-rust

AWS SDK for the Rust Programming Language
https://awslabs.github.io/aws-sdk-rust/
Apache License 2.0

'error writing body to connection' after dropping eventstreaming connection #296

Closed marcbowes closed 7 months ago

marcbowes commented 3 years ago

Bug Report

As part of https://github.com/awslabs/amazon-qldb-driver-rust/pull/77/, I noticed that after I drop the send/receive channels of an event-streaming connection, an error is logged. I'm not sure whether this is doing any harm.

Version

This is a custom build (due to unreleased features) based on the smithy-rs tag v0.27.0-alpha. I can try again with a newer version if that helps, but it requires a fair amount of effort on my part :).

Platform

macOS.

AWS Services

QldbSessionV2 (unreleased)

Description

I see the following via tracing:

2021-11-12T18:50:02.573788Z TRACE send_data{sz=83 requested=1}: h2::proto::streams::prioritize: buffered=83
2021-11-12T18:50:02.573800Z DEBUG amazon_qldb_driver::pool: connection pool shutting down
2021-11-12T18:50:02.573841Z TRACE send_data{sz=83 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: requested=83 additional=82 buffered=83 window=1047868 conn=2030908
2021-11-12T18:50:02.573873Z TRACE send_data{sz=83 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: assigning capacity=82
2021-11-12T18:50:02.573906Z TRACE send_data{sz=83 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::stream:   assigned capacity to stream; available=83; buffered=83; id=StreamId(1)
2021-11-12T18:50:02.573950Z TRACE send_data{sz=83 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: available=83 requested=83 buffered=83 has_unavailable=true
2021-11-12T18:50:02.574069Z TRACE send_data{sz=83 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::store: Queue::push
2021-11-12T18:50:02.574137Z TRACE send_data{sz=83 requested=1}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::store:  -> first entry
2021-11-12T18:50:02.574188Z TRACE send_data{sz=83 requested=1}: h2::proto::streams::prioritize: available=83 buffered=83
2021-11-12T18:50:02.574242Z TRACE send_data{sz=83 requested=1}:Prioritize::queue_frame{stream.id=StreamId(1)}: h2::proto::streams::prioritize: schedule_send stream.id=StreamId(1)
2021-11-12T18:50:02.574271Z TRACE send_data{sz=83 requested=1}:Prioritize::queue_frame{stream.id=StreamId(1)}: h2::proto::streams::store: Queue::push
2021-11-12T18:50:02.574294Z TRACE send_data{sz=83 requested=1}:Prioritize::queue_frame{stream.id=StreamId(1)}: h2::proto::streams::store:  -> already queued
2021-11-12T18:50:02.574342Z TRACE h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Open { local: Streaming, remote: Streaming } }; is_closed=false; pending_send_empty=false; buffered_send_data=83; num_recv=0; num_send=1
2021-11-12T18:50:02.574409Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=84 curr=83}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: requested=84 additional=1 buffered=83 window=1047868 conn=2030826
2021-11-12T18:50:02.574441Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=84 curr=83}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: assigning capacity=1
2021-11-12T18:50:02.574468Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=84 curr=83}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::stream:   assigned capacity to stream; available=84; buffered=83; id=StreamId(1)
2021-11-12T18:50:02.574493Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=84 curr=83}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::stream:   notifying task
2021-11-12T18:50:02.574518Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=84 curr=83}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::prioritize: available=84 requested=84 buffered=83 has_unavailable=true
2021-11-12T18:50:02.574546Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=84 curr=83}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::store: Queue::push
2021-11-12T18:50:02.574570Z TRACE reserve_capacity{stream.id=StreamId(1) requested=1 effective=84 curr=83}:try_assign_capacity{stream.id=StreamId(1)}: h2::proto::streams::store:  -> already queued
2021-11-12T18:50:02.574749Z TRACE want: signal: Closed
2021-11-12T18:50:02.574800Z TRACE hyper::proto::h2: send body eos
2021-11-12T18:50:02.574821Z TRACE h2::proto::streams::streams: Streams::recv_eof
2021-11-12T18:50:02.574857Z TRACE h2::proto::streams::state: recv_eof; state=Open { local: Streaming, remote: Streaming }
2021-11-12T18:50:02.574907Z TRACE clear_queue{stream.id=StreamId(1)}: h2::proto::streams::prioritize: dropping frame=Data { stream_id: StreamId(1) }
2021-11-12T18:50:02.574980Z TRACE h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Closed(Error(Io(BrokenPipe, None))) }; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=0; num_send=1
2021-11-12T18:50:02.575009Z TRACE h2::proto::streams::counts: dec_num_streams; stream=StreamId(1)
2021-11-12T18:50:02.575051Z TRACE h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Closed(Error(Io(BrokenPipe, None))) }; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=0; num_send=0
2021-11-12T18:50:02.575082Z TRACE mio::poll: deregistering event source from poller
2021-11-12T18:50:02.575095Z TRACE h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Closed(Error(Io(BrokenPipe, None))) }; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=0; num_send=0
2021-11-12T18:50:02.575137Z TRACE h2::proto::streams::streams: drop_stream_ref; stream=Stream { id: StreamId(1), state: State { inner: Closed(Error(Io(BrokenPipe, None))) }, is_counted: false, ref_count: 1, next_pending_send: None, is_pending_send: false, send_flow: FlowControl { window_size: Window(1047868), available: Window(0) }, requested_send_capacity: 0, buffered_send_data: 0, send_task: None, pending_send: Deque { indices: None }, next_pending_send_capacity: None, is_pending_send_capacity: false, send_capacity_inc: true, next_open: None, is_pending_open: false, is_pending_push: false, next_pending_accept: None, is_pending_accept: false, recv_flow: FlowControl { window_size: Window(2096489), available: Window(2097152) }, in_flight_recv_data: 0, next_window_update: None, is_pending_window_update: false, reset_at: None, next_reset_expire: None, pending_recv: Deque { indices: None }, recv_task: None, pending_push_promises: Queue { indices: None, _p: PhantomData }, content_length: Omitted }
2021-11-12T18:50:02.575257Z TRACE h2::proto::streams::counts: transition_after; stream=StreamId(1); state=State { inner: Closed(Error(Io(BrokenPipe, None))) }; is_closed=true; pending_send_empty=true; buffered_send_data=0; num_recv=0; num_send=0
2021-11-12T18:50:02.575373Z DEBUG hyper::proto::h2::client: client request body error: error writing a body to connection: user error: inactive stream

I think all of that is simply triggered by dropping the pairs:

https://github.com/awslabs/amazon-qldb-driver-rust/blob/ad2479b168efd8a768ffe71e88bdc40f00cedc01/amazon-qldb-driver/src/pool.rs#L67-L71
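
For context, here is a minimal sketch of that shutdown path. It is illustrative only; the real code is at the link above, and the type and field names below are placeholders rather than the driver's actual types:

```rust
use tokio::sync::mpsc;

// Placeholder for the send/receive halves the pool holds per session.
struct StreamPair {
    sender: mpsc::Sender<Vec<u8>>,
    receiver: mpsc::Receiver<Vec<u8>>,
}

fn shutdown(sessions: Vec<StreamPair>) {
    tracing::debug!("connection pool shutting down");
    for pair in sessions {
        // Dropping both halves ends the event stream abruptly, without first
        // signaling end-of-stream on the HTTP/2 request body; the trace above
        // then shows the stream closing with Io(BrokenPipe).
        drop(pair);
    }
}
```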

rcoh commented 3 years ago

cc @seanmonstar: do we need to flush somehow prior to dropping? Or is this error benign?
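
For reference, the distinction being asked about looks roughly like the sketch below at the raw h2 level. This is illustrative only (the SDK goes through hyper rather than calling h2 directly) and is not a claim about what the SDK should do internally:

```rust
use bytes::Bytes;

// Finish the request body explicitly: send an empty final DATA frame with
// end_of_stream = true before the handle goes out of scope.
fn finish_explicitly(mut send: h2::SendStream<Bytes>) -> Result<(), h2::Error> {
    send.send_data(Bytes::new(), true)
}

// Just drop the handle: the stream is torn down without an end-of-stream
// marker, which is the kind of shutdown that can surface as the
// "inactive stream" / BrokenPipe errors in the trace above.
fn finish_by_dropping(send: h2::SendStream<Bytes>) {
    drop(send);
}
```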

marcbowes commented 3 years ago

I wrote a test using ReplayingConnection (nearly identical to the transcribe-streaming one) and the error doesn't show up there. However, my test fails with:

thread 'test_success' panicked at 'assertion failed: `(left == right)`
  left: `3`,
 right: `4`: Frame count didn't match.

So, under test I see different behavior from what I captured (using RecordingConnection): the 'error writing a body' message does not appear under test.

jmklix commented 8 months ago

This is a rather old issue, and the SDK has gone GA since it was opened. Can you update to the latest version and see if you're still running into this problem?

github-actions[bot] commented 7 months ago

Greetings! It looks like this issue hasn’t been active in longer than a week. We encourage you to check if this is still an issue in the latest release. Because it has been longer than a week since the last update on this, and in the absence of more information, we will be closing this issue soon. If you find that this is still a problem, please feel free to provide a comment or add an upvote to prevent automatic closure, or if the issue is already closed, please feel free to open a new one.