benoitc / hackney

simple HTTP client in Erlang

Dangling clients when requests are aborted mid-checkout #675

Closed ronaldwind closed 3 years ago

ronaldwind commented 3 years ago

We're using hackney in our Elixir project. It can happen that the process performing the :hackney.request call is killed before the request completes. This results in an ever-increasing number of in-use clients (as exposed by the in_use_count of the pool's stats), which eventually exhausts the pool's available connections.

The issue was introduced in v1.17.0; v1.16.0 works correctly.

To reproduce:

iex(1)> for _ <- 0..100 do
          {:ok, pid} = Task.start(fn ->
            {:ok, _status_code, _headers, _client} = :hackney.request(:get, "https://github.com/benoitc/hackney", [], "", [])
          end)
          Process.sleep(1)
          Process.exit(pid, :kill)
        end
iex(2)> :hackney_pool.get_stats(:default)
[name: :default, max: 1000, in_use_count: 64, free_count: 0, queue_count: 0]

As can be seen, most of the aborted requests left the in_use_count incremented.

The issue was introduced by https://github.com/benoitc/hackney/commit/5f7beef1cc0dbdcfe1a9f65b72cad0cde802925b. As far as I understand it, the checkout job is now offloaded to a separate process via spawn(Fun). That Fun runs in a non-linked process, so it is not killed when the request is aborted. Changing it to spawn_link(Fun) fixes the issue for us.
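The linked-vs-unlinked behavior can be sketched standalone in Elixir (this is illustrative only, not hackney's code — the worker here stands in for the checkout process):

```elixir
# Minimal sketch of why spawn vs. spawn_link matters when the caller
# is killed mid-operation. A worker started with spawn/1 is not linked
# to its caller and outlives it; one started with spawn_link/1 dies
# together with the caller.
parent = self()

check_survival = fn spawn_fun ->
  caller =
    spawn(fn ->
      # The caller starts a long-running worker (linked or not,
      # depending on which spawn function was passed in) ...
      worker = spawn_fun.(fn -> Process.sleep(:infinity) end)
      send(parent, {:worker, worker})
      # ... then blocks, as if waiting for a slow request.
      Process.sleep(:infinity)
    end)

  worker =
    receive do
      {:worker, pid} -> pid
    end

  # Abort the caller mid-operation, like killing the requesting process.
  Process.exit(caller, :kill)
  Process.sleep(50)
  Process.alive?(worker)
end

IO.inspect(check_survival.(&spawn/1))       # true  -- unlinked worker dangles
IO.inspect(check_survival.(&spawn_link/1))  # false -- linked worker is torn down
```

With spawn/1 the worker keeps running after its caller is killed — the "dangling client" pattern described above — while with spawn_link/1 the :killed exit signal propagates over the link and takes the worker down too.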

Please have a look at my pull request https://github.com/benoitc/hackney/pull/674, as I'm not quite sure it is the proper generic fix.

benoitc commented 3 years ago

fixed in master

ronaldwind commented 3 years ago

Just to update this issue as well: as I already mentioned in https://github.com/benoitc/hackney/issues/680, this issue was reintroduced in 1.17.3.