<?hh
use HHReactor\Producer;
use HHReactor\HTTP\Connection; // assuming Connection lives alongside connection_factory
use function HHReactor\HTTP\connection_factory;

async function zipper(Awaitable<Connection> $maybe_connection, int $connection_id): Awaitable<Connection> {
	$connection = await $maybe_connection;
	await $connection->write("You are Connection #$connection_id"); // greet
	return $connection;
}

\HH\Asio\join(async {
	$connection_producer = Producer::create(connection_factory(8080));
	foreach(Producer::zip($connection_producer, Producer::count_up(), fun('zipper')) await as $connection) {
		// handle the request after the greeting
	}
});
Now, `count_up` (which the example above zips in just to keep count of requests) currently delays its emissions with `await \HH\Asio\later()`, but that is only practical if the consumer keeps pace with it. A dense schedule (i.e. lots of resumable wait handles) helps. Here, where requests are sparse, `count_up` degenerates into a tight loop that pushes values into the buffer and leaks memory badly.
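To make the failure mode concrete, here is a minimal sketch in Python asyncio (not HHReactor; the names `eager_count_up` and the shared list are stand-ins for this illustration only). A producer that merely yields control each iteration, the way `await \HH\Asio\later()` does, keeps running whenever the consumer is busy, so a shared unbounded buffer grows without limit:

```python
import asyncio

async def eager_count_up(buffer: list) -> None:
    # "Autonomous" producer: it yields to the scheduler each iteration,
    # but never waits for the consumer to catch up.
    i = 0
    while True:
        buffer.append(i)
        i += 1
        await asyncio.sleep(0)  # yield control, analogous to later()

async def main() -> int:
    buffer: list = []
    producer = asyncio.ensure_future(eager_count_up(buffer))
    for _ in range(3):
        await asyncio.sleep(0.01)  # simulate a sparse, slow consumer
        buffer.pop(0)              # consume a single value
    producer.cancel()
    # Only 3 values were consumed, but the buffer holds vastly more.
    return len(buffer)

print(asyncio.run(main()))
```

The producer spins for the entire time the consumer spends between items, so the buffer length at the end dwarfs the three values actually consumed; that is exactly the leak described above.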
The most immediate solution is to get rid of `count_up`, which isn't all that useful anyway. Longer term, though, it would be worth distinguishing this "autonomous" iterator behavior from consumer-tied iterators like amphp's `yield $emit(...)`, to avoid accidental memory leaks like this one.
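The consumer-tied alternative can be sketched the same way (again in Python, not amphp; `tied_count_up` is a hypothetical name). An async generator behaves like `yield $emit(...)`: the producer's body only advances when the consumer requests the next value, so a slow consumer cannot be flooded:

```python
import asyncio

async def tied_count_up():
    # "Consumer-tied" producer: execution suspends at each yield until
    # the consumer asks for the next value.
    i = 0
    while True:
        yield i
        i += 1

async def main() -> list:
    seen = []
    async for n in tied_count_up():
        seen.append(n)  # no hidden buffer can grow between iterations
        if len(seen) == 3:
            break
        await asyncio.sleep(0.01)  # slow consumer: the producer just waits
    return seen

print(asyncio.run(main()))  # [0, 1, 2]
```

However sparse the consumption, the producer emits exactly as many values as were requested; backpressure is built into the shape of the iterator rather than bolted onto a buffer.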