tvkitchen / countertop

The entry point for developers who want to set up a TV Kitchen.
https://tv.kitchen
GNU Lesser General Public License v3.0

Writes to Kafka aren't guaranteed to be in order #110

Closed: slifty closed this issue 3 years ago

slifty commented 3 years ago

Bug

Current Behavior

The CountertopWorker does not await the response from Kafka when writing, which means that a given message may be written out of order.

This bug may sound familiar: we addressed it in the original ingestion engines (but I forgot about it when writing the CountertopWorker).
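For concreteness, here's a rough sketch of the race (the names `appliance`, `producer`, and the topic string are hypothetical stand-ins, not the actual CountertopWorker code), assuming a kafkajs-style producer whose `send()` returns a promise:

```js
// Sketch of the race described above: the payload handler fires
// producer.send() without awaiting it, so nothing guarantees one send
// resolves before the send triggered by the next 'payload' event.
const registerPayloadHandler = (appliance, producer) => {
	appliance.on('payload', (payload) => {
		// Fire-and-forget: two payloads emitted in quick succession can
		// land in Kafka out of order.
		producer.send({
			topic: 'tvkitchen.stream', // hypothetical topic name
			messages: [{ value: JSON.stringify(payload) }],
		})
	})
}
```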

Input

N/A -- this is a bug in the general business logic, so it could apply to any topology.

Expected Behavior

Kafka messages should always be written in the order that payloads are emitted by an appliance.

Possible Solutions

Instead of writing directly to Kafka, the payload event handler should write to a Writable stream, which will then preserve order.
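Something along these lines (hypothetical names, not a drop-in patch), again assuming a kafkajs-style producer:

```js
import { Writable } from 'stream'

// Payloads are funneled through a Writable whose write() only signals
// completion after the Kafka send resolves, so sends are serialized in
// the order payloads were emitted.
const createKafkaWriteStream = (producer, topic) => new Writable({
	objectMode: true,
	write(payload, encoding, callback) {
		producer.send({
			topic,
			messages: [{ value: JSON.stringify(payload) }],
		})
			.then(() => callback())
			.catch((err) => callback(err))
	},
})

// The payload event handler now just enqueues synchronously; ordering is
// handled by the stream's internal queue.
const registerPayloadHandler = (appliance, producer, topic) => {
	const kafkaStream = createKafkaWriteStream(producer, topic)
	appliance.on('payload', (payload) => kafkaStream.write(payload))
}
```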

slifty commented 3 years ago

I'm not sure if @chriszs will give this a formal review, but we've been talking about it over Slack.

Short story (if I'm summarizing correctly) is...

  1. This will indeed fix the ordering bug (since .write() and .emit() are both synchronous).

However:

  1. This introduces a potential memory leak, because there is no concept of backpressure: emit will keep writing to the Writable stream even if the buffer isn't draining fast enough.
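For reference, a backpressure-aware handler would look roughly like this, building on the sketch above and assuming the appliance exposes some way to pause and resume emission (`pause()` / `resume()` here are hypothetical, not part of the current API):

```js
// Respect the boolean returned by write() and wait for 'drain' instead of
// writing unconditionally, so the in-memory buffer stays bounded.
appliance.on('payload', (payload) => {
	const canWriteMore = kafkaStream.write(payload)
	if (!canWriteMore) {
		appliance.pause() // stop emitting until the stream catches up
		kafkaStream.once('drain', () => appliance.resume())
	}
})
```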

I'm inclined to say that this more deterministic "everything will crash eventually" type of bug is better than the "messages will randomly be out of order" bug, and that it's worth merging this and simultaneously creating an issue to capture the backpressure bug (which is related to https://github.com/tvkitchen/meta/issues/17, since that will solve the problem).