ebonnal / streamable

[Python] Stream-like manipulation of iterables.
Apache License 2.0

How does it compare to other Python streaming libraries? #22

Closed: snth closed this issue 3 weeks ago

snth commented 1 month ago

Hi,

First off, this project looks really cool! I'm sorry to start with an issue asking for comparisons with similar projects, but while I've long been a fan of stream-processing patterns, I also want to limit how many projects I keep track of. It would be great to know how you see streamable vis-à-vis similar projects, and where you want streamable to go in the future.

Things in my current repertoire that look similar on the surface are:

ebonnal commented 1 month ago

Hi @snth, thanks a lot, happy to see you here, and great question! (I'm not a power user of those libraries, so correct me if I'm wrong!)

In my opinion, the choice of a library ultimately depends on how naturally and elegantly your use case is implemented with it.

Let me outline the fundamental design of each library, to give a sense of how it shapes the way you structure your logic when working with it:

More comparison material: if you haven't come from there already, check out the Reddit post (especially the Comparison section at the end) and this comment.

> where you want streamable to go in the future?

In terms of my future involvement in the project: I leveraged it at my previous job to implement 30 custom ETL pipelines that are running in production, so I have a responsibility to at least maintain its quality over the years.

I am glad it is now in the feedback phase, gathering some "this would be a cool choice for my use case but it misses that feature" or "how would I implement this use case?" input. I am grateful that other contributors are starting to come into the loop to extend it; you are more than welcome! 🫡

Let's minimize its responsibilities and keep it as unopinionated as possible; e.g. this snippet is NOT something the library should look like in the future:

```python
(
    Stream.from_csv("sales.csv")
    .join(db_type="postgres", db_name="main", schema_name="public", table="user", on_keys=("user_id",))
    .to_bigquery("enriched_sales", partition=datetime.date.today(), batchsize=1024)
)
```

instead, one can (see the sketch after this list):

  1. instantiate an Iterable[Dict[str, Any]] of input rows using the csv module
  2. join via .map using a psycopg2 client
  3. batch rows via .group(size=1024)
  4. write into BigQuery via .foreach using bigquery.Client.insert_rows_json
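Here is a minimal sketch of those four steps, combining Stream with plain clients. The connection parameters, the SQL query, and the BigQuery table ID are hypothetical placeholders to adapt to your setup:

```python
import csv

import psycopg2  # one possible Postgres client, as suggested above
from google.cloud import bigquery
from streamable import Stream

# 1. instantiate an Iterable[Dict[str, Any]] of input rows using the csv module
def sales_rows():
    with open("sales.csv") as file:
        yield from csv.DictReader(file)

connection = psycopg2.connect(dbname="main")  # placeholder connection parameters
bq_client = bigquery.Client()

# 2. join each row against the user table via .map
def join_user(row: dict) -> dict:
    with connection.cursor() as cursor:
        cursor.execute(
            "SELECT name, email FROM public.user WHERE user_id = %s",
            (row["user_id"],),
        )
        name, email = cursor.fetchone()
    return {**row, "user_name": name, "user_email": email}

pipeline = (
    Stream(sales_rows())
    .map(join_user)
    # 3. batch rows via .group(size=1024)
    .group(size=1024)
    # 4. write each batch into BigQuery via .foreach
    .foreach(lambda batch: bq_client.insert_rows_json("project.dataset.enriched_sales", batch))
)

# a Stream is a lazy Iterable: iterating it is what executes the pipeline
for _ in pipeline:
    pass
```

This way the library stays a thin orchestration layer, and the CSV, Postgres, and BigQuery specifics live in your own code with whatever clients you prefer.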

Thank you for reading, and let me know if this makes sense 🙏🏻!