datahq / dataflows

DataFlows is a simple, intuitive, lightweight framework for building data processing flows in Python.
https://dataflows.org
MIT License

Execution Engine / dataflows runner #96

rufuspollock commented 5 years ago

What is the recommended way to run dataflows in production, on a regular schedule, and/or connected to a task queue?

i.e. what is the equivalent of the datapackage-pipelines runner?

User Stories

As X running data processing flows, I want to have a queue of data processing tasks run through dataflows.

As X, I want to have a given dataflow run on a regular basis (e.g. daily, hourly) so that I can process data regularly (see the sketch below).
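For the second story, even something very simple would do as a stopgap. A minimal sketch, assuming the third-party `schedule` library; the source URL and output path are placeholders:

```python
import time

import schedule  # third-party scheduler, used here purely for illustration
from dataflows import Flow, load, dump_to_path

def run_flow():
    # A trivial flow; the source URL and output path are placeholders.
    Flow(
        load('https://example.com/data.csv'),
        dump_to_path('out/daily-data'),
    ).process()

# Run once a day; hourly would be schedule.every().hour.do(run_flow).
schedule.every().day.do(run_flow)

while True:
    schedule.run_pending()
    time.sleep(60)
```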

rufuspollock commented 5 years ago

@akariv any thoughts here?

micimize commented 4 years ago

It seems to me that datapackage-pipelines is the closest thing we have to a recommended deployment scheme, by virtue of there being docs for its integration.

In the wild, because my pipeline operates on packages as a unit (#62), deployment has to be custom. I have a container, scheduled with crython, that runs the pipeline: it iterates over whatever new data sources have been added and applies the dataflows pipeline to them.
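Stripped down, that kind of container entrypoint looks roughly like this (a sketch: `list_new_sources` is a hypothetical stand-in for the custom discovery logic, and the output paths are placeholders):

```python
import time

import crython
from dataflows import Flow, load, dump_to_path

def list_new_sources():
    """Hypothetical stand-in for the custom source-discovery logic."""
    return []  # e.g. paths or URLs added since the last run

@crython.job(expr='@hourly')  # crython also accepts field-level cron args
def run_pipeline():
    for i, source in enumerate(list_new_sources()):
        Flow(
            load(source),
            dump_to_path(f'out/source-{i}'),
        ).process()

if __name__ == '__main__':
    crython.start()
    while True:  # keep the main thread alive for the scheduler
        time.sleep(3600)
```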

I've thought some about more scalable deployment solutions - I think a generic way to deploy auto-scaling Python workloads to Kubernetes would be a nice fit. I've also been thinking about how feasible it might be to write an adapter from dataflows (or a similarly pythonic data-package-based API) to apache-beam.
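To illustrate the shape of the beam idea (a sketch, not an existing adapter; the inline rows are made up): a dataflows-style row processor is just a plain function over row dicts, which maps directly onto `beam.Map`:

```python
import apache_beam as beam

# A dataflows-style row processor: a plain function over row dicts.
def add_total(row):
    row = dict(row)  # Beam elements shouldn't be mutated in place
    row['total'] = row['price'] * row['quantity']
    return row

# The same function reused as a Beam transform; rows become elements
# of a PCollection. The inline rows stand in for a real source.
with beam.Pipeline() as pipeline:
    (
        pipeline
        | beam.Create([{'price': 2.0, 'quantity': 3}])
        | beam.Map(add_total)
        | beam.Map(print)
    )
```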

cschloer commented 4 years ago

I would like to bump this issue @akariv @roll

I'm running dataflows in a production environment and slowly starting to realize that it doesn't handle larger datasets very well. My understanding is that it doesn't stream through processors the way DPP does, so it eventually runs out of memory. Switching back to DPP wouldn't be ideal, as there isn't currently a way to get the results back from running a pipeline in DPP without adding a dump_to_path step and reading from the filesystem. Is there any way to improve dataflows performance, or to update the DPP runner so that it works better when run from within Python?
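For reference, the trade-off in question (a sketch, assuming a local CSV source): `results()` materializes every row in memory, which is what bites on large datasets, while `process()` streams rows through to a dump step, at the cost of having to read the output back from disk:

```python
from dataflows import Flow, load, dump_to_path

# Collects every row into memory; convenient, but grows with dataset size.
results, datapackage, stats = Flow(load('large.csv')).results()

# Streams rows through to disk instead; memory stays flat, but the
# results then have to be read back from the filesystem.
datapackage, stats = Flow(
    load('large.csv'),
    dump_to_path('out/large'),
).process()
```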