awslabs / data-solutions-framework-on-aws

An open-source framework that simplifies implementation of data solutions.
https://awslabs.github.io/data-solutions-framework-on-aws/
Apache License 2.0

Provide AWS Glue as an option #267

Open · klescosia opened this issue 11 months ago

klescosia commented 11 months ago

Provide AWS Glue as a processing layer

vgkowski commented 11 months ago

Thanks for providing feedback! Can you give us more details on what you would like to see in this construct? Think about your user experience and how this construct can help you as a data engineer (with your preferences).

dashmug commented 7 months ago

A few ideas.

  1. Glue is non-trivial to replicate locally, so engineers end up iterating on their scripts in the cloud, which makes the development cycle slow.
  2. Glue's CDK constructs are still L1, which is too low-level, and the development experience is not great (see the sketch after this list).
  3. Glue's CFN deployment only deploys a single script per job. If you are developing multiple scripts and share common utility functions (to stay DRY), you have to package them into a Python package, upload it to S3, and then reference it in your Glue job. Again, all of this makes it not very developer-friendly.
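
To make points 2 and 3 concrete, here is a minimal sketch (assuming CDK v2 in Python, with a hypothetical bucket, paths, and role ARN) of what the raw L1 `glue.CfnJob` construct requires today: every property is spelled out at the CloudFormation level, and the script plus any shared utility package must already be uploaded to S3.

```python
# A minimal sketch (hypothetical bucket, paths, and role ARN) of the raw L1
# construct: every property is spelled out at the CloudFormation level, and
# the script plus any shared utility package must already be uploaded to S3.
from aws_cdk import Stack, aws_glue as glue
from constructs import Construct


class GlueJobStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        glue.CfnJob(
            self,
            "IngestionJob",
            name="ingestion-job",
            role="arn:aws:iam::123456789012:role/GlueJobRole",  # hypothetical role
            command=glue.CfnJob.JobCommandProperty(
                name="glueetl",
                python_version="3",
                # the script has to be uploaded to S3 separately
                script_location="s3://my-artifacts-bucket/scripts/ingestion.py",
            ),
            default_arguments={
                # shared utilities packaged and uploaded by hand
                "--extra-py-files": "s3://my-artifacts-bucket/packages/common_utils.whl",
                "--job-language": "python",
            },
            glue_version="4.0",
            worker_type="G.1X",
            number_of_workers=2,
        )
```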

I am in the process of building my own solutions to the above, as I hadn't heard of data-solutions-framework-on-aws before. I've looked at aws-ddk, but it did not help with Glue development either. This is my project: glue-pyspark-dev-tools.

If there is alignment, I'll be happy to help add my planned features here in this project.

klescosia commented 7 months ago

Bouncing off your ideas:

  1. Yes, we end up iterating/running/testing scripts in the cloud. We also use Athena to test our transformation logic, since I've mostly advocated using Spark SQL scripts for our transformations instead of PySpark (see the sketch below).
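
For illustration only (table, columns, and output path are hypothetical), this is the kind of SQL-first transformation meant here: the same statement can be prototyped in Athena and then executed unchanged inside the Glue job via `spark.sql()`, assuming the job is configured to use the Glue Data Catalog.

```python
# Illustrative only: the SQL can be iterated on in Athena, then run as-is in
# the Glue job. Table, columns, and the output path are hypothetical.
from awsglue.context import GlueContext
from pyspark.context import SparkContext

glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session

TRANSFORM_SQL = """
    SELECT customer_id,
           MAX(order_ts) AS last_order_ts,
           SUM(amount)   AS lifetime_value
    FROM   raw_db.orders
    GROUP  BY customer_id
"""

spark.sql(TRANSFORM_SQL).write.mode("overwrite").parquet(
    "s3://my-curated-bucket/customer_metrics/"
)
```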

Our jobs are structured as follows:

What I did for our deployment was to have two config files. One is a CSV file that contains the JobName, Classification (default/custom), Category (Ingestion, etc.), and ConnectionName (since our jobs run in a private network); the CDK loops through this CSV file to deploy the Glue jobs. The other config file manages the custom jobs (Classification) tagged in the CSV file.
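
A minimal sketch of that loop (not the exact implementation; the CSV columns follow the description above, while the role ARN and script layout are hypothetical):

```python
# Sketch of the CSV-driven deployment loop described above (not the exact
# implementation). Role ARN and script layout are hypothetical placeholders.
import csv

from aws_cdk import aws_glue as glue
from constructs import Construct


def create_jobs_from_csv(scope: Construct, csv_path: str, role_arn: str, script_bucket: str) -> None:
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            glue.CfnJob(
                scope,
                f"GlueJob-{row['JobName']}",
                name=row["JobName"],
                role=role_arn,
                # jobs run in a private network, so attach the Glue connection
                connections=glue.CfnJob.ConnectionsListProperty(
                    connections=[row["ConnectionName"]]
                ),
                command=glue.CfnJob.JobCommandProperty(
                    name="glueetl",
                    python_version="3",
                    script_location=f"s3://{script_bucket}/{row['Category']}/{row['JobName']}.py",
                ),
                tags={"Classification": row["Classification"], "Category": row["Category"]},
            )
```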

lmouhib commented 7 months ago

One more point to consider for the feature: provide a way to run unit tests by inferring the arguments from the job construct and running them against the Glue runtime Docker container.
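
For example (illustrative only), a pytest module like the one below could be executed inside the public Glue runtime container (`amazon/aws-glue-libs`), where `awsglue` and `pyspark` are already available on the path; the transformation under test is a hypothetical placeholder.

```python
# tests/test_transform.py -- illustrative only. Assumes it runs inside the
# public Glue runtime container (amazon/aws-glue-libs), where awsglue and
# pyspark are already on the path, e.g. via `python3 -m pytest`.
import pytest
from awsglue.context import GlueContext
from pyspark.context import SparkContext


@pytest.fixture(scope="session")
def glue_context():
    return GlueContext(SparkContext.getOrCreate())


def test_latest_version_per_key(glue_context):
    spark = glue_context.spark_session
    df = spark.createDataFrame([("a", 1), ("a", 2), ("b", 1)], ["id", "version"])
    latest = df.groupBy("id").max("version")  # hypothetical transformation under test
    assert latest.count() == 2
```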

vgkowski commented 7 months ago

> What I did for our deployment was to have two config files. One is a CSV file that contains the JobName, Classification (default/custom), Category (Ingestion, etc.), and ConnectionName (since our jobs run in a private network); the CDK loops through this CSV file to deploy the Glue jobs. The other config file manages the custom jobs (Classification) tagged in the CSV file.

@klescosia Do I understand correctly that you have implemented a config-file-based approach on top of CDK and Glue to create Glue jobs in a simpler way than the CDK L1 construct?

vgkowski commented 7 months ago

> I am in the process of building my own solutions to the above, as I hadn't heard of data-solutions-framework-on-aws before. I've looked at aws-ddk, but it did not help with Glue development either. This is my project: glue-pyspark-dev-tools. If there is alignment, I'll be happy to help add my planned features here in this project.

@dashmug I see your tool as an equivalent of the EMR toolkit but for Glue: a packaged solution based on this blog post. Am I correct? If yes, your solution would tackle the local dev and unit testing parts, which is great! I think DSF would be complementary and could provide value by packaging this local dev work so it is deployable as a Glue job. We just need to ensure neither solution is mandatory for the other.

What I am thinking of now is to provide as part of DSF:

  1. An abstracted construct for the Glue Job with smart defaults and best practices. Something similar to the SparkEmrServerlessJob construct.
  2. A Glue job packager construct that takes your local environment and makes it available/consumable by Glue. Something similar to the PySparkApplicationPackage but for Glue specifics.

klescosia commented 7 months ago

> > What I did for our deployment was to have two config files. One is a CSV file that contains the JobName, Classification (default/custom), Category (Ingestion, etc.), and ConnectionName (since our jobs run in a private network); the CDK loops through this CSV file to deploy the Glue jobs. The other config file manages the custom jobs (Classification) tagged in the CSV file.
>
> @klescosia Do I understand correctly that you have implemented a config-file-based approach on top of CDK and Glue to create Glue jobs in a simpler way than the CDK L1 construct?

Yes, that is correct. We have many Glue jobs, each with different functionality and configurations, so I'm looping through the CSV file and calling glue.CfnJob (I'm using the Python CDK). I also have a YAML file that stores the configurations (number of workers, worker types, S3 paths, etc.) for both default and custom jobs (see the sketch below).
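
A rough sketch of the YAML side (keys and file layout are hypothetical): per-job overrides are layered on top of the defaults, and the merged dictionary then feeds the worker settings of the corresponding `glue.CfnJob`.

```python
# Illustrative sketch of the YAML config handling (keys and layout are
# hypothetical): per-job overrides are layered on top of the defaults, and the
# merged dict feeds worker_type / number_of_workers / etc. of glue.CfnJob.
import yaml  # PyYAML


def load_job_config(config_path: str, job_name: str) -> dict:
    with open(config_path) as f:
        config = yaml.safe_load(f)
    merged = dict(config["default"])  # e.g. worker_type: G.1X, number_of_workers: 2
    merged.update(config.get("custom", {}).get(job_name, {}))  # overrides for custom jobs
    return merged
```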

lmouhib commented 2 months ago

There is already an alpha L2 construct for Glue; we will wait to see its final form before we work on this. In the meantime, we will deliver a construct to package dependencies for Glue jobs, similar to the one we offer for the EMR Spark runtime constructs.
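
For reference, a sketch of what the experimental L2 module mentioned above (`@aws-cdk/aws-glue-alpha`, `aws_cdk.aws_glue_alpha` in Python) currently looks like; since the module is still alpha, the API may change, and the script path and sizing below are hypothetical.

```python
# Sketch of the experimental L2 Glue module (aws_cdk.aws_glue_alpha). The
# module is still alpha, so the API may change; script path and sizing are
# hypothetical placeholders.
from aws_cdk import Stack
from constructs import Construct
import aws_cdk.aws_glue_alpha as glue_alpha


class GlueL2Stack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        glue_alpha.Job(
            self,
            "EtlJob",
            executable=glue_alpha.JobExecutable.python_etl(
                glue_version=glue_alpha.GlueVersion.V4_0,
                python_version=glue_alpha.PythonVersion.THREE,
                script=glue_alpha.Code.from_asset("jobs/etl.py"),  # packaged from local disk
            ),
            worker_type=glue_alpha.WorkerType.G_1X,
            worker_count=2,
        )
```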