harrystech / arthur-redshift-etl

ELT Code for your Data Warehouse

Support schema path templates in database source relations #243

Open bhtucker opened 4 years ago

bhtucker commented 4 years ago

Summary

Extract for database targets doesn't support the powerful, config-driven path rendering that's available for static sources.

During extract, the output target directory comes from relation.data_directory(), whereas static sources and unloads use the schema-level path template.

Details

Both systems address the same 'universe' of remote data file/directory addresses:

Unload:

s3_key_prefix = "{schema.s3_unload_path_prefix}/data/{schema.name}/{source.schema}-{source.table}/csv".format(
        schema=schema, source=relation.target_table_name,
    )

Sqoop:

            "--target-dir",
            '"s3n://{}/{}"'.format(relation.bucket_name, relation.data_directory()),

where data_directory is:

    return os.path.join(
        from_prefix or self.prefix or ".",
        "data",
        self.source_path_name,
        (self.schema_config.s3_data_format.format or "CSV").lower(),
    )
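
For concreteness, here's a hedged illustration (made-up values, not the project's actual config) of how the two formulations resolve to addresses of the same shape:

    # Hypothetical values only -- they stand in for s3_unload_path_prefix /
    # self.prefix and source_path_name in the snippets above.
    prefix = "etl-prefix"
    source_path = "www-orders"

    unload_prefix = "{}/data/www/{}/csv".format(prefix, source_path)   # unload-style key prefix
    extract_dir = "/".join([prefix, "data", source_path, "csv"])       # data_directory()-style path

    print(unload_prefix)  # etl-prefix/data/www/www-orders/csv
    print(extract_dir)    # etl-prefix/data/www-orders/csv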

The Unload formulation is more powerful. By moving extract targets onto the same render-based system, the 'archiving' use case that templating already supports in unload (e.g. retaining daily snapshots of relations via today/yesterday config values) could be handled directly from upstream databases at extract time.
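
As a rough sketch (not the project's actual API; the template string, prefix, and date handling below are all hypothetical), a render-based extract target might look something like this:

    # Hypothetical sketch of a render-based extract target; the template and
    # every value below are illustrative assumptions, not real Arthur settings.
    from datetime import date

    template = "{prefix}/archive/{today}/data/{schema}/{table}/csv"

    s3_key_prefix = template.format(
        prefix="etl-prefix",               # would come from schema-level config
        today=date.today().isoformat(),    # enables daily-snapshot archiving
        schema="www",
        table="orders",
    )
    # e.g. etl-prefix/archive/2021-06-01/data/www/orders/csv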

I also see data_lake in the config, which seems related, but I didn't quite see how it fits in. Hopefully it could be part of 'harmonizing' these two systems in a way that also allows configuring the storage backend for extract/unload (e.g. GCS vs. S3).
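
To illustrate the kind of harmonization I mean (the config keys and structure here are assumptions for illustration, not Arthur's real settings), a single URI-building step keyed off the configured backend could cover both S3 and GCS:

    # Assumed config shape for illustration only; Arthur's actual data_lake /
    # object store settings may look nothing like this.
    config = {
        "object_store": {"scheme": "s3", "bucket": "etl-transient"},   # schemas, extract data
        "data_lake": {"scheme": "gs", "bucket": "analytics-lake"},     # unloads for other consumers
    }

    def build_uri(store, key_prefix):
        # Render a backend-specific URI (s3://... or gs://...) from one template.
        return "{scheme}://{bucket}/{prefix}".format(
            scheme=store["scheme"], bucket=store["bucket"], prefix=key_prefix,
        )

    print(build_uri(config["data_lake"], "data/www/orders/csv"))
    # gs://analytics-lake/data/www/orders/csv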

Labels: feature, component: extract

tvogels01 commented 4 years ago

Makes sense. I'm not 100% sure that data lake vs. object store is applied consistently. In general, transient files from running the ETL, like the schemas and data directories, should live in the object store. Unloads should go into a "data lake" where they can be consumed by other systems. One could argue that the extracts should also go into the "data lake".