dlt-hub / dlt

data load tool (dlt) is an open source Python library that makes data loading easy 🛠️
https://dlthub.com/docs
Apache License 2.0

Deeply nested structures can produce file names that exceed system limits #1697

Closed sheluchin closed 3 weeks ago

sheluchin commented 2 months ago

dlt version

0.5.2

Describe the problem

When running my pipeline, I got an error like:

<class 'dlt.normalize.exceptions.NormalizeJobFailed'>
Job for my_resource.7547067d79.typed-jsonl failed terminally in load 1723141697.7149394 with message [Errno 36] File name too long: '/home/alex/.dlt/pipelines/my_source/load/new/1723141697.7149394/new_jobs/<long name>.ee379b2de9.0.insert_values'.

The total filename length (without any of the path) is 43 characters.

The data structures here are deeply nested and have long key names (coming from the 3rd-party source), so I don't have control over them unless I start renaming things.

Expected behavior

My expectation is that I can run a pipeline with deeply nested resources having long names.

@VioletM and I briefly discussed this issue over Slack. I'm told there is a table name shortening mechanism that prevents this sort of problem for tables, but if I understand correctly, the same mechanism isn't applied to the file names used for intermediate operations. I guess this mechanism, or a similar one, should be applied there so that runs like this can work.
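For illustration, the kind of shortening I have in mind truncates the identifier and appends a short hash of the full name so the result stays unique. This is just a rough sketch of the general technique, not dlt's actual implementation:

import hashlib

def shorten_identifier(name: str, max_length: int) -> str:
    # Keep short names as-is; otherwise truncate and append a short
    # hash of the full name so shortened identifiers remain unique.
    if len(name) <= max_length:
        return name
    tag = hashlib.sha256(name.encode("utf-8")).hexdigest()[:8]
    return name[: max_length - len(tag) - 1] + "_" + tag

print(shorten_identifier("parent__deeply__nested__" + "x" * 200, 64))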

Steps to reproduce

I cannot share my source data, but any data with sufficiently long key names should produce this error when running the pipeline.
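A minimal synthetic reproduction along these lines should hit the same error (the key name, nesting depth, and resource name below are made up, not my actual source data):

import dlt

# Hypothetical data: nested lists of dicts with very long keys, so that the
# flattened child table names (and hence the intermediate job file names)
# grow past the filesystem limit.
LONG_KEY = "some_extremely_long_third_party_field_name_" + "x" * 60

def make_nested(depth: int) -> dict:
    node = {"value": 1}
    for _ in range(depth):
        node = {LONG_KEY: [node]}  # lists produce child tables in dlt
    return node

@dlt.resource(name="my_resource")
def nested_data():
    yield make_nested(depth=6)

pipeline = dlt.pipeline(pipeline_name="repro_long_names", destination="duckdb")
pipeline.run(nested_data())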

Operating system

Linux

Runtime environment

Local

Python version

3.11

dlt data source

API

dlt destination

DuckDB

Other deployment details

No response

Additional information

No response

rudolfix commented 2 months ago

related bug: #191

rudolfix commented 2 months ago

@sheluchin there's a simple workaround (until we fix it properly). right now we support changing destination capabilities in code so you can limit identifier length:

import dlt
from dlt.destinations import duckdb

pipeline = dlt.pipeline(
    pipeline_name="test_exceed_job_file_name_length",
    destination=duckdb(
        max_identifier_length=200,
    ),
)

this will limit all identifiers to 200 characters, including the file names. unfortunately this also affects columns... still, 200 chars is pretty long.

lmk if that worked for you.

@VioletM we could add this workaround to our docs