Closed sheluchin closed 3 weeks ago
related bug: #191
@sheluchin there's a simple workaround (until we fix it properly). right now we support changing destination capabilities in code, so you can limit identifier length:
```py
import dlt
from dlt.destinations import duckdb

pipeline = dlt.pipeline(
    pipeline_name="test_exceed_job_file_name_length",
    destination=duckdb(
        max_identifier_length=200,
    ),
)
```
this will limit all identifiers to 200 characters, including the file names. unfortunately it also affects column names... still, 200 chars is pretty long.
lmk if that worked for you.
@VioletM we could add this workaround to our docs
dlt version
0.5.2
Describe the problem
When running my pipeline, I got an error like:
The total filename length (without any of the path) is 43 characters.
The data structures here are deeply nested with long key names (coming from the 3rd party source), so I don't have control over it unless I start renaming things.
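For context on why deep nesting blows up name lengths: dlt flattens nested structures into child table/column names by joining the key path with a separator, so every level of nesting adds its full key name. A minimal sketch of that growth (illustrative only; `flatten_keys` and the sample keys are my own, not dlt's internals):

```python
def flatten_keys(data: dict, separator: str = "__", prefix: str = "") -> dict:
    """Flatten nested dict keys into path-style names, joining with separator."""
    flat = {}
    for key, value in data.items():
        path = f"{prefix}{separator}{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_keys(value, separator, path))
        else:
            flat[path] = value
    return flat

# A few nesting levels with long third-party key names already produce
# a very long flattened identifier:
nested = {"third_party_payload": {"deeply_nested_section": {"a_very_long_attribute_name": 1}}}
print(flatten_keys(nested))
```

Filenames derived from identifiers like these can then exceed the common 255-byte filesystem limit, even when each individual key is reasonable.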
Expected behavior
My expectation is that I can run a pipeline with deeply nested resources having long names.
@VioletM and I briefly discussed this issue over Slack. I'm told there is a table name shortening mechanism that prevents this sort of problem for tables, but the same mechanism isn't applied to the filenames used for intermediate operations, if I understand correctly. I'd expect this mechanism, or a similar one, to be applied there as well so that runs like this can succeed.
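For readers unfamiliar with such shortening mechanisms, the usual approach is to truncate and append a short deterministic hash tag so that two long names sharing a prefix don't collide after truncation. A sketch of the idea (illustrative only, not dlt's actual implementation; the function name, tag length, and default limit are my own choices):

```python
import hashlib

MAX_IDENTIFIER_LENGTH = 200  # mirrors the max_identifier_length workaround above


def shorten_identifier(name: str, max_length: int = MAX_IDENTIFIER_LENGTH) -> str:
    """Deterministically shorten an identifier that exceeds max_length.

    Names within the limit pass through unchanged; longer names are
    truncated and suffixed with an 8-char hash of the full original name,
    so distinct long names stay distinct after shortening.
    """
    if len(name) <= max_length:
        return name
    tag = hashlib.sha256(name.encode("utf-8")).hexdigest()[:8]
    # leave room for the "_" separator and the 8-char tag
    return name[: max_length - 9] + "_" + tag
```

Because the result depends only on the input name, the same source field always maps to the same shortened table or file name across runs.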
Steps to reproduce
I cannot share my source data but any data with sufficiently long key names should produce this error when running the pipeline.
Operating system
Linux
Runtime environment
Local
Python version
3.11
dlt data source
API
dlt destination
DuckDB
Other deployment details
No response
Additional information
No response