Open samLozier opened 1 year ago
(Bot: This issue was marked as Stale after 180 days with no activity and closed 7 days later. Closed issues can be reopened if there is renewed community interest; just add a comment to notify the maintainers.)
Describe the feature
I'm struggling to figure out how to get Snowflake to assign column names to a CSV file that's being loaded with this package. I can see this issue was raised and seemingly resolved before, but I'm unclear on what the takeaway was:
https://github.com/dbt-labs/dbt-external-tables/issues/12
Do I need to create a custom file format through another process, or can I set it on a per-file basis in sources.yml?
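For reference, this is roughly the shape of the source definition I've been trying (stage path, table, and column names below are placeholders, and I'm not sure whether an inline `file_format` string like this is the intended way to configure CSV parsing):

```yaml
version: 2

sources:
  - name: raw_s3
    schema: external
    tables:
      - name: orders
        external:
          # hypothetical stage pointing at the gzipped CSVs in S3
          location: "@my_db.my_schema.my_stage/orders/"
          # inline format definition -- unclear if this is preferred over
          # a pre-created named file format
          file_format: "( type = csv skip_header = 1 compression = gzip )"
        columns:
          - name: order_id
            data_type: number
          - name: order_date
            data_type: date
```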
I've been able to create an external table whose columns are VALUE plus all the columns I specified; however, VALUE is a JSON (VARIANT) column, and all of the columns I specified come back null.
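My understanding (which may be wrong) is that Snowflake parses each CSV row of an external table into the VARIANT column VALUE under positional keys c1, c2, ..., so named columns only populate if they're defined as expressions over VALUE rather than as bare names. Hand-written DDL along these lines (table and column names hypothetical) does work for me outside of dbt:

```sql
-- Sketch of the DDL I'd expect the package to generate for CSV:
-- each named column is an expression over the positional keys in VALUE.
create or replace external table my_schema.orders_ext (
    order_id   number as (value:c1::number),
    order_date date   as (value:c2::date)
)
location = @my_db.my_schema.my_stage/orders/
file_format = (type = csv skip_header = 1 compression = gzip)
auto_refresh = false;
```

What I can't tell is whether dbt-external-tables emits these `value:cN` expressions from the `columns:` block automatically, or whether I'm expected to supply them somehow.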
My current workaround is to not specify any columns and handle the JSON parsing in a dbt model, but that seems like it would be problematic for anyone working with larger data.
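Concretely, the workaround looks like this (source and column names are from my hypothetical example above): the external table exposes only VALUE, and a staging model does the casting.

```sql
-- Staging model: parse the positional CSV fields out of VALUE,
-- since the external table itself declares no columns.
select
    value:c1::number as order_id,
    value:c2::date   as order_date
from {{ source('raw_s3', 'orders') }}
```

This works, but it pushes all the parsing onto every downstream query instead of letting the external table do it once.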
Describe alternatives you've considered
My use case seems like a very common application of this package, so I assume there are many other potential users who would benefit from some guidance on how to configure CSV loads.
Additional context
This is Snowflake-specific: I'm loading gzipped CSV data from S3 into Snowflake via an external table.
Who will this benefit?
Anyone trying to get up to speed with this package for loading CSV data into Snowflake.