quentingodeau opened 6 months ago
That's great to hear :) Let me know if you need any help; feel free to reach out to me on the duckdb Discord or sam@duckdblabs.com
Hello duckdb team, nice to see that this has already been posted as an issue. My team and I would also love to have this feature. To add some context: the ETL pipelines at my company mostly use pandas, but for performance reasons we have started migrating to duckdb. We have an Azure-native infrastructure, so while we can already enjoy the Parquet import feature, we would also like a Parquet export capability like the one for S3.
Just a small example:
CREATE TABLE weather (
city VARCHAR,
temp_lo INTEGER, -- minimum temperature on a day
temp_hi INTEGER, -- maximum temperature on a day
prcp REAL,
date DATE
);
INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27');
INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27');
INSERT INTO weather VALUES ('San Francisco', 46, 50, 0.25, '1994-11-27');
COPY weather TO 'az://⟨my_container⟩/⟨my_file⟩.⟨parquet_or_csv⟩';
Additionally, we would love advice on any temporary workarounds that would let us write from duckdb directly to Azure Blob (or Data Lake) Storage. Thanks 😃
@csubhodeep you can try using fsspec if you're on python, they should have azure support
ok thanks a lot. I will try it.
Thanks again! I tried to use the fsspec library in conjunction with adlfs. TL;DR: no success.
Here is what I tried:
>>> import duckdb
>>> from fsspec import filesystem
>>> storage_account_name = "our_account"
>>> container_name = "our_container"
>>> account_creds = <our_key>
>>> duckdb.register_filesystem(filesystem('abfs', connection_string=account_creds))
>>> duckdb.sql("CREATE OR REPLACE TABLE test_table (a INTEGER, b VARCHAR(100))")
>>> duckdb.sql("INSERT INTO test_table VALUES (1, 'a'), (2, 'b'), (3, 'c')")
>>> duckdb.sql("SELECT * FROM test_table")
┌───────┬─────────┐
│ a │ b │
│ int32 │ varchar │
├───────┼─────────┤
│ 1 │ a │
│ 2 │ b │
│ 3 │ c │
└───────┴─────────┘
>>> write_query = f"COPY test_table TO 'https://{storage_account_name}.blob.core.windows.net/{container_name}/test.parquet' (FORMAT 'parquet')"
>>> duckdb.sql(write_query)
---------------------------------------------------------------------------
IOException                               Traceback (most recent call last)
Cell In[41], line 2
      1 # dump it as parquet
----> 2 duckdb.sql(write_query)

IOException: IO Error: Cannot open file "https://<storage_account_name>.blob.core.windows.net/<container_name>/test.parquet": No such file or directory
>>> write_query = f"COPY test_table TO 'az://{storage_account_name}.blob.core.windows.net/{container_name}/test.parquet' (FORMAT 'parquet')"
>>> duckdb.sql(write_query)
---------------------------------------------------------------------------
NotImplementedException                   Traceback (most recent call last)
Cell In[43], line 2
      1 # dump it as parquet
----> 2 duckdb.sql(write_query)

NotImplementedException: Not implemented Error: Writing to Azure containers is currently not supported
Please let me know if I am doing something wrong.
Could you try the abfs:// URLs instead? The az:// ones trigger our auto-installation, routing the requests through the azure extension. Alternatively, disable autoloading with set autoinstall_known_extensions=false;
After trying the suggestion above, here are the results:
Exception ignored in: <function AzureBlobFile.__del__ at 0x7feb3d5a4280>
Traceback (most recent call last):
File "/workspaces/rev_man_sys/venv/lib/python3.8/site-packages/adlfs/spec.py", line 2166, in __del__
self.close()
File "/workspaces/rev_man_sys/venv/lib/python3.8/site-packages/adlfs/spec.py", line 1983, in close
super().close()
File "/workspaces/rev_man_sys/venv/lib/python3.8/site-packages/fsspec/spec.py", line 1932, in close
self.flush(force=True)
File "/workspaces/rev_man_sys/venv/lib/python3.8/site-packages/fsspec/spec.py", line 1803, in flush
if self._upload_chunk(final=force) is not False:
File "/workspaces/rev_man_sys/venv/lib/python3.8/site-packages/fsspec/asyn.py", line 118, in wrapper
return sync(self.loop, func, *args, **kwargs)
File "/workspaces/rev_man_sys/venv/lib/python3.8/site-packages/fsspec/asyn.py", line 103, in sync
raise return_result
File "/workspaces/rev_man_sys/venv/lib/python3.8/site-packages/fsspec/asyn.py", line 56, in _runner
result[0] = await coro
File "/workspaces/rev_man_sys/venv/lib/python3.8/site-packages/adlfs/spec.py", line 2147, in _async_upload_chunk
raise RuntimeError(f"Failed to upload block: {e}!") from e
RuntimeError: Failed to upload block: The specifed resource name contains invalid characters.
RequestId:3545381a-d01e-0083-2346-6efd88000000
Time:2024-03-04T15:11:28.7881722Z
ErrorCode:InvalidResourceName
Content: <?xml version="1.0" encoding="utf-8"?><Error><Code>InvalidResourceName</Code><Message>The specifed resource name contains invalid characters.
RequestId:3545381a-d01e-0083-2346-6efd88000000
Time:2024-03-04T15:11:28.7881722Z</Message></Error>!
Is it more of an adlfs issue?
I think you're not setting the connection string correctly; it appears you're setting it to your key.
Let's move this discussion elsewhere though, as this is no longer about this issue. Please check whether you're actually using fsspec correctly. If things are still wrong and it appears to be on the duckdb side, feel free to open an issue in duckdb/duckdb.
I have managed to make it work. The issue on my end was this part of the path: {storage_account_name}.blob.core.windows.net, which even the adlfs library does not like for some reason. Leaving that out of the path just works fine. Thanks a lot again for all the guidance.
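For anyone hitting the same InvalidResourceName error: with fsspec/adlfs the account appears to be identified by the credentials you pass to `filesystem('abfs', ...)`, not by the URL, so the first path segment after the protocol is taken as the container name. A small sketch of the failing vs. working path shapes, with made-up account and container names:

```python
# Hypothetical names for illustration.
storage_account_name = "our_account"
container_name = "our_container"

# Fails: the hostname ends up being treated as the container name, and
# "our_account.blob.core.windows.net" contains characters that Azure
# rejects with InvalidResourceName.
failing = f"COPY test_table TO 'abfs://{storage_account_name}.blob.core.windows.net/{container_name}/test.parquet' (FORMAT 'parquet')"

# Works: the path starts directly at the container.
working = f"COPY test_table TO 'abfs://{container_name}/test.parquet' (FORMAT 'parquet')"
print(working)
```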
Hi. I guess I am facing the same issue. I am using Kotlin. Is there any workaround for this? Thanks.
Hello,
For Kotlin I'm not aware of a workaround :( Nevertheless, I have started to work on the issue, but I think I will not be able to make it work until the following PR is merged.
+1 for this feature
Did the write operation feature via duckdb extensions end up being merged in the 1.0.0 release? I am currently using the 1.0.0 release, and write operations via the azure extension still fail when using the duckdb node.js libraries. I have tried both az:// and abfss:// when writing back to a parquet file hosted in an Azure Storage Account (az://) or Azure Data Lake (abfss://).
Write operation error message:
Error: Not implemented Error: AzureDfsStorageFileSystem: FileExists is not implemented! {errno: -1, code: 'DUCKDB_NODEJS_ERROR', errorType: 'Not implemented', stack: 'Error: Not implemented Error: AzureDfsStorageFile
Any clarification on the state of write support to Azure via node.js is highly appreciated.
Thank you.
@IlijaStankovski It has not. I would like to pick this up at some point but I can't give a timeline here unfortunately.
@samansmink thanks for the quick update, if you have a branch of code to build from that you want testers for, please let me know. Cheers ...
+1, I support this request. It is really necessary. Thank you. EDIT: With this solution it works perfectly: https://github.com/duckdb/duckdb_azure/issues/44#issuecomment-1977270316
It would be nice to have :)
Hi,
It's not really an issue but more an insight into what I plan to work on. For the moment I haven't started, but when I do I will post a message here to notify everyone. If someone starts before me, please let me know ;)