Closed · gdubya, 2 months ago
Thanks for the PR @gdubya! Our CI should run against a test container on Azure; see https://github.com/duckdb/duckdb_azure/blob/main/test/sql/cloud/hierarchical_namespace.test
I think you should just be able to copy over a few of those statements and make them use the abfs://
prefix to ensure both formats work.
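For reference, a minimal sketch of what such a copied test case might look like in the sqllogictest format that file uses (the container and file path here are placeholders, not taken from the actual test file):

```sql
# Hypothetical test case adapted from an existing abfss test;
# the container name and path below are illustrative only.
statement ok
SELECT count(*) FROM 'abfs://my-container/path/to/data.csv';
```

The idea being that the same query is exercised once with the `abfss://` prefix and once with `abfs://`, so both scheme identifiers are covered.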
@samansmink Ok, we'll see if it's that simple. 😅
But part of my concern is that the client behaviour changes depending on the authentication mode used. From the documentation:
> Scheme identifier: The abfs protocol is used as the scheme identifier. If you add an s at the end (abfss) then the ABFS Hadoop client driver will always use Transport Layer Security (TLS) irrespective of the authentication method chosen. If you choose OAuth as your authentication, then the client driver will always use TLS even if you specify abfs instead of abfss because OAuth solely relies on the TLS layer. Finally, if you choose to use the older method of storage account key, then the client driver interprets abfs to mean that you don't want to use TLS.
But I guess that's the client driver's responsibility, not this extension's.
@samansmink I copied some of the "abfss" tests and the checks still pass, but I can't find those tests in the logs. Can you see them?
Add support for abfs, as requested by @sugibuchi in #72