Open mccheah opened 6 years ago
Can you give an example of properties you're trying to set here?
We can cache file systems in HadoopTableOperations, but most of the systems I've worked on use this pattern of getting the right file system for the URI and using the FileSystem level cache.
We are experimenting with using Iceberg as a temporary representation of tables that are backed by our internal data warehouse solution. When we do so, we need to put the Iceberg table metadata somewhere. We want to put it on local disk, but then we need to encrypt it with a one-time encryption key that only exists for the lifetime of the Spark dataset being read or written. For example, we're doing something like this:
```java
Key encryptionKey = generateKey();
Configuration conf = new Configuration();
conf.set("encryption.key", encryptionKey.toString());
HadoopTables tables = new HadoopTables(conf);
// create table and insert all metadata
sparkSession.read()
    .option("iceberg.spark.hadoop.encryption.key", encryptionKey.toString())
    .load(tempTablePath);
```
In such a case, we don't want the same file system instance - probably a local FS instance wrapped with some encryption layer - to be cached, because we want a different encryption key every time we run this code.
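One way to avoid getting a stale cached instance - a sketch using Hadoop's standard per-scheme `fs.<scheme>.impl.disable.cache` property, not anything Iceberg provides today - is to disable caching for the scheme in question, so each run constructs a fresh `FileSystem` from its own `Configuration`:

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

Configuration conf = new Configuration();
conf.set("encryption.key", encryptionKey.toString());
// Bypass Hadoop's FileSystem cache for file:// URIs so this Configuration
// (with its one-time key) is actually used, instead of returning a
// previously cached instance that was built with an older key.
conf.setBoolean("fs.file.impl.disable.cache", true);
FileSystem fs = FileSystem.get(URI.create(tempTablePath), conf);
```

The trade-off, discussed below, is that disabling the cache means every lookup constructs a new `FileSystem` object.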
Okay, how about adding the support you're talking about to HadoopTableOperations and opening a PR? That would unblock you because you'd have the caching level you need and we could further evaluate the feature.
Also, why do all of the properties include "spark"?
The properties here assume they are injected via `sparkSession.read().option(...)`. If we wanted to include them in the table properties instead, the prefix should be `iceberg.hadoop`.
Properties set through Spark wouldn't need to be specific to Spark. You might use the same ones as session properties in Presto.
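To keep the properties engine-agnostic, one could strip a shared prefix from the read options before copying them into the Hadoop configuration. A minimal sketch of that translation using plain maps, with a hypothetical `iceberg.hadoop.` prefix (the exact prefix is what's being discussed above):

```java
import java.util.HashMap;
import java.util.Map;

public class OptionTranslator {
    static final String PREFIX = "iceberg.hadoop.";

    // Copy options like "iceberg.hadoop.encryption.key" into plain Hadoop
    // property names like "encryption.key"; ignore unrelated options.
    public static Map<String, String> toHadoopProps(Map<String, String> options) {
        Map<String, String> hadoopProps = new HashMap<>();
        for (Map.Entry<String, String> e : options.entrySet()) {
            if (e.getKey().startsWith(PREFIX)) {
                hadoopProps.put(e.getKey().substring(PREFIX.length()), e.getValue());
            }
        }
        return hadoopProps;
    }
}
```

The same translation would work whether the options arrive from a Spark read option, a Presto session property, or a table property.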
We shouldn't use `Util.getFS` every time we want a `FileSystem` object in `HadoopTableOperations`. An example of where this breaks down is when file system object caching is disabled (by setting `fs.<scheme>.impl.disable.cache`). When such caching is disabled, a long string of calls on `HadoopTableOperations` in quick succession will create and GC `FileSystem` objects very quickly, leading to degraded JVM behavior.

An example of where one would want to disable file system caching is so that different instances of `HadoopTableOperations` can be set up with `FileSystem` objects that are configured with different `Configuration` objects - for example, configuring different Hadoop properties when invoking the data source in various iterations, given that we move forward with https://github.com/Netflix/iceberg/issues/91. Unfortunately, Hadoop caches file system objects by `URI`, not by `Configuration`, so if one wants different `HadoopTableOperations` instances to load differently configured file system objects with the same `URI`, they will instead receive the same `FileSystem` object back every time, unless they disable `FileSystem` caching.
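The cache-by-`URI` behavior described above can be illustrated with a small stand-in for Hadoop's internal cache (a toy `CachedFs` class, not real Hadoop code): the cache key is built from the URI's scheme and authority only, so a second lookup with a different configuration but the same URI silently returns the first instance.

```java
import java.net.URI;
import java.util.HashMap;
import java.util.Map;

public class FsCacheDemo {
    // Stand-in for a FileSystem; remembers the config it was created with.
    static class CachedFs {
        final Map<String, String> conf;
        CachedFs(Map<String, String> conf) { this.conf = conf; }
    }

    static final Map<String, CachedFs> CACHE = new HashMap<>();

    // Mirrors the shape of Hadoop's FileSystem cache: keyed by scheme and
    // authority only, so the config argument is ignored on a cache hit.
    static CachedFs get(URI uri, Map<String, String> conf) {
        String key = uri.getScheme() + "://" + uri.getAuthority();
        return CACHE.computeIfAbsent(key, k -> new CachedFs(conf));
    }
}
```

This is why, without disabling the cache, the second caller's `Configuration` (and, in the scenario above, its encryption key) never takes effect.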