The Cryptographic Computing for Clean Rooms (C3R) encryption client and software development kit (SDK) provide client-side tooling that allows users to participate in AWS Clean Rooms collaborations using cryptographic computing by pre- and post-processing data.
The AWS Clean Rooms User Guide contains detailed information regarding how to use the C3R encryption client in conjunction with an AWS Clean Rooms collaboration.
NOTICE: This project is released as open source under the Apache 2.0 license but is only intended for use with AWS Clean Rooms. Any other use cases may result in errors or inconsistent results.
The C3R encryption client command line interface and related JARs can be downloaded from the Releases section of this repository. The SDK artifacts are also available in the Maven Central Repository.
Java Runtime Environment version 11 or newer.
Enough disk storage to hold cleartext data, temporary files, and the encrypted output. See the "Guidelines for the C3R encryption client" section of the user guide for details on how settings affect storage needs.
CSV and Parquet file formats are supported. For CSV files, the C3R encryption client treats all values as strings. For Parquet files, the data types are listed in What Parquet data types are supported?. See What data types can be encrypted? for information on encryption of particular data types. Further details and limitations are found in the "Supported file and data types" section of the user guide.
The core functionality of the C3R encryption client is format agnostic; the SDK can be used for any format by implementing an appropriate RowReader and RowWriter.
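As one illustration of what such an adapter involves, the sketch below parses a hypothetical line-oriented key=value format into per-row column/value maps. The class and method shapes here are illustrative assumptions for exposition; consult the c3r-sdk-core package for the actual RowReader and RowWriter types the SDK expects.

import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical reader for a custom "k1=v1;k2=v2" record format. The real
// SDK interfaces differ; this only shows the kind of row-level adaptation
// a custom format implementation performs.
final class KeyValueRowReader implements Iterator<Map<String, String>> {
    private final Iterator<String> lines;

    KeyValueRowReader(Iterator<String> lines) {
        this.lines = lines;
    }

    @Override
    public boolean hasNext() {
        return lines.hasNext();
    }

    @Override
    public Map<String, String> next() {
        // Parse one record into column -> value pairs, preserving column order.
        Map<String, String> row = new LinkedHashMap<>();
        for (String field : lines.next().split(";")) {
            String[] kv = field.split("=", 2);
            row.put(kv[0], kv.length > 1 ? kv[1] : "");
        }
        return row;
    }
}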
Modes which make API calls to AWS services feature optional --profile and --region flags, allowing for convenient selection of an AWS CLI named profile and AWS Region, respectively.
The C3R encryption client is an executable JAR with a command line interface (CLI). It has several modes of operation which are described in the usage help message, e.g.:
schema Generate an encryption schema for a tabular file.
encrypt Encrypt tabular file content for use in a secure collaboration.
decrypt Decrypt tabular file content derived from a secure collaboration.
These modes are briefly described in the subsequent portions of this README.
schema mode

For the C3R encryption client to encrypt a tabular file for a collaboration, it must have a corresponding schema file specifying how the encrypted output should be derived from the input.
The C3R encryption client can help generate schema files for an INPUT file using the schema command. E.g.,
$ java -jar c3r-cli.jar schema --interactive INPUT
See the "Generate an encryption schema for a tabular file" section of the user guide for more information.
encrypt mode

Given the following:

- a tabular INPUT file,
- a corresponding SCHEMA file,
- a collaboration COLLABORATION_ID in the form of a UUID, and
- an environment variable C3R_SHARED_SECRET containing a Base64-encoded 256-bit secret (see the "Preparing encrypted data tables" section of the user guide for details on how to generate a shared secret key; a sketch also follows the example command below),

an encrypted OUTPUT file can be generated by running the C3R encryption client at the command line as follows:
$ java -jar c3r-cli.jar encrypt INPUT \
--schema=SCHEMA \
--id=COLLABORATION_ID \
--output=OUTPUT
See the "Encrypt data" section of the user guide for more information.
decrypt mode

Once queries have been executed on encrypted data in an AWS Clean Rooms collaboration, the encrypted query results INPUT file can be decrypted to a cleartext OUTPUT file using the same Base64-encoded 256-bit secret stored in the C3R_SHARED_SECRET environment variable and the COLLABORATION_ID, as follows:
$ java -jar c3r-cli.jar decrypt INPUT \
--id=COLLABORATION_ID \
--output=OUTPUT
See the "Decrypting data tables with the C3R encryption client" section of the user guide.
SDK usage examples are available in the SDK packages' src/examples directories.
The c3r-cli-spark package is a version of c3r-cli which must be submitted as a job to a running Apache Spark server. The JAR's com.amazonaws.c3r.spark.cli.Main class is submitted via the Apache Spark spark-submit script, and the JAR is then run using the passed command line arguments. E.g., here is how to view the top-level usage information:
./spark-3.4.0-bin-hadoop3-scala2.13/bin/spark-submit \
--master SPARK_SERVER_URL \
... spark-specific options omitted ... \
--class com.amazonaws.c3r.spark.cli.Main \
c3r-cli-spark.jar \
--help
And here is how to submit a job for encryption:
AWS_REGION=... \
C3R_SHARED_SECRET=... \
./spark-3.4.0-bin-hadoop3-scala2.13/bin/spark-submit \
--master SPARK_SERVER_URL \
... spark-specific options omitted ... \
--class com.amazonaws.c3r.spark.cli.Main \
c3r-cli-spark.jar \
encrypt INPUT.parquet \
--schema=... \
--output=... \
--id=...
It is important to note that c3r-cli-spark makes no effort to add additional encryption to data transmitted or stored in temporary files by Apache Spark. This means, for example, that on an Apache Spark server with no encryption enabled, sensitive information such as the C3R_SHARED_SECRET will appear in plaintext in RPC calls between the server and workers. It is up to users to ensure their Apache Spark server has been configured according to their specific security needs. See the Apache Spark security documentation for guidance on configuring Apache Spark server security settings.
The following is a high level description of some security concerns to keep in mind when using the C3R encryption client to encrypt data.
By default, the shared secret key and the data to be encrypted are consumed directly from disk by the C3R encryption client on a user’s machine. It is therefore left to users to take any and all precautions necessary to address security concerns beyond what the C3R encryption client can enforce. For example, users should ensure that:
- the machine running the C3R encryption client meets the user’s needs as a trusted computing platform,
- the C3R encryption client is run in a minimally privileged manner and not exposed to untrusted data/networks/etc., and
- any necessary cleanup/wiping of keys and/or data is performed on the system after encryption.
When encrypting a source file, the C3R encryption client will create temporary files on disk. These files will be deleted when the C3R encryption client finishes generating the encrypted output. Unexpected termination of the C3R encryption client may prevent the client or the JVM from deleting these files, allowing them to persist on disk. These temporary files will have all columns of type fingerprint or sealed encrypted, but some additional privacy-enhancing post-processing may not have been completed. By default, the C3R encryption client will utilize the host operating system’s temporary directory for these temporary files. If a user prefers an explicit location for such files, the optional --tempDir=DIR flag can specify a different location to create such files.
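The persistence risk follows from how JVM cleanup works: file deletions registered for exit only run on normal termination. The small standalone illustration below shows that behavior; it is not the client's actual temp-file handling, and the file name used is arbitrary.

import java.io.File;
import java.io.IOException;

public class TempFileDemo {
    public static void main(String[] args) throws IOException {
        // Created in the OS temp dir (java.io.tmpdir), analogous to the
        // client's default; --tempDir would point such files elsewhere.
        File tmp = File.createTempFile("c3r-demo", ".tmp");
        // Deletion runs only if the JVM terminates normally; a crash or
        // a forced kill leaves the file behind on disk.
        tmp.deleteOnExit();
        System.out.println("temp file at: " + tmp.getAbsolutePath());
    }
}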
Currently, only string values are supported by sealed columns.

For fingerprint columns, types are grouped into equivalence classes. Equivalence classes allow identical fingerprints to be assigned to the same semantic value regardless of the original representation. For example, the integral value 42 will be assigned the same fingerprint regardless of whether it was originally a SmallInt, Int, or BigInt. No non-integral value, however, will ever be assigned the same fingerprint as the integral value 42.
The following equivalence classes are supported by fingerprint columns:

- BOOLEAN
- DATE
- INTEGRAL
- STRING
For CSV files, the C3R encryption client treats all values simply as UTF-8 encoded text and makes no attempt to interpret them differently prior to encryption.
For Parquet files, an error will be raised if an unsupported data type is used in a column.
The C3R encryption client can process any non-complex (i.e., primitive) data in a Parquet file that represents a data type supported by Clean Rooms. The following Parquet data types are supported:
- Binary with the following logical annotations:
  - none, if --parquetBinaryAsString is set (STRING data type)
  - Decimal(scale, precision) (DECIMAL data type)
  - String (STRING data type)
- Boolean with no logical annotation (BOOLEAN data type)
- Double with no logical annotation (DOUBLE data type)
- Fixed_Len_Binary_Array with the Decimal(scale, precision) logical annotation (DECIMAL data type)
- Float with no logical annotation (FLOAT data type)
- Int32 with the following logical annotations:
  - none (INT data type)
  - Date (DATE data type)
  - Decimal(scale, precision) (DECIMAL data type)
  - Int(16, true) (SMALLINT data type)
  - Int(32, true) (INT data type)
- Int64 with the following logical annotations:
  - none (BIGINT data type)
  - Decimal(scale, precision) (DECIMAL data type)
  - Int(64, true) (BIGINT data type)
  - Timestamp(isUTCAdjusted, TimeUnit.MILLIS) (TIMESTAMP data type)
  - Timestamp(isUTCAdjusted, TimeUnit.MICROS) (TIMESTAMP data type)
  - Timestamp(isUTCAdjusted, TimeUnit.NANOS) (TIMESTAMP data type)

An equivalence class is a set of data types that can be unambiguously compared for equality via a representative data type.
The equivalence classes are:
- BOOLEAN containing data types: BOOLEAN
- DATE containing data types: DATE
- INTEGRAL containing data types: BIGINT, INT, SMALLINT
- STRING containing data types: CHAR, STRING, VARCHAR
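One way to picture an equivalence class is that every value is first converted to its class's representative type, so equal semantic values share one byte representation before fingerprinting. The normalization below is an illustrative assumption for the INTEGRAL class, not the client's actual encoding:

import java.nio.ByteBuffer;
import java.util.Arrays;

final class IntegralCanonicalizer {
    // Map every INTEGRAL value (SMALLINT, INT, BIGINT) to the same 8-byte
    // big-endian representative, so (short) 42, 42, and 42L all produce
    // identical input bytes for the fingerprint computation.
    static byte[] canonicalize(long value) {
        return ByteBuffer.allocate(Long.BYTES).putLong(value).array();
    }

    public static void main(String[] args) {
        short s = 42;
        int i = 42;
        long l = 42L;
        // All three widen to the same long, hence the same canonical bytes.
        System.out.println(Arrays.equals(canonicalize(s), canonicalize(i))); // true
        System.out.println(Arrays.equals(canonicalize(i), canonicalize(l))); // true
    }
}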
The C3R encryption client uses only NIST-standardized algorithms and, with one exception, only by calling their implementations in the Java standard cryptographic library. The sole exception is that the client has its own implementation of HKDF (from RFC 5869), built on MAC algorithms from the Java standard cryptographic library.
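For reference, HKDF's extract-and-expand structure from RFC 5869 can be sketched with the standard library's Mac class. This is an illustration of the RFC over HmacSHA256, not the client's actual source:

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.security.GeneralSecurityException;

final class Hkdf {
    private static final String ALG = "HmacSHA256";
    private static final int HASH_LEN = 32; // SHA-256 output size in bytes

    // HKDF-Extract: PRK = HMAC-Hash(salt, IKM)
    static byte[] extract(byte[] salt, byte[] ikm) throws GeneralSecurityException {
        Mac mac = Mac.getInstance(ALG);
        // Per RFC 5869, an absent salt is replaced by HashLen zero bytes.
        byte[] s = (salt == null || salt.length == 0) ? new byte[HASH_LEN] : salt;
        mac.init(new SecretKeySpec(s, ALG));
        return mac.doFinal(ikm);
    }

    // HKDF-Expand: OKM = first `length` bytes of T(1) | T(2) | ...,
    // where T(i) = HMAC-Hash(PRK, T(i-1) | info | i) and length <= 255 * HashLen.
    static byte[] expand(byte[] prk, byte[] info, int length) throws GeneralSecurityException {
        Mac mac = Mac.getInstance(ALG);
        mac.init(new SecretKeySpec(prk, ALG));
        byte[] okm = new byte[length];
        byte[] t = new byte[0];
        int copied = 0;
        for (int i = 1; copied < length; i++) {
            mac.update(t);
            mac.update(info);
            mac.update((byte) i); // single-octet counter starting at 1
            t = mac.doFinal();    // Mac resets and is reusable with the same key
            int n = Math.min(t.length, length - copied);
            System.arraycopy(t, 0, okm, copied, n);
            copied += n;
        }
        return okm;
    }
}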
Yes, the C3R encryption client supports FIPS endpoints. For more information, see the AWS documentation on Dual-stack and FIPS endpoints.
This project is licensed under the Apache-2.0 License.