Closed: ifilonenko closed this pull request 7 years ago.
rerun unit tests please
rerun integration tests please
rerun integration tests please
rerun unit tests please
@ifilonenko and I talked offline. I am doing a preliminary review on this end-to-end prototype. After this review, we want to break this into smaller PRs and add unit tests to them.
rerun integration tests please
rerun unit tests please
rerun integration tests please
If this is still having integration-test problems, rebasing from latest branch-2.2-kubernetes should fix that.
Fully functional Secure HDFS support. Adding an extra integration test for `kinit` login, to complement `--keytab` login, and unit tests to complete the PR.
rerun integration tests please
rerun integration tests please
rerun integration tests please
rerun integration tests please
@kimoonkim @foxish @erikerlandson Please review before I merge into the hdfs branch, which should be rebased onto branch-2.2.
Partial mocking of UGI functions has been done, with the exception of the FileSystem portion in the KeytabResolverStep.
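For what it's worth, a hedged sketch of that mocking pattern (the trait and names here are hypothetical, not the PR's actual code): since `UserGroupInformation`'s static factory methods can't be stubbed with plain Mockito, the UGI access is hidden behind a small injectable seam, leaving the `FileSystem` call path untouched:

```scala
import org.apache.hadoop.security.UserGroupInformation
import org.mockito.Mockito.{mock, when}

// Hypothetical seam: code under test calls provider.currentUser() instead of
// the static UserGroupInformation.getCurrentUser(), which makes it mockable.
trait UGIProvider {
  def currentUser(): UserGroupInformation
}

object UGITestSketch {
  def main(args: Array[String]): Unit = {
    // Stub the UGI instance methods the step under test relies on.
    val mockUgi = mock(classOf[UserGroupInformation])
    when(mockUgi.getShortUserName).thenReturn("test-user")
    when(mockUgi.hasKerberosCredentials).thenReturn(true)

    val provider = mock(classOf[UGIProvider])
    when(provider.currentUser()).thenReturn(mockUgi)

    assert(provider.currentUser().getShortUserName == "test-user")
  }
}
```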
Garbage collection of the secret after the job completes is already handled by the OwnerReference set up in Client.scala.
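For reference, a minimal fabric8-client sketch (secret name and method are hypothetical, not the PR's actual code) of that mechanism: the secret carries an OwnerReference pointing at the driver pod, so the Kubernetes garbage collector deletes the secret once the pod is gone:

```scala
import io.fabric8.kubernetes.api.model.{OwnerReferenceBuilder, Pod, Secret, SecretBuilder}

object SecretOwnership {
  // Build a secret owned by the driver pod; when the pod is deleted,
  // the Kubernetes garbage collector removes the secret as well.
  def buildOwnedSecret(driverPod: Pod, tokenData: Map[String, String]): Secret = {
    val ownerRef = new OwnerReferenceBuilder()
      .withApiVersion(driverPod.getApiVersion)
      .withKind(driverPod.getKind)
      .withName(driverPod.getMetadata.getName)
      .withUid(driverPod.getMetadata.getUid)
      .withController(true)
      .build()

    val builder = new SecretBuilder()
      .withNewMetadata()
        .withName("hadoop-dt-secret") // hypothetical name
        .withOwnerReferences(ownerRef)
      .endMetadata()
    tokenData.foldLeft(builder)((b, kv) => b.addToData(kv._1, kv._2)).build()
  }
}
```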
Current failures in integration tests are due to issues found after rebasing PRs. They will be addressed before the PR is ready for merging.
rerun integration tests please
rerun integration tests please
The latest commit addressed most of my comments. Looks great to me. Thanks @ifilonenko for the work so far.
@erikerlandson after all tests pass, can you give the final okay before merge?
LGTM, and passing CI. This is good to merge when we're ready!
Let's merge after cutting the new release and tagging.
Important note: this PR will require refactoring upon merging, because the most recent commits renamed parts of, and added unit tests to, the KubernetesSchedulerBackend. These changes will be handled directly on the hdfs-kerberos-support branch.
What changes were proposed in this pull request?
This is the ongoing work of setting up Secure HDFS interaction with Spark-on-K8S. The architecture is discussed in this community-wide Google doc. This initiative can be broken down into 4 stages.
STAGE 1

- Detecting the `HADOOP_CONF_DIR` environmental variable and using Config Maps to store all Hadoop config files locally, while also setting `HADOOP_CONF_DIR` locally in the driver / executors

STAGE 2

- Grabbing the `TGT` from `LTC` or using keytabs+principal and creating a `DT` that will be mounted as a secret (both stages are sketched in the code below)

STAGE 3
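To make Stages 1 and 2 concrete, here is a hedged sketch (all names are hypothetical, not the PR's actual step classes): Stage 1 packs the files under `HADOOP_CONF_DIR` into a ConfigMap, and Stage 2 obtains HDFS delegation tokens from a logged-in UGI and serializes them for mounting as a secret:

```scala
import java.io.{ByteArrayOutputStream, DataOutputStream, File}
import java.nio.charset.StandardCharsets
import java.nio.file.Files
import java.util.Base64

import io.fabric8.kubernetes.api.model.{ConfigMap, ConfigMapBuilder}
import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.security.Credentials

object SecureHdfsStages {
  // STAGE 1: pack every file in HADOOP_CONF_DIR into a ConfigMap that the
  // driver/executor pods mount, with HADOOP_CONF_DIR pointed at the mount.
  def hadoopConfConfigMap(hadoopConfDir: String): ConfigMap = {
    val builder = new ConfigMapBuilder()
      .withNewMetadata().withName("hadoop-conf").endMetadata() // hypothetical name
    new File(hadoopConfDir).listFiles().filter(_.isFile)
      .foldLeft(builder) { (b, f) =>
        b.addToData(f.getName,
          new String(Files.readAllBytes(f.toPath), StandardCharsets.UTF_8))
      }
      .build()
  }

  // STAGE 2: with a logged-in UGI (TGT from kinit, or keytab+principal),
  // fetch HDFS delegation tokens and serialize them for a secret's data.
  def delegationTokenSecretData(conf: Configuration, renewer: String): String = {
    val creds = new Credentials()
    FileSystem.get(conf).addDelegationTokens(renewer, creds)
    val bytes = new ByteArrayOutputStream()
    creds.writeTokenStorageToStream(new DataOutputStream(bytes))
    Base64.getEncoder.encodeToString(bytes.toByteArray) // secret data is base64
  }
}
```

The base64 string would become the secret's data value; executors would then deserialize the tokens back into their UGI rather than needing the keytab itself.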
How was this patch tested?
Docs and Error Handling?