kimoonkim opened 6 years ago
One question we had was whether a mounted secret dir will automatically see a new token if the token was added to the secret by the token refresh server. https://kubernetes.io/docs/concepts/configuration/secret/ has this section:
> **Mounted Secrets are updated automatically**
>
> When a secret being already consumed in a volume is updated, projected keys are eventually updated as well. Kubelet is checking whether the mounted secret is fresh on every periodic sync. However, it is using its local ttl-based cache for getting the current value of the secret. As a result, the total delay from the moment when the secret is updated to the moment when new keys are projected to the pod can be as long as kubelet sync period + ttl of secrets cache in kubelet.
So the answer seems to be yes.
And I just did a little experiment and confirmed the secret update behavior:

```
$ kubectl create secret generic mysecret --from-file=./username.txt
$ kubectl exec -it mypod /bin/ls /etc/foo
password.txt  username.txt
```
So the next step is to imagine how we can write the `startCredentialUpdater` method in a subclass of `SparkHadoopUtil`, mentioned in the issue description.
I think the key lines are the following, copied from the YARN `CredentialUpdater`:

```scala
val newCredentials = new Credentials()
newCredentials.readTokenStorageStream(stream)
UserGroupInformation.getCurrentUser.addCredentials(newCredentials)
```
So as long as we can point the stream to the new token's file path, we should be fine.
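To make that concrete, here is a minimal sketch of the polling side only, using just the JDK so it stands alone. All names here (`TokenFileWatcher`, `pollOnce`, the `hadoop-token-*` file names) are hypothetical, not from the Spark code; in the real updater the loaded bytes would be wrapped in a `DataInputStream` and passed through `Credentials.readTokenStorageStream` and `UserGroupInformation.getCurrentUser.addCredentials`, as in the lines quoted from the YARN `CredentialUpdater`.

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.attribute.FileTime;
import java.util.Comparator;
import java.util.Optional;
import java.util.stream.Stream;

public class TokenFileWatcher {
    // Modification time of the most recently loaded token file.
    private FileTime lastSeen = FileTime.fromMillis(0);

    /**
     * One iteration of the updater loop: return the newest token file under
     * the mounted secret dir, but only if it is newer than the last one loaded.
     */
    Optional<Path> pollOnce(Path dir) throws IOException {
        try (Stream<Path> files = Files.list(dir)) {
            Optional<Path> newest = files
                .filter(Files::isRegularFile)
                .max(Comparator.comparing((Path p) -> {
                    try {
                        return Files.getLastModifiedTime(p);
                    } catch (IOException e) {
                        return FileTime.fromMillis(0);
                    }
                }));
            if (newest.isPresent()
                    && Files.getLastModifiedTime(newest.get()).compareTo(lastSeen) > 0) {
                lastSeen = Files.getLastModifiedTime(newest.get());
                return newest;
            }
            return Optional.empty();
        }
    }

    public static void main(String[] args) throws Exception {
        Path dir = Files.createTempDirectory("tokens");
        TokenFileWatcher watcher = new TokenFileWatcher();

        // Initial token projected into the (simulated) secret volume.
        Path t1 = dir.resolve("hadoop-token-1");
        Files.write(t1, "token-bytes-1".getBytes(StandardCharsets.UTF_8));
        Files.setLastModifiedTime(t1, FileTime.fromMillis(1_000));
        System.out.println("loaded: "
            + watcher.pollOnce(dir).map(p -> p.getFileName().toString()).orElse("none"));

        // Nothing changed, so nothing should be reloaded.
        System.out.println("loaded: "
            + watcher.pollOnce(dir).map(p -> p.getFileName().toString()).orElse("none"));

        // The refresh server adds a newer token item; it shows up as a new file.
        Path t2 = dir.resolve("hadoop-token-2");
        Files.write(t2, "token-bytes-2".getBytes(StandardCharsets.UTF_8));
        Files.setLastModifiedTime(t2, FileTime.fromMillis(2_000));
        System.out.println("loaded: "
            + watcher.pollOnce(dir).map(p -> p.getFileName().toString()).orElse("none"));
    }
}
```

The explicit `setLastModifiedTime` calls are just to make the ordering deterministic in the demo; in the pod the kubelet's projection of the updated secret provides the new timestamps.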
#453 is implementing the HDFS token refresh server, which will obtain brand-new tokens when prior tokens completely expire after 7 days. For each supported job, the refresh server will write the new token back to the associated K8s secret as an additional data item. The job's driver and executors should detect the new token and load it into their JVMs so they can continue to access the secure HDFS.
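For illustration, writing the new token back as an additional data item could look like the following against a live cluster. This is a hypothetical sketch, not the refresh server's actual code; the secret name (`mysecret`), key name (`hadoop-token-2`), and mount path (`/etc/foo`) are placeholders, and secret data values must be base64-encoded.

```
# Add a new data item to the existing secret via a strategic merge patch.
NEW_TOKEN_B64=$(printf 'new-delegation-token-bytes' | base64)
kubectl patch secret mysecret -p "{\"data\":{\"hadoop-token-2\":\"${NEW_TOKEN_B64}\"}}"

# After kubelet sync period + secret cache TTL, the new key should be
# projected into the mounted dir:
kubectl exec -it mypod /bin/ls /etc/foo
```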
We should discuss how exactly this can be done. I can imagine two approaches:
I personally prefer (1), if it is possible.
One related note is that there is an existing hook in the base class `SparkHadoopUtil`, both for the driver and the executor, for supporting this. We just need to subclass the base class and implement (1) or (2) in the subclass.

Thoughts? Concerns?
@ifilonenko @liyinan926