Open — TomPkuer opened this issue 1 year ago
@TomPkuer Alluxio doesn't fully support Flink, since Flink needs append support for many functions. Because of that, we don't have any tests that cover this use case.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed in two weeks if no further activity occurs. Thank you for your contributions.
Alluxio Version: v2.8.1, v2.8.0
Describe the bug
I am using Flink to write to Alluxio in real time, and Spark to write to Alluxio for batch tasks. The target UFS is HDFS (3.0.0-cdh6.2.1). The Spark tasks are fine, but the data written by Flink cannot be persisted. The exception log from the job worker is as follows:
TaskExecutor - Exception running task for job Persist(Undefined) : DestHost:destPort [namenode]:8020 , LocalHost:localPort [alluxio node hostname]:0. Failed on local exception: java.io.IOException: javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
The Kerberos ticket expires every 24h, and a renewal is executed on that cycle, but the HDFS client credentials in the job worker do not appear to be updated.
I've set up kinit as a separate script that runs on every job worker, but it does not help.
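One possible direction (not confirmed in this thread): Hadoop's client can only re-login automatically when the process was logged in from a keytab; a kinit run in a separate shell only refreshes the ticket cache, which an already-running JVM does not re-read. Alluxio's secure-HDFS docs describe keytab-based login via site properties. A sketch, where the principal and keytab path are placeholders to be replaced with your own:

```properties
# alluxio-site.properties — Kerberos login for secure HDFS UFS
# (principal and keytab path below are placeholders, not values from this issue)
alluxio.master.principal=hdfs/_HOST@EXAMPLE.COM
alluxio.master.keytab.file=/etc/security/keytabs/hdfs.keytab
alluxio.worker.principal=hdfs/_HOST@EXAMPLE.COM
alluxio.worker.keytab.file=/etc/security/keytabs/hdfs.keytab
```

With a keytab configured, the Hadoop client can re-acquire a TGT itself when the old one expires, instead of depending on an external kinit cycle.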
To Reproduce
Steps to reproduce the behavior (as minimally and precisely as possible)
Expected behavior
I expect the HDFS client credentials to be refreshed after the Kerberos ticket is renewed.
Urgency
Describe the impact and urgency of the bug.
Are you planning to fix it
I want to confirm whether there is any remedy; if not, I could have a try at fixing it.