Open yaauie opened 6 years ago
@hero1122 - can you help us understand what you would like to do in this scenario?
When a client wants to write an HDFS file, it must obtain a lease, which is essentially a lock, to enforce single-writer semantics. If the lease is not explicitly renewed, or the client holding it dies, it will expire. When this happens, HDFS closes the file and releases the lease on behalf of the client.
The lease manager maintains a soft limit (1 minute) and a hard limit (1 hour) on the expiration time. If you wait, the lease will be released and the append will work.
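If waiting out the soft limit is not acceptable, an operator can trigger lease recovery manually. A sketch of the relevant commands is below; the path `/logs/abc.log` is a hypothetical example, and `hdfs debug recoverLease` assumes Hadoop 2.7 or later:

```shell
# Check whether the file is still open for write (i.e. another
# client's lease on it has not yet been released):
hdfs fsck /logs/abc.log -openforwrite

# Force lease recovery so a new writer can append without waiting
# for the soft limit (~1 minute) to expire; retries a few times
# if the NameNode has not finished recovery yet:
hdfs debug recoverLease -path /logs/abc.log -retries 3
```

This only helps when the failed append is caused by a stale lease from a dead writer; it will not fix appends that fail for other reasons, such as under-replication.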
--- nmaillard via: https://community.hortonworks.com/questions/58195/appendtofile-failed-to-append-filehdfslocationabcc.html
I encountered a similar issue, and this might help some people. From my research and experimentation, it turned out that if the HDFS replication factor is greater than the number of available datanodes, the append fails because HDFS cannot place all of the required replicas. Try setting a lower replication factor for the log file. See Logstash and HDFS
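The check and fix described above can be sketched with standard HDFS CLI commands; the path `/logs/abc.log` is a hypothetical example:

```shell
# The second column of the listing is the file's replication factor:
hdfs dfs -ls /logs/abc.log

# Compare it against the number of live datanodes in the cluster:
hdfs dfsadmin -report | grep 'Live datanodes'

# If the replication factor exceeds the datanode count, lower it for
# the existing file (-w waits until re-replication completes):
hdfs dfs -setrep -w 1 /logs/abc.log
```

New files inherit the default from `dfs.replication` in `hdfs-site.xml` (or the client's configuration), so lowering that default may also be needed if the plugin keeps creating new log files.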
In elastic/logstash#9712, user @hero1122 reports an issue with the WebHDFS Output Plugin, indicating a problem with HDFS support for appending to a file:
@hero1122 can you please provide additional context to help us reproduce this?