kzk / webhdfs

Ruby client for Hadoop WebHDFS

Failed to open TCP connection to d3.node.hadoop:1022 #34

Closed whg517 closed 5 years ago

whg517 commented 5 years ago

ENV

Ambari: 2.6.2.0
HDP: 2.6.5.0-292
HDFS: 2.7.3
gssapi: 1.2.2

Hello, I hit the following error when I used your sample code to write data to HDFS via WebHDFS. If you have time, I would appreciate your help. Thank you.

require 'webhdfs'
client = WebHDFS::Client.new("192.168.10.1", 50070, "whg")
# or with pseudo username authentication:
# client = WebHDFS::Client.new(hostname, port, username)
client.kerberos = true
client.kerberos_keytab = "/root/keytab/whg.keytab"
a = client.list("/user/whg/")
print "list:\t", a, "\n\n"
created = client.create('/user/whg/test/webhdfs.txt', 'webhdfs create file success')
print "created:\t", created, "\n\n"
appended = client.append('/user/whg/test/webhdfs.txt', 'webhdfs append success')
print "appended:\t", appended, "\n"
[root@localhost logstash-6.5.3]# ./vendor/jruby/bin/jruby a.rb 
list:   [{"accessTime"=>0, "blockSize"=>0, "childrenNum"=>0, "fileId"=>5243995, "group"=>"hadoop", "length"=>0, "modificationTime"=>1542304804276, "owner"=>"whg", "pathSuffix"=>".Trash", "permission"=>"700", "replication"=>0, "storagePolicy"=>0, "type"=>"DIRECTORY"}, {"accessTime"=>0, "blockSize"=>0, "childrenNum"=>2, "fileId"=>5243966, "group"=>"hadoop", "length"=>0, "modificationTime"=>1542850334598, "owner"=>"whg", "pathSuffix"=>".hiveJars", "permission"=>"755", "replication"=>0, "storagePolicy"=>0, "type"=>"DIRECTORY"}, {"accessTime"=>0, "blockSize"=>0, "childrenNum"=>1, "fileId"=>6883669, "group"=>"hadoop", "length"=>0, "modificationTime"=>1515032558152, "owner"=>"whg", "pathSuffix"=>".sparkStaging", "permission"=>"755", "replication"=>0, "storagePolicy"=>0, "type"=>"DIRECTORY"}, {"accessTime"=>0, "blockSize"=>0, "childrenNum"=>2, "fileId"=>5481118, "group"=>"hadoop", "length"=>0, "modificationTime"=>1514960910443, "owner"=>"whg", "pathSuffix"=>"Documents", "permission"=>"755", "replication"=>0, "storagePolicy"=>0, "type"=>"DIRECTORY"}, {"accessTime"=>0, "blockSize"=>0, "childrenNum"=>2, "fileId"=>5318473, "group"=>"hadoop", "length"=>0, "modificationTime"=>1545616632480, "owner"=>"whg", "pathSuffix"=>"data", "permission"=>"755", "replication"=>0, "storagePolicy"=>0, "type"=>"DIRECTORY"}, {"accessTime"=>0, "blockSize"=>0, "childrenNum"=>3, "fileId"=>6883203, "group"=>"hadoop", "length"=>0, "modificationTime"=>1515029729092, "owner"=>"whg", "pathSuffix"=>"test", "permission"=>"755", "replication"=>0, "storagePolicy"=>0, "type"=>"DIRECTORY"}]

WebHDFS::ServerError: Failed to connect to host d1.node.hadoop:1022, Failed to open TCP connection to d1.node.hadoop:1022 (initialize: name or service not known)
           request at /usr/local/logstash-6.5.3/vendor/jruby/lib/ruby/gems/shared/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:351
  operate_requests at /usr/local/logstash-6.5.3/vendor/jruby/lib/ruby/gems/shared/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:270
            create at /usr/local/logstash-6.5.3/vendor/jruby/lib/ruby/gems/shared/gems/webhdfs-0.8.0/lib/webhdfs/client_v1.rb:73
            <main> at a.rb:9
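
The "(initialize: name or service not known)" part of the trace is a hostname-resolution failure: on create and append, the NameNode redirects the client to a datanode address (d1.node.hadoop:1022 here), and the client machine cannot resolve that name. A minimal sketch to confirm this from the same client, using the datanode hostname taken from the trace:

require 'resolv'

# Datanode hostname copied from the stack trace above.
host = 'd1.node.hadoop'
begin
  puts "#{host} -> #{Resolv.getaddress(host)}"
rescue Resolv::ResolvError => e
  # Same root cause as the WebHDFS::ServerError above: the client
  # cannot resolve the datanode hostname in the NameNode redirect.
  puts "cannot resolve #{host}: #{e.message}"
end
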
whg517 commented 5 years ago

It turned out this problem was caused by the missing datanode hostname mappings in the client machine's configuration; the same code had worked fine when I ran it on m3, where those mappings exist.
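
For anyone hitting the same error, the fix is to make every datanode hostname resolvable on the client machine, either via DNS or /etc/hosts. A sketch of the /etc/hosts entries on the client (the IP addresses below are placeholders; use the cluster's real datanode addresses):

# /etc/hosts on the client machine
192.168.10.2  d1.node.hadoop
192.168.10.3  d2.node.hadoop
192.168.10.4  d3.node.hadoop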