Trex-Group / trex-bigdata

11 stars · 6 forks

[Docker] Starting HBase per the assignment instructions fails. Help! #31

Open chinndou opened 7 years ago

chinndou commented 7 years ago

I had a look at start-hbase.sh under root's home directory. Its hadoop_home is /usr/local/hbase, but there doesn't seem to be any such folder under /usr/local/, so I changed it to /opt/hbase. It still errors out on startup. Please take a look.
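A safer version of that wrapper would refuse to run when the install directory is missing, instead of silently continuing after a failed `cd` (which is what produces the "No such file or directory" cascade in the log). A minimal sketch — `start_hbase_from` is a hypothetical helper name, and /opt/hbase is simply the path observed under /opt in this image:

```shell
#!/bin/bash
# Sketch of a guarded wrapper: bail out if the HBase install
# directory does not exist rather than failing later on cd.
start_hbase_from() {
  local home="$1"
  if [ ! -d "$home" ]; then
    echo "HBase home not found: $home" >&2
    return 1
  fi
  cd "$home" || return 1
  ./bin/start-hbase.sh
}

# Usage: start_hbase_from "${HBASE_HOME:-/opt/hbase}"
```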

------ LOG IS HERE ------

```
root@master:~# ./start-hbase.sh
./start-hbase.sh: line 3: cd: /usr/local/hbase: No such file or directory
starting hbase

./start-hbase.sh: line 5: ./bin/start-hbase.sh: No such file or directory
starting hbase shell

./start-hbase.sh: line 18: ./bin/hbase: No such file or directory
root@master:~# ls -ltr /usr/local/
total 32
drwxr-xr-x 2 root root 4096 Feb 14 23:28 src
drwxr-xr-x 2 root root 4096 Feb 14 23:28 sbin
lrwxrwxrwx 1 root root    9 Feb 14 23:28 man -> share/man
drwxr-xr-x 2 root root 4096 Feb 14 23:28 include
drwxr-xr-x 2 root root 4096 Feb 14 23:28 games
drwxr-xr-x 2 root root 4096 Feb 14 23:28 etc
drwxr-xr-x 2 root root 4096 Feb 14 23:28 bin
drwxr-xr-x 4 root root 4096 Mar 19 13:24 share
drwxr-xr-x 4 root root 4096 Mar 19 13:28 lib
root@master:~# netstat -tunlp | grep java
tcp  0 0 127.0.0.1:45043  0.0.0.0:*  LISTEN  2345/java
tcp  0 0 0.0.0.0:50070    0.0.0.0:*  LISTEN  2213/java
tcp  0 0 0.0.0.0:50010    0.0.0.0:*  LISTEN  2345/java
tcp  0 0 0.0.0.0:50075    0.0.0.0:*  LISTEN  2345/java
tcp  0 0 0.0.0.0:50020    0.0.0.0:*  LISTEN  2345/java
tcp  0 0 172.17.0.5:9000  0.0.0.0:*  LISTEN  2213/java
tcp  0 0 0.0.0.0:50090    0.0.0.0:*  LISTEN  2486/java
tcp6 0 0 172.17.0.5:8050  :::*       LISTEN  2765/java
tcp6 0 0 :::8088          :::*       LISTEN  2659/java
tcp6 0 0 172.17.0.5:8025  :::*       LISTEN  2659/java
tcp6 0 0 :::13562         :::*       LISTEN  2765/java
tcp6 0 0 172.17.0.5:8060  :::*       LISTEN  2765/java
tcp6 0 0 172.17.0.5:8030  :::*       LISTEN  2659/java
tcp6 0 0 :::8033          :::*       LISTEN  2659/java
tcp6 0 0 172.17.0.5:8040  :::*       LISTEN  2659/java
tcp6 0 0 :::8042          :::*       LISTEN  2765/java
root@master:~# pw
bash: pw: command not found
root@master:~# pwd
/root
root@master:~# ls -tlr
total 44
-rwxr-xr-x 1 root root  566 Mar 19 12:07 stop-hbase.sh
-rwxr-xr-x 1 root root   95 Mar 19 12:07 stop-hadoop.sh
-rwxr-xr-x 1 root root  212 Mar 19 12:07 start-ssh-serf.sh
-rwxr-xr-x 1 root root  637 Mar 19 12:07 start-hbase.sh
-rwxr-xr-x 1 root root   96 Mar 19 12:07 start-hadoop.sh
-rwxr-xr-x 1 root root  697 Mar 19 12:07 run-wordcount.sh
-rwxr-xr-x 1 root root  227 Mar 19 12:07 docker-entrypoint.sh
-rwxr-xr-x 1 root root 1063 Mar 19 12:07 configure-members.sh
drwxr-xr-x 2 root root 4096 Mar 19 18:00 zookeeper
drwxr-xr-x 7 root root 4096 Mar 19 23:47 hdfs
-rw-r--r-- 1 root root 1220 Mar 19 23:48 serf_log
root@master:~# more start-hbase.sh
```

```bash
#!/bin/bash
hadoop_home=/usr/local/hbase
cd $hadoop_home
echo -e "starting hbase \n"
./bin/start-hbase.sh

echo -e "starting local master beckup \n"
./bin/local-master-backup.sh start 1

# The number at the end of the command signifies an offset that is added to the default
# ports of 60000 for RPC and 60010 for the web-based UI. In this example, a new master
# process would be started that reads the same configuration files as usual, but would
# listen on ports 60001 and 60011, respectively.

echo -e "starting local regionserver \n"
./bin/local-regionservers.sh start 3

sleep 5
echo -e "starting hbase shell \n"
./bin/hbase shell
```

```
root@master:~# echo $HADOOP_HOME
/opt/hadoop
root@master:~# ls -tlr /opt/
total 16
drwxr-xr-x  8 uucp  143 4096 Mar 19 13:30 jdk1.8.0_111
lrwxrwxrwx  1 root root   17 Mar 19 13:30 jdk -> /opt/jdk1.8.0_111
drwxr-xr-x  9 root root 4096 Mar 19 17:43 hive
drwxr-xr-x  8 root root 4096 Mar 19 18:00 hbase
drwxr-xr-x 12 root root 4096 Mar 19 23:48 hadoop
root@master:~# vi start-hbase.sh
root@master:~# ./start-hbase.sh
starting hbase
```

```
members: ssh: Could not resolve hostname members: Name or service not known
starting master, logging to /opt/hbase/bin/../logs/hbase--master-master.trex.com.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
starting regionserver, logging to /opt/hbase/bin/../logs/hbase--1-regionserver-master.trex.com.out
starting hbase shell

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
2017-03-20 00:05:46,813 ERROR [main] zookeeper.RecoverableZooKeeper: ZooKeeper exists failed after 4 attempts
2017-03-20 00:05:46,821 WARN  [main] zookeeper.ZKUtil: hconnection-0x254f906e0x0, quorum=members:2181, baseZNode=/hbase Unable to set watcher on znode (/hbase/hbaseid)
org.apache.zookeeper.KeeperException$OperationTimeoutException: KeeperErrorCode = OperationTimeout
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:144)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:541)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:880)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:636)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
    at org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:362)
    at org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:58)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
    at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
    at org.jruby.ast.InstAsgnNode.interpret(InstAsgnNode.java:95)
    at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
    at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
    at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
    at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169)
    at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302)
    at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:148)
    at org.jruby.RubyClass.newInstance(RubyClass.java:822)
    at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535)
    at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:249)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
    at opt.hbase.bin.$_dotdot.bin.hirb.file(/opt/hbase/bin/../bin/hirb.rb:118)
    at opt.hbase.bin.$_dotdot.bin.hirb.load(/opt/hbase/bin/../bin/hirb.rb)
    at org.jruby.Ruby.runScript(Ruby.java:697)
    at org.jruby.Ruby.runScript(Ruby.java:690)
    at org.jruby.Ruby.runNormally(Ruby.java:597)
    at org.jruby.Ruby.runFromMain(Ruby.java:446)
    at org.jruby.Main.doRunFromMain(Main.java:369)
    at org.jruby.Main.internalRun(Main.java:258)
    at org.jruby.Main.run(Main.java:224)
    at org.jruby.Main.run(Main.java:208)
    at org.jruby.Main.main(Main.java:188)
2017-03-20 00:05:46,827 ERROR [main] zookeeper.ZooKeeperWatcher: hconnection-0x254f906e0x0, quorum=members:2181, baseZNode=/hbase Received unexpected KeeperException, re-throwing exception
org.apache.zookeeper.KeeperException$OperationTimeoutException: KeeperErrorCode = OperationTimeout
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.checkZk(RecoverableZooKeeper.java:144)
    at org.apache.hadoop.hbase.zookeeper.RecoverableZooKeeper.exists(RecoverableZooKeeper.java:221)
    at org.apache.hadoop.hbase.zookeeper.ZKUtil.checkExists(ZKUtil.java:541)
    at org.apache.hadoop.hbase.zookeeper.ZKClusterId.readClusterIdZNode(ZKClusterId.java:65)
    at org.apache.hadoop.hbase.client.ZooKeeperRegistry.getClusterId(ZooKeeperRegistry.java:105)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.retrieveClusterId(ConnectionManager.java:880)
    at org.apache.hadoop.hbase.client.ConnectionManager$HConnectionImplementation.<init>(ConnectionManager.java:636)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:238)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:218)
    at org.apache.hadoop.hbase.client.ConnectionFactory.createConnection(ConnectionFactory.java:119)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(JavaMethod.java:450)
    at org.jruby.javasupport.JavaMethod.invokeStaticDirect(JavaMethod.java:362)
    at org.jruby.java.invokers.StaticMethodInvoker.call(StaticMethodInvoker.java:58)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:312)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:169)
    at org.jruby.ast.CallOneArgNode.interpret(CallOneArgNode.java:57)
    at org.jruby.ast.InstAsgnNode.interpret(InstAsgnNode.java:95)
    at org.jruby.ast.NewlineNode.interpret(NewlineNode.java:104)
    at org.jruby.ast.BlockNode.interpret(BlockNode.java:71)
    at org.jruby.evaluator.ASTInterpreter.INTERPRET_METHOD(ASTInterpreter.java:74)
    at org.jruby.internal.runtime.methods.InterpretedMethod.call(InterpretedMethod.java:169)
    at org.jruby.internal.runtime.methods.DefaultMethod.call(DefaultMethod.java:191)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:302)
    at org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:144)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:148)
    at org.jruby.RubyClass.newInstance(RubyClass.java:822)
    at org.jruby.RubyClass$i$newInstance.call(RubyClass$i$newInstance.gen:65535)
    at org.jruby.internal.runtime.methods.JavaMethod$JavaMethodZeroOrNBlock.call(JavaMethod.java:249)
    at org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:292)
    at org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:135)
    at opt.hbase.bin.$_dotdot.bin.hirb.file(/opt/hbase/bin/../bin/hirb.rb:118)
    at opt.hbase.bin.$_dotdot.bin.hirb.load(/opt/hbase/bin/../bin/hirb.rb)
    at org.jruby.Ruby.runScript(Ruby.java:697)
    at org.jruby.Ruby.runScript(Ruby.java:690)
    at org.jruby.Ruby.runNormally(Ruby.java:597)
    at org.jruby.Ruby.runFromMain(Ruby.java:446)
    at org.jruby.Main.doRunFromMain(Main.java:369)
    at org.jruby.Main.internalRun(Main.java:258)
    at org.jruby.Main.run(Main.java:224)
    at org.jruby.Main.run(Main.java:208)
    at org.jruby.Main.main(Main.java:188)
HBase Shell; enter 'help' for list of supported commands.
Type "exit" to leave the HBase Shell
Version 1.1.3, r72bc50f5fafeb105b2139e42bbee3d61ca724989, Sat Jan 16 18:29:00 PST 2016

hbase(main):001:0>
```
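Note that both failures in this run point at a host literally named `members`: ssh cannot resolve it, and the shell's ZooKeeper client is using `quorum=members:2181`. That suggests the HBase config (`conf/regionservers`, or `hbase.zookeeper.quorum` in `hbase-site.xml`) still contains the placeholder `members` instead of real hostnames, or that `members` was never mapped in `/etc/hosts`. A quick check one could run; `ensure_hosts_entry` is a made-up helper, not part of any HBase tooling:

```shell
#!/bin/bash
# Sketch: report whether a hostname is mapped in a hosts file
# (ensure_hosts_entry is a hypothetical helper; /etc/hosts is the default).
ensure_hosts_entry() {
  local name="$1" file="${2:-/etc/hosts}"
  if grep -qw "$name" "$file"; then
    echo "$name already mapped in $file"
  else
    echo "$name missing from $file"
  fi
}

# Usage: ensure_hosts_entry members
# If it is missing, either map it to the master's address from the netstat
# output (e.g. "172.17.0.5 members") or replace "members" with the real
# hostnames in the HBase configuration.
```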

LiuMing5489 commented 7 years ago

The assignment instructions are not Docker-based; they are based on the VirtualBox image. The Docker environment and the VB image are configured somewhat differently.

Lesson 3 image: https://drive.google.com/open?id=0B8xGViSvluxLa1ltZWN0NW1uUWc

chinndou commented 7 years ago

Thanks! I already copied the lesson 3 image yesterday. After working through some of yesterday's exercises, I wanted to catch up on the earlier assignments I hadn't done. The main problem is that the GUI in the VirtualBox VM is extremely laggy, so I wanted to set the environment up locally instead.

LiuMing5489 commented 7 years ago

Yeah, that's what I was thinking too.

I got the local Docker environment running. But when running the WordCount remote job from local IDEA: connecting to the local VM hadoop-server works, while connecting to the local Docker master.trex.com fails. (I haven't figured out how to adjust the ports in the config files.)
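A likely reason the Docker master is unreachable from IDEA: the Hadoop ports the container listens on (9000 for the HDFS NameNode RPC, 50070 and 8088 for the web UIs, per the netstat output in the issue) are only visible inside the Docker network unless they are published to the host. A small sketch for building the `-p` flags; `port_flags` and the image name in the usage comment are invented for illustration:

```shell
#!/bin/bash
# Sketch: build "-p port:port" flags for a list of ports so a container's
# services are reachable from the host (port_flags is a made-up helper).
port_flags() {
  local flags="" p
  for p in "$@"; do
    flags="$flags -p $p:$p"
  done
  printf '%s' "${flags# }"
}

# Usage (hypothetical image name):
#   docker run -d --hostname master.trex.com $(port_flags 9000 50070 8088) <master-image>
# plus a hosts entry on the host side, e.g. "127.0.0.1 master.trex.com",
# so IDEA can resolve the name the cluster advertises.
```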

If it's only the VB GUI (hadoop-developer) that's slow for you, you could try skipping the VB GUI entirely: install IDEA locally and connect to the hadoop-server in the VB image.