gitava / Big-Data-Study

For studying big data

jps - process information unavailable #37

Closed gitava closed 4 years ago

gitava commented 4 years ago
[vagrant@hdp-node-03 ~/hbase/logs]$sudo jps
5720 -- process information unavailable
2113 DataNode
5595 -- process information unavailable
5922 Jps
gitava commented 4 years ago

http://www.ttlsa.com/linux/jps-process-information-unavailable/

Fixing "process information unavailable":
In /tmp, find the directory with the hsperfdata_ prefix, locate the file named after the PID, and delete it.
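A minimal sketch of that per-PID cleanup. The user name (vagrant) and PID (5720) are taken from the jps output in this issue and are assumptions; substitute your own:

```shell
# Delete the perf-data file for one stale jps entry, but only if the
# process is really gone. PID and path are from this issue (assumptions).
PID=5720
FILE="/tmp/hsperfdata_vagrant/$PID"
# kill -0 sends no signal; it only tests whether the PID exists
if ! kill -0 "$PID" 2>/dev/null; then
    rm -f "$FILE"      # process is dead, so the file is just residue
fi
```

The kill -0 guard makes the delete safe to re-run: a live process keeps its file, a dead one loses it.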
gitava commented 4 years ago

ref:https://www.cnblogs.com/freeweb/p/5748424.html

On Linux, jps is the quick way to list Java processes; hadoop, hbase, storm and similar daemons are usually checked with it. When a process does not exit cleanly, for example it dies under heavy resource usage or the machine is rebooted without stopping it first, its old entry can turn into the empty value "-- process information unavailable". Often you can ignore this and it disappears on its own after a while; if it persists, clean it up as follows:

Go into /tmp (cd /tmp) and you will see directories named hsperfdata_{username}, e.g. hsperfdata_hbase, hsperfdata_kafka, hsperfdata_root. Although the process is gone from memory, jps still reads these temporary files under /tmp, and they were never deleted properly. Run rm -rf hsperfdata_* there, then run jps again and the stale entries are gone.

In short, running rm -rf /tmp/hsperfdata_* quickly clears out the leftover entries.

If many normally running processes have a few stale entries mixed in, do not delete all of these directories wholesale; work out which directory and file correspond to each residual entry and delete only those.
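That selective cleanup can be sketched as a loop that deletes only files whose PID no longer exists, leaving live processes untouched. The directory path is the one from this issue and is an assumption:

```shell
# Walk one hsperfdata directory and remove only stale PID files.
DIR=/tmp/hsperfdata_vagrant        # assumed path; use hsperfdata_<user>
for f in "$DIR"/*; do
    [ -e "$f" ] || continue        # directory may be empty or missing
    pid=$(basename "$f")
    # kill -0 only probes for existence; live JVMs keep their files
    if ! kill -0 "$pid" 2>/dev/null; then
        rm -f "$f"
    fi
done
```

Run once per hsperfdata_* directory; afterwards jps should list only real processes.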

gitava commented 4 years ago
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$ls -l
total 64
-rw-------. 1 vagrant vagrant 32768 Jul  3 05:06 5595
-rw-------. 1 vagrant vagrant 32768 Jul  3 05:06 5720
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$mv 5595 5595X
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$mv 5720 5720X
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$jps
6021 Jps
2113 DataNode
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$ps -ef|grep -i 5595
vagrant   5595  5581  0 04:50 ?        00:00:04 /home/vagrant/jdk/bin/java -Dproc_zookeeper -XX:OnOutOfMemoryError=kill -9 %p -XX:+UseConcMarkSweepGC -Dhbase.log.dir=/home/vagrant/hbase/logs -Dhbase.log.file=hbase-vagrant-zookeeper-hdp-node-03.log -Dhbase.home.dir=/home/vagrant/hbase -Dhbase.id.str=vagrant -Dhbase.root.logger=INFO,RFA -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.zookeeper.HQuorumPeer start
vagrant   6032  2061  0 05:07 pts/0    00:00:00 grep --color=auto -i 5595
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$ps -ef|grep -i 5720
vagrant   5720  5706  1 04:50 ?        00:00:16 /home/vagrant/jdk/bin/java -Dproc_regionserver -XX:OnOutOfMemoryError=kill -9 %p -XX:+UseConcMarkSweepGC -XX:PermSize=128m -XX:MaxPermSize=128m -XX:ReservedCodeCacheSize=256m -Dhbase.log.dir=/home/vagrant/hbase/logs -Dhbase.log.file=hbase-vagrant-regionserver-hdp-node-03.log -Dhbase.home.dir=/home/vagrant/hbase -Dhbase.id.str=vagrant -Dhbase.root.logger=INFO,RFA -Dhbase.security.logger=INFO,RFAS org.apache.hadoop.hbase.regionserver.HRegionServer start
vagrant   6034  2061  0 05:07 pts/0    00:00:00 grep --color=auto -i 5720
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$
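The rename trick in the transcript above (hide the files, check jps, cross-check with ps, then restore) can be scripted. A sketch against a throwaway directory so it is safe to run anywhere; on a real host substitute /tmp/hsperfdata_vagrant and the actual PIDs:

```shell
DIR=$(mktemp -d)                       # stand-in for /tmp/hsperfdata_vagrant
touch "$DIR/5595" "$DIR/5720"          # stand-ins for the perf-data files
# hide the entries from jps by renaming, as in the transcript above
for f in "$DIR"/5595 "$DIR"/5720; do mv "$f" "${f}X"; done
ls "$DIR"
# restore once the ps cross-check confirms the processes are live
for f in "$DIR"/*X; do mv "$f" "${f%X}"; done
ls "$DIR"
```

Renaming is reversible, which is why it is safer than rm when, as here, ps shows the processes are actually still running.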
gitava commented 4 years ago

https://blog.csdn.net/rainbow702/article/details/50987174


gitava commented 4 years ago

It's a JDK bug/issue...

gitava commented 4 years ago
[vagrant@hdp-node-01 ~]$sudo ./jdk/bin/jps
3034 JobHistoryServer
11791 -- process information unavailable
11646 -- process information unavailable
11549 -- process information unavailable
2239 NameNode
12386 Jps
2550 SecondaryNameNode
2380 DataNode
[vagrant@hdp-node-01 ~]$sudo /bin/jps
2550 SecondaryNameNode
3034 JobHistoryServer
2380 DataNode
11549 HQuorumPeer
11646 HMaster
2239 NameNode
12415 Jps
11791 HRegionServer
gitava commented 4 years ago

It's a JDK 1.7 issue; OpenJDK 1.8 works fine.

[vagrant@hdp-node-01 ~]$java -version
java version "1.7.0_80"
Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)

[vagrant@hdp-node-01 ~]$/bin/java -version
openjdk version "1.8.0_252"
OpenJDK Runtime Environment (build 1.8.0_252-b09)
OpenJDK 64-Bit Server VM (build 25.252-b09, mixed mode)
[vagrant@hdp-node-01 ~]$
gitava commented 4 years ago

As per #16, install this on hdp-node-03, which hasn't had it yet:

sudo yum install java-1.8.0-openjdk-devel.x86_64
gitava commented 4 years ago

Then:

[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$which jps
alias jps='sudo jps'
    /usr/bin/sudo
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$sudo which jps
/bin/jps
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$jps
2113 DataNode
6189 Jps
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$sudo jps
2113 DataNode
6201 Jps
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$ls
5595X  5720X
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$mv 5595X 5595
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$mv 5720X 5720
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$jps
2113 DataNode
6225 Jps
5720 HRegionServer
5595 HQuorumPeer
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$ls -l /bin/jps
lrwxrwxrwx. 1 root root 21 Jul  3 05:16 /bin/jps -> /etc/alternatives/jps
[vagrant@hdp-node-03 /tmp/hsperfdata_vagrant]$
gitava commented 4 years ago

Issue solved.