External Tools


tail

tail is a command-line tool that lets you look at the end of a file. Add the -f option and it will refresh as new data becomes available. It's useful when you are wondering what's happening, for example, when a cluster is taking a long time to shut down or start up, as you can fire up a new terminal and tail the master log (and maybe a few RegionServers).
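For example, a minimal sketch of following a master log as it grows (the path is an assumption based on the hbase.log.dir visible in the ps output later in this section; substitute your own log directory and file name):

hadoop@sv4borg12:~$ tail -f /export1/hadoop/logs/hbase-hadoop-master-sv4borg12.log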

top

top is probably one of the most important tools when first trying to see what's running on a machine and how resources are being consumed. Here's an example from a production system:

top - 14:46:59 up 39 days, 11:55,  1 user,  load average: 3.75, 3.57, 3.84
Tasks: 309 total,   1 running, 308 sleeping,   0 stopped,   0 zombie
Cpu(s):  4.5%us,  1.6%sy,  0.0%ni, 91.7%id,  1.4%wa,  0.1%hi,  0.6%si,  0.0%st
Mem:  24414432k total, 24296956k used,   117476k free,     7196k buffers
Swap: 16008732k total,        14348k used, 15994384k free, 11106908k cached

  PID USER          PR  NI  VIRT  RES  SHR S %CPU %MEM        TIME+  COMMAND
15558 hadoop        18  -2 3292m 2.4g 3556 S   79 10.4   6523:52 java
13268 hadoop        18  -2 8967m 8.2g 4104 S   21 35.1   5170:30 java
 8895 hadoop        18  -2 1581m 497m 3420 S   11  2.1   4002:32 java
…

在這里我們可以看到在過去五分鐘內(nèi)的系統(tǒng)平均負(fù)載是3.75,這非常粗略地意味著在這5分鐘內(nèi)平均有3.75個(gè)線程在等待CPU時(shí)間。通常,完美的利用率等于內(nèi)核的數(shù)量,在該數(shù)量下機(jī)器未得到充分利用,并且超過此數(shù)量,機(jī)器被過度利用。這是一個(gè)重要的概念,請(qǐng)參閱此文章以了解更多信息:http://www.linuxjournal.com/article/9001。

除了負(fù)載之外,我們可以看到系統(tǒng)幾乎使用了所有可用的RAM,但大部分用于OS緩存(這很好)。交換只有幾KB,這是需要的,高數(shù)字表示交換活動(dòng),這是Java系統(tǒng)性能的克星。另一種檢測(cè)交換的方法是當(dāng)負(fù)載平均值通過roof時(shí)(盡管這也可能是由死磁盤等引起的)。

默認(rèn)情況下,進(jìn)程列表并不是非常有用,我們所知道的是3個(gè)java進(jìn)程正在使用大約111%的CPU。要知道哪個(gè)是哪個(gè),只需鍵入c,并擴(kuò)展每一行。鍵入1將為您提供每個(gè)CPU的使用方式的詳細(xì)信息,而不是所有CPU的平均值,如此處所示。

jps

jps is shipped with every JDK and gives the java process ids for the current user (if run as root, it gives the ids for all users). Example:

hadoop@sv4borg12:~$ jps
1322 TaskTracker
17789 HRegionServer
27862 Child
1158 DataNode
25115 HQuorumPeer
2950 Jps
19750 ThriftServer
18776 jmx

In order, we see:

  • Hadoop TaskTracker, which manages the local Childs
  • HBase RegionServer, which serves regions
  • Child, one of its MapReduce tasks; we cannot tell exactly which type
  • Hadoop DataNode, which serves blocks
  • HQuorumPeer, a ZooKeeper ensemble member
  • Jps, well... it's the current process
  • ThriftServer, a special one that will be running only if thrift was started
  • jmx, a local process that's part of our monitoring platform; you probably don't have it

然后,您可以執(zhí)行諸如檢出啟動(dòng)該過程的完整命令行之類的操作:

hadoop@sv4borg12:~$ ps aux | grep HRegionServer
hadoop   17789  155 35.2 9067824 8604364 ?     S<l  Mar04 9855:48 /usr/java/jdk1.6.0_14/bin/java -Xmx8000m -XX:+DoEscapeAnalysis -XX:+AggressiveOpts -XX:+UseConcMarkSweepGC -XX:NewSize=64m -XX:MaxNewSize=64m -XX:CMSInitiatingOccupancyFraction=88 -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/export1/hadoop/logs/gc-hbase.log -Dcom.sun.management.jmxremote.port=10102 -Dcom.sun.management.jmxremote.authenticate=true -Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.password.file=/home/hadoop/hbase/conf/jmxremote.password -Dcom.sun.management.jmxremote -Dhbase.log.dir=/export1/hadoop/logs -Dhbase.log.file=hbase-hadoop-regionserver-sv4borg12.log -Dhbase.home.dir=/home/hadoop/hbase -Dhbase.id.str=hadoop -Dhbase.root.logger=INFO,DRFA -Djava.library.path=/home/hadoop/hbase/lib/native/Linux-amd64-64 -classpath /home/hadoop/hbase/bin/../conf:[many jars]:/home/hadoop/hadoop/conf org.apache.hadoop.hbase.regionserver.HRegionServer start
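As an alternative sketch, jps itself can print this information: with a Sun/Oracle JDK, -l prints the full main class name, -m the arguments passed to main, and -v the JVM arguments:

hadoop@sv4borg12:~$ jps -lmv | grep HRegionServer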

jstack

jstack is one of the most important tools when trying to figure out what a java process is doing, apart from looking at the logs. It has to be used in conjunction with jps in order to give it a process id. It shows a list of threads, each of which has a name, and they appear in the order they were created (so the top ones are the most recent threads). Here are a few examples, starting with a sketch of the invocation itself:
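Using the RegionServer pid from the jps output above (the one-liner variant assumes grep and awk are available):

hadoop@sv4borg12:~$ jstack 17789
hadoop@sv4borg12:~$ jstack $(jps | grep HRegionServer | awk '{print $1}')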

The main thread of a RegionServer waiting for something to do from the master:

"regionserver60020" prio=10 tid=0x0000000040ab4000 nid=0x45cf waiting on condition [0x00007f16b6a96000..0x00007f16b6a96a70]
java.lang.Thread.State: TIMED_WAITING (parking)
    at sun.misc.Unsafe.park(Native Method)
    - parking to wait for  <0x00007f16cd5c2f30> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
    at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
    at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:1963)
    at java.util.concurrent.LinkedBlockingQueue.poll(LinkedBlockingQueue.java:395)
    at org.apache.hadoop.hbase.regionserver.HRegionServer.run(HRegionServer.java:647)
    at java.lang.Thread.run(Thread.java:619)

當(dāng)前正在刷新到文件的MemStore刷新線程:

"regionserver60020.cacheFlusher" daemon prio=10 tid=0x0000000040f4e000 nid=0x45eb in Object.wait() [0x00007f16b5b86000..0x00007f16b5b87af0]
java.lang.Thread.State: WAITING (on object monitor)
    at java.lang.Object.wait(Native Method)
    at java.lang.Object.wait(Object.java:485)
    at org.apache.hadoop.ipc.Client.call(Client.java:803)
    - locked <0x00007f16cb14b3a8> (a org.apache.hadoop.ipc.Client$Call)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:221)
    at $Proxy1.complete(Unknown Source)
    at sun.reflect.GeneratedMethodAccessor38.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy1.complete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.closeInternal(DFSClient.java:3390)
    - locked <0x00007f16cb14b470> (a org.apache.hadoop.hdfs.DFSClient$DFSOutputStream)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.close(DFSClient.java:3304)
    at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:61)
    at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:86)
    at org.apache.hadoop.hbase.io.hfile.HFile$Writer.close(HFile.java:650)
    at org.apache.hadoop.hbase.regionserver.StoreFile$Writer.close(StoreFile.java:853)
    at org.apache.hadoop.hbase.regionserver.Store.internalFlushCache(Store.java:467)
    - locked <0x00007f16d00e6f08> (a java.lang.Object)
    at org.apache.hadoop.hbase.regionserver.Store.flushCache(Store.java:427)
    at org.apache.hadoop.hbase.regionserver.Store.access$100(Store.java:80)
    at org.apache.hadoop.hbase.regionserver.Store$StoreFlusherImpl.flushCache(Store.java:1359)
    at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:907)
    at org.apache.hadoop.hbase.regionserver.HRegion.internalFlushcache(HRegion.java:834)
    at org.apache.hadoop.hbase.regionserver.HRegion.flushcache(HRegion.java:786)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:250)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.flushRegion(MemStoreFlusher.java:224)
    at org.apache.hadoop.hbase.regionserver.MemStoreFlusher.run(MemStoreFlusher.java:146)

一個(gè)處理程序線程正在等待要做的事情(如put,delete,scan等):

"IPC Server handler 16 on 60020" daemon prio=10 tid=0x00007f16b011d800 nid=0x4a5e waiting on condition [0x00007f16afefd000..0x00007f16afefd9f0]
   java.lang.Thread.State: WAITING (parking)
          at sun.misc.Unsafe.park(Native Method)
          - parking to wait for  <0x00007f16cd3f8dd8> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
          at java.util.concurrent.locks.LockSupport.park(LockSupport.java:158)
          at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:1925)
          at java.util.concurrent.LinkedBlockingQueue.take(LinkedBlockingQueue.java:358)
          at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1013)

還有一個(gè)正在忙著增加一個(gè)計(jì)數(shù)器(它正處于嘗試創(chuàng)建掃描器以讀取最后一個(gè)值的階段):

"IPC Server handler 66 on 60020" daemon prio=10 tid=0x00007f16b006e800 nid=0x4a90 runnable [0x00007f16acb77000..0x00007f16acb77cf0]
   java.lang.Thread.State: RUNNABLE
          at org.apache.hadoop.hbase.regionserver.KeyValueHeap.<init>(KeyValueHeap.java:56)
          at org.apache.hadoop.hbase.regionserver.StoreScanner.<init>(StoreScanner.java:79)
          at org.apache.hadoop.hbase.regionserver.Store.getScanner(Store.java:1202)
          at org.apache.hadoop.hbase.regionserver.HRegion$RegionScanner.<init>(HRegion.java:2209)
          at org.apache.hadoop.hbase.regionserver.HRegion.instantiateInternalScanner(HRegion.java:1063)
          at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1055)
          at org.apache.hadoop.hbase.regionserver.HRegion.getScanner(HRegion.java:1039)
          at org.apache.hadoop.hbase.regionserver.HRegion.getLastIncrement(HRegion.java:2875)
          at org.apache.hadoop.hbase.regionserver.HRegion.incrementColumnValue(HRegion.java:2978)
          at org.apache.hadoop.hbase.regionserver.HRegionServer.incrementColumnValue(HRegionServer.java:2433)
          at sun.reflect.GeneratedMethodAccessor20.invoke(Unknown Source)
          at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
          at java.lang.reflect.Method.invoke(Method.java:597)
          at org.apache.hadoop.hbase.ipc.HBaseRPC$Server.call(HBaseRPC.java:560)
          at org.apache.hadoop.hbase.ipc.HBaseServer$Handler.run(HBaseServer.java:1027)

A thread that's receiving data from HDFS:

"IPC Client (47) connection to sv4borg9/10.4.24.40:9000 from hadoop" daemon prio=10 tid=0x00007f16a02d0000 nid=0x4fa3 runnable [0x00007f16b517d000..0x00007f16b517dbf0]
   java.lang.Thread.State: RUNNABLE
          at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
          at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:215)
          at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:65)
          at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:69)
          - locked <0x00007f17d5b68c00> (a sun.nio.ch.Util$1)
          - locked <0x00007f17d5b68be8> (a java.util.Collections$UnmodifiableSet)
          - locked <0x00007f1877959b50> (a sun.nio.ch.EPollSelectorImpl)
          at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:80)
          at org.apache.hadoop.net.SocketIOWithTimeout$SelectorPool.select(SocketIOWithTimeout.java:332)
          at org.apache.hadoop.net.SocketIOWithTimeout.doIO(SocketIOWithTimeout.java:157)
          at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:155)
          at org.apache.hadoop.net.SocketInputStream.read(SocketInputStream.java:128)
          at java.io.FilterInputStream.read(FilterInputStream.java:116)
          at org.apache.hadoop.ipc.Client$Connection$PingInputStream.read(Client.java:304)
          at java.io.BufferedInputStream.fill(BufferedInputStream.java:218)
          at java.io.BufferedInputStream.read(BufferedInputStream.java:237)
          - locked <0x00007f1808539178> (a java.io.BufferedInputStream)
          at java.io.DataInputStream.readInt(DataInputStream.java:370)
          at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:569)
          at org.apache.hadoop.ipc.Client$Connection.run(Client.java:477)

這是一個(gè)master試圖在RegionServer死后恢復(fù)lease:

"LeaseChecker" daemon prio=10 tid=0x00000000407ef800 nid=0x76cd waiting on condition [0x00007f6d0eae2000..0x00007f6d0eae2a70]
   java.lang.Thread.State: WAITING (on object monitor)
          at java.lang.Object.wait(Native Method)
          at java.lang.Object.wait(Object.java:485)
          at org.apache.hadoop.ipc.Client.call(Client.java:726)
          - locked <0x00007f6d1cd28f80> (a org.apache.hadoop.ipc.Client$Call)
          at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
          at $Proxy1.recoverBlock(Unknown Source)
          at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.processDatanodeError(DFSClient.java:2636)
          at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.<init>(DFSClient.java:2832)
          at org.apache.hadoop.hdfs.DFSClient.append(DFSClient.java:529)
          at org.apache.hadoop.hdfs.DistributedFileSystem.append(DistributedFileSystem.java:186)
          at org.apache.hadoop.fs.FileSystem.append(FileSystem.java:530)
          at org.apache.hadoop.hbase.util.FSUtils.recoverFileLease(FSUtils.java:619)
          at org.apache.hadoop.hbase.regionserver.wal.HLog.splitLog(HLog.java:1322)
          at org.apache.hadoop.hbase.regionserver.wal.HLog.splitLog(HLog.java:1210)
          at org.apache.hadoop.hbase.master.HMaster.splitLogAfterStartup(HMaster.java:648)
          at org.apache.hadoop.hbase.master.HMaster.joinCluster(HMaster.java:572)
          at org.apache.hadoop.hbase.master.HMaster.run(HMaster.java:503)

OpenTSDB

OpenTSDB is an excellent alternative to Ganglia, as it uses Apache HBase to store all the time series and doesn't have to downsample. Monitoring your own HBase cluster that hosts OpenTSDB is a good exercise.

這是一個(gè)集群的例子,它幾乎同時(shí)發(fā)生了數(shù)百個(gè)壓縮,這嚴(yán)重影響了IO性能:( TODO:插入圖表繪制compactionQueueSize)

It's a good practice to build dashboards with all the important graphs per machine and per cluster, so that debugging issues can be done with a single quick look. For example, at StumbleUpon there is one dashboard per cluster with the most important metrics from both the OS and Apache HBase. You can then go down to the machine level and get even more detailed metrics.

clusterssh + top

clusterssh + top is like a poor man's monitoring system, and it can be really useful when you have only a few machines, as it's very easy to set up. Starting clusterssh gives you one terminal per machine plus one extra terminal in which whatever you type is retyped in every window. This means that you can type top once and it starts on all of your machines at the same time, giving you a full view of the cluster's current state. You can also tail all the logs at the same time, edit files, and so on. A minimal sketch of launching it is shown below.
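Assuming the clusterssh package, which typically installs the cssh command (host1 through host3 are placeholders for your own machines):

$ cssh host1 host2 host3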

以上內(nèi)容是否對(duì)您有幫助:
在線筆記
App下載
App下載

掃描二維碼

下載編程獅App

公眾號(hào)
微信公眾號(hào)

編程獅公眾號(hào)