HDFS HA+YARN: error running bin/hdfs haadmin -transitionToActive nn1

I followed handout 10, the "HDFS HA+YARN deployment" PDF, step by step. On chinahadoop1, at step 9.4.1, running bin/hdfs haadmin -transitionToActive nn1 gave:

[chinahadoop@chinahadoop1 hadoop-2.5.2]$ bin/hdfs haadmin -transitionToActive nn1
16/12/18 21:56:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Operation failed: Failed on local exception: java.io.EOFException; Host Details : local host is: "chinahadoop1/192.168.1.118"; destination host is: "chinahadoop1":8020;

Where might my configuration be wrong? Thanks!
 
Note:
 
Following the PDF, the /etc/hosts file on every machine contains:
 
192.168.1.145 win
192.168.1.118 chinahadoop1
192.168.1.117 chinahadoop2
192.168.1.104 chinahadoop3
192.168.1.141 chinahadoop4
127.0.0.1 localhost
 
and /home/chinahadoop/hadoop/ha/hadoop-2.5.2/etc/hadoop/slaves on every machine contains:
 
chinahadoop1
chinahadoop2
chinahadoop3
chinahadoop4

wangxiaolei

Upvoted by: xiaomaogy

Put simply, formatting initializes the file system: it lays down a storage layout that conforms to the HDFS on-disk format, much like formatting a hard drive or a USB stick.
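As a concrete illustration, a format writes fresh metadata into the NameNode's storage directory (dfs.namenode.name.dir), including a current/VERSION file along these lines (all field values here are illustrative, not taken from this cluster):

```
#Sun Dec 18 21:40:00 CST 2016
namespaceID=2053464723
clusterID=CID-example-1234
cTime=0
storageType=NAME_NODE
blockpoolID=BP-example-192.168.1.118-1482068400000
layoutVersion=-57
```

The namespaceID written here is what the JournalNodes and the standby NameNode must agree on, which is why formatting more than once breaks an HA cluster.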

wangxiaolei

Upvoted by: xiaomaogy

In HA mode, the active and standby NameNodes must share identical on-disk metadata formats to communicate with each other, which is why you run bin/hdfs namenode -bootstrapStandby on the standby instead of formatting it.
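As a sketch, the usual first-time initialization order for a QJM-based HA cluster (host assignments below match this thread's setup; these commands need a live cluster, so run them on the indicated hosts):

```shell
# 1. Start the JournalNodes on their hosts first
sbin/hadoop-daemon.sh start journalnode

# 2. Format ONCE, on the first NameNode only (chinahadoop1 here)
bin/hdfs namenode -format

# 3. Start the freshly formatted NameNode
sbin/hadoop-daemon.sh start namenode

# 4. On the second NameNode (chinahadoop3): copy metadata instead of formatting
bin/hdfs namenode -bootstrapStandby

# 5. Start the standby NameNode, then promote nn1
sbin/hadoop-daemon.sh start namenode
bin/hdfs haadmin -transitionToActive nn1
```

bootstrapStandby pulls the fsimage from the running active NameNode, so both sides end up with the same namespaceID.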

fish - Hadooper


Is there an exception in the logs?

xiaomaogy


The exception is:

[chinahadoop@chinahadoop1 hadoop-2.5.2]$ bin/hdfs haadmin -transitionToActive nn1
16/12/18 22:18:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Operation failed: Failed on local exception: java.io.EOFException; Host Details : local host is: "chinahadoop1/192.168.1.118"; destination host is: "chinahadoop1":8020;
[chinahadoop@chinahadoop1 hadoop-2.5.2]$

java.io.EOFException seems to be the exception? There is nothing else.

fish - Hadooper


How is /etc/hosts configured on this machine?

xiaomaogy


Following the PDF, /etc/hosts on every machine is:

192.168.1.145 win
192.168.1.118 chinahadoop1
192.168.1.117 chinahadoop2
192.168.1.104 chinahadoop3
192.168.1.141 chinahadoop4
127.0.0.1 localhost

And hadoop-chinahadoop-namenode-chinahadoop1.log contains this exception:

2016-12-18 22:23:24,265 FATAL org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [192.168.1.117:8485, 192.168.1.104:8485, 192.168.1.141:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
192.168.1.117:8485: Incompatible namespaceID for journal Storage Directory /tmp/hadoop/dfs/journalnode/chinahadoop: NameNode has nsId 2053464723 but storage has nsId 1174552736
        at org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:234)
        at org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:290)
        at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:135)
        at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133)
        at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
192.168.1.104:8485: Incompatible namespaceID for journal Storage Directory /tmp/hadoop/dfs/journalnode/chinahadoop: NameNode has nsId 2053464723 but storage has nsId 1174552736
        (identical stack trace as above)
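The mismatch in the log above can be confirmed directly by comparing the namespaceID fields in the VERSION files on disk. A minimal sketch (the helper name check_ns is ours; the paths in the usage note are assumptions based on the /tmp/hadoop/dfs layout in the log):

```shell
# Compare the namespaceID recorded by a NameNode and a JournalNode.
# Usage: check_ns <namenode VERSION file> <journalnode VERSION file>
check_ns() {
  # Each VERSION file is a key=value properties file; extract namespaceID.
  nn_ns=$(grep '^namespaceID=' "$1" | cut -d= -f2)
  jn_ns=$(grep '^namespaceID=' "$2" | cut -d= -f2)
  if [ "$nn_ns" = "$jn_ns" ]; then
    echo "match:$nn_ns"
  else
    echo "mismatch:nn=$nn_ns jn=$jn_ns"
  fi
}
```

On this cluster the files would live under something like /tmp/hadoop/dfs/name/current/VERSION and /tmp/hadoop/dfs/journalnode/chinahadoop/current/VERSION; a "mismatch" output means the JournalNode still holds state from an earlier format.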

xiaomaogy


Oh, I figured it out. Following this part of the tutorial:

Screen_Shot_2016-12-18_at_10.42_.36_PM_.png

I ran bin/hdfs namenode -format in both step 9.3.1 (chinahadoop1) and step 9.3.2 (chinahadoop3), and that caused the error. This time I formatted only on chinahadoop1, and it works. I still don't really understand what formatting actually does here, or what bin/hdfs namenode -bootstrapStandby does.

xiaomaogy


Thank you!!
