Error when running hdfs namenode -bootstrapStandby

Environment: CentOS 7.4, Hadoop 2.7.7, JDK 1.8

On the first machine I ran:

hdfs namenode -format
hadoop-daemon.sh start namenode

and both succeeded. Then on the other machine I ran hdfs namenode -bootstrapStandby and it fails with an error (see the screenshot below). What could be the cause?
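For reference, a rough sketch of the usual first-time HA bring-up order in Hadoop 2.x (a sketch, not a definitive recipe; it assumes the JournalNodes listed in hdfs-site.xml below are already running, and that the standby host has the same hdfs-site.xml as the active one):

# 1. On each JournalNode host (172.16.128.18/19/20 in the config below)
hadoop-daemon.sh start journalnode

# 2. On the first NameNode only
hdfs namenode -format
hadoop-daemon.sh start namenode

# 3. On the second NameNode only (copies the formatted metadata from the active NameNode)
hdfs namenode -bootstrapStandby

# 4. Once, on either NameNode, since dfs.ha.automatic-failover.enabled is true
hdfs zkfc -formatZK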
Configuration files

core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>  
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop-2.7.7/tmp</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>172.16.128.163:2181,172.16.128.126:2181,172.16.128.166:2181</value>
    </property>
</configuration>
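A quick way to confirm what a node actually loads from core-site.xml (run on each NameNode host; the keys are the ones set above):

hdfs getconf -confKey fs.defaultFS          # expected: hdfs://mycluster
hdfs getconf -confKey ha.zookeeper.quorum   # expected: the three ZooKeeper addresses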


hdfs-site.xml:

<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>172.16.128.15:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>172.16.128.15:50070</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>172.16.128.16:9000</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>172.16.128.16:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://172.16.128.18:8485;172.16.128.19:8485;172.16.128.20:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/home/hadoop-2.7.7/journaldata</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/home/hadoop-2.7.7/dfs/name</value>
    </property>
    <property>
        <name>dfs.namenode.data.dir</name>
        <value>/home/hadoop-2.7.7/dfs/data</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>
</configuration>
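bootstrapStandby resolves dfs.nameservices, dfs.ha.namenodes.mycluster and the nn1 RPC address from the local hdfs-site.xml, so it is worth checking that the standby host has the same file as the active one. A sketch of that check (IPs come from the config above; the install path /home/hadoop-2.7.7 is an assumption based on the directories configured there):

# Compare hdfs-site.xml between the two NameNode hosts (run on 172.16.128.15)
md5sum /home/hadoop-2.7.7/etc/hadoop/hdfs-site.xml
ssh 172.16.128.16 md5sum /home/hadoop-2.7.7/etc/hadoop/hdfs-site.xml

# On the standby: see what it actually resolves; a missing key means the local
# hdfs-site.xml is absent or stale
hdfs getconf -confKey dfs.ha.namenodes.mycluster
hdfs getconf -confKey dfs.namenode.rpc-address.mycluster.nn1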


 
(error screenshot attached: 微信图片_20181213162132.png)

离别钩:

Does nobody know the cause?

离别钩:

Found the cause myself: when I copied the config files over with scp, I forgot to copy hdfs-site.xml to the second machine.
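For anyone who hits the same thing, a minimal recovery sketch (the IPs come from the configs above; the install path /home/hadoop-2.7.7 and the root user are assumptions):

# On nn1 (172.16.128.15): copy the missing config to the standby host
scp /home/hadoop-2.7.7/etc/hadoop/hdfs-site.xml root@172.16.128.16:/home/hadoop-2.7.7/etc/hadoop/

# On nn2 (172.16.128.16): re-run the bootstrap, then start the standby NameNode
hdfs namenode -bootstrapStandby
hadoop-daemon.sh start namenode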
