Hadoop YARN: running the pi example hangs at "INFO mapreduce.Job: map 0% reduce 0%"

Two nodes (1 master, 1 slave), 2 cores and 8 GB of RAM.
The services start up normally with no errors reported.
Running the pi example hangs at "INFO mapreduce.Job:  map 0% reduce 0%":
 bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar pi 2 100
Number of Maps  = 2
Samples per Map = 100
16/05/25 01:00:57 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Wrote input for Map #0
Wrote input for Map #1
Starting Job
16/05/25 01:01:00 INFO client.RMProxy: Connecting to ResourceManager at host1/211.68.36.131:8032
16/05/25 01:01:01 INFO input.FileInputFormat: Total input paths to process : 2
16/05/25 01:01:02 INFO mapreduce.JobSubmitter: number of splits:2
16/05/25 01:01:02 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1464109182046_0001
16/05/25 01:01:03 INFO impl.YarnClientImpl: Submitted application application_1464109182046_0001
16/05/25 01:01:03 INFO mapreduce.Job: The url to track the job: http://host1:8088/proxy/applic ... 0001/
16/05/25 01:01:03 INFO mapreduce.Job: Running job: job_1464109182046_0001
16/05/25 01:01:16 INFO mapreduce.Job: Job job_1464109182046_0001 running in uber mode : false
16/05/25 01:01:16 INFO mapreduce.Job:  map 0% reduce 0%
 
yarn-site.xml:
<configuration>

  <!-- Resource Manager Configs -->
  <property>
    <description>The hostname of the RM.</description>
    <name>yarn.resourcemanager.hostname</name>
    <value>host1</value>
  </property>

  <property>
    <description>The address of the applications manager interface in the RM.</description>
    <name>yarn.resourcemanager.address</name>
    <value>${yarn.resourcemanager.hostname}:8032</value>
  </property>

  <property>
    <description>The address of the scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>${yarn.resourcemanager.hostname}:8030</value>
  </property>

  <property>
    <description>The http address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8088</value>
  </property>

  <property>
<description>The https address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.https.address</name>
    <value>${yarn.resourcemanager.hostname}:8090</value>
  </property>

  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>${yarn.resourcemanager.hostname}:8031</value>
  </property>

  <property>
    <description>The address of the RM admin interface.</description>
    <name>yarn.resourcemanager.admin.address</name>
    <value>${yarn.resourcemanager.hostname}:8033</value>
  </property>

  <property>
    <description>List of directories to store localized files in. An
      application's localized file directory will be found in:
      ${yarn.nodemanager.local-dirs}/usercache/${user}/appcache/application_${appid}.
      Individual containers' work directories, called container_${contid}, will
      be subdirectories of this.
   </description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/yarn/local</value>
  </property>

  <property>
    <description>Whether to enable log aggregation</description>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>

  <property>
    <description>Where to aggregate logs to.</description>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/tmp/logs</value>
  </property>

  <property>
    <description>Amount of physical memory, in MB, that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>4096</value>
  </property>

  <property>
    <description>Number of CPU cores that can be allocated
    for containers.</description>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>1</value>
  </property>

  <property>
    <description>the valid service name should only contain a-zA-Z0-9_ and can not start with numbers</description>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
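
With yarn.nodemanager.resource.memory-mb at 4096 on the single slave, the ApplicationMaster container (the NodeManager log below shows it holding a 2 GB allocation) plus the map containers all have to fit inside that 4 GB. As a quick sanity check, assuming the yarn CLI is on the PATH, the slave's NodeManager registration and reported capacity can be verified with:

  yarn node -list                      # host2 should appear in RUNNING state
  # Per-node used/total memory is also shown in the RM web UI:
  #   http://host1:8088/cluster/nodes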
 
From logs/yarn-hadoop-nodemanager-host2.log:
2016-05-25 01:01:08,113 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://host1:8020/tmp/hadoop-yarn/staging/hadoop/.staging/job_1464109182046_0001/job.splitmetainfo(->/home/hadoop/yarn/local/usercache/hadoop/appcache/application_1464109182046_0001/filecache/10/job.splitmetainfo) transitioned from DOWNLOADING to LOCALIZED
2016-05-25 01:01:08,245 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://host1:8020/tmp/hadoop-yarn/staging/hadoop/.staging/job_1464109182046_0001/job.jar(->/home/hadoop/yarn/local/usercache/hadoop/appcache/application_1464109182046_0001/filecache/11/job.jar) transitioned from DOWNLOADING to LOCALIZED
2016-05-25 01:01:08,331 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://host1:8020/tmp/hadoop-yarn/staging/hadoop/.staging/job_1464109182046_0001/job.split(->/home/hadoop/yarn/local/usercache/hadoop/appcache/application_1464109182046_0001/filecache/12/job.split) transitioned from DOWNLOADING to LOCALIZED
2016-05-25 01:01:08,394 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.localizer.LocalizedResource: Resource hdfs://host1:8020/tmp/hadoop-yarn/staging/hadoop/.staging/job_1464109182046_0001/job.xml(->/home/hadoop/yarn/local/usercache/hadoop/appcache/application_1464109182046_0001/filecache/13/job.xml) transitioned from DOWNLOADING to LOCALIZED
2016-05-25 01:01:08,396 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1464109182046_0001_01_000001 transitioned from LOCALIZING to LOCALIZED
2016-05-25 01:01:08,474 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.container.Container: Container container_1464109182046_0001_01_000001 transitioned from LOCALIZED to RUNNING
2016-05-25 01:01:08,541 INFO org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor: launchContainer: [bash, /home/hadoop/yarn/local/usercache/hadoop/appcache/application_1464109182046_0001/container_1464109182046_0001_01_000001/default_container_executor.sh]
2016-05-25 01:01:09,716 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Starting resource-monitoring for container_1464109182046_0001_01_000001
2016-05-25 01:01:09,943 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 27215 for container-id container_1464109182046_0001_01_000001: 74.4 MB of 2 GB physical memory used; 2.7 GB of 4.2 GB virtual memory used
2016-05-25 01:01:13,004 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 27215 for container-id container_1464109182046_0001_01_000001: 145.4 MB of 2 GB physical memory used; 2.8 GB of 4.2 GB virtual memory used
2016-05-25 01:01:16,034 INFO org.apache.hadoop.yarn.server.nodemanager.containermanager.monitor.ContainersMonitorImpl: Memory usage of ProcessTree 27215 for container-id container_1464109182046_0001_01_000001: 293.1 MB of 2 GB physical memory used; 2.9 GB of 4.2 GB virtual memory used

wangxiaolei

What does running yarn logs -applicationId application_1464109182046_0001 show?
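
For reference, a minimal way to pull those logs into a file and scan them, assuming log aggregation is enabled (it is in the yarn-site.xml above); the output file name is arbitrary:

  # Container logs are only uploaded after the containers exit, so a hung job
  # may first need to be killed: yarn application -kill application_1464109182046_0001
  yarn logs -applicationId application_1464109182046_0001 > app_0001_logs.txt
  grep -iE 'error|exception|killed' app_0001_logs.txt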

wangxiaolei

Run dmesg on the problem node and check its output.
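
For example, to check whether the kernel's OOM killer has been terminating container processes (one common reason for a job sitting at 0% on a small node), something along these lines:

  # look for OOM-killer activity in the kernel ring buffer
  dmesg | grep -iE 'out of memory|oom|killed process'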

wangxiaolei

In mapred-site.xml, add:
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>200</value>
  </property>
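
For context, a minimal mapred-site.xml sketch in the same spirit, sizing the ApplicationMaster and task containers to fit inside the 4096 MB NodeManager capacity configured above. The values are illustrative rather than tuned, and yarn.scheduler.minimum-allocation-mb (1024 MB by default) will round smaller requests up:

  <!-- illustrative container sizes, not tuned for this workload -->
  <property>
    <name>yarn.app.mapreduce.am.resource.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>512</value>
  </property>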
 

张伟

(Screenshot attached: 微信截图_20160524195323.png)
Still stuck at the same point: INFO mapreduce.Job:  map 0% reduce 0%

fish - Hadooper

See http://wenda.chinahadoop.cn/question/2291, in particular the topmost reply there. Have you checked that each configuration item is set up more or less as it describes?

目田

I ran into this error too. My config files were copied from the ones teacher 董西成 provided and then modified; after going through them once more I was sure they were correct. Then I looked at the logs and found that Hadoop was looking up hostnames that were not the SY-0225-style names I had configured in /etc/hosts following the video, but my machines' default hostnames. Following http://wenda.chinahadoop.cn/question/2704, I changed the hostnames, rebooted the machines, re-ran the job, and it went through.
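
A quick way to check for that kind of mismatch on each node (the SY-0225-style names are just whatever your /etc/hosts entries actually use):

  hostname          # the name the machine reports for itself
  hostname -f       # the fully qualified name Hadoop/Java will resolve
  cat /etc/hosts    # that name should map to the node's real IP on every host

Both names should match what core-site.xml, yarn-site.xml and the slaves file refer to.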
