When running wordcount, the job fails with "Container exited with a non-zero exit code 1". The ResourceManager log shows:

2015-12-05 11:59:53,790 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: Application application_1449287843771_0001 failed 2 times due to AM Container for appattempt_1449287843771_0001_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://YARN001:8088/cluster/ap ... 1Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1449287843771_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: 
        at org.apache.hadoop.util.Shell.runCommand(Shell.java:545)
        at org.apache.hadoop.util.Shell.run(Shell.java:456)
        at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:722)
        at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
        at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)


Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.
2015-12-05 11:59:53,796 INFO org.apache.hadoop.yarn.server.resourcemanager.rmapp.RMAppImpl: application_1449287843771_0001 State change from FINAL_SAVING to FAILED
2015-12-05 11:59:53,798 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hadoop   OPERATION=Application Finished - Failed       TARGET=RMAppManager     RESULT=FAILURE  DESCRIPTION=App failed with state: FAILED       PERMISSIONS=Application application_1449287843771_0001 failed 2 times due to AM Container for appattempt_1449287843771_0001_000002 exited with  exitCode: 1
(followed by the same tracking-page pointer, container-launch diagnostics, and stack trace as in the excerpt above)
Failing this attempt. Failing the application.  APPID=application_1449287843771_0001
 
Command used: bin/hadoop jar ./wordcount.jar WordCount /test/input /test/output
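Since a later reply says the WordCount class comes from the Hadoop examples, one sanity check is to run the stock examples jar directly instead of a hand-built ./wordcount.jar. The jar path below is an assumption (adjust it to your distribution's layout); note that the examples driver registers the program under the lowercase name `wordcount`, not the class name `WordCount`:

```shell
# Hypothetical path: adjust EXAMPLES_JAR to where your distribution
# ships the examples jar. The examples driver expects the lowercase
# program name "wordcount".
EXAMPLES_JAR=share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar
CMD="bin/hadoop jar $EXAMPLES_JAR wordcount /test/input /test/output"
echo "$CMD"   # review the command, then run it from $HADOOP_HOME
```

If the examples jar fails the same way, the problem is in the cluster configuration rather than in the job code.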

wangxiaolei


It could be a problem in your wordcount code, or in your jar. Can you post the wordcount code?

moji


The wordcount is the one from the Hadoop examples.

wangxiaolei


Take a look at the contents of your mapred-site.xml and hdfs-site.xml configuration files.

moji


2015-12-05 15:07:12,654 WARN org.apache.hadoop.yarn.server.resourcemanager.RMAuditLogger: USER=hadoop   OPERATION=Application Finished - Failed   TARGET=RMAppManager   RESULT=FAILURE   DESCRIPTION=App failed with state: FAILED       PERMISSIONS=Application application_1449299150466_0001 failed 2 times due to AM Container for appattempt_1449299150466_0001_000002 exited with  exitCode: 1
For more detailed output, check application tracking page:http://YARN001:8088/cluster/ap ... 1Then, click on links to logs of each attempt.
Diagnostics: Exception from container-launch.
Container id: container_1449299150466_0001_02_000001
Exit code: 1
Stack trace: ExitCodeException exitCode=1: (same stack trace as in the first excerpt)
Container exited with a non-zero exit code 1
Failing this attempt. Failing the application.  APPID=application_1449299150466_0001

I adjusted the contents of mapred-site.xml and hdfs-site.xml, and it still fails with the same error.

wangxiaolei


I meant paste the contents of mapred-site.xml and hdfs-site.xml here.

moji


mapred-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
  <property><name>mapred.job.tracker</name><value>YARN001:8021</value></property>
  <property><name>mapreduce.job.split.metainfo.maxsize</name><value>10000000</value></property>
  <property><name>mapred.local.dir</name><value>/mnt/mapred_local</value></property>
  <property><name>mapreduce.job.counters.max</name><value>120</value></property>
  <property><name>mapreduce.output.fileoutputformat.compress</name><value>false</value></property>
  <property><name>mapreduce.output.fileoutputformat.compress.type</name><value>BLOCK</value></property>
  <property><name>mapreduce.output.fileoutputformat.compress.codec</name><value>org.apache.hadoop.io.compress.DefaultCodec</value></property>
  <property><name>mapreduce.map.output.compress.codec</name><value>org.apache.hadoop.io.compress.SnappyCodec</value></property>
  <property><name>mapreduce.map.output.compress</name><value>true</value></property>
  <property><name>zlib.compress.level</name><value>DEFAULT_COMPRESSION</value></property>
  <property><name>mapreduce.task.io.sort.factor</name><value>64</value></property>
  <property><name>mapreduce.map.sort.spill.percent</name><value>0.8</value></property>
  <property><name>mapreduce.reduce.shuffle.parallelcopies</name><value>10</value></property>
  <property><name>mapreduce.task.timeout</name><value>600000</value></property>
  <property><name>mapreduce.client.submit.file.replication</name><value>1</value></property>
  <property><name>mapreduce.job.reduces</name><value>1</value></property>
  <property><name>mapreduce.task.io.sort.mb</name><value>30</value></property>
  <property><name>mapreduce.map.speculative</name><value>false</value></property>
  <property><name>mapreduce.reduce.speculative</name><value>false</value></property>
  <property><name>mapreduce.job.reduce.slowstart.completedmaps</name><value>0.8</value></property>
  <property><name>mapreduce.jobhistory.address</name><value>YARN002:10020</value></property>
  <property><name>mapreduce.jobhistory.webapp.address</name><value>YARN002:19888</value></property>
  <property><name>mapreduce.framework.name</name><value>yarn</value></property>
  <property><name>yarn.app.mapreduce.am.staging-dir</name><value>/user</value></property>
  <property><name>yarn.app.mapreduce.am.resource.mb</name><value>200</value></property>
  <property><name>yarn.app.mapreduce.am.resource.cpu-vcores</name><value>1</value></property>
  <property><name>mapreduce.job.ubertask.enabled</name><value>false</value></property>
  <property><name>yarn.app.mapreduce.am.command-opts</name><value>-Djava.net.preferIPv4Stack=true -Xmx100m</value></property>
  <property><name>mapreduce.map.java.opts</name><value>-Djava.net.preferIPv4Stack=true -Xmx100m</value></property>
  <property><name>mapreduce.reduce.java.opts</name><value>-Djava.net.preferIPv4Stack=true -Xmx100m</value></property>
  <property><name>mapreduce.map.memory.mb</name><value>200</value></property>
  <property><name>mapreduce.map.cpu.vcores</name><value>1</value></property>
  <property><name>mapreduce.reduce.memory.mb</name><value>200</value></property>
  <property><name>mapreduce.reduce.cpu.vcores</name><value>1</value></property>
  <property><name>mapreduce.application.classpath</name><value>$HADOOP_MAPRED_HOME/*,$HADOOP_MAPRED_HOME/lib/*,$MR2_CLASSPATH</value></property>
  <property><name>mapreduce.admin.user.env</name><value>LD_LIBRARY_PATH=$HADOOP_COMMON_HOME/lib/native:$JAVA_LIBRARY_PATH</value></property>
  <property><name>mapreduce.shuffle.max.connections</name><value>80</value></property>
</configuration>
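A hedged observation on this config: it was autogenerated by Cloudera Manager, but the job is submitted with a plain bin/hadoop. `mapreduce.application.classpath` expands $HADOOP_MAPRED_HOME and $MR2_CLASSPATH, and `mapreduce.admin.user.env` expands $HADOOP_COMMON_HOME; if those variables are unset on the NodeManager hosts, the AM can launch with an effectively empty classpath and exit with code 1 while leaving almost nothing in the RM log. A quick check (variable names are taken from the config above; whether they are actually unset on your nodes is the assumption being tested):

```shell
# Print each env var that the config's classpath settings expand.
# An unset value here is a plausible cause of the AM container
# exiting with code 1 before it manages to log anything.
for var in HADOOP_MAPRED_HOME HADOOP_COMMON_HOME JAVA_LIBRARY_PATH MR2_CLASSPATH; do
  val=$(printenv "$var")
  if [ -z "$val" ]; then
    echo "WARNING: $var is not set"
  else
    echo "$var=$val"
  fi
done
```

Run this as the job-submitting user on each NodeManager host, since the containers inherit the NodeManager's environment, not your shell's.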

moji


hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<!--Autogenerated by Cloudera Manager-->
<configuration>
  <!--property><name>dfs.namenode.servicerpc-address</name><value>DX3-1:8022</value></property-->
  <property><name>dfs.nameservices</name><value>hadoop-cluster1,hadoop-cluster2</value></property>
  <property><name>dfs.ha.namenodes.hadoop-cluster1</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.hadoop-cluster1.nn1</name><value>YARN001:8020</value></property>
  <property><name>dfs.namenode.rpc-address.hadoop-cluster1.nn2</name><value>YARN002:8020</value></property>
  <property><name>dfs.namenode.http-address.hadoop-cluster1.nn1</name><value>YARN001:50070</value></property>
  <property><name>dfs.namenode.http-address.hadoop-cluster1.nn2</name><value>YARN002:50070</value></property>
  <property><name>dfs.ha.namenodes.hadoop-cluster2</name><value>nn3,nn4</value></property>
  <property><name>dfs.namenode.rpc-address.hadoop-cluster2.nn3</name><value>YARN003:8020</value></property>
  <property><name>dfs.namenode.rpc-address.hadoop-cluster2.nn4</name><value>YARN004:8020</value></property>
  <property><name>dfs.namenode.http-address.hadoop-cluster2.nn3</name><value>YARN003:50070</value></property>
  <property><name>dfs.namenode.http-address.hadoop-cluster2.nn4</name><value>YARN004:50070</value></property>
  <property><name>dfs.namenode.name.dir</name><value>file:///home/hadoop/hadoop/hdfs/name</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name><value>qjournal://YARN002:8485;YARN003:8485;YARN004:8485/hadoop-cluster1</value></property>
  <property><name>dfs.datanode.data.dir</name><value>file:///home/hadoop/hadoop/hdfs/data</value></property>
  <property><name>dfs.namenode.checkpoint.dir</name><value>file:///home/hadoop/hadoop/dfs_secondname</value></property>
  <property><name>dfs.replication</name><value>3</value></property>
  <property><name>dfs.blocksize</name><value>134217728</value></property>
  <property><name>dfs.client.use.datanode.hostname</name><value>false</value></property>
  <property><name>fs.permissions.umask-mode</name><value>022</value></property>
  <property><name>dfs.client.read.shortcircuit</name><value>false</value></property>
  <property><name>dfs.domain.socket.path</name><value>/var/run/hdfs-sockets/dn</value></property>
  <property><name>dfs.client.read.shortcircuit.skip.checksum</name><value>false</value></property>
  <property><name>dfs.client.domain.socket.data.traffic</name><value>false</value></property>
  <property><name>dfs.datanode.hdfs-blocks-metadata.enabled</name><value>true</value></property>
  <property><name>dfs.permissions.enabled</name><value>true</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>false</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/home/hadoop/hadoop/hdfs/journal/</value></property>
  <!--<property><name>dfs.permissions.superusergroup</name><value>hdfs</value></property>-->
</configuration>

wangxiaolei


Send me the IP and password by private message and I'll log in and take a look at your cluster.

fish - Hadooper


Can you see the application's complete logs by running yarn logs?
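Expanding on that suggestion: with log aggregation enabled (yarn.log-aggregation-enable=true), the AM container's stdout/stderr, which is usually where the real reason for exit code 1 lands, can be fetched after the application fails. A minimal sketch using the application id from the log excerpt above (the final yarn command has to run on a cluster node, so it is left commented out):

```shell
# Pull the application id out of a ResourceManager log line, then
# fetch the aggregated container logs for that application.
LOG_LINE='Application application_1449287843771_0001 failed 2 times'
APP_ID=$(printf '%s\n' "$LOG_LINE" | grep -o 'application_[0-9]*_[0-9]*')
echo "$APP_ID"
# yarn logs -applicationId "$APP_ID"   # requires log aggregation enabled
```

If `yarn logs` reports that aggregation is not enabled, the same stdout/stderr files can be found on the NodeManager host under the directory configured by yarn.nodemanager.log-dirs.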

neimengguzn


Has this been solved? I'm running into it too.

西门吹水之城


Quick question: did you ever solve this? I'm hitting the same problem and don't know what to try next.
