Exceptions when running a MapReduce job with Hadoop streaming, plus some related questions — please help explain them, thanks.

##### Questions:

1. Please explain the exceptions in the run output below.
2. Why does the reducer already show progress before the mappers have finished? Is that normal?

```
16/08/23 14:28:04 INFO mapreduce.Job:  map 44% reduce 3%
16/08/23 14:28:18 INFO mapreduce.Job:  map 45% reduce 3%
16/08/23 14:28:38 INFO mapreduce.Job:  map 46% reduce 3%
16/08/23 14:29:00 INFO mapreduce.Job:  map 47% reduce 3%
16/08/23 14:29:25 INFO mapreduce.Job:  map 48% reduce 3%
```

3. Doesn't the `-input` parameter traverse directories? I can only reach the files when I set INPUT_PATH=/test/input/\*/\*.
4. How can I check how many mapper and reducer tasks there are?

Thanks!

##### Environment:

hadoop user, xx004 : /hadoop/work/yangxy/test. The host details have already been sent to teacher Lei Lei.

##### Scripts:

go.sh

```bash
#!/bin/bash
HADOOP_HOME=/hadoop/hadoop-2.6.2
INPUT_PATH=/test/input/*/*
OUTPUT_PATH=/test/output

echo "clear OUTPUT_PATH:$OUTPUT_PATH..."
$HADOOP_HOME/bin/hadoop fs -rmr $OUTPUT_PATH

$HADOOP_HOME/bin/hadoop jar \
    $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.6.2.jar \
    -files mapper.sh,reducer.sh \
    -input $INPUT_PATH \
    -output $OUTPUT_PATH \
    -mapper "sh mapper.sh" \
    -reducer "sh reducer.sh"
```

mapper.sh

```bash
#!/bin/bash
# Emit "<first field> 1" for every input line.
while read line
do
    word=$(echo "$line" | awk '{print $1}')
    echo "$word 1"
done
```

reducer.sh

```bash
#!/bin/bash
# Input arrives sorted by key, so identical words are adjacent.
count=0
started=0
word=""
while read line
do
    newword=$(echo "$line" | cut -d ' ' -f 1)
    if [ "$word" != "$newword" ]
    then
        [ $started -ne 0 ] && echo -e "$word\t$count"
        word="$newword"
        count=1
        started=1
    else
        count=$((count + 1))
    fi
done
echo -e "$word\t$count"
```

##### Run output:

```shell
[hadoop@xx004 test]$ sh ./go.sh
clear OUTPUT_PATH:/test/output...
rmr: DEPRECATED: Please use 'rm -r' instead.
16/08/23 14:18:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/08/23 14:18:26 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /test/output
16/08/23 14:18:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
packageJobJar: [/tmp/hadoop-unjar5922895213828761379/] [] /tmp/streamjob453960317955214828.jar tmpDir=null
16/08/23 14:18:29 INFO client.RMProxy: Connecting to ResourceManager at xx002/10.163.241.201:8032
16/08/23 14:18:29 INFO client.RMProxy: Connecting to ResourceManager at xx002/10.163.241.201:8032
16/08/23 14:18:31 INFO mapred.FileInputFormat: Total input paths to process : 10
16/08/23 14:18:31 INFO mapreduce.JobSubmitter: number of splits:10
16/08/23 14:18:31 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1471879749109_0006
16/08/23 14:18:32 INFO impl.YarnClientImpl: Submitted application application_1471879749109_0006
16/08/23 14:18:32 INFO mapreduce.Job: The url to track the job: http://xx002:8088/proxy/application_1471879749109_0006/
16/08/23 14:18:32 INFO mapreduce.Job: Running job: job_1471879749109_0006
16/08/23 14:18:41 INFO mapreduce.Job: Job job_1471879749109_0006 running in uber mode : false
16/08/23 14:18:41 INFO mapreduce.Job:  map 0% reduce 0%
16/08/23 14:19:06 INFO mapreduce.Job:  map 1% reduce 0%
16/08/23 14:19:21 INFO mapreduce.Job:  map 2% reduce 0%
16/08/23 14:19:36 INFO mapreduce.Job:  map 3% reduce 0%
16/08/23 14:19:49 INFO mapreduce.Job:  map 4% reduce 0%
16/08/23 14:20:04 INFO mapreduce.Job:  map 5% reduce 0%
16/08/23 14:20:19 INFO mapreduce.Job:  map 6% reduce 0%
16/08/23 14:20:32 INFO mapreduce.Job:  map 7% reduce 0%
16/08/23 14:20:46 INFO mapreduce.Job:  map 8% reduce 0%
16/08/23 14:21:00 INFO mapreduce.Job:  map 9% reduce 0%
16/08/23 14:21:15 INFO mapreduce.Job:  map 10% reduce 0%
16/08/23 14:21:28 INFO mapreduce.Job:  map 11% reduce 0%
16/08/23 14:21:42 INFO mapreduce.Job:  map 12% reduce 0%
16/08/23 14:21:57 INFO mapreduce.Job:  map 13% reduce 0%
16/08/23 14:22:11 INFO mapreduce.Job:  map 14% reduce 0%
16/08/23 14:22:25 INFO mapreduce.Job:  map 15% reduce 0%
16/08/23 14:22:38 INFO mapreduce.Job:  map 16% reduce 0%
16/08/23 14:22:50 INFO mapreduce.Job:  map 17% reduce 0%
16/08/23 14:23:04 INFO mapreduce.Job:  map 18% reduce 0%
16/08/23 14:23:17 INFO mapreduce.Job:  map 19% reduce 0%
16/08/23 14:23:30 INFO mapreduce.Job:  map 20% reduce 0%
16/08/23 14:23:43 INFO mapreduce.Job:  map 21% reduce 0%
16/08/23 14:23:56 INFO mapreduce.Job:  map 22% reduce 0%
16/08/23 14:24:11 INFO mapreduce.Job:  map 23% reduce 0%
16/08/23 14:24:23 INFO mapreduce.Job:  map 24% reduce 0%
16/08/23 14:24:38 INFO mapreduce.Job:  map 25% reduce 0%
16/08/23 14:24:51 INFO mapreduce.Job:  map 26% reduce 0%
16/08/23 14:25:03 INFO mapreduce.Job:  map 27% reduce 0%
16/08/23 14:25:15 INFO mapreduce.Job:  map 28% reduce 0%
16/08/23 14:25:28 INFO mapreduce.Job:  map 29% reduce 0%
16/08/23 14:25:40 INFO mapreduce.Job:  map 30% reduce 0%
16/08/23 14:25:52 INFO mapreduce.Job:  map 31% reduce 0%
16/08/23 14:26:07 INFO mapreduce.Job:  map 32% reduce 0%
16/08/23 14:26:19 INFO mapreduce.Job:  map 33% reduce 0%
16/08/23 14:26:30 INFO mapreduce.Job:  map 34% reduce 0%
16/08/23 14:26:43 INFO mapreduce.Job:  map 35% reduce 0%
16/08/23 14:26:55 INFO mapreduce.Job:  map 36% reduce 0%
16/08/23 14:27:08 INFO mapreduce.Job:  map 37% reduce 0%
16/08/23 14:27:20 INFO mapreduce.Job:  map 38% reduce 0%
16/08/23 14:27:32 INFO mapreduce.Job:  map 39% reduce 0%
16/08/23 14:27:40 INFO mapreduce.Job:  map 42% reduce 0%
16/08/23 14:27:44 INFO mapreduce.Job:  map 43% reduce 0%
16/08/23 14:27:59 INFO mapreduce.Job:  map 44% reduce 0%
16/08/23 14:28:04 INFO mapreduce.Job:  map 44% reduce 3%
16/08/23 14:28:18 INFO mapreduce.Job:  map 45% reduce 3%
16/08/23 14:28:38 INFO mapreduce.Job:  map 46% reduce 3%
16/08/23 14:29:00 INFO mapreduce.Job:  map 47% reduce 3%
16/08/23 14:29:25 INFO mapreduce.Job:  map 48% reduce 3%
16/08/23 14:29:25 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000002_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:29:26 INFO mapreduce.Job:  map 43% reduce 3%
16/08/23 14:29:45 INFO mapreduce.Job:  map 44% reduce 3%
16/08/23 14:29:46 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000002_1, Status : FAILED
Exception from container-launch.
Container id: container_1471879749109_0006_01_000022
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/08/23 14:30:04 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000005_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:30:05 INFO mapreduce.Job:  map 39% reduce 3%
16/08/23 14:30:18 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000003_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:30:19 INFO mapreduce.Job:  map 36% reduce 3%
16/08/23 14:30:20 INFO mapreduce.Job:  map 37% reduce 3%
16/08/23 14:30:27 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000003_1, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:30:28 INFO mapreduce.Job:  map 36% reduce 3%
16/08/23 14:30:44 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000006_1, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:30:49 INFO mapreduce.Job:  map 37% reduce 3%
16/08/23 14:30:50 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000007_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:30:51 INFO mapreduce.Job:  map 31% reduce 3%
16/08/23 14:30:55 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000004_1, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:31:03 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000009_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:31:04 INFO mapreduce.Job:  map 26% reduce 3%
16/08/23 14:31:06 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000000_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:31:07 INFO mapreduce.Job:  map 25% reduce 3%
16/08/23 14:31:07 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000006_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:31:08 INFO mapreduce.Job:  map 21% reduce 3%
16/08/23 14:31:15 INFO mapreduce.Job:  map 27% reduce 3%
16/08/23 14:31:15 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000001_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:31:15 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000001_1, Status : FAILED
Error: java.lang.RuntimeException: PipeMapRed.waitOutputThreads(): subprocess failed with code 137
	at org.apache.hadoop.streaming.PipeMapRed.waitOutputThreads(PipeMapRed.java:322)
	at org.apache.hadoop.streaming.PipeMapRed.mapRedFinished(PipeMapRed.java:535)
	at org.apache.hadoop.streaming.PipeMapper.close(PipeMapper.java:130)
	at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:61)
	at org.apache.hadoop.streaming.PipeMapRunner.run(PipeMapRunner.java:34)
	at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:450)
	at org.apache.hadoop.mapred.MapTask.run(MapTask.java:343)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:163)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1656)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
16/08/23 14:31:16 INFO mapreduce.Job:  map 17% reduce 3%
16/08/23 14:31:26 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000004_0, Status : FAILED
Container killed on request. Exit code is 137
Container exited with a non-zero exit code 137
Killed by external signal
16/08/23 14:31:27 INFO mapreduce.Job:  map 11% reduce 3%
16/08/23 14:31:29 INFO mapreduce.Job:  map 12% reduce 3%
16/08/23 14:31:31 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000003_2, Status : FAILED
Exception from container-launch.
Container id: container_1471879749109_0006_01_000027
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/08/23 14:31:33 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000007_1, Status : FAILED
Exception from container-launch.
Container id: container_1471879749109_0006_01_000029
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/08/23 14:31:36 INFO mapreduce.Job: Task Id : attempt_1471879749109_0006_m_000006_2, Status : FAILED
Exception from container-launch.
Container id: container_1471879749109_0006_01_000030
Exit code: 1
Stack trace: ExitCodeException exitCode=1:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:538)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.yarn.server.nodemanager.DefaultContainerExecutor.launchContainer(DefaultContainerExecutor.java:211)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)
Container exited with a non-zero exit code 1
16/08/23 14:31:38 INFO mapreduce.Job:  map 100% reduce 100%
16/08/23 14:31:39 INFO mapreduce.Job: Job job_1471879749109_0006 failed with state FAILED due to: Task failed task_1471879749109_0006_m_000003
Job failed as tasks failed. failedMaps:1 failedReduces:0
16/08/23 14:31:40 INFO mapreduce.Job: Counters: 40
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=1668687
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=17715604
		HDFS: Number of bytes written=0
		HDFS: Number of read operations=3
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=0
	Job Counters
		Failed map tasks=18
		Killed map tasks=10
		Killed reduce tasks=1
		Launched map tasks=29
		Launched reduce tasks=1
		Other local map tasks=11
		Data-local map tasks=18
		Total time spent by all maps in occupied slots (ms)=8746526
		Total time spent by all reduces in occupied slots (ms)=236262
		Total time spent by all map tasks (ms)=8746526
		Total time spent by all reduce tasks (ms)=236262
		Total vcore-seconds taken by all map tasks=8746526
		Total vcore-seconds taken by all reduce tasks=236262
		Total megabyte-seconds taken by all map tasks=8956442624
		Total megabyte-seconds taken by all reduce tasks=241932288
	Map-Reduce Framework
		Map input records=80825
		Map output records=80825
		Map output bytes=1396039
		Map output materialized bytes=1557695
		Input split bytes=122
		Combine input records=0
		Spilled Records=80825
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=353
		CPU time spent (ms)=23990
		Physical memory (bytes) snapshot=206168064
		Virtual memory (bytes) snapshot=2080190464
		Total committed heap usage (bytes)=135335936
	File Input Format Counters
		Bytes Read=17715482
16/08/23 14:31:40 ERROR streaming.StreamJob: Job not successful!
Streaming Command Failed!
```
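Since streaming just pipes data through the two scripts, they can be smoke-tested locally before submitting: a streaming job is logically `cat input | mapper | sort | reducer`. A minimal local check (the `/tmp` directory and sample data below are made up for illustration; the scripts are the ones from the question):

```shell
# Local smoke test: debug the streaming scripts without a cluster.
mkdir -p /tmp/mr_smoke && cd /tmp/mr_smoke

cat > mapper.sh <<'EOF'
#!/bin/bash
while read line
do
    word=$(echo "$line" | awk '{print $1}')
    echo "$word 1"
done
EOF

cat > reducer.sh <<'EOF'
#!/bin/bash
count=0
started=0
word=""
while read line
do
    newword=$(echo "$line" | cut -d ' ' -f 1)
    if [ "$word" != "$newword" ]
    then
        [ $started -ne 0 ] && echo -e "$word\t$count"
        word="$newword"
        count=1
        started=1
    else
        count=$((count + 1))
    fi
done
echo -e "$word\t$count"
EOF

printf 'apple red\nbanana yellow\napple green\n' > input.txt

# Run with bash (not a dash-style sh) so that echo -e behaves as the scripts expect.
bash mapper.sh < input.txt | sort | bash reducer.sh
# Output:
#   apple   2
#   banana  1
```

If the pipeline works here but fails on the cluster, the problem is in the environment (resources, paths, the `sh` used by the nodes) rather than in the map/reduce logic.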

fish - Hadooper

Upvotes from: 桎梏

The reduce progress shown is the progress of the reduce *task*. While maps are still running, your user-defined reduce function is not yet invoked, but the reduce task is started early so it can begin pulling map output over to the reduce side, which overlaps the network copy with the remaining map work. How early the reduce-side fetch starts, relative to map completion, is controlled by the parameter `mapreduce.job.reduce.slowstart.completedmaps`. Setting it to 1 means reduces launch only after all maps have finished; the default is 0.05.
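Applied to the go.sh from the question, delaying reduce start until all maps finish would look roughly like this (a sketch, not tested on this cluster; note that generic `-D` options must come before the streaming-specific options):

```shell
$HADOOP_HOME/bin/hadoop jar \
    $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.6.2.jar \
    -D mapreduce.job.reduce.slowstart.completedmaps=1.0 \
    -files mapper.sh,reducer.sh \
    -input $INPUT_PATH \
    -output $OUTPUT_PATH \
    -mapper "sh mapper.sh" \
    -reducer "sh reducer.sh"
```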

wangxiaolei

Upvotes from: 桎梏

For recursive directory traversal, add the parameter `-Dmapreduce.input.fileinputformat.input.dir.recursive=true`.
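With that flag, the `/*/*` glob in go.sh is no longer needed and the top-level directory can be passed directly (a sketch against the go.sh from the question, not tested on this cluster):

```shell
INPUT_PATH=/test/input

$HADOOP_HOME/bin/hadoop jar \
    $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.6.2.jar \
    -D mapreduce.input.fileinputformat.input.dir.recursive=true \
    -files mapper.sh,reducer.sh \
    -input $INPUT_PATH \
    -output $OUTPUT_PATH \
    -mapper "sh mapper.sh" \
    -reducer "sh reducer.sh"
```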

wangxiaolei

Upvotes from:

"Container killed on request" means the container was forcibly killed or terminated while running (exit code 137 or 143). For details on containers, see teacher Dong's blog post: http://dongxicheng.org/mapreduce-nextgen/yarnmrv2-node-manager-container-state-machine/ You need to adjust the MapReduce resource settings in the configuration files. The fix is to modify yarn-site.xml and mapred-site.xml; the meaning of each property was covered in the first live session of week three.
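The thread does not record the exact values that were changed, but such a change in mapred-site.xml typically takes this shape (property names are real Hadoop 2.x settings; the values here are purely illustrative and must be sized to your nodes):

```xml
<!-- mapred-site.xml: illustrative values only, not the ones used on this cluster -->
<property>
  <name>mapreduce.map.memory.mb</name>      <!-- container size for each map task -->
  <value>1024</value>
</property>
<property>
  <name>mapreduce.reduce.memory.mb</name>   <!-- container size for each reduce task -->
  <value>2048</value>
</property>
<property>
  <name>mapreduce.map.java.opts</name>      <!-- JVM heap, kept below the container size -->
  <value>-Xmx768m</value>
</property>
```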

fish - Hadooper

Upvotes from:

An exit code of 137 means the process received a signal from the operating system (137 - 128 = 9), i.e. the task was effectively `kill -9`'d. Check whether the map/reduce tasks were killed for using too many resources, or run `dmesg` on the task machines to see whether the OOM killer fired. Does an ordinary wordcount-style job succeed? What does the complete log of the failed task say? Add more logging to your map/reduce scripts to gather more information.
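The exit-code arithmetic can be checked in any shell (a small sketch; the convention is that a status above 128 means "terminated by signal number status - 128"):

```shell
# Exit status 137 decodes to signal 9 (SIGKILL).
code=137
sig=$((code - 128))   # 9
kill -l "$sig"        # prints the signal name: KILL
```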

wangxiaolei

Upvotes from:

Yes, `dmesg` does reveal the OOM killer, as follows:

Out of memory: Kill process 20469 (java) score 96 or sacrifice child
Killed process 20469, UID 504, (java) total-vm:1925356kB, anon-rss:181532kB, file-rss:4076kB

The modified properties are all written into the config files. Running go.sh again on xx001:
16/08/23 16:10:07 INFO mapreduce.Job: Running job: job_1471939769887_0001
16/08/23 16:10:20 INFO mapreduce.Job: Job job_1471939769887_0001 running in uber mode : false
16/08/23 16:10:20 INFO mapreduce.Job:  map 0% reduce 0%
16/08/23 16:10:45 INFO mapreduce.Job:  map 1% reduce 0%
16/08/23 16:11:11 INFO mapreduce.Job:  map 2% reduce 0%
16/08/23 16:11:38 INFO mapreduce.Job:  map 3% reduce 0%
16/08/23 16:12:06 INFO mapreduce.Job:  map 4% reduce 0%
16/08/23 16:12:34 INFO mapreduce.Job:  map 5% reduce 0%
16/08/23 16:13:01 INFO mapreduce.Job:  map 6% reduce 0%
16/08/23 16:13:25 INFO mapreduce.Job:  map 7% reduce 0%
16/08/23 16:13:52 INFO mapreduce.Job:  map 8% reduce 0%
16/08/23 16:14:16 INFO mapreduce.Job:  map 9% reduce 0%
16/08/23 16:14:43 INFO mapreduce.Job:  map 10% reduce 0%
16/08/23 16:15:08 INFO mapreduce.Job:  map 11% reduce 0%
16/08/23 16:15:35 INFO mapreduce.Job:  map 12% reduce 0%
16/08/23 16:15:43 INFO mapreduce.Job:  map 15% reduce 0%
16/08/23 16:15:56 INFO mapreduce.Job:  map 16% reduce 0%
16/08/23 16:16:22 INFO mapreduce.Job:  map 17% reduce 0%
16/08/23 16:16:46 INFO mapreduce.Job:  map 18% reduce 0%
16/08/23 16:17:11 INFO mapreduce.Job:  map 19% reduce 0%
16/08/23 16:17:17 INFO mapreduce.Job:  map 22% reduce 0%
16/08/23 16:17:32 INFO mapreduce.Job:  map 23% reduce 0%
16/08/23 16:17:53 INFO mapreduce.Job:  map 24% reduce 0%
16/08/23 16:18:14 INFO mapreduce.Job:  map 25% reduce 0%
16/08/23 16:18:38 INFO mapreduce.Job:  map 26% reduce 0%
16/08/23 16:18:59 INFO mapreduce.Job:  map 27% reduce 0%
16/08/23 16:19:20 INFO mapreduce.Job:  map 28% reduce 0%
16/08/23 16:19:44 INFO mapreduce.Job:  map 29% reduce 0%
16/08/23 16:20:05 INFO mapreduce.Job:  map 30% reduce 0%
16/08/23 16:20:19 INFO mapreduce.Job:  map 33% reduce 0%
16/08/23 16:20:24 INFO mapreduce.Job:  map 34% reduce 0%
16/08/23 16:20:46 INFO mapreduce.Job:  map 35% reduce 0%
16/08/23 16:21:10 INFO mapreduce.Job:  map 36% reduce 0%
16/08/23 16:21:31 INFO mapreduce.Job:  map 37% reduce 0%
16/08/23 16:21:53 INFO mapreduce.Job:  map 38% reduce 0%
16/08/23 16:22:14 INFO mapreduce.Job:  map 39% reduce 0%
16/08/23 16:22:36 INFO mapreduce.Job:  map 40% reduce 0%
16/08/23 16:22:57 INFO mapreduce.Job:  map 41% reduce 0%
16/08/23 16:22:59 INFO mapreduce.Job:  map 44% reduce 0%
16/08/23 16:23:15 INFO mapreduce.Job:  map 45% reduce 0%
16/08/23 16:23:36 INFO mapreduce.Job:  map 46% reduce 0%
16/08/23 16:23:57 INFO mapreduce.Job:  map 47% reduce 0%
16/08/23 16:24:18 INFO mapreduce.Job:  map 48% reduce 0%
16/08/23 16:24:41 INFO mapreduce.Job:  map 52% reduce 0%
16/08/23 16:25:00 INFO mapreduce.Job:  map 53% reduce 0%
16/08/23 16:25:18 INFO mapreduce.Job:  map 54% reduce 0%
16/08/23 16:25:37 INFO mapreduce.Job:  map 55% reduce 0%
16/08/23 16:25:55 INFO mapreduce.Job:  map 56% reduce 0%
16/08/23 16:26:13 INFO mapreduce.Job:  map 57% reduce 0%
16/08/23 16:26:31 INFO mapreduce.Job:  map 58% reduce 0%
16/08/23 16:26:49 INFO mapreduce.Job:  map 59% reduce 0%
16/08/23 16:27:04 INFO mapreduce.Job:  map 60% reduce 0%
16/08/23 16:27:22 INFO mapreduce.Job:  map 61% reduce 0%
16/08/23 16:27:43 INFO mapreduce.Job:  map 62% reduce 0%
16/08/23 16:27:58 INFO mapreduce.Job:  map 63% reduce 0%
16/08/23 16:28:11 INFO mapreduce.Job:  map 66% reduce 0%
16/08/23 16:28:13 INFO mapreduce.Job:  map 67% reduce 0%
16/08/23 16:28:23 INFO mapreduce.Job:  map 70% reduce 20%
16/08/23 16:28:26 INFO mapreduce.Job:  map 70% reduce 23%
16/08/23 16:28:51 INFO mapreduce.Job:  map 71% reduce 23%
16/08/23 16:29:38 INFO mapreduce.Job:  map 72% reduce 23%
16/08/23 16:30:20 INFO mapreduce.Job:  map 73% reduce 23%
16/08/23 16:30:59 INFO mapreduce.Job:  map 74% reduce 23%
16/08/23 16:31:41 INFO mapreduce.Job:  map 75% reduce 23%
16/08/23 16:32:24 INFO mapreduce.Job:  map 76% reduce 23%
16/08/23 16:33:03 INFO mapreduce.Job:  map 77% reduce 23%
16/08/23 16:33:12 INFO mapreduce.Job:  map 80% reduce 27%
16/08/23 16:33:38 INFO mapreduce.Job:  map 81% reduce 27%
16/08/23 16:34:11 INFO mapreduce.Job:  map 82% reduce 27%
16/08/23 16:34:44 INFO mapreduce.Job:  map 83% reduce 27%
16/08/23 16:35:14 INFO mapreduce.Job:  map 84% reduce 27%
16/08/23 16:35:48 INFO mapreduce.Job:  map 85% reduce 27%
16/08/23 16:36:18 INFO mapreduce.Job:  map 86% reduce 27%
16/08/23 16:36:51 INFO mapreduce.Job:  map 87% reduce 27%
16/08/23 16:36:57 INFO mapreduce.Job:  map 90% reduce 27%
16/08/23 16:36:58 INFO mapreduce.Job:  map 90% reduce 30%
16/08/23 16:37:22 INFO mapreduce.Job:  map 91% reduce 30%
16/08/23 16:37:54 INFO mapreduce.Job:  map 92% reduce 30%
16/08/23 16:38:27 INFO mapreduce.Job:  map 93% reduce 30%
16/08/23 16:39:00 INFO mapreduce.Job:  map 94% reduce 30%
16/08/23 16:39:33 INFO mapreduce.Job:  map 95% reduce 30%
16/08/23 16:40:03 INFO mapreduce.Job:  map 96% reduce 30%
16/08/23 16:40:37 INFO mapreduce.Job:  map 97% reduce 30%
16/08/23 16:40:45 INFO mapreduce.Job:  map 100% reduce 30%
16/08/23 16:40:48 INFO mapreduce.Job:  map 100% reduce 67%
16/08/23 16:41:21 INFO mapreduce.Job:  map 100% reduce 68%
16/08/23 16:42:27 INFO mapreduce.Job:  map 100% reduce 69%
16/08/23 16:43:27 INFO mapreduce.Job:  map 100% reduce 70%
16/08/23 16:45:08 INFO mapreduce.Job:  map 100% reduce 71%
16/08/23 16:45:21 INFO mapreduce.Job:  map 100% reduce 72%
16/08/23 16:46:06 INFO mapreduce.Job:  map 100% reduce 73%
16/08/23 16:47:06 INFO mapreduce.Job:  map 100% reduce 74%
16/08/23 16:47:57 INFO mapreduce.Job:  map 100% reduce 75%
16/08/23 16:49:06 INFO mapreduce.Job:  map 100% reduce 76%
16/08/23 16:50:12 INFO mapreduce.Job:  map 100% reduce 77%
16/08/23 16:51:01 INFO mapreduce.Job:  map 100% reduce 78%
16/08/23 16:52:10 INFO mapreduce.Job:  map 100% reduce 79%
16/08/23 16:53:01 INFO mapreduce.Job:  map 100% reduce 80%
16/08/23 16:54:10 INFO mapreduce.Job:  map 100% reduce 81%
16/08/23 16:55:07 INFO mapreduce.Job:  map 100% reduce 82%
16/08/23 16:56:14 INFO mapreduce.Job:  map 100% reduce 83%
16/08/23 16:57:38 INFO mapreduce.Job:  map 100% reduce 84%
16/08/23 16:58:17 INFO mapreduce.Job:  map 100% reduce 85%
16/08/23 16:59:47 INFO mapreduce.Job:  map 100% reduce 86%
16/08/23 17:00:26 INFO mapreduce.Job:  map 100% reduce 87%
16/08/23 17:02:21 INFO mapreduce.Job:  map 100% reduce 88%
16/08/23 17:02:48 INFO mapreduce.Job:  map 100% reduce 89%
16/08/23 17:03:38 INFO mapreduce.Job:  map 100% reduce 90%
16/08/23 17:04:30 INFO mapreduce.Job:  map 100% reduce 91%
16/08/23 17:06:00 INFO mapreduce.Job:  map 100% reduce 92%
16/08/23 17:06:21 INFO mapreduce.Job:  map 100% reduce 93%
16/08/23 17:07:30 INFO mapreduce.Job:  map 100% reduce 94%
16/08/23 17:08:30 INFO mapreduce.Job:  map 100% reduce 95%
16/08/23 17:09:24 INFO mapreduce.Job:  map 100% reduce 96%
16/08/23 17:10:28 INFO mapreduce.Job:  map 100% reduce 97%
16/08/23 17:11:22 INFO mapreduce.Job:  map 100% reduce 98%
16/08/23 17:12:37 INFO mapreduce.Job:  map 100% reduce 99%
16/08/23 17:13:28 INFO mapreduce.Job:  map 100% reduce 100%
16/08/23 17:14:15 INFO mapreduce.Job: Job job_1471939769887_0001 completed successfully
16/08/23 17:14:15 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=22230481
		FILE: Number of bytes written=45688547
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=260407709
		HDFS: Number of bytes written=226740
		HDFS: Number of read operations=33
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=10
		Launched reduce tasks=1
		Data-local map tasks=10
		Total time spent by all maps in occupied slots (ms)=11511604
		Total time spent by all reduces in occupied slots (ms)=11048220
		Total time spent by all map tasks (ms)=2877901
		Total time spent by all reduce tasks (ms)=2762055
		Total vcore-seconds taken by all map tasks=2877901
		Total vcore-seconds taken by all reduce tasks=2762055
		Total megabyte-seconds taken by all map tasks=575580200
		Total megabyte-seconds taken by all reduce tasks=552411000
	Map-Reduce Framework
		Map input records=1152866
		Map output records=1152866
		Map output bytes=19924743
		Map output materialized bytes=22230535
		Input split bytes=1220
		Combine input records=0
		Combine output records=0
		Reduce input groups=13349
		Reduce shuffle bytes=22230535
		Reduce input records=1152866
		Reduce output records=13349
		Spilled Records=2305732
		Shuffled Maps =10
		Failed Shuffles=0
		Merged Map outputs=10
		GC time elapsed (ms)=2171
		CPU time spent (ms)=556000
		Physical memory (bytes) snapshot=1485029376
		Virtual memory (bytes) snapshot=21736505344
		Total committed heap usage (bytes)=676171776
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=260406489
	File Output Format Counters 
		Bytes Written=226740
16/08/23 17:14:15 INFO streaming.StreamJob: Output directory: /test/output

wangxiaolei

Upvotes from:

I added comments next to the new entries in the config files — take a look and you'll see exactly what I changed.
