When running the Hadoop job on the cluster, it fails once the input data gets large, with: Error: org.bson.BsonDocument.clone()Lorg/bson/BsonDocument;

Console output:
19/10/10 15:28:35 INFO mapreduce.Job: Running job: job_1570688363286_0005
19/10/10 15:29:06 INFO mapreduce.Job: Job job_1570688363286_0005 running in uber mode : false
19/10/10 15:29:06 INFO mapreduce.Job:  map 0% reduce 0%
19/10/10 15:29:22 INFO mapreduce.Job: Task Id : attempt_1570688363286_0005_m_000000_0, Status : FAILED
Error: org.bson.BsonDocument.clone()Lorg/bson/BsonDocument;
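The truncated "Error:" line above has the shape of a NoSuchMethodError message (org.bson.BsonDocument.clone() returning BsonDocument could not be found), which could mean two different bson / mongo-java-driver versions end up on the task classpath rather than a memory problem. A minimal diagnostic sketch, assuming a standard Mapper (class and names below are only placeholders), that logs which jar BsonDocument is actually loaded from inside the task JVM:

import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.bson.BsonDocument;

// Hypothetical diagnostic mapper: prints which jar org.bson.BsonDocument is
// loaded from at task runtime, to check for a driver version conflict.
public class BsonClasspathCheckMapper extends Mapper<Object, Object, Text, Text> {
    @Override
    protected void setup(Context context) {
        System.err.println("BsonDocument loaded from: "
                + BsonDocument.class.getProtectionDomain()
                                    .getCodeSource().getLocation());
    }
}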
 
dmesg output:
Out of memory: Kill process 3025 (java) score 83 or sacrifice child
Killed process 3025, UID 0, (java) total-vm:2855880kB, anon-rss:265996kB, file-rss:96kB
java invoked oom-killer: gfp_mask=0x200da, order=0, oom_adj=0, oom_score_adj=0
java cpuset=/ mems_allowed=0
Pid: 4931, comm: java Not tainted 2.6.32-642.el6.x86_64 #1
 
The input is 1,000 records. Is the process being killed because of insufficient memory, and is that what causes the error above?
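If insufficient container memory does turn out to be the cause, a minimal sketch of raising the map task memory in the job driver (the property names are the standard Hadoop 2.x ones; the values and job name below are only placeholders):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class JobMemoryConfigSketch {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Request a bigger YARN container for each map task (default is often 1024 MB)
        conf.set("mapreduce.map.memory.mb", "2048");
        // Give the task JVM a heap that fits inside that container
        conf.set("mapreduce.map.java.opts", "-Xmx1638m");
        Job job = Job.getInstance(conf, "mongo-bson-job"); // placeholder job name
        // ... mapper/reducer/input/output setup as in the original job ...
    }
}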

weixinm86sd


The memory settings are all at their defaults.
