How do I set the block manager's memory when running a Spark program from Eclipse?

Code:
package simpletest;

import java.util.Arrays;
import java.util.List;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;

public class Expractice {
	public static void main(String[] args) {
		SparkConf conf = new SparkConf().setAppName("sparkexpr");
		conf.setMaster("spark://hadoop01:7077");
		conf.set("spark.driver.memory", "50m");
		conf.setExecutorEnv("spark.executor.memory", "100m");
		JavaSparkContext sc = new JavaSparkContext(conf);
		// Ship the application jar so executors can load the job's classes.
		sc.addJar("D:\\workspace\\myeclipse\\spark\\spark-test\\target\\spark-test-0.0.1-SNAPSHOT.jar");
		// Use Integer rather than int[]: Arrays.asList(int[]) produces a
		// one-element List<int[]>, and takeOrdered() needs Comparable elements.
		Integer[] ints = new Integer[] { 3, 1, 4, 2, 5 };
		JavaRDD<Integer> rddFromList = sc.parallelize(Arrays.asList(ints));
		List<Integer> top3 = rddFromList.takeOrdered(3);
		System.out.println(top3);
		System.out.println(rddFromList.count());
		sc.close();
		System.exit(0); // 0 = normal termination
	}
}
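Two of the settings above do not do what they appear to: setExecutorEnv() sets an operating-system environment variable on the executors rather than the spark.executor.memory property, and spark.driver.memory cannot take effect from inside a driver JVM that is already running. A minimal sketch of a configuration that would actually apply both limits (same master URL and sizes as in the question; treat it as an assumption-laden sketch, not a verified fix):

	SparkConf conf = new SparkConf()
			.setAppName("sparkexpr")
			.setMaster("spark://hadoop01:7077")
			// Set as a Spark property, so the app no longer requests the
			// default 1g per executor from the workers.
			.set("spark.executor.memory", "100m");
	// spark.driver.memory is only honored before the driver JVM starts;
	// when launching from Eclipse, set -Xmx in the Run Configuration's
	// VM arguments (or submit with spark-submit --driver-memory).
	JavaSparkContext sc = new JavaSparkContext(conf);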
When I run this from Eclipse, it keeps reporting:

17/11/22 17:01:45 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

I set the memory with:

conf.set("spark.driver.memory", "50m");
conf.setExecutorEnv("spark.executor.memory", "100m");

but the warning above still appears.

Reading the log more carefully, I noticed this line:

17/11/22 17:01:30 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.174.1:52491 with 896.4 MB RAM, BlockManagerId(driver, 192.168.174.1, 52491, None)

The virtual machine only has 1 GB of memory, so it looks as if the block manager is taking all of it. How should I configure this?

Full log:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/11/22 17:01:27 INFO SparkContext: Running Spark version 2.1.0
17/11/22 17:01:27 WARN SparkContext: Support for Java 7 is deprecated as of Spark 2.0.0
17/11/22 17:01:28 INFO SecurityManager: Changing view acls to: Fiona
17/11/22 17:01:28 INFO SecurityManager: Changing modify acls to: Fiona
17/11/22 17:01:28 INFO SecurityManager: Changing view acls groups to:
17/11/22 17:01:28 INFO SecurityManager: Changing modify acls groups to:
17/11/22 17:01:28 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(Fiona); groups with view permissions: Set(); users with modify permissions: Set(Fiona); groups with modify permissions: Set()
17/11/22 17:01:29 INFO Utils: Successfully started service 'sparkDriver' on port 52469.
17/11/22 17:01:29 INFO SparkEnv: Registering MapOutputTracker
17/11/22 17:01:29 INFO SparkEnv: Registering BlockManagerMaster
17/11/22 17:01:29 INFO BlockManagerMasterEndpoint: Using org.apache.spark.storage.DefaultTopologyMapper for getting topology information
17/11/22 17:01:29 INFO BlockManagerMasterEndpoint: BlockManagerMasterEndpoint up
17/11/22 17:01:29 INFO DiskBlockManager: Created local directory at C:\Users\Fiona\AppData\Local\Temp\blockmgr-e786baf2-f88a-4d8f-bb29-19fef92a2c3d
17/11/22 17:01:29 INFO MemoryStore: MemoryStore started with capacity 896.4 MB
17/11/22 17:01:29 INFO SparkEnv: Registering OutputCommitCoordinator
17/11/22 17:01:29 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/11/22 17:01:29 INFO SparkUI: Bound SparkUI to 0.0.0.0, and started at http://192.168.174.1:4040
17/11/22 17:01:30 INFO StandaloneAppClient$ClientEndpoint: Connecting to master spark://hadoop01:7077...
17/11/22 17:01:30 INFO TransportClientFactory: Successfully created connection to hadoop01/192.168.174.20:7077 after 26 ms (0 ms spent in bootstraps)
17/11/22 17:01:30 INFO StandaloneSchedulerBackend: Connected to Spark cluster with app ID app-20171122170135-0002
17/11/22 17:01:30 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 52491.
17/11/22 17:01:30 INFO NettyBlockTransferService: Server created on 192.168.174.1:52491
17/11/22 17:01:30 INFO BlockManager: Using org.apache.spark.storage.RandomBlockReplicationPolicy for block replication policy
17/11/22 17:01:30 INFO BlockManagerMaster: Registering BlockManager BlockManagerId(driver, 192.168.174.1, 52491, None)
17/11/22 17:01:30 INFO BlockManagerMasterEndpoint: Registering block manager 192.168.174.1:52491 with 896.4 MB RAM, BlockManagerId(driver, 192.168.174.1, 52491, None)
17/11/22 17:01:30 INFO BlockManagerMaster: Registered BlockManager BlockManagerId(driver, 192.168.174.1, 52491, None)
17/11/22 17:01:30 INFO BlockManager: Initialized BlockManager: BlockManagerId(driver, 192.168.174.1, 52491, None)
17/11/22 17:01:30 INFO StandaloneSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.0
17/11/22 17:01:30 INFO SparkContext: Added JAR D:\workspace\myeclipse\spark\spark-test\target\spark-test-0.0.1-SNAPSHOT.jar at spark://192.168.174.1:52469/jars/spark-test-0.0.1-SNAPSHOT.jar with timestamp 1511341290386
17/11/22 17:01:30 INFO SparkContext: Starting job: takeOrdered at Expractice.java:20
17/11/22 17:01:30 INFO DAGScheduler: Got job 0 (takeOrdered at Expractice.java:20) with 2 output partitions
17/11/22 17:01:30 INFO DAGScheduler: Final stage: ResultStage 0 (takeOrdered at Expractice.java:20)
17/11/22 17:01:30 INFO DAGScheduler: Parents of final stage: List()
17/11/22 17:01:30 INFO DAGScheduler: Missing parents: List()
17/11/22 17:01:30 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[1] at takeOrdered at Expractice.java:20), which has no missing parents
17/11/22 17:01:30 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.6 KB, free 896.4 MB)
17/11/22 17:01:30 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1640.0 B, free 896.4 MB)
17/11/22 17:01:30 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.174.1:52491 (size: 1640.0 B, free: 896.4 MB)
17/11/22 17:01:30 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/11/22 17:01:30 INFO DAGScheduler: Submitting 2 missing tasks from ResultStage 0 (MapPartitionsRDD[1] at takeOrdered at Expractice.java:20)
17/11/22 17:01:30 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
17/11/22 17:01:45 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
17/11/22 17:02:00 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
17/11/22 17:02:15 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
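The 896.4 MB in the quoted line is the driver-side block manager on the Windows host (192.168.174.1), not memory inside the 1 GB VM, so it is not what starves the workers. For context, a sketch of how Spark 2.1's UnifiedMemoryManager arrives at such a figure (the constants are the 2.1.0 defaults and should be treated as assumptions):

	// How the driver's MemoryStore capacity is derived (Spark 2.1 defaults):
	long systemMemory = Runtime.getRuntime().maxMemory();  // driver JVM heap (-Xmx)
	long reservedMemory = 300L * 1024 * 1024;              // fixed 300 MB reserve
	double memoryFraction = 0.6;                           // spark.memory.fraction
	long storageAndExecution = (long) ((systemMemory - reservedMemory) * memoryFraction);
	// 896.4 MB implies a driver heap of roughly 896.4 / 0.6 + 300 = ~1794 MB,
	// i.e. the heap Eclipse launched the JVM with, not the 50m value set
	// via spark.driver.memory.

The repeated warning itself more likely means the workers in the 1 GB VM cannot satisfy the executor memory request, which stayed at the 1g default because setExecutorEnv() never changed spark.executor.memory.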

wangxiaolei


That is a WARN, not an ERROR; it does not need attention.
