A Spark error.

I'm running a demo and the error keeps appearing. wordcount.txt is a local file. (The last few lines are the error message. Teacher, could you take a look at what the problem is?)
 Spark context available as sc.
15/11/10 12:18:01 INFO SparkILoop: Created sql context (with Hive support)..
SQL context available as sqlContext.

scala> val a = sc.textFile("/wordcount.txt")
15/11/10 12:18:25 INFO MemoryStore: ensureFreeSpace(159118) called with curMem=0, maxMem=280248975
15/11/10 12:18:25 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 155.4 KB, free 267.1 MB)
15/11/10 12:18:26 INFO MemoryStore: ensureFreeSpace(22692) called with curMem=159118, maxMem=280248975
15/11/10 12:18:26 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 22.2 KB, free 267.1 MB)
15/11/10 12:18:26 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on SY-0226:35226 (size: 22.2 KB, free: 267.2 MB)
15/11/10 12:18:26 INFO BlockManagerMaster: Updated info of block broadcast_0_piece0
15/11/10 12:18:26 INFO SparkContext: Created broadcast 0 from textFile at <console>:21
a: org.apache.spark.rdd.RDD[String] = /wordcount.txt MapPartitionsRDD[1] at textFile at <console>:21

scala> a.count
15/11/10 12:18:28 INFO FileInputFormat: Total input paths to process : 1
15/11/10 12:18:28 INFO SparkContext: Starting job: count at <console>:24
15/11/10 12:18:28 INFO DAGScheduler: Got job 0 (count at <console>:24) with 2 output partitions (allowLocal=false)
15/11/10 12:18:28 INFO DAGScheduler: Final stage: Stage 0(count at <console>:24)
15/11/10 12:18:28 INFO DAGScheduler: Parents of final stage: List()
15/11/10 12:18:28 INFO DAGScheduler: Missing parents: List()
15/11/10 12:18:28 INFO DAGScheduler: Submitting Stage 0 (/wordcount.txt MapPartitionsRDD[1] at textFile at <console>:21), which has no missing parents
15/11/10 12:18:28 INFO MemoryStore: ensureFreeSpace(2632) called with curMem=181810, maxMem=280248975
15/11/10 12:18:28 INFO MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.6 KB, free 267.1 MB)
15/11/10 12:18:28 INFO MemoryStore: ensureFreeSpace(1929) called with curMem=184442, maxMem=280248975
15/11/10 12:18:28 INFO MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1929.0 B, free 267.1 MB)
15/11/10 12:18:28 INFO BlockManagerInfo: Added broadcast_1_piece0 in memory on SY-0226:35226 (size: 1929.0 B, free: 267.2 MB)
15/11/10 12:18:28 INFO BlockManagerMaster: Updated info of block broadcast_1_piece0
15/11/10 12:18:28 INFO SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:839
15/11/10 12:18:28 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0 (/wordcount.txt MapPartitionsRDD[1] at textFile at <console>:21)
15/11/10 12:18:28 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks
15/11/10 12:18:43 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/11/10 12:18:58 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/11/10 12:19:13 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
15/11/10 12:19:28 WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources
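
The repeated WARN at the end is the real problem: the driver submitted the 2 tasks of stage 0, but no worker ever offered it resources, so the job waits forever. A few sanity checks from the same REPL may help; this is a minimal sketch assuming a standalone cluster (the master URL shown is illustrative, not taken from this thread):

scala> sc.master              // which master did the shell connect to? e.g. spark://SY-0226:7077
                              // (in local[*] mode this warning would not appear)
scala> sc.defaultParallelism  // how many task slots Spark thinks it has

If the standalone master's web UI (port 8080 by default) shows no ALIVE workers, or each worker has less memory or fewer cores than the application requests (spark.executor.memory, spark.cores.max), the scheduler keeps printing exactly this "Initial job has not accepted any resources" warning. A purely local file is also safer to read with an explicit scheme, e.g. sc.textFile("file:///wordcount.txt"), and it must exist at that path on every worker node.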

mengmeng - Big Data Engineer

He must feel like his head is about to split.

fish - Hadooper

Does anything come to mind when you read that error message?

编程小梦 - Big Data

Before you started learning Spark, Teacher 茂源 should already have explained in detail how to analyze and solve problems: how to read the logs, how to use the various tools. He has already given you this much of a hint... You need to learn to try to solve problems yourself first; what matters is the method and the way of thinking, otherwise what will you do once you join a company? Keep at it!

fish - Hadooper

Is the problem still there? Tell me the IP address, how to log in, and the directory and command you use to run the job.
