That's because Spark by default provides accumulators only for the basic numeric types (Int, Long, Double and Float). If you need anything else you have to extend AccumulatorParam:
import org.apache.spark.AccumulatorParam

object StringAccumulatorParam extends AccumulatorParam[String] {
  // Neutral element used to initialize per-task copies of the accumulator.
  def zero(initialValue: String): String = ""

  // How two partial values are merged; here, space-separated concatenation.
  def addInPlace(s1: String, s2: String): String = s"$s1 $s2"
}
val stringAccum = sc.accumulator("")(StringAccumulatorParam)
val rdd = sc.parallelize("foo" :: "bar" :: Nil, 2)
rdd.foreach(s => stringAccum += s)
stringAccum.value
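In spark-shell this should return both words separated by whitespace, roughly " foo bar" or " bar foo": the order depends on which partition finishes first, and the empty zero value contributes some extra leading spaces.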
Note:

In general you should avoid using accumulators for tasks where the data may grow significantly over time. Their behavior is similar to a group followed by a collect, and in the worst-case scenario the job can fail due to lack of resources. Accumulators are useful mostly for simple diagnostic tasks like keeping track of basic statistics.
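For comparison, this is the kind of diagnostic use the built-in numeric accumulators are meant for: a minimal sketch that counts unparsable records instead of collecting them (the records data and the parsing check are made up for illustration):

// Built-in Long accumulator; no custom AccumulatorParam required.
val badRecords = sc.accumulator(0L)

val records = sc.parallelize(Seq("1", "2", "oops", "3"))
records.foreach { r =>
  // Count records that fail to parse; only a counter travels back to the driver.
  if (scala.util.Try(r.toLong).isFailure) badRecords += 1L
}

badRecords.value // 1

Here the accumulated value stays a single Long no matter how large the input gets, which is exactly the property the string accumulator above lacks.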