Spark will process the data in parallel, but not the operations. In my DAG I want to call a function per column, as in "Spark processing columns in parallel"; the values for each column can be calculated independently of the other columns. Is there any way to achieve such parallelism via the Spark SQL API? Utilizing window functions (see my related question "Spark dynamic DAG is a lot slower and different from hard coded DAG") helped to optimize the DAG a lot, but it only executes in a serial fashion.
An example which contains a little bit more information can be found at https://github.com/geoHeil/sparkContrastCoding.
The minimal example is below:
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.expressions.Window
import org.apache.spark.sql.functions._
import spark.implicits._ // spark is the SparkSession; needed for .toDF on local Seqs

val df = Seq(
  (0, "A", "B", "C", "D"),
  (1, "A", "B", "C", "D"),
  (0, "d", "a", "jkl", "d"),
  (0, "d", "g", "C", "D"),
  (1, "A", "d", "t", "k"),
  (1, "d", "c", "C", "D"),
  (1, "c", "B", "C", "D")
).toDF("TARGET", "col1", "col2", "col3TooMany", "col4")
val inputToDrop = Seq("col3TooMany")
val inputToBias = Seq("col1", "col2")
// count how many rows have TARGET == 1; the result is a single row, so it can be broadcast
val targetCounts = df.filter(df("TARGET") === 1).groupBy("TARGET").agg(count("TARGET").as("cnt_foo_eq_1"))
val newDF = df.toDF.join(broadcast(targetCounts), Seq("TARGET"), "left")
newDF.cache
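On the sample data above there are four rows with TARGET == 1, so after the broadcast join every TARGET = 1 row carries cnt_foo_eq_1 = 4; rows with TARGET = 0 get null there, which is why the ratio in handleBias below is wrapped in coalesce(..., lit(0D)).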
def handleBias(df: DataFrame, colName: String, target: String = "TARGET") = {
  // w1 partitions by the value of the column, w2 additionally by the target
  val w1 = Window.partitionBy(colName)
  val w2 = Window.partitionBy(colName, target)
  df.withColumn("cnt_group", count("*").over(w2))
    .withColumn("pre2_" + colName, mean(target).over(w1))
    .withColumn("pre_" + colName, coalesce(min(col("cnt_group") / col("cnt_foo_eq_1")).over(w1), lit(0D)))
    .drop("cnt_group")
}
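For a single column the helper can be tried out on its own; a small usage sketch (biasedCol1 is just a name I made up here):
val biasedCol1 = handleBias(newDF, "col1")
biasedCol1.select("col1", "pre_col1", "pre2_col1").distinct.show()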
val joinUDF = udf((newColumn: String, newValue: String, codingVariant: Int, results: Map[String, Map[String, Seq[Double]]]) => {
  results.get(newColumn) match {
    case Some(tt) =>
      // unseen values fall back to Seq(0.0)
      val nestedArray = tt.getOrElse(newValue, Seq(0.0))
      // variant 0 selects pre_ (the first entry), any other variant pre2_ (the last)
      if (codingVariant == 0) nestedArray.head
      else nestedArray.last
    case None => throw new Exception("Column not contained in initial data frame")
  }
})
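The results argument is expected to map column name -> (column value -> Seq(pre_, pre2_)); the numbers in the sketch below are made up only to show the shape:
val exampleResults: Map[String, Map[String, Seq[Double]]] =
  Map("col1" -> Map("A" -> Seq(0.5, 0.25), "d" -> Seq(0.25, 0.5)))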
Now I want to apply my handleBias function to all the columns; unfortunately, this is not executed in parallel:
val res = (inputToDrop ++ inputToBias).toSet.foldLeft(newDF) {
  (currentDF, colName) =>
    logger.info("using col " + colName) // logger: any slf4j/log4j logger
    handleBias(currentDF, colName)
}.drop("cnt_foo_eq_1")
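The fold simply nests the calls, so the plan is one long serial chain of window operations instead of independent branches; unrolled (the actual set iteration order may differ) it is equivalent to:
val unrolled = handleBias(handleBias(handleBias(newDF, "col3TooMany"), "col1"), "col2")
  .drop("cnt_foo_eq_1")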
val combined = (inputToDrop ++ inputToBias).toSet.foldLeft(res) {
  (currentDF, colName) =>
    // pack value -> [pre_, pre2_] into a single-entry map column per row
    currentDF.withColumn("combined_" + colName,
      map(col(colName), array(col("pre_" + colName), col("pre2_" + colName))))
}
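Each combined_ column now holds, per row, a single-entry map from the row's value to its two statistics; for example, a row with col1 = "A" carries combined_col1 = Map(A -> [pre_col1, pre2_col1]).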
val columnsToUse = combined
  .select(combined.columns
    .filter(_.startsWith("combined_"))
    .map(combined(_)): _*)
// strip the "combined_" prefix again
val newNames = columnsToUse.columns.map(_.split("combined_").last)
val renamed = columnsToUse.toDF(newNames: _*)

val cols = renamed.columns
// collect the per-column statistics to the driver and merge the single-entry maps per column
val localData = renamed.collect
val columnsMap = cols.map { colName =>
  colName -> localData.flatMap(_.getAs[Map[String, Seq[Double]]](colName)).toMap
}.toMap
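columnsMap now has exactly the shape joinUDF expects for its results parameter. A sketch of feeding it in (assuming Spark 2.2+ for typedLit; on older versions the map would have to be captured in the udf's closure instead):
val coded = df.withColumn("coded_col1",
  joinUDF(lit("col1"), col("col1"), lit(0), typedLit(columnsMap)))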