spark.sql.functions.explode
The explode function creates a new row for each element in the given array or map column (in a DataFrame).
import org.apache.spark.sql.DataFrame
import org.apache.spark.sql.functions.explode // spark.implicits._ must also be in scope for the $"..." syntax

val signals: DataFrame = spark.read.json(signalsJson)
signals.withColumn("element", explode($"data.datapayload"))
explode creates a Column.
See the functions object and the example in How to unwind array in DataFrame (from JSON)?
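For completeness, here is a minimal, self-contained sketch of the same idea (the local SparkSession and the toy data are assumptions made for illustration, not part of the original question):

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.explode

val spark = SparkSession.builder().appName("explode-demo").master("local[*]").getOrCreate()
import spark.implicits._

// Two input rows; explode produces one output row per array element
val df = Seq(("a", Seq(1, 2)), ("b", Seq(3))).toDF("id", "values")
df.withColumn("value", explode($"values")).show()
// => (a, [1, 2], 1), (a, [1, 2], 2), (b, [3], 3)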
Dataset<Row> explode / flatMap operator (method)
The explode operator is almost identical to the explode function.
From the scaladoc:
explode returns a new Dataset where a single column has been expanded to zero or more rows by the provided function. This is similar to a LATERAL VIEW in HiveQL. All columns of the input row are implicitly joined with each value that is output by the function.
ds.flatMap(_.words.split(" ")) // assuming each record exposes a words: String field
Please note that (again quoting the scaladoc):
Deprecated (Since version 2.0.0) use flatMap() or select() with functions.explode() instead
See the Dataset API and the example in How to split multi-value column into separate rows using typed Dataset?
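To make the flatMap one-liner above runnable, here is a hedged sketch of the typed alternative (the Sentence case class and the sample data are assumptions for illustration):

case class Sentence(words: String)

// with a SparkSession in scope
import spark.implicits._

val ds = Seq(Sentence("hello world"), Sentence("spark sql")).toDS()
val words = ds.flatMap(_.words.split(" ")) // Dataset[String]: hello, world, spark, sql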
Despite explode being deprecated (so the main question could be translated into the difference between the explode function and the flatMap operator), the real difference is that the former is a function while the latter is an operator. They have different signatures, but can give the same results. That often leads to discussions about which one is better, and it usually boils down to personal preference or coding style.
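To see the "same results" claim concretely, both approaches below produce an identical single-column result from the toy df defined in the sketch above (the Record case class is an assumption for illustration):

case class Record(id: String, values: Seq[Int])

// function: untyped, column-based
val fromExplode = df.select(explode($"values").as("value"))

// operator: typed, works on JVM objects
val fromFlatMap = df.as[Record].flatMap(_.values).toDF("value")
// both yield one "value" column with rows 1, 2, 3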
One could also say that flatMap (i.e. the explode operator) is more Scala-ish, given how ubiquitous flatMap is in Scala programming (mainly hidden behind for-comprehensions).
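A quick reminder of that ubiquity: a for-comprehension over nested collections desugars to flatMap and map calls (plain Scala collections here, purely for illustration):

val nested = List(List(1, 2), List(3))
val flat = for { xs <- nested; x <- xs } yield x
// desugars to nested.flatMap(xs => xs.map(x => x)) and yields List(1, 2, 3)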