I would like to calculate group quantiles on a Spark dataframe (using PySpark). Either an approximate or exact result would be fine. I prefer a solution that I can use within the context of groupBy/agg, so that I can mix it with other PySpark aggregate functions. If this is not possible for some reason, a different approach would be fine as well.
This question is related, but does not indicate how to use approxQuantile as an aggregate function.
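For context, the closest I can get with approxQuantile is computing each group's quantile separately, since (as far as I can tell) it is a DataFrame method that returns plain Python floats rather than a Column expression. Using the example df defined below:

import pyspark.sql.functions as f

# approxQuantile is a DataFrame method, not an aggregate function, so the
# best I can do is filter to each group and call it separately -- one Spark
# job per group, and it does not compose with other aggregations in agg:
medians = {
    row['grp']: df.where(f.col('grp') == row['grp'])
                  .approxQuantile('val', [0.5], 0.25)[0]
    for row in df.select('grp').distinct().collect()
}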
I also have access to the percentile_approx Hive UDF, but I don't know how to use it as an aggregate function; my best guess is sketched after the expected output below.
For the sake of specificity, suppose I have the following dataframe:
from pyspark.sql import SparkSession
import pyspark.sql.functions as f

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([
    ('A', 1),
    ('A', 2),
    ('A', 3),
    ('B', 4),
    ('B', 5),
    ('B', 6),
], ('grp', 'val'))

# magic_percentile is a stand-in for the aggregate function I am looking for
df_grp = df.groupBy('grp').agg(f.magic_percentile('val', 0.5).alias('med_val'))
df_grp.show()
The expected result is:
+----+-------+
| grp|med_val|
+----+-------+
| A| 2|
| B| 5|
+----+-------+
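For what it's worth, my best guess with percentile_approx is to call it through a SQL expression with expr. This appears to produce the output above, but I don't know whether it is the idiomatic way to use the UDF inside agg:

# invoking the Hive UDF through a SQL expression; unsure if this is the
# recommended approach, but it does run inside groupBy/agg
df_grp = df.groupBy('grp').agg(
    f.expr('percentile_approx(val, 0.5)').alias('med_val')
)
df_grp.show()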