Aim for around 1 GB per file (Spark partition) (1).
Ideally, use snappy compression (the default), since snappy-compressed parquet files are splittable (2).
Snappy compresses less aggressively than gzip, so the files will be noticeably larger; if storage space is an issue, that needs to be considered.
.option("compression", "gzip")
is the option to override the default snappy compression.
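As a minimal sketch (the input/output paths and the df variable are placeholders, not part of the original notes):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("parquet-write-example").getOrCreate()
    val df = spark.read.parquet("/data/input")   // any existing DataFrame works here

    // Write back out as gzip-compressed parquet instead of the default snappy.
    df.write
      .option("compression", "gzip")
      .parquet("/data/output_gzip")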
If you need to resize/repartition your Dataset/DataFrame/RDD, call .coalesce(<num_partitions>),
or as a worst case .repartition(<num_partitions>).
Warning: repartition always triggers a full shuffle of the data, while coalesce avoids one when reducing the partition count but can leave partitions unevenly sized, so use both with some caution.
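As a rough sizing sketch, reusing the df placeholder from above (the ~50 GB total size is a made-up estimate; in practice you would measure or estimate it from the source data):

    // Derive a partition count that yields roughly 1 GB files, then shrink to it.
    val targetFileSizeBytes = 1024L * 1024 * 1024          // ~1 GB per output file
    val estimatedTotalBytes = 50L * 1024 * 1024 * 1024     // assumed dataset size (~50 GB)
    val numPartitions = math.max(1, (estimatedTotalBytes / targetFileSizeBytes).toInt)

    df.coalesce(numPartitions)        // narrows partitions without a full shuffle
      .write
      .option("compression", "gzip")
      .parquet("/data/output_sized")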
Also, parquet files (and, for that matter, files in general) should be larger than the HDFS block size (default 128 MB).
1) https://forums.databricks.com/questions/101/what-is-an-optimal-size-for-file-partitions-using.html
2) http://boristyukin.com/is-snappy-compressed-parquet-file-splittable/