Spark SQL: Custom Sort Order for collect_list Within Groups

To control the sort order of values aggregated with group by + concat_ws() in Spark SQL, the following approaches work.

The source data:

+---+-----+----+
|id |name |type|
+---+-----+----+
|1  |name1|p   |
|2  |name2|p   |
|3  |name3|p   |
|1  |x1   |q   |
|2  |x2   |q   |
|3  |x3   |q   |
+---+-----+----+

The target output:

+----+---------------------+
|type|value_list           |
+----+---------------------+
|p   |[name3, name2, name1]|
|q   |[x3, x2, x1]         |
+----+---------------------+

spark-shell:

val df = Seq((1,"name1","p"),(2,"name2","p"),(3,"name3","p"),(1,"x1","q"),(2,"x2","q"),(3,"x3","q")).toDF("id","name","type")
df.show(false)

1. Using a window function

df.createOrReplaceTempView("test")
spark.sql("select type,max(c) as c1 from (select type,concat_ws('&',collect_list(trim(name)) over(partition by type order by id desc)) as c  from test) as x group by type ")

Window functions are comparatively resource-intensive, so this approach can be slow at large data volumes; the alternatives below avoid the window.
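
If you want to verify the overhead, comparing physical plans is a quick sanity check (a sketch against the test view defined above; explain() just prints the plan):

// the window version shows an extra Sort + Window step in the plan
spark.sql("select type, concat_ws('&', collect_list(trim(name)) over (partition by type order by id desc)) as c from test").explain()
// a plain aggregation for comparison
spark.sql("select type, collect_list(name) as c from test group by type").explain()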

2. Using struct with sort_array(array, asc: true/false), which is more efficient:

val df3 = spark.sql("select type, concat_ws('&', sort_array(collect_list(struct(id, name)), false).name) as c from test group by type")
df3.show(false)
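
For the sample data this prints:

+----+-----------------+
|type|c                |
+----+-----------------+
|p   |name3&name2&name1|
|q   |x3&x2&x1         |
+----+-----------------+

The same logic is available through the DataFrame API (a sketch; df4 is a hypothetical name, and the position of id as the first struct field is what drives the sort):

import org.apache.spark.sql.functions._
import spark.implicits._

// sort_array compares structs field by field, so id decides the order
val df4 = df
  .groupBy($"type")
  .agg(sort_array(collect_list(struct($"id", $"name")), asc = false).as("sorted"))
  .select($"type", concat_ws("&", $"sorted.name").as("c")) // .name pulls the string out of each struct
df4.show(false)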

For example, suppose you need to compute a result shaped like:

user_id    stk_id:action_type:amount:price:time   stk_id:action_type:amount:price:time   stk_id:action_type:amount:price:time   stk_id:action_type:amount:price:time 

ordered by time ascending. The trick below is to put trade_date first in the struct, so that sort_array(..., true) orders each client's list by time:

Dataset<Row> splitStkView = session.sql("select client_id, innercode, entrust_bs, business_amount, business_price, trade_date from\n" +
                "(select client_id,\n" +
                "       split(action,':')[0] as innercode,\n" +
                "       split(action,':')[1] as entrust_bs,\n" +
                "       split(action,':')[2] as business_amount,\n" +
                "       split(action,':')[3] as business_price,\n" +
                "       split(action,':')[4] as trade_date,\n" +
                "       ROW_NUMBER() OVER(PARTITION BY split(action,':')[0] ORDER BY split(action,':')[4] DESC) AS rn\n" +
                "from stk_temp)\n" +
                "where rn <= 5000");
        splitStkView.createOrReplaceTempView("splitStkView");
        Dataset<Row> groupStkView = session.sql("select client_id, CONCAT(innercode, ':', entrust_bs, ':', business_amount, ':', business_price, ':', trade_date) as behive, trade_date from splitStkView");
        groupStkView.createOrReplaceTempView("groupStkView");
        Dataset<Row> resultData = session.sql("SELECT client_id, concat_ws('\t',sort_array(collect_list(struct(trade_date, behive)),true).behive) as behives FROM groupStkView GROUP BY client_id");
        

3. Using a UDF

import org.apache.spark.sql.functions._
import org.apache.spark.sql._
val sortUdf = udf((rows: Seq[Row]) => {
  rows.map { case Row(id: Int, value: String) => (id, value) }
    .sortBy { case (id, _) => -id } // negate id for descending; use id as-is for ascending
    .map { case (_, value) => value }
})

val grouped = df.groupBy(col("type")).agg(collect_list(struct("id", "name")) as "id_name")
val r1 = grouped.select(col("type"), sortUdf(col("id_name")).alias("value_list"))
r1.show(false)
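
For the sample data this matches the target output:

+----+---------------------+
|type|value_list           |
+----+---------------------+
|p   |[name3, name2, name1]|
|q   |[x3, x2, x1]         |
+----+---------------------+

To call the same UDF from SQL, you can register it first (a sketch; sort_udf is a hypothetical name):

// expose the Scala UDF under a SQL-visible name
spark.udf.register("sort_udf", sortUdf)
spark.sql("select type, sort_udf(collect_list(struct(id, name))) as value_list from test group by type").show(false)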