Using the Window Frame Clause in Hive

Overview

1. The window frame clause subdivides the rows of a window at a finer granularity.

2. There are two frame modes:

rows: the frame is a fixed number of physically adjacent rows

range: the frame is a range of values of the ordering expression

Syntax

(ROWS | RANGE) BETWEEN (UNBOUNDED | [num]) PRECEDING AND ([num] PRECEDING | CURRENT ROW | (UNBOUNDED | [num]) FOLLOWING)
(ROWS | RANGE) BETWEEN CURRENT ROW AND (CURRENT ROW | (UNBOUNDED | [num]) FOLLOWING)
(ROWS | RANGE) BETWEEN [num] FOLLOWING AND (UNBOUNDED | [num]) FOLLOWING

1 PRECEDING: the row immediately before the current row (with RANGE, values up to 1 below the current value)
1 FOLLOWING: the row immediately after the current row (with RANGE, values up to 1 above the current value)
CURRENT ROW: the current row
UNBOUNDED: no bound in that direction
UNBOUNDED PRECEDING: the frame starts at the first row of the partition
UNBOUNDED FOLLOWING: the frame ends at the last row of the partition
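
To make the difference concrete, here is a minimal sketch (against a hypothetical table t with an int column v, not defined in this post): ROWS counts physical neighbors, RANGE compares values of the ordering column.

-- ROWS: the frame is the 2 physically preceding rows plus the current row
select v, sum(v) over (order by v rows between 2 PRECEDING and current row) s from t;
-- RANGE: the frame is every row whose v lies in [current v - 2, current v]
select v, sum(v) over (order by v range between 2 PRECEDING and current row) s from t;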

Basic usage

Sample data (name, purchase date, quantity):
saml,2018-01-01,10
saml,2018-01-08,55
tony,2018-01-07,50
saml,2018-01-05,46
tony,2018-01-04,29
tony,2018-01-02,15
saml,2018-02-03,23
mart,2018-04-13,94
saml,2018-04-06,42
mart,2018-04-11,75
mart,2018-04-09,68
mart,2018-04-08,62
neil,2018-05-10,12
neil,2018-06-12,80

create table sample(
    name string,
    dt string,
    num int
)
row format delimited
fields terminated by ",";
load data local inpath '/home/homedata/sample.txt' into table sample;
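
A quick sanity check after the load (path and data as above) should echo rows from the CSV:

select * from sample limit 3;  -- e.g. saml,2018-01-01,10 / saml,2018-01-08,55 / tony,2018-01-07,50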

rows

Problem: for each purchase, compute the total quantity of the three most recent purchases (the current one plus the two before it).

select *, sum(num) over (partition by name order by dt rows between 2 PRECEDING and current row) sum_num from sample;

Result:
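
The original post showed a screenshot here; working the frame out by hand from the sample data, the per-row sums are:

name   dt          num  sum_num
mart   2018-04-08  62   62
mart   2018-04-09  68   130
mart   2018-04-11  75   205
mart   2018-04-13  94   237
neil   2018-05-10  12   12
neil   2018-06-12  80   92
saml   2018-01-01  10   10
saml   2018-01-05  46   56
saml   2018-01-08  55   111
saml   2018-02-03  23   124
saml   2018-04-06  42   120
tony   2018-01-02  15   15
tony   2018-01-04  29   44
tony   2018-01-07  50   94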

range

Notes: 1. Hive's RANGE frames do not accept the INTERVAL keyword.

2. A date/time ordering column must be converted to a numeric value, e.g. seconds via unix_timestamp.

3. The bound must be a precomputed literal (an expression like 6*3600 is rejected; write 21600).

Problem: for each purchase, get the user's total quantity bought within the last 3 days (the 3 days up to and including the current purchase).

select *, sum(num) over (partition by name order by unix_timestamp(dt, "yyyy-MM-dd") range between 259200 PRECEDING and current row) sum_num from sample;
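
Here 259200 is the precomputed 3 * 24 * 3600 seconds. An equivalent sketch (my variant, not from the original post) orders by a day count instead, which keeps the bound readable:

-- datediff returns whole days, so a bound of 3 means "the last 3 days"
select *, sum(num) over (partition by name order by datediff(dt, '1970-01-01') range between 3 PRECEDING and current row) sum_num from sample;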

Case studies

rows

id           dt

1    2024-04-25 
1    2024-04-26 
1    2024-04-27
1    2024-04-28
1    2024-04-30
1    2024-05-01
1    2024-05-02
1    2024-05-04
1    2024-05-05
2    2024-04-25
2    2024-04-28
2    2024-05-02
2    2024-05-03
2    2024-05-04

create table sql2_20(
    id int,
    dt string
)
row format serde 'org.apache.hadoop.hive.serde2.RegexSerDe'
with serdeproperties(
  'input.regex'='(\\d+)\\s+(.+?)',
    'output.format.string'='%1$s %2$s'
);
load data local inpath '/home/homedata/sql2/sql2_20.txt' into table sql2_20;

Problem: given this user login log, return all records of users who logged in on three consecutive days.

Expected result:
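
The original post showed a screenshot here; deriving it by hand, id 1 has the runs 04-25 to 04-28 and 04-30 to 05-02 (05-04/05-05 is only two days), and id 2 has the run 05-02 to 05-04:

1    2024-04-25
1    2024-04-26
1    2024-04-27
1    2024-04-28
1    2024-04-30
1    2024-05-01
1    2024-05-02
2    2024-05-02
2    2024-05-03
2    2024-05-04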

Answer (the trick: date minus row_number() is constant within a run of consecutive days):

with t1 as (
    -- p is constant for all dates in one consecutive run
    select *, date_sub(dt, row_number() over (partition by id order by dt)) p from sql2_20
), t2 as (
    -- days = how many of the current row and its two predecessors share the run
    select id, dt, count(*) over (partition by id, p order by dt rows between 2 PRECEDING and current row) days from t1
), t3 as (
    -- each row with days = 3 closes a 3-day window; rebuild its three dates
    select id, concat(date_sub(dt, 2), ",", date_sub(dt, 1), ",", dt) dts from t2 where days = 3
)
-- distinct is required: runs longer than 3 days produce overlapping windows
select distinct id, dt from t3 lateral view explode(split(dts, ",")) d as dt;
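
To see why distinct matters: in id 1's four-day run 2024-04-25 to 04-28, days = 3 at both 04-27 and 04-28, so the windows "25,26,27" and "26,27,28" overlap on 04-26 and 04-27; without distinct those dates would appear twice.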

range

create table sql1_21(
    order_id int,
    user_id string,
    order_status string,
    operate_time string
)
row format serde 'org.apache.hadoop.hive.serde2.RegexSerDe'
with serdeproperties(
  'input.regex'='(\\d+)\\s+(.+?)\\s+(.+?)\\s+(.+?)'
);
load data local inpath '/home/homedata/sql_1/sql1_21.txt' into table sql1_21;
 
(order_status values: 已支付 = paid, 已取消 = cancelled, 待支付 = awaiting payment, 待确认 = awaiting confirmation)

order_id    user_id    order_status     operate_time
1101         a         已支付        2023-01-01 10:00:00
1102         a         已取消        2023-01-01 10:10:00
1103         a         待支付        2023-01-01 10:20:00
1104         b         已取消        2023-01-01 10:30:00
1105         a         待确认        2023-01-01 10:50:00
1106         a         已取消        2023-01-01 11:00:00
1107         b         已取消        2023-01-01 11:40:00
1108         b         已取消        2023-01-01 11:50:00
1109         b         已支付        2023-01-01 12:00:00
1110         b         已取消        2023-01-01 12:11:00
1111         c         已取消        2023-01-01 12:20:00
1112         c         已取消        2023-01-01 12:30:00
1113         c         已取消        2023-01-01 12:55:00
1114         c         已取消        2023-01-01 13:00:00

Problem: find malicious buyers. A user who cancels orders 3 or more times within any half-hour window (inclusive) counts as malicious.

Answer:

with t1 as (
    -- keep only cancellations; convert operate_time to epoch seconds for the RANGE frame
    select order_id, user_id, unix_timestamp(operate_time) operate_time
    from sql1_21 where order_status = "已取消"
), t2 as (
    -- cnt = cancellations in the 30 minutes (1800 s) up to and including this one
    select user_id,
           count(*) over (partition by user_id order by operate_time range between 1800 PRECEDING and current row) cnt
    from t1
)
-- a user is malicious once any such window holds 3 or more cancellations
select distinct user_id from t2 where cnt >= 3;
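
Hand-checking against the data: user c cancels at 12:30, 12:55 and 13:00, all inside the half hour ending 13:00, so c is returned; b's four cancellations never put three inside one 30-minute window, and a cancels only twice.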