Row-to-Column Transformation (Pivot) in a Hive Data Warehouse

While researching this topic I found that many articles online are reposts or copies of one another, and some contain errors. Here is a row-to-column (pivot) example I used in one of my own projects, shared for reference. The code is as follows:

SELECT
  my_id,
  nm_cd_map['A'] AS my_cd_a,
  nm_cd_map['B'] AS my_cd_b,
  nm_cd_map['C'] AS my_cd_c,
  nm_num_map['A'] AS my_num_a,
  nm_num_map['B'] AS my_num_b,
  nm_num_map['C'] AS my_num_c
FROM
  (
    SELECT
      t.my_id,
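      -- my_nm_cds looks like 'A:D01;B:D04;C:D02'; STR_TO_MAP splits entries on ';' and key/value on ':'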
      STR_TO_MAP(my_nm_cds,';',':') AS nm_cd_map,
      STR_TO_MAP(my_nm_nums,';',':') AS nm_num_map
    FROM
      (
        SELECT
          my_id,
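          -- Build one 'name:value' string per row and join the group's pairs with ';'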
          CONCAT_WS(';',COLLECT_LIST(CONCAT(my_nm,':',my_cd))) AS my_nm_cds,
          CONCAT_WS(';',COLLECT_LIST(CONCAT(my_nm,':',my_num))) AS my_nm_nums
        FROM
          (
            SELECT '1' AS my_id,'A' AS my_nm,'D01' AS my_cd,19 AS my_num
            UNION ALL
            SELECT '1' AS my_id,'B' AS my_nm,'D04' AS my_cd,18 AS my_num
            UNION ALL
            SELECT '1' AS my_id,'C' AS my_nm,'D02' AS my_cd,17 AS my_num
            UNION ALL
            SELECT '2' AS my_id,'A' AS my_nm,'D03' AS my_cd,16 AS my_num
            UNION ALL
            SELECT '2' AS my_id,'B' AS my_nm,'D05' AS my_cd,15 AS my_num
            UNION ALL
            SELECT '2' AS my_id,'C' AS my_nm,'D06' AS my_cd,14 AS my_num
          ) s -- Hive requires an alias on every derived table
        GROUP BY my_id
      ) t
  ) t
WHERE 1=1;
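
For the sample rows above, the query should return one row per my_id, roughly:

my_id | my_cd_a | my_cd_b | my_cd_c | my_num_a | my_num_b | my_num_c
1     | D01     | D04     | D02     | 19       | 18       | 17
2     | D03     | D05     | D06     | 16       | 15       | 14

Note that in this string-based version the my_num_* values come back as strings, since CONCAT casts my_num to string and STR_TO_MAP produces a map<string,string>.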

On Spark SQL or Alibaba Cloud's MaxCompute, you can also use the approach below. (The same idea works on Presto, but the building blocks are spelled differently there: ARRAY_AGG instead of COLLECT_LIST and ROW instead of STRUCT; MAP_FROM_ENTRIES itself does exist.)

-- You could equally CONCAT the pairs and then call STR_TO_MAP, use MAP_FROM_ARRAYS,
-- or sort into an array and pick elements by position with arr[n] AS ...
SELECT
  my_id,
  nm_cd_map['A'] AS my_cd_a,
  nm_cd_map['B'] AS my_cd_b,
  nm_cd_map['C'] AS my_cd_c,
  nm_num_map['A'] AS my_num_a,
  nm_num_map['B'] AS my_num_b,
  nm_num_map['C'] AS my_num_c
FROM
  (
    SELECT
      t.my_id,
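      -- COLLECT_LIST gathers the (key, value) structs per my_id; MAP_FROM_ENTRIES turns that array into a map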
      MAP_FROM_ENTRIES(COLLECT_LIST(nm_cd)) AS nm_cd_map,
      MAP_FROM_ENTRIES(COLLECT_LIST(nm_num)) AS nm_num_map
    FROM
      (
        SELECT
          my_id,
          my_nm,
          my_cd,
          my_num,
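          -- Pack each (key, value) pair into a struct so it can become a map entry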
          STRUCT(my_nm,my_cd) AS nm_cd,
          STRUCT(my_nm,my_num) AS nm_num
        FROM
          (
            SELECT '1' AS my_id,'A' AS my_nm,'D01' AS my_cd,19 AS my_num
            UNION ALL
            SELECT '1' AS my_id,'B' AS my_nm,'D04' AS my_cd,18 AS my_num
            UNION ALL
            SELECT '1' AS my_id,'C' AS my_nm,'D02' AS my_cd,17 AS my_num
            UNION ALL
            SELECT '2' AS my_id,'A' AS my_nm,'D03' AS my_cd,16 AS my_num
            UNION ALL
            SELECT '2' AS my_id,'B' AS my_nm,'D05' AS my_cd,15 AS my_num
            UNION ALL
            SELECT '2' AS my_id,'C' AS my_nm,'D06' AS my_cd,14 AS my_num
          ) s -- alias optional in Spark SQL, but required if you also run this on Hive or MaxCompute
      ) t
    GROUP BY my_id
  ) t
WHERE 1=1;
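
The comment at the top of this block mentions MAP_FROM_ARRAYS as another route. Here is a minimal sketch for Spark SQL 2.4+ (where MAP_FROM_ARRAYS is available), trimmed to two sample rows and one output column per map; it assumes the two COLLECT_LIST calls within one aggregation see the rows in the same order, which holds in practice for a single GROUP BY but is not formally guaranteed:

SELECT
  my_id,
  nm_cd_map['A']  AS my_cd_a,
  nm_num_map['A'] AS my_num_a
FROM
  (
    SELECT
      my_id,
      -- Zip the collected key array with each collected value array into a map
      MAP_FROM_ARRAYS(COLLECT_LIST(my_nm), COLLECT_LIST(my_cd))  AS nm_cd_map,
      MAP_FROM_ARRAYS(COLLECT_LIST(my_nm), COLLECT_LIST(my_num)) AS nm_num_map
    FROM
      (
        SELECT '1' AS my_id,'A' AS my_nm,'D01' AS my_cd,19 AS my_num
        UNION ALL
        SELECT '1' AS my_id,'B' AS my_nm,'D04' AS my_cd,18 AS my_num
      ) s
    GROUP BY my_id
  ) t;

On Presto/Trino the most direct equivalent is the MAP_AGG aggregate, e.g. MAP_AGG(my_nm, my_cd) AS nm_cd_map, which builds the map in a single call.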