Hive: garbled Chinese in SELECT output

Up front: this garbling has nothing to do with the latin1/utf8 encoding of the MySQL metastore tables.

Straight to the case.

Sometimes we need to add a hand-written literal column — sometimes Chinese characters, sometimes letters — and that is exactly when I ran into this.
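The original query isn't shown in the post, so the sketch below is a hypothetical reconstruction (the flag value and column usage are assumptions; the table names come from the EXPLAIN output further down). The shape that triggered the bug: a Chinese literal produced through if() inside a join + group-by query running on Tez with vectorized execution:

```sql
-- Hypothetical reconstruction: the Chinese literal flows through if(),
-- and in the final output its bytes come back garbled.
SELECT
    if(t1.bank_onshore_flag = '1', '境内', '境外') AS region,  -- garbled!
    count(1) AS cnt
FROM dwdmdata.dm_ce_f_portrait_credit_line t
JOIN dwapsdata.dw_conf_ce_bank_dict_v t1
  ON t.bank_code = t1.bank_code
GROUP BY if(t1.bank_onshore_flag = '1', '境内', '境外');
```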

Honestly, seeing that output is maddening. Who could put up with it?

A standalone select of the literal has no problem at all.

So what was going on? After some digging, I found a spot with something like "境内" as col that was NOT garbled. At that point I suspected the if() function was the trigger, but for a while I couldn't see why.

After testing from several angles: concat("境内") and concat_ws("","境内") made no difference, but concat_ws("",array("境内")) worked. Still not knowing where to dig, I pulled out the big gun: explain.
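The probing boiled down to the following (a sketch — the surrounding query is the same hypothetical join/group-by as above, elided here for brevity):

```sql
-- Still garbled inside the full query:
--   concat('境内')
--   concat_ws('', '境内')
-- Comes out correctly:
--   concat_ws('', array('境内'))

-- Compare the physical plans of a bad variant and a good variant:
EXPLAIN
SELECT concat_ws('', array('境内')) AS region, count(1) AS cnt
FROM dwdmdata.dm_ce_f_portrait_credit_line
GROUP BY concat_ws('', array('境内'));
```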

The plan of the variant that works (Chinese comes out intact):

```
Plan optimized by CBO.

Vertex dependency in root stage
Map 1 <- Map 3 (BROADCAST_EDGE)
Reducer 2 <- Map 1 (SIMPLE_EDGE)

Stage-0
  Fetch Operator
    limit:-1
    Stage-1
      Reducer 2
      File Output Operator [FS_14]
        Select Operator [SEL_13] (rows=105 width=273)
          Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6","_col7"]
          Group By Operator [GBY_12] (rows=105 width=273)
            Output:["_col0","_col1","_col2","_col3","_col4"],keys:KEY._col0, KEY._col1, KEY._col2, KEY._col3, KEY._col4
          <-Map 1 [SIMPLE_EDGE] vectorized
            SHUFFLE [RS_28]
              PartitionCols:_col0, _col1, _col2, _col3, _col4
              Group By Operator [GBY_27] (rows=211 width=273)
                Output:["_col0","_col1","_col2","_col3","_col4"],keys:_col1, _col2, _col3, _col4, _col5
                Map Join Operator [MAPJOIN_26] (rows=211 width=273)
                  Conds:SEL_25._col0=RS_23._col0(Inner),Output:["_col1","_col2","_col3","_col4","_col5"]
                <-Map 3 [BROADCAST_EDGE] vectorized
                  BROADCAST [RS_23]
                    PartitionCols:_col0
                    Select Operator [SEL_22] (rows=1 width=736)
                      Output:["_col0","_col1","_col2","_col3"]
                      Filter Operator [FIL_21] (rows=1 width=736)
                        predicate:bank_code is not null
                        TableScan [TS_3] (rows=1 width=736)
                          dwapsdata@dw_conf_ce_bank_dict_v,t1,Tbl:COMPLETE,Col:NONE,Output:["bank_code","bank_name","bank_short_name","bank_onshore_flag"]
                <-Select Operator [SEL_25] (rows=192 width=273)
                    Output:["_col0","_col1"]
                    Filter Operator [FIL_24] (rows=192 width=273)
                      predicate:bank_code is not null
                      TableScan [TS_0] (rows=192 width=273)
                        dwdmdata@dm_ce_f_portrait_credit_line,t,Tbl:COMPLETE,Col:COMPLETE,Output:["bank_code"]
```

The plan of the variant that does not work (Chinese comes out garbled):

```
Plan optimized by CBO.

Vertex dependency in root stage
Map 1 <- Map 3 (BROADCAST_EDGE)
Reducer 2 <- Map 1 (SIMPLE_EDGE)

Stage-0
  Fetch Operator
    limit:-1
    Stage-1
      Reducer 2 vectorized
      File Output Operator [FS_31]
        Select Operator [SEL_30] (rows=105 width=273)
          Output:["_col0","_col1","_col2","_col3","_col4","_col5","_col6"]
          Group By Operator [GBY_29] (rows=105 width=273)
            Output:["_col0","_col1","_col2","_col3","_col4"],keys:KEY._col0, KEY._col1, KEY._col2, KEY._col3, KEY._col4
          <-Map 1 [SIMPLE_EDGE] vectorized
            SHUFFLE [RS_28]
              PartitionCols:_col0, _col1, _col2, _col3, _col4
              Group By Operator [GBY_27] (rows=211 width=273)
                Output:["_col0","_col1","_col2","_col3","_col4"],keys:_col1, _col2, _col3, _col4, _col5
                Map Join Operator [MAPJOIN_26] (rows=211 width=273)
                  Conds:SEL_25._col0=RS_23._col0(Inner),Output:["_col1","_col2","_col3","_col4","_col5"]
                <-Map 3 [BROADCAST_EDGE] vectorized
                  BROADCAST [RS_23]
                    PartitionCols:_col0
                    Select Operator [SEL_22] (rows=1 width=736)
                      Output:["_col0","_col1","_col2","_col3"]
                      Filter Operator [FIL_21] (rows=1 width=736)
                        predicate:bank_code is not null
                        TableScan [TS_3] (rows=1 width=736)
                          dwapsdata@dw_conf_ce_bank_dict_v,t1,Tbl:COMPLETE,Col:NONE,Output:["bank_code","bank_name","bank_short_name","bank_onshore_flag"]
                <-Select Operator [SEL_25] (rows=192 width=273)
                    Output:["_col0","_col1"]
                    Filter Operator [FIL_24] (rows=192 width=273)
                      predicate:bank_code is not null
                      TableScan [TS_0] (rows=192 width=273)
                        dwdmdata@dm_ce_f_portrait_credit_line,t,Tbl:COMPLETE,Col:COMPLETE,Output:["bank_code"]
```

Compare the two and the difference jumps out: in the bad plan, Reducer 2 is marked vectorized. The moment I saw that word I knew what was going on.

I've hit vectorization bugs before — see my earlier post: hive decimal bug, nvl(decimal,1)=0 (cclovezbf's blog, CSDN).

I have yet to feel a single benefit from this setting, but it sure has produced a pile of bugs.

```
set hive.vectorized.execution.enabled=false;
```

That alone fixes the garbled Chinese!
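In practice I'd flip the switch only for the affected session rather than cluster-wide, so other queries keep the vectorization speedup. A sketch (same hypothetical query as above):

```sql
-- Disable vectorized execution just for this session.
set hive.vectorized.execution.enabled=false;

SELECT if(t1.bank_onshore_flag = '1', '境内', '境外') AS region,
       count(1) AS cnt
FROM dwdmdata.dm_ce_f_portrait_credit_line t
JOIN dwapsdata.dw_conf_ce_bank_dict_v t1
  ON t.bank_code = t1.bank_code
GROUP BY if(t1.bank_onshore_flag = '1', '境内', '境外');

-- Turn it back on afterwards.
set hive.vectorized.execution.enabled=true;
```

Since only the Reducer was vectorized in the bad plan, the narrower reduce-side switch (hive.vectorized.execution.reduce.enabled=false) may also be enough — I haven't verified that on this query.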

There are other workarounds too, but they're as ugly as the concat_ws("", array("境内")) trick, so I won't go into them.
