Hive UDF: a single execution triggers multiple calls

sql
SELECT AppMktApi(
      'http://xx.xx.xx.xx:8282/uph/sou',
      'POST',
      concat('{','"topicId": "', 'RZRQ', '",','"auditorId": "','ngw', '",','"channels": "', 'ol', '",','"content": "', '融资融券业务指标变动提醒","longtext":"尊敬的投资者您好!您信用账户持有的科泰电源(300153)折算率已调整,敬请关注', '",','"destIds": ', '[11500435,11500433,11304133,11000126,11000619,11504802]', ',','"destType": "', '1', '",','"priority": "', '5','",','"providerId": "','ngw', '",','"rTopic": "', 'Push_Result','",','"ack": "','1','"}'),
    -- '{"topicId":"RZRQ","auditorId":"ngw","channels":"ol","content":"融资融券业务指标变动提醒","longtext":"尊敬的投资者您好!您信用账户持有的科泰电源(300153)折算率已调整,敬请关注","destIds":[11500435,11500433,11304133,11000126,11000619,11504802],"destType":"1","priority":"5","providerId":"ngw","rTopic":"Push_Result","ack":"1"}',
      'Content-Type: application/json'
    ) AS result

Running the query once called the API twice. Check the execution plan:

sql
explain extended SELECT AppMktApi(
      'http://xx.xx.xx.xx:8282/uph/sou',
      'POST',
        '{\"topicId\":\"RZRQ\",\"auditorId\":\"ngw\",\"channels\":\"ol\",\"content\":\"融资融券业务指标变动提醒\",\"longtext\":\"尊敬的投资者您好!您信用账户持有的科泰电源(300153)折算率已调整,敬请关注\",\"destIds\":[11500435,11500433,11304133,11000126,11000619,11504802],\"destType\":\"1\",\"priority\":\"5\",\"providerId\":\"ngw\",\"rTopic\":\"Push_Result\",\"ack\":\"1\"}',
      'Content-Type: application/json'
    ) AS result

The output shows:

sql
STAGE DEPENDENCIES:
  Stage-0 is a root stage

STAGE PLANS:
  Stage: Stage-0
    Fetch Operator
      limit: -1
      Processor Tree:
        TableScan
          alias: _dummy_table
          Row Limit Per Split: 1
          GatherStats: false
          Select Operator
            expressions: '{"response":"{\"errcode\":\"0\",\"msgId\":\"2025090239116000001\"}","curlCommand":"curl -X POST -H \"Content-Type: application/json\" -d '{\"topicId\":\"RZRQ\",\"auditorId\":\"ngw\",\"channels\":\"ol\",\"content\":\"融资融券业务指标变动提醒\",\"longtext\":\"尊敬的投资者您好!您信用账户持有的科泰电源(300153)折算率已调整,敬请关注\",\"destIds\":[11500435,11500433,11304133,11000126,11000619,11504802],\"destType\":\"1\",\"priority\":\"5\",\"providerId\":\"ngw\",\"rTopic\":\"Push_Result\",\"ack\":\"1\"}' http://xx.xx.xx.xx:8282/uph/sou"}' (type: string)
            outputColumnNames: _col0
            ListSink

The plan contains only Stage-0 (a client-local fetch stage, with no distributed computation), and the scan is against _dummy_table (Hive's virtual table for queries without a FROM clause, so no MapReduce job is needed). Note that the expressions: line already contains the UDF's return value, including the HTTP response, which means the call happened while the plan was being compiled.
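
For reference, Hive serves any FROM-less SELECT from this internal dummy table, and a constant-only query gets the same kind of fetch-only plan. A minimal illustration (the exact plan text varies by Hive version):

sql
-- A FROM-less constant query is also rewritten onto _dummy_table and
-- handled entirely by a client-side fetch task (Stage-0 only):
explain SELECT 1 + 1 AS two;
-- The Select Operator is expected to show the expression already folded
-- to the constant 2, just as the AppMktApi result was folded above.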

Cause:

  1. Hive's "pre-execution + actual execution" of the plan

When Hive parses a query containing a UDF (here AppMktApi, i.e. custom function logic), it may evaluate the UDF during syntax/type validation and constant folding, which is why the plan above already holds the folded result as a (type: string) constant; that accounts for one API call. When the query is then actually executed (scanning _dummy_table and emitting the row), the call can be triggered again. This is especially likely when the UDF receives only fixed, concatenated arguments (as with the hard-coded request body in the SQL above): with all-constant inputs, Hive's compilation phase may run the UDF logic ahead of time, producing the extra call.
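
If compile-time constant folding is indeed what evaluates the UDF, one possible mitigation (not tested in this post, so treat it as an assumption) is to turn off the constant-propagation optimizer for the session before running the statement:

sql
-- Hypothetical mitigation: disable the optimizer pass that evaluates
-- deterministic UDFs with all-constant arguments during compilation.
set hive.optimize.constant.propagation=false;
-- Then run the AppMktApi query as before; the UDF should only be
-- evaluated when the fetch task actually executes.

A related lever, if you control the AppMktApi source, is annotating the UDF class with @UDFType(deterministic = false) so Hive never treats it as foldable; whether AppMktApi carries that annotation is not shown here.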

  2. Solutions

Option 1: keep the UDF from executing during the validation phase

Hive's validation phase executes the UDF (because the arguments are fixed constants that get evaluated during syntax/constant checking); this can be avoided by deferring when the UDF logic runs:

Approach: parameterize the request body (replace the fixed literal with a table column)

Change the request body from a string hard-coded in the UDF call into a value read dynamically from a table column. With no concrete argument values available during validation, Hive will not trigger the API call at that stage.

Example refactoring:

Original SQL (fixed request body, triggers the validation-time call):

sql
SELECT AppMktApi(
      'http://xx.xx.xx.xx:8282/uph/sou',
      'POST',
      concat('{','"topicId": "', 'RZRQ', '",','"auditorId": "','ngw', '",','"channels": "', 'ol', '",','"content": "', '融资融券业务指标变动提醒","longtext":"尊敬的投资者您好!您信用账户持有的科泰电源(300153)折算率已调整,敬请关注', '",','"destIds": ', '[11500435,11500433,11304133,11000126,11000619,11504802]', ',','"destType": "', '1', '",','"priority": "', '5','",','"providerId": "','ngw', '",','"rTopic": "', 'Push_Result','",','"ack": "','1','"}'),
    -- '{"topicId":"RZRQ","auditorId":"ngw","channels":"ol","content":"融资融券业务指标变动提醒","longtext":"尊敬的投资者您好!您信用账户持有的科泰电源(300153)折算率已调整,敬请关注","destIds":[11500435,11500433,11304133,11000126,11000619,11504802],"destType":"1","priority":"5","providerId":"ngw","rTopic":"Push_Result","ack":"1"}',
      'Content-Type: application/json'
    ) AS result

After refactoring (request body stored in a table and read dynamically):

  1. Create an auxiliary table to hold the request body (insert once, reuse):
sql
create table tmp.request_body(
  name string
);

insert into table tmp.request_body
select '{"topicId": "RZRQ","auditorId": "ngw","channels": "ol","content": "融资融券业务指标变动提醒","longtext":"尊敬的投资者您好!您信用账户持有的科泰电源(300153)折算率已调整,敬请关注","destIds": [11500435,11500433,11304133,11000126,11000619,11504802],"destType": "1","priority": "5","providerId": "ngw","rTopic": "Push_Result","ack": "1"}' as name;
  2. Query the table and pass the column as the argument (Hive's validation phase only parses the logic and does not execute the UDF):
sql
SELECT AppMktApi(
      'http://xx.xx.xx.xx:8282/uph/sou',
      'POST',
      name,
      'Content-Type: application/json'
    ) AS result
FROM tmp.request_body

Why it works: during validation Hive only checks that the column reference is valid; it does not actually execute AppMktApi. The call is triggered exactly once, in the actual execution phase (when the table is scanned).
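
To verify, run explain on the refactored query (a sketch; the exact plan text depends on the Hive version). If the UDF is no longer folded at compile time, the Select Operator should show the AppMktApi(...) expression over the name column rather than a pre-computed response string, and explain itself should not issue any HTTP request:

sql
explain extended SELECT AppMktApi(
      'http://xx.xx.xx.xx:8282/uph/sou',
      'POST',
      name,
      'Content-Type: application/json'
    ) AS result
FROM tmp.request_body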
