Consuming Kafka Data into Doris with Routine Load

Suppose a Kafka topic already contains nested JSON data in the following format:

{
    "appId": "10000",
    "platform": "YY",
    "userId": "007",
    "userAgent": "6",
    "event": "login",
    "package": "org.apache.doris",
    "properties": {
        "phoneNumber": "13814516235",
        "actionTime": "1694928000",
        "deviceID": "device123",
        "deviceType": "smartphone",
        "appVersion": "1.0.0",
        "networkType": "WiFi",
        "os": "Android",
        "userUID": "user123",
        "nickname": "小王",
        "clientIp": "192.168.1.1"
    },
    "clientIp": "10.225.36.85",
    "timestamp": "1694928000",
    "source": "mobileApp",
    "sessionId": "12554122524422"
}
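The nested properties object needs to be flattened into top-level table columns; the jsonpaths mapping in the Routine Load job (step 2) takes care of this.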

1. Create the table

CREATE TABLE test_app_dwh.rt_ods_log_app_loginout (
    appId VARCHAR(20) NOT NULL COMMENT "Application ID",
    userId VARCHAR(20) NOT NULL COMMENT "User ID",
    `timestamp` BIGINT COMMENT "Event timestamp",
    platform VARCHAR(20) NOT NULL COMMENT "Platform",
    userAgent VARCHAR(20) NOT NULL COMMENT "User agent",
    event VARCHAR(20) NOT NULL COMMENT "Event type",
    package VARCHAR(100) NOT NULL COMMENT "Package name",
    phoneNumber VARCHAR(20) COMMENT "Phone number",
    actionTime BIGINT COMMENT "Action timestamp",
    deviceID VARCHAR(50) COMMENT "Device ID",
    deviceType VARCHAR(20) COMMENT "Device type",
    appVersion VARCHAR(20) COMMENT "App version",
    networkType VARCHAR(20) COMMENT "Network type",
    os VARCHAR(20) COMMENT "Operating system",
    userUID VARCHAR(50) COMMENT "Unique user identifier",
    nickname VARCHAR(50) COMMENT "Nickname",
    clientIp VARCHAR(20) COMMENT "Client IP",
    source VARCHAR(20) COMMENT "Source",
    sessionId VARCHAR(50) COMMENT "Session ID"
)
DUPLICATE KEY(appId, userId, `timestamp`)
DISTRIBUTED BY HASH(appId) BUCKETS 1;
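The Duplicate Key model fits this use case because login/logout events are append-only log records that should be stored as-is, without aggregation or key-based replacement. BUCKETS 1 keeps the example minimal; for production volumes you would typically raise the bucket count so the data spreads across more tablets.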

2. Create the Routine Load job

CREATE ROUTINE LOAD test_app_dwh.kafkajob_rt_ods_log_app_loginout ON rt_ods_log_app_loginout
COLUMNS(appId, userId, `timestamp`, platform, userAgent, event, package, phoneNumber, actionTime, deviceID, deviceType, appVersion, networkType, os, userUID, nickname, clientIp, source, sessionId)
PROPERTIES
(
    "desired_concurrent_number" = "1",
    "format" = "json",
    "strict_mode" = "false",
    "jsonpaths" = "[\"$.appId\",\"$.userId\",\"$.timestamp\",\"$.platform\",\"$.userAgent\",\"$.event\",\"$.package\",\"$.properties.phoneNumber\",\"$.properties.actionTime\",\"$.properties.deviceID\",\"$.properties.deviceType\",\"$.properties.appVersion\",\"$.properties.networkType\",\"$.properties.os\",\"$.properties.userUID\",\"$.properties.nickname\",\"$.properties.clientIp\",\"$.source\",\"$.sessionId\"]"
)
FROM KAFKA
(
    "kafka_broker_list" = "ip1:9092,ip2:9092,ip3:9092",
    "kafka_topic" = "loginout",
    "property.group.id" = "kafka_job",
    "property.kafka_default_offsets" = "OFFSET_BEGINNING"
);
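After this statement executes, the job starts consuming from the topic. It can be monitored and managed with Doris's standard Routine Load statements, for example:

-- Check job state, consumption progress, and ErrorLogUrls for parse failures
SHOW ROUTINE LOAD FOR test_app_dwh.kafkajob_rt_ods_log_app_loginout;

-- Pause and resume the job, e.g. around a schema change
PAUSE ROUTINE LOAD FOR test_app_dwh.kafkajob_rt_ods_log_app_loginout;
RESUME ROUTINE LOAD FOR test_app_dwh.kafkajob_rt_ods_log_app_loginout;

-- Stop the job permanently (a stopped job cannot be resumed)
STOP ROUTINE LOAD FOR test_app_dwh.kafkajob_rt_ods_log_app_loginout;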

With the job running, Kafka data now streams continuously into the Doris table.
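As a quick sanity check, query the table once a few messages have been produced to the loginout topic (a minimal verification query; ordering by the event timestamp is just one convenient choice):

SELECT * FROM test_app_dwh.rt_ods_log_app_loginout
ORDER BY `timestamp` DESC
LIMIT 10;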
