Lookup Join is a type of join used in streaming queries. It requires one table to have a processing-time attribute and the other table to be backed by a lookup source connector.
Paimon supports lookup joins on both primary key tables and append-only tables.
a) Preparation
Create a Paimon table and update it in real time.
-- Create a Paimon catalog
CREATE CATALOG my_catalog WITH (
  'type' = 'paimon',
  'warehouse' = 'hdfs://nn:8020/warehouse/path' -- or 'file:///tmp/foo/bar'
);
USE CATALOG my_catalog;
-- Create a table in paimon catalog
CREATE TABLE customers (
  id INT PRIMARY KEY NOT ENFORCED,
  name STRING,
  country STRING,
  zip STRING
);
-- Launch a streaming job to update customers table
INSERT INTO customers ...
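-- For illustration only (hypothetical): customer_source is an assumed upstream
-- table; a concrete streaming insert that keeps customers updated could look like:
INSERT INTO customers
SELECT id, name, country, zip FROM customer_source;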
-- Create a temporary left table, like from kafka
CREATE TEMPORARY TABLE Orders (
  order_id INT,
  total INT,
  customer_id INT,
  proc_time AS PROCTIME()
) WITH (
  'connector' = 'kafka',
  'topic' = '...',
  'properties.bootstrap.servers' = '...',
  'format' = 'csv'
  ...
);
b) Normal Lookup
You can now use the customers table in a lookup join query.
-- enrich each order with customer information
SELECT o.order_id, o.total, c.country, c.zip
FROM Orders AS o
JOIN customers
FOR SYSTEM_TIME AS OF o.proc_time AS c
ON o.customer_id = c.id;
c) Retry Lookup
On Flink 1.16+, if a record from Orders (the main table) fails to join because the corresponding customers data (the lookup table) is not yet ready, you can use Flink's delayed retry strategy for lookup.
-- enrich each order with customer information
SELECT /*+ LOOKUP('table'='c', 'retry-predicate'='lookup_miss', 'retry-strategy'='fixed_delay', 'fixed-delay'='1s', 'max-attempts'='600') */
o.order_id, o.total, c.country, c.zip
FROM Orders AS o
JOIN customers
FOR SYSTEM_TIME AS OF o.proc_time AS c
ON o.customer_id = c.id;
d) Async Retry Lookup
The problem with synchronous retry is that one record waiting for its lookup result blocks all subsequent records, which in turn blocks the entire job. You can use async + allow_unordered to avoid blocking, so records that fail to join no longer hold up other records.
-- enrich each order with customer information
SELECT /*+ LOOKUP('table'='c', 'retry-predicate'='lookup_miss', 'output-mode'='allow_unordered', 'retry-strategy'='fixed_delay', 'fixed-delay'='1s', 'max-attempts'='600') */
o.order_id, o.total, c.country, c.zip
FROM Orders AS o
JOIN customers /*+ OPTIONS('lookup.async'='true', 'lookup.async-thread-number'='16') */
FOR SYSTEM_TIME AS OF o.proc_time AS c
ON o.customer_id = c.id;
If the main table (Orders) is a CDC stream, allow_unordered will be ignored by Flink SQL (only append-only streams are supported) and the streaming job may still be blocked. In that case you can try Paimon's audit_log system table feature, which converts the CDC stream into an append-only stream.
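A minimal sketch of that workaround, assuming Orders were itself a Paimon table with a changelog (instead of the Kafka source above): its audit_log system table exposes the changelog as an append-only stream with an extra rowkind column, and the lookup join can then be performed on that stream.
-- read the changelog of a (hypothetical) Paimon Orders table as an append-only stream
SELECT order_id, total, customer_id, rowkind
FROM `Orders$audit_log`;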
8) Query Service
You can run a Flink streaming job to start a query service for a table. When the QueryService exists, Flink Lookup Join will prefer to fetch data from it, which can effectively improve query performance.
Flink SQL
CALL sys.query_service('database_name.table_name', parallelism);
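For example, assuming the customers table above lives in the default database and a parallelism of 4 (both values are placeholders for illustration):
CALL sys.query_service('default.customers', 4);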
Flink Action
<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    query_service \
    --warehouse <warehouse-path> \
    --database <database-name> \
    --table <table-name> \
    [--parallelism <parallelism>] \
    [--catalog_conf <paimon-catalog-conf> [--catalog_conf <paimon-catalog-conf> ...]]
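A hypothetical invocation with the placeholders filled in (the warehouse path, database, table, and parallelism are assumed values reusing the preparation example above):
<FLINK_HOME>/bin/flink run \
    /path/to/paimon-flink-action-0.7.0-incubating.jar \
    query_service \
    --warehouse hdfs://nn:8020/warehouse/path \
    --database default \
    --table customers \
    --parallelism 4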