
TDengine Java Connector Advanced Guide
This guide targets experienced developers who are already familiar with TDengine. It covers advanced use of the Java connector, including performance tuning, high-availability architecture, and handling complex scenarios, so you can get the most out of TDengine.
Prerequisites
Before reading this guide, make sure you are familiar with the basic concepts and API usage described in the Java connector basics documentation.
Choosing a Connection Strategy
Native vs WebSocket: How to Choose
| Dimension | Native connection | WebSocket connection |
|---|---|---|
| Performance | Higher (direct calls into the C library) | Slightly lower (HTTP protocol overhead) |
| Deployment dependency | Requires the client driver | No client driver required |
| Cross-platform | Limited by client driver availability | Works on any Java platform |
| Firewall | Requires port 6030 | Only port 6041 (HTTP) |
| Load balancing | No multi-endpoint support | Multi-endpoint load balancing |
| Cloud environments | Restricted in some clouds | Cloud-native friendly |
| Data types | No unsigned types | Full support for all data types |
Recommended selection strategy:
java
// Scenario 1: high-throughput writes, local or intranet deployment -> Native connection
String nativeUrl = "jdbc:TAOS://192.168.1.100:6030/power?user=root&password=taosdata";
// Scenario 2: cloud environments, cross-network access, high availability -> WebSocket connection
String wsUrl = "jdbc:TAOS-WS://node1:6041,node2:6041,node3:6041/power?user=root&password=taosdata";
// Scenario 3: unsigned or Decimal types required -> WebSocket connection
String wsUrlWithTypes = "jdbc:TAOS-WS://192.168.1.100:6041/power?user=root&password=taosdata";
Multi-Endpoint Load Balancing
A WebSocket connection can be configured with multiple taosAdapter endpoints to achieve connection-level load balancing:
java
Properties props = new Properties();
props.setProperty("user", "root");
props.setProperty("password", "taosdata");
// Enable automatic reconnection; combined with multiple endpoints this provides failover
props.setProperty(TSDBDriver.PROPERTY_KEY_ENABLE_AUTO_RECONNECT, "true");
props.setProperty(TSDBDriver.PROPERTY_KEY_RECONNECT_INTERVAL_MS, "2000");
props.setProperty(TSDBDriver.PROPERTY_KEY_RECONNECT_RETRY_COUNT, "3");
// Multiple endpoints; one is chosen at random when connecting
String url = "jdbc:TAOS-WS://adapter1:6041,adapter2:6041,adapter3:6041/power";
Connection conn = DriverManager.getConnection(url, props);
Advanced load-balancing configuration (node health checks and rebalancing):
java
Properties props = new Properties();
// Basic configuration
props.setProperty("user", "root");
props.setProperty("password", "taosdata");
props.setProperty(TSDBDriver.PROPERTY_KEY_ENABLE_AUTO_RECONNECT, "true");
// Node health-check configuration
props.setProperty(TSDBDriver.PROPERTY_KEY_HEALTH_CHECK_INIT_INTERVAL, "10"); // initial probe interval (seconds)
props.setProperty(TSDBDriver.PROPERTY_KEY_HEALTH_CHECK_MAX_INTERVAL, "300"); // maximum probe interval (seconds)
props.setProperty(TSDBDriver.PROPERTY_KEY_HEALTH_CHECK_CON_TIMEOUT, "1"); // probe connection timeout (seconds)
props.setProperty(TSDBDriver.PROPERTY_KEY_HEALTH_CHECK_CMD_TIMEOUT, "5"); // probe command timeout (seconds)
props.setProperty(TSDBDriver.PROPERTY_KEY_HEALTH_CHECK_RECOVERY_COUNT, "3"); // consecutive successes required to mark a node recovered
props.setProperty(TSDBDriver.PROPERTY_KEY_HEALTH_CHECK_RECOVERY_INTERVAL, "60"); // interval between recovery probes (seconds)
// Rebalancing configuration
props.setProperty(TSDBDriver.PROPERTY_KEY_REBALANCE_THRESHOLD, "20"); // imbalance threshold that triggers rebalancing (%)
props.setProperty(TSDBDriver.PROPERTY_KEY_REBALANCE_CON_BASE_COUNT, "30"); // minimum connection count before rebalancing kicks in
Efficient Write Mode
How Efficient Write Mode Works
Efficient write mode (async write mode) is a high-throughput write path available only on WebSocket connections. It improves write performance through the following mechanisms:
- Asynchronous batching: rows are buffered locally and submitted in batches once a threshold is reached
- Concurrent background writes: a background thread pool writes batches in parallel
- Automatic retries: failed writes are retried automatically
┌─────────────┐    ┌─────────────┐    ┌─────────────┐    ┌─────────────┐
│ Application │───>│ Local Cache │───>│ Write Thread│───>│  TDengine   │
│  addBatch   │    │  (by row)   │    │    Pool     │    │   Server    │
└─────────────┘    └─────────────┘    └─────────────┘    └─────────────┘
                    batchSizeByRow     backendWriteThreadNum
                    cacheSizeByRow
Efficient Write Configuration in Detail
java
Properties props = new Properties();
props.setProperty("user", "root");
props.setProperty("password", "taosdata");
// Enable efficient write mode (currently only the stmt mode is supported)
props.setProperty(TSDBDriver.PROPERTY_KEY_ASYNC_WRITE, "stmt");
// Number of background write threads (tune to CPU cores and server-side vnode count)
// Suggested value: min(CPU cores, number of vnodes)
props.setProperty(TSDBDriver.PROPERTY_KEY_BACKEND_WRITE_THREAD_NUM, "10");
// Rows per batch (controls the payload size of each network round trip)
// Suggested value: 1000-10000, depending on row width
props.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_SIZE_BY_ROW, "1000");
// Local cache size in rows; addBatch blocks once this limit is reached
// Suggested value: 5-10x batchSizeByRow
props.setProperty(TSDBDriver.PROPERTY_KEY_CACHE_SIZE_BY_ROW, "10000");
// Whether to copy binary data (set to true if the application reuses byte[] objects)
props.setProperty(TSDBDriver.PROPERTY_KEY_COPY_DATA, "false");
// Whether to strictly validate table names and data lengths
props.setProperty(TSDBDriver.PROPERTY_KEY_STRICT_CHECK, "false");
// Number of retries on write failure
props.setProperty(TSDBDriver.PROPERTY_KEY_RETRY_TIMES, "3");
String url = "jdbc:TAOS-WS://192.168.1.100:6041/power";
Connection conn = DriverManager.getConnection(url, props);
Efficient Write Best Practices
java
public class HighPerformanceWriter {
private static final int BATCH_SIZE = 1000;
public void writeData(Connection conn, List<SensorData> dataList) throws SQLException {
String sql = "INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)";
try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
// Unwrap to the WebSocket extension interface for table and tag binding
TSWSPreparedStatement stmt = pstmt.unwrap(TSWSPreparedStatement.class);
int count = 0;
for (SensorData data : dataList) {
// Set the sub-table name
stmt.setTableName(data.getTableName());
// Set tags (only needed the first time a sub-table is written)
stmt.setTagInt(0, data.getGroupId());
stmt.setTagString(1, data.getLocation());
// Set column values
pstmt.setTimestamp(1, data.getTs());
pstmt.setFloat(2, data.getCurrent());
pstmt.setInt(3, data.getVoltage());
pstmt.setFloat(4, data.getPhase());
pstmt.addBatch();
count++;
// Flush periodically so data does not pile up in memory
if (count % BATCH_SIZE == 0) {
pstmt.executeBatch();
}
}
// Flush the remaining rows
if (count % BATCH_SIZE != 0) {
pstmt.executeBatch();
}
}
}
}
Parameter Binding Serialization Optimization (Experimental)
When each sub-table binds only a single row per batch, the line mode can be enabled for a further performance gain:
java
Properties props = new Properties();
props.setProperty("user", "root");
props.setProperty("password", "taosdata");
// Experimental feature for workloads where each sub-table gets only one row per batch
// Note: cannot be combined with efficient write mode
props.setProperty(TSDBDriver.PROPERTY_KEY_PBS_MODE, "line");
Advanced Parameter Binding
Column-Wise Batch Binding
Compared with row-by-row binding, column-wise binding significantly reduces the number of JNI/WebSocket calls:
java
public void columnBinding(Connection conn) throws SQLException {
String sql = "INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)";
try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
// Prepare the column data
int batchSize = 10000;
ArrayList<Long> tsCol = new ArrayList<>(batchSize);
ArrayList<Float> currentCol = new ArrayList<>(batchSize);
ArrayList<Integer> voltageCol = new ArrayList<>(batchSize);
ArrayList<Float> phaseCol = new ArrayList<>(batchSize);
long baseTs = System.currentTimeMillis();
for (int i = 0; i < batchSize; i++) {
tsCol.add(baseTs + i);
currentCol.add(10.0f + (float)(Math.random() * 5));
voltageCol.add(200 + (int)(Math.random() * 20));
phaseCol.add(0.3f + (float)(Math.random() * 0.1));
}
// Unwrap to the WebSocket extension interface
TSWSPreparedStatement stmt = pstmt.unwrap(TSWSPreparedStatement.class);
// Set the sub-table name and tags
stmt.setTableName("d1001");
stmt.setTagInt(0, 1);
stmt.setTagString(1, "California.SanFrancisco");
// Column-wise binding (bind an entire column at once)
stmt.setTimestamp(0, tsCol);
stmt.setFloat(1, currentCol);
stmt.setInt(2, voltageCol);
stmt.setFloat(3, phaseCol);
// Add the batch and execute
stmt.columnDataAddBatch();
stmt.columnDataExecuteBatch();
}
}
Writing to Multiple Tables Alternately
Real-world workloads often need to write to multiple sub-tables in turn:
java
public void multiTableWrite(Connection conn, Map<String, List<SensorData>> tableDataMap)
throws SQLException {
String sql = "INSERT INTO ? USING meters TAGS(?, ?) VALUES(?, ?, ?, ?)";
try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
TSWSPreparedStatement stmt = pstmt.unwrap(TSWSPreparedStatement.class);
for (Map.Entry<String, List<SensorData>> entry : tableDataMap.entrySet()) {
String tableName = entry.getKey();
List<SensorData> dataList = entry.getValue();
// Switch to this sub-table
stmt.setTableName(tableName);
// Set tags (consider caching them to avoid setting them repeatedly)
SensorData first = dataList.get(0);
stmt.setTagInt(0, first.getGroupId());
stmt.setTagString(1, first.getLocation());
// Prepare the column data for this table
ArrayList<Long> tsCol = new ArrayList<>();
ArrayList<Float> currentCol = new ArrayList<>();
ArrayList<Integer> voltageCol = new ArrayList<>();
ArrayList<Float> phaseCol = new ArrayList<>();
for (SensorData data : dataList) {
tsCol.add(data.getTs().getTime());
currentCol.add(data.getCurrent());
voltageCol.add(data.getVoltage());
phaseCol.add(data.getPhase());
}
stmt.setTimestamp(0, tsCol);
stmt.setFloat(1, currentCol);
stmt.setInt(2, voltageCol);
stmt.setFloat(3, phaseCol);
stmt.columnDataAddBatch();
}
// Execute the writes for all tables in one call
stmt.columnDataExecuteBatch();
}
}
Handling the GEOMETRY Type
Use the JTS library to work with GEOMETRY data:
xml
<!-- add the dependency to pom.xml -->
<dependency>
<groupId>org.locationtech.jts</groupId>
<artifactId>jts-core</artifactId>
<version>1.19.0</version>
</dependency>
java
import org.locationtech.jts.geom.*;
import org.locationtech.jts.io.ByteOrderValues;
import org.locationtech.jts.io.WKBWriter;
public class GeometryExample {
private final GeometryFactory geometryFactory = new GeometryFactory();
private final WKBWriter wkbWriter = new WKBWriter(2, ByteOrderValues.LITTLE_ENDIAN);
public void insertGeometry(Connection conn) throws SQLException {
// Create a Point
Point point = geometryFactory.createPoint(new Coordinate(116.397428, 39.90923));
byte[] pointWkb = wkbWriter.write(point);
// Create a LineString
Coordinate[] lineCoords = {
new Coordinate(116.0, 39.0),
new Coordinate(117.0, 40.0),
new Coordinate(118.0, 39.5)
};
LineString line = geometryFactory.createLineString(lineCoords);
byte[] lineWkb = wkbWriter.write(line);
// Create a Polygon
Coordinate[] polygonCoords = {
new Coordinate(116.0, 39.0),
new Coordinate(117.0, 39.0),
new Coordinate(117.0, 40.0),
new Coordinate(116.0, 40.0),
new Coordinate(116.0, 39.0) // close the ring
};
Polygon polygon = geometryFactory.createPolygon(polygonCoords);
byte[] polygonWkb = wkbWriter.write(polygon);
String sql = "INSERT INTO geo_table VALUES(?, ?, ?, ?)";
try (PreparedStatement pstmt = conn.prepareStatement(sql)) {
pstmt.setTimestamp(1, new Timestamp(System.currentTimeMillis()));
pstmt.setBytes(2, pointWkb);
pstmt.setBytes(3, lineWkb);
pstmt.setBytes(4, polygonWkb);
pstmt.executeUpdate();
}
}
}
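Reading GEOMETRY columns back works the same way in reverse: the connector returns the column as WKB bytes, which WKBReader can parse. A minimal sketch, assuming the geo_table used above and a hypothetical column name point_col:
java
import org.locationtech.jts.geom.Geometry;
import org.locationtech.jts.io.ParseException;
import org.locationtech.jts.io.WKBReader;
public class GeometryReadExample {
    private final WKBReader wkbReader = new WKBReader();
    public void queryGeometry(Connection conn) throws SQLException {
        // "point_col" is illustrative; use the actual column names of geo_table
        String sql = "SELECT ts, point_col FROM geo_table LIMIT 10";
        try (Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(sql)) {
            while (rs.next()) {
                Timestamp ts = rs.getTimestamp(1);
                byte[] wkb = rs.getBytes(2); // GEOMETRY values come back as WKB bytes
                if (wkb == null) continue;
                try {
                    Geometry geom = wkbReader.read(wkb);
                    System.out.printf("%s -> %s%n", ts, geom.toText());
                } catch (ParseException e) {
                    throw new SQLException("Failed to parse WKB", e);
                }
            }
        }
    }
}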
Connection Pool Configuration
Recommended HikariCP Configuration
java
import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
public class HikariCPConfig {
public static HikariDataSource createDataSource() {
HikariConfig config = new HikariConfig();
// Basic configuration
config.setJdbcUrl("jdbc:TAOS-WS://192.168.1.100:6041/power");
config.setUsername("root");
config.setPassword("taosdata");
config.setDriverClassName("com.taosdata.jdbc.ws.WebSocketDriver");
// Pool sizing
// Rule of thumb: connections = ((core_count * 2) + effective_spindle_count)
// For SSDs and highly concurrent workloads, 20-50 is a reasonable range
config.setMaximumPoolSize(30);
config.setMinimumIdle(10);
// Connection lifecycle
config.setMaxLifetime(1800000); // 30 minutes
config.setIdleTimeout(600000); // 10 minutes
config.setConnectionTimeout(30000); // 30 seconds
// Connection validation
config.setConnectionTestQuery("SELECT SERVER_VERSION()");
config.setValidationTimeout(5000); // 5 seconds
// TDengine-specific settings
config.addDataSourceProperty("httpConnectTimeout", "60000");
config.addDataSourceProperty("messageWaitTimeout", "60000");
config.addDataSourceProperty("enableAutoReconnect", "true");
config.addDataSourceProperty("reconnectRetryCount", "3");
// Performance tuning
config.addDataSourceProperty("enableCompression", "true");
return new HikariDataSource(config);
}
}
Druid Connection Pool Configuration
java
import com.alibaba.druid.pool.DruidDataSource;
public class DruidConfig {
public static DruidDataSource createDataSource() {
DruidDataSource dataSource = new DruidDataSource();
// Basic configuration
dataSource.setUrl("jdbc:TAOS-WS://192.168.1.100:6041/power");
dataSource.setUsername("root");
dataSource.setPassword("taosdata");
dataSource.setDriverClassName("com.taosdata.jdbc.ws.WebSocketDriver");
// Pool configuration
dataSource.setInitialSize(10);
dataSource.setMinIdle(10);
dataSource.setMaxActive(30);
dataSource.setMaxWait(60000);
// Connection validation
dataSource.setValidationQuery("SELECT SERVER_VERSION()");
dataSource.setTestWhileIdle(true);
dataSource.setTestOnBorrow(false);
dataSource.setTestOnReturn(false);
// Connection-leak detection
dataSource.setRemoveAbandoned(true);
dataSource.setRemoveAbandonedTimeout(300);
dataSource.setLogAbandoned(true);
// Scheduled eviction checks
dataSource.setTimeBetweenEvictionRunsMillis(60000);
dataSource.setMinEvictableIdleTimeMillis(300000);
// Connection properties
dataSource.setConnectionProperties(
"httpConnectTimeout=60000;messageWaitTimeout=60000;" +
"enableAutoReconnect=true;reconnectRetryCount=3"
);
return dataSource;
}
}
Connection Pool Monitoring
java
public class ConnectionPoolMonitor {
private final HikariDataSource dataSource;
private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
public ConnectionPoolMonitor(HikariDataSource dataSource) {
this.dataSource = dataSource;
}
public void startMonitoring() {
scheduler.scheduleAtFixedRate(() -> {
HikariPoolMXBean poolMXBean = dataSource.getHikariPoolMXBean();
System.out.printf(
"Pool Stats - Active: %d, Idle: %d, Total: %d, Waiting: %d%n",
poolMXBean.getActiveConnections(),
poolMXBean.getIdleConnections(),
poolMXBean.getTotalConnections(),
poolMXBean.getThreadsAwaitingConnection()
);
// Alert threshold checks
if (poolMXBean.getThreadsAwaitingConnection() > 5) {
System.err.println("WARNING: Too many threads waiting for connection!");
}
float utilization = (float) poolMXBean.getActiveConnections() /
dataSource.getMaximumPoolSize();
if (utilization > 0.8) {
System.err.println("WARNING: Connection pool utilization > 80%!");
}
}, 0, 30, TimeUnit.SECONDS);
}
public void shutdown() {
scheduler.shutdown();
}
}
Advanced Data Subscription
Consumer Group Management
java
public class ConsumerGroupManager {
private final Properties baseProps;
private final List<TaosConsumer<?>> consumers = new CopyOnWriteArrayList<>();
public ConsumerGroupManager(String servers, String groupId) {
baseProps = new Properties();
baseProps.setProperty("td.connect.type", "ws");
baseProps.setProperty("bootstrap.servers", servers);
baseProps.setProperty("group.id", groupId);
baseProps.setProperty("enable.auto.commit", "false"); // commit manually so no data is lost
baseProps.setProperty("auto.offset.reset", "earliest");
// High-availability settings
baseProps.setProperty(TSDBDriver.PROPERTY_KEY_ENABLE_AUTO_RECONNECT, "true");
baseProps.setProperty(TSDBDriver.PROPERTY_KEY_RECONNECT_RETRY_COUNT, "5");
baseProps.setProperty(TSDBDriver.PROPERTY_KEY_RECONNECT_INTERVAL_MS, "3000");
}
/**
* Create multiple consumer instances for parallel consumption
*/
public void createConsumers(int count, String topic,
ConsumerRecordHandler handler) throws SQLException {
for (int i = 0; i < count; i++) {
Properties props = new Properties();
props.putAll(baseProps);
props.setProperty("client.id", "consumer-" + i);
props.setProperty("value.deserializer",
"com.taosdata.jdbc.tmq.MapDeserializer");
TaosConsumer<Map<String, Object>> consumer = new TaosConsumer<>(props);
consumer.subscribe(Collections.singletonList(topic));
consumers.add(consumer);
// Start the consumer thread
new Thread(() -> consumeLoop(consumer, handler), "consumer-thread-" + i).start();
}
}
private void consumeLoop(TaosConsumer<Map<String, Object>> consumer,
ConsumerRecordHandler handler) {
try {
while (!Thread.currentThread().isInterrupted()) {
ConsumerRecords<Map<String, Object>> records =
consumer.poll(Duration.ofMillis(500));
if (records.isEmpty()) {
continue;
}
try {
for (ConsumerRecord<Map<String, Object>> record : records) {
handler.handle(record);
}
// Commit manually only after processing succeeds
consumer.commitSync();
} catch (Exception e) {
// Processing failed; do not commit the offset so the records are re-consumed on the next poll
System.err.println("Process failed, will retry: " + e.getMessage());
}
}
} catch (SQLException e) {
System.err.println("Consumer error: " + e.getMessage());
}
}
public void shutdown() {
for (TaosConsumer<?> consumer : consumers) {
try {
consumer.unsubscribe();
consumer.close();
} catch (SQLException e) {
e.printStackTrace();
}
}
}
@FunctionalInterface
public interface ConsumerRecordHandler {
void handle(ConsumerRecord<Map<String, Object>> record) throws Exception;
}
}
Precise Offset Control
java
public class OffsetManager {
private final TaosConsumer<Map<String, Object>> consumer;
public OffsetManager(TaosConsumer<Map<String, Object>> consumer) {
this.consumer = consumer;
}
/**
* Start consuming from a given point in time
*/
public void seekToTimestamp(String topic, long timestamp) throws SQLException {
// Get all assigned partitions
Set<TopicPartition> partitions = consumer.assignment();
// Get the beginning and end offset of each partition
Map<TopicPartition, Long> beginOffsets = consumer.beginningOffsets(topic);
Map<TopicPartition, Long> endOffsets = consumer.endOffsets(topic);
for (TopicPartition partition : partitions) {
if (partition.getTopic().equals(topic)) {
// Binary-search for the offset matching the timestamp
long targetOffset = binarySearchOffset(partition, timestamp,
beginOffsets.get(partition), endOffsets.get(partition));
consumer.seek(partition, targetOffset);
System.out.printf("Partition %d seek to offset %d%n",
partition.getVGroupId(), targetOffset);
}
}
}
/**
* Save the consumption progress to external storage
*/
public void saveOffsets(Map<TopicPartition, Long> offsets) {
// In a real project, persist these to Redis or a database
for (Map.Entry<TopicPartition, Long> entry : offsets.entrySet()) {
System.out.printf("Save offset: topic=%s, vgId=%d, offset=%d%n",
entry.getKey().getTopic(),
entry.getKey().getVGroupId(),
entry.getValue());
}
}
/**
* Restore the consumption progress from external storage
*/
public void restoreOffsets(String topic) throws SQLException {
// Load the saved offsets from Redis or a database
Map<TopicPartition, Long> savedOffsets = loadSavedOffsets(topic);
for (Map.Entry<TopicPartition, Long> entry : savedOffsets.entrySet()) {
consumer.seek(entry.getKey(), entry.getValue());
}
}
private long binarySearchOffset(TopicPartition partition, long timestamp,
long beginOffset, long endOffset) {
// Simplified; a real implementation would binary-search by message timestamp
return beginOffset;
}
private Map<TopicPartition, Long> loadSavedOffsets(String topic) {
// Load from persistent storage
return new HashMap<>();
}
}
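The saveOffsets/loadSavedOffsets methods above are placeholders. As a minimal sketch of the persistence side, here a local properties file stands in for the Redis or database store mentioned in the comments; the class name and key format are illustrative, and TopicPartition is the TMQ class already used above:
java
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
public class FileOffsetStore {
    private final Path file;
    public FileOffsetStore(Path file) {
        this.file = file;
    }
    // Persist offsets as "topic:vgId=offset" entries
    public void save(Map<TopicPartition, Long> offsets) throws IOException {
        Properties props = new Properties();
        for (Map.Entry<TopicPartition, Long> e : offsets.entrySet()) {
            String key = e.getKey().getTopic() + ":" + e.getKey().getVGroupId();
            props.setProperty(key, String.valueOf(e.getValue()));
        }
        try (OutputStream out = Files.newOutputStream(file)) {
            props.store(out, "tmq offsets");
        }
    }
    // Load the saved offsets for one topic, keyed by "topic:vgId"
    public Map<String, Long> load(String topic) throws IOException {
        Map<String, Long> result = new HashMap<>();
        if (!Files.exists(file)) {
            return result;
        }
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(file)) {
            props.load(in);
        }
        for (String key : props.stringPropertyNames()) {
            if (key.startsWith(topic + ":")) {
                result.put(key, Long.parseLong(props.getProperty(key)));
            }
        }
        return result;
    }
}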
Subscribing to Databases and Super Tables
Subscribing to a database or a super table is supported starting from version 3.6.2:
java
public class DatabaseSubscription {
/**
* Subscribe to a database (receive data changes from all of its tables)
*/
public void subscribeDatabase() throws SQLException {
Properties props = new Properties();
props.setProperty("td.connect.type", "ws");
props.setProperty("bootstrap.servers", "192.168.1.100:6041");
props.setProperty("group.id", "db-subscriber");
props.setProperty("enable.auto.commit", "true");
props.setProperty("auto.commit.interval.ms", "1000");
// MapEnhanceDeserializer is required when subscribing to a database
props.setProperty("value.deserializer",
"com.taosdata.jdbc.tmq.MapEnhanceDeserializer");
// Create the consumer
TaosConsumer<TMQEnhMap> consumer = new TaosConsumer<>(props);
// Subscribe to the database (special subscription syntax)
consumer.subscribe(Collections.singletonList("power")); // database name
try {
while (true) {
ConsumerRecords<TMQEnhMap> records = consumer.poll(Duration.ofMillis(500));
for (ConsumerRecord<TMQEnhMap> record : records) {
TMQEnhMap value = record.value();
// Get the table name
String tableName = value.getTableName();
// Get the row data
Map<String, Object> data = value.getData();
System.out.printf("Table: %s, Data: %s%n", tableName, data);
}
consumer.commitSync();
}
} finally {
consumer.unsubscribe();
consumer.close();
}
}
}
Schemaless Write Optimization
Batch Line-Protocol Writes
java
public class SchemalessWriter {
private final Connection conn;
public SchemalessWriter(Connection conn) {
this.conn = conn;
}
/**
* High-throughput line-protocol writes
*/
public void writeInfluxDBLineProtocol(List<String> lines) throws SQLException {
AbstractConnection absConn = conn.unwrap(AbstractConnection.class);
// Option 1: array form (suitable for moderate data volumes)
String[] lineArray = lines.toArray(new String[0]);
absConn.write(lineArray,
SchemalessProtocolType.LINE,
SchemalessTimestampType.NANO_SECONDS);
// Option 2: raw string form (suitable for large batches; avoids extra memory copies)
String rawData = String.join("\n", lines);
int affectedRows = absConn.writeRaw(rawData,
SchemalessProtocolType.LINE,
SchemalessTimestampType.NANO_SECONDS);
System.out.println("Affected rows: " + affectedRows);
}
/**
* Write with a TTL
*/
public void writeWithTTL(String[] lines, int ttlDays, long reqId) throws SQLException {
AbstractConnection absConn = conn.unwrap(AbstractConnection.class);
absConn.write(lines,
SchemalessProtocolType.LINE,
SchemalessTimestampType.MILLI_SECONDS,
ttlDays, // data retention in days
reqId); // request ID, used for tracing
}
/**
* Build a line-protocol record
*/
public String buildLineProtocol(String measurement,
Map<String, String> tags,
Map<String, Object> fields,
long timestampNanos) {
StringBuilder sb = new StringBuilder();
// measurement
sb.append(escapeKey(measurement));
// tags
for (Map.Entry<String, String> tag : tags.entrySet()) {
sb.append(',')
.append(escapeKey(tag.getKey()))
.append('=')
.append(escapeTagValue(tag.getValue()));
}
sb.append(' ');
// fields
boolean first = true;
for (Map.Entry<String, Object> field : fields.entrySet()) {
if (!first) sb.append(',');
first = false;
sb.append(escapeKey(field.getKey()))
.append('=')
.append(formatFieldValue(field.getValue()));
}
// timestamp
sb.append(' ').append(timestampNanos);
return sb.toString();
}
private String escapeKey(String key) {
return key.replace(",", "\\,")
.replace("=", "\\=")
.replace(" ", "\\ ");
}
private String escapeTagValue(String value) {
return value.replace(",", "\\,")
.replace("=", "\\=")
.replace(" ", "\\ ");
}
private String formatFieldValue(Object value) {
if (value instanceof String) {
return "\"" + ((String) value).replace("\"", "\\\"") + "\"";
} else if (value instanceof Integer || value instanceof Long) {
return value + "i";
} else if (value instanceof Float || value instanceof Double) {
return value.toString();
} else if (value instanceof Boolean) {
return value.toString();
}
return value.toString();
}
}
Query Performance Optimization
Handling Large Result Sets
java
public class LargeResultSetHandler {
/**
* Stream-process a large result set
*/
public void streamProcess(Connection conn, String sql,
RowProcessor processor) throws SQLException {
try (Statement stmt = conn.createStatement()) {
// fetchSize controls how many rows are pulled from the server per round trip
// Smaller values reduce memory usage, larger values reduce network round trips
stmt.setFetchSize(1000);
// Query timeout (seconds)
stmt.setQueryTimeout(300);
try (ResultSet rs = stmt.executeQuery(sql)) {
ResultSetMetaData meta = rs.getMetaData();
int columnCount = meta.getColumnCount();
while (rs.next()) {
Map<String, Object> row = new LinkedHashMap<>();
for (int i = 1; i <= columnCount; i++) {
row.put(meta.getColumnLabel(i), rs.getObject(i));
}
processor.process(row);
}
}
}
}
/**
* Paged query (for scenarios that need random access)
*/
public List<Map<String, Object>> queryWithPagination(Connection conn,
String baseSql,
int offset,
int limit) throws SQLException {
String pagedSql = baseSql + " LIMIT " + limit + " OFFSET " + offset;
List<Map<String, Object>> results = new ArrayList<>();
try (Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery(pagedSql)) {
ResultSetMetaData meta = rs.getMetaData();
int columnCount = meta.getColumnCount();
while (rs.next()) {
Map<String, Object> row = new LinkedHashMap<>();
for (int i = 1; i <= columnCount; i++) {
row.put(meta.getColumnLabel(i), rs.getObject(i));
}
results.add(row);
}
}
return results;
}
@FunctionalInterface
public interface RowProcessor {
void process(Map<String, Object> row);
}
}
Request Tracing
java
public class RequestTracer {
private static final AtomicLong requestIdGenerator = new AtomicLong(0);
/**
* Generate a unique request ID
*/
public static long generateRequestId() {
// Layout: timestamp (high 32 bits) + sequence number (low 32 bits)
long timestamp = System.currentTimeMillis() / 1000;
long sequence = requestIdGenerator.incrementAndGet() & 0xFFFFFFFFL;
return (timestamp << 32) | sequence;
}
/**
* Execute a query with tracing
*/
public ResultSet executeWithTrace(Connection conn, String sql) throws SQLException {
long reqId = generateRequestId();
long startTime = System.currentTimeMillis();
// No try-with-resources here: closing the Statement would also close the ResultSet we return
Statement stmt = conn.createStatement();
try {
// Pass the request ID through the extension interface
AbstractStatement absStmt = stmt.unwrap(AbstractStatement.class);
ResultSet rs = absStmt.executeQuery(sql, reqId);
long duration = System.currentTimeMillis() - startTime;
System.out.printf("[ReqId: %d] Query completed in %d ms%n", reqId, duration);
return rs; // the caller is responsible for closing the ResultSet and its Statement
} catch (SQLException e) {
long duration = System.currentTimeMillis() - startTime;
System.err.printf("[ReqId: %d] Query failed after %d ms: %s%n",
reqId, duration, e.getMessage());
stmt.close();
throw e;
}
}
}
Error Handling Best Practices
Categorized Exception Handling
java
public class ExceptionHandler {
// Error codes that are safe to retry
private static final Set<Integer> RETRYABLE_ERROR_CODES = Set.of(
0x0200, // TSC_DISCONNECTED
0x0204, // TSC_INVALID_CONNECTION
0x0B00, // RPC_NETWORK_UNAVAIL
0x0B01 // RPC_TIMEOUT
);
// Error codes that require re-establishing the connection
private static final Set<Integer> RECONNECT_ERROR_CODES = Set.of(
0x0200, // TSC_DISCONNECTED
0x0204 // TSC_INVALID_CONNECTION
);
public void executeWithRetry(Connection conn, String sql,
int maxRetries) throws SQLException {
int retries = 0;
SQLException lastException = null;
while (retries < maxRetries) {
try {
try (Statement stmt = conn.createStatement()) {
stmt.executeUpdate(sql);
return; // executed successfully
}
} catch (SQLException e) {
lastException = e;
int errorCode = e.getErrorCode();
if (RETRYABLE_ERROR_CODES.contains(errorCode)) {
retries++;
long delay = calculateBackoff(retries);
System.err.printf("Retryable error (code: 0x%X), retry %d after %d ms%n",
errorCode, retries, delay);
try {
Thread.sleep(delay);
} catch (InterruptedException ie) {
Thread.currentThread().interrupt();
throw e;
}
// Check whether the connection needs to be rebuilt
if (RECONNECT_ERROR_CODES.contains(errorCode) && !conn.isValid(5)) {
throw new SQLException("Connection lost, need reconnect", e);
}
} else {
// Non-retryable error; rethrow immediately
throw e;
}
}
}
throw new SQLException("Max retries exceeded", lastException);
}
private long calculateBackoff(int retryCount) {
// Exponential backoff (delay doubles with each retry), capped at 10 seconds
return Math.min(100L * (1L << retryCount), 10000L);
}
/**
* Parse a SQLException into detailed information
*/
public ErrorInfo parseException(SQLException e) {
ErrorInfo info = new ErrorInfo();
info.errorCode = e.getErrorCode();
info.sqlState = e.getSQLState();
info.message = e.getMessage();
// Classify the error
int errorClass = (info.errorCode >> 8) & 0xFF;
switch (errorClass) {
case 0x02:
info.category = "Client Error";
break;
case 0x03:
info.category = "Server Error";
break;
case 0x05:
info.category = "Data Error";
break;
case 0x0B:
info.category = "Network Error";
break;
default:
info.category = "Unknown Error";
}
return info;
}
public static class ErrorInfo {
public int errorCode;
public String sqlState;
public String message;
public String category;
@Override
public String toString() {
return String.format("[%s] Code: 0x%X, State: %s, Message: %s",
category, errorCode, sqlState, message);
}
}
}
Connection Health Check
java
public class ConnectionHealthChecker {
private final DataSource dataSource;
private final ScheduledExecutorService scheduler;
public ConnectionHealthChecker(DataSource dataSource) {
this.dataSource = dataSource;
this.scheduler = Executors.newSingleThreadScheduledExecutor();
}
public void startHealthCheck(long intervalSeconds) {
scheduler.scheduleAtFixedRate(this::checkHealth,
0, intervalSeconds, TimeUnit.SECONDS);
}
private void checkHealth() {
try (Connection conn = dataSource.getConnection()) {
// 1. Basic connection check
if (!conn.isValid(5)) {
reportUnhealthy("Connection invalid");
return;
}
// 2. Run a lightweight query
long startTime = System.currentTimeMillis();
try (Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT SERVER_VERSION()")) {
if (rs.next()) {
String version = rs.getString(1);
long latency = System.currentTimeMillis() - startTime;
if (latency > 1000) {
reportWarning("High latency: " + latency + "ms");
} else {
reportHealthy(version, latency);
}
}
}
// 3. Check the cluster status
try (Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SHOW CLUSTER ALIVE")) {
// Process the cluster health status here
}
} catch (SQLException e) {
reportUnhealthy("Health check failed: " + e.getMessage());
}
}
private void reportHealthy(String version, long latency) {
System.out.printf("Health OK - Version: %s, Latency: %dms%n", version, latency);
}
private void reportWarning(String message) {
System.err.println("Health WARNING - " + message);
}
private void reportUnhealthy(String message) {
System.err.println("Health CRITICAL - " + message);
}
public void shutdown() {
scheduler.shutdown();
}
}
Framework Integration
Spring Boot Integration
java
@Configuration
public class TDengineConfig {
@Bean
@ConfigurationProperties(prefix = "spring.datasource.tdengine")
public DataSource tdengineDataSource() {
return DataSourceBuilder.create()
.type(HikariDataSource.class)
.build();
}
@Bean
public JdbcTemplate tdengineJdbcTemplate(
@Qualifier("tdengineDataSource") DataSource dataSource) {
JdbcTemplate template = new JdbcTemplate(dataSource);
template.setQueryTimeout(60); // seconds
return template;
}
}
yaml
# application.yml
spring:
datasource:
tdengine:
driver-class-name: com.taosdata.jdbc.ws.WebSocketDriver
jdbc-url: jdbc:TAOS-WS://192.168.1.100:6041/power
username: root
password: taosdata
hikari:
maximum-pool-size: 20
minimum-idle: 5
connection-timeout: 30000
idle-timeout: 600000
max-lifetime: 1800000
connection-test-query: SELECT SERVER_VERSION()
data-source-properties:
httpConnectTimeout: 60000
messageWaitTimeout: 60000
enableAutoReconnect: true
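With the data source and JdbcTemplate beans in place, a repository can use them directly. A minimal sketch reusing the meters example from earlier sections; the DAO name, tag values, and SQL are illustrative:
java
@Repository
public class MeterDao {
    private final JdbcTemplate jdbcTemplate;
    public MeterDao(@Qualifier("tdengineJdbcTemplate") JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }
    // Insert one row into a sub-table, auto-creating it from the meters super table
    public void insert(String table, Timestamp ts, float current, int voltage, float phase) {
        jdbcTemplate.update(
                "INSERT INTO " + table + " USING meters TAGS(1, 'California.SanFrancisco')"
                        + " VALUES(?, ?, ?, ?)",
                ts, current, voltage, phase);
    }
    // Query the most recent rows of one sub-table
    public List<Map<String, Object>> latest(String table, int limit) {
        return jdbcTemplate.queryForList(
                "SELECT * FROM " + table + " ORDER BY ts DESC LIMIT " + limit);
    }
}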
MyBatis Plus Integration
java
@Configuration
@MapperScan(basePackages = "com.example.mapper.tdengine",
sqlSessionFactoryRef = "tdengineSqlSessionFactory")
public class TDengineMybatisConfig {
@Bean
public SqlSessionFactory tdengineSqlSessionFactory(
@Qualifier("tdengineDataSource") DataSource dataSource) throws Exception {
MybatisSqlSessionFactoryBean factory = new MybatisSqlSessionFactoryBean();
factory.setDataSource(dataSource);
// MyBatis configuration
MybatisConfiguration configuration = new MybatisConfiguration();
configuration.setMapUnderscoreToCamelCase(true);
configuration.setDefaultStatementTimeout(60);
factory.setConfiguration(configuration);
// Type handlers (for TDengine-specific types)
factory.setTypeHandlers(new TypeHandler[]{
new TimestampTypeHandler(),
new GeometryTypeHandler()
});
return factory.getObject();
}
}
java
// Example of a custom type handler
@MappedJdbcTypes(JdbcType.BINARY)
@MappedTypes(Point.class)
public class GeometryTypeHandler extends BaseTypeHandler<Point> {
private final WKBReader wkbReader = new WKBReader();
private final WKBWriter wkbWriter = new WKBWriter(2, ByteOrderValues.LITTLE_ENDIAN);
@Override
public void setNonNullParameter(PreparedStatement ps, int i,
Point parameter, JdbcType jdbcType) throws SQLException {
ps.setBytes(i, wkbWriter.write(parameter));
}
@Override
public Point getNullableResult(ResultSet rs, String columnName) throws SQLException {
return parseGeometry(rs.getBytes(columnName));
}
@Override
public Point getNullableResult(ResultSet rs, int columnIndex) throws SQLException {
return parseGeometry(rs.getBytes(columnIndex));
}
@Override
public Point getNullableResult(CallableStatement cs, int columnIndex) throws SQLException {
return parseGeometry(cs.getBytes(columnIndex));
}
private Point parseGeometry(byte[] bytes) throws SQLException {
if (bytes == null) return null;
try {
return (Point) wkbReader.read(bytes);
} catch (ParseException e) {
throw new SQLException("Failed to parse geometry", e);
}
}
}
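A corresponding mapper can stay annotation-based. A minimal sketch against the meters super table; the mapper and parameter names are illustrative, and ${tableName} is plain string substitution, so only pass trusted table names into it:
java
@Mapper
public interface MeterMapper {
    // Simple aggregation over the super table
    @Select("SELECT AVG(voltage) FROM meters WHERE ts >= #{start} AND ts < #{end}")
    Double avgVoltage(@Param("start") Timestamp start, @Param("end") Timestamp end);
    // Insert into a sub-table, creating it from the super table if needed
    @Insert("INSERT INTO ${tableName} USING meters TAGS(#{groupId}, #{location}) "
            + "VALUES(#{ts}, #{current}, #{voltage}, #{phase})")
    int insert(@Param("tableName") String tableName,
               @Param("groupId") int groupId,
               @Param("location") String location,
               @Param("ts") Timestamp ts,
               @Param("current") float current,
               @Param("voltage") int voltage,
               @Param("phase") float phase);
}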
Performance Tuning Checklist
Write Performance Optimization
| Item | Recommendation | Notes |
|---|---|---|
| Batch size | 1000-10000 rows per batch | Too small adds network overhead, too large adds memory pressure |
| Efficient write | asyncWrite=stmt | WebSocket connections only; significantly improves write throughput |
| Background threads | min(CPU cores, vnodes) | Too many threads can actually reduce performance |
| Transport compression | enableCompression=true | Enable when network bandwidth is limited |
| Parameter binding | Use column-wise binding | Fewer JNI calls than row-by-row binding |
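Put together, the write-side recommendations above map onto a property set like the following sketch (the values are illustrative starting points, not tuned numbers):
java
Properties writeProps = new Properties();
writeProps.setProperty("user", "root");
writeProps.setProperty("password", "taosdata");
// Efficient write mode (WebSocket connections only, see the section above)
writeProps.setProperty(TSDBDriver.PROPERTY_KEY_ASYNC_WRITE, "stmt");
writeProps.setProperty(TSDBDriver.PROPERTY_KEY_BACKEND_WRITE_THREAD_NUM, "8"); // ~min(CPU cores, vnodes)
writeProps.setProperty(TSDBDriver.PROPERTY_KEY_BATCH_SIZE_BY_ROW, "5000");     // 1000-10000 rows per batch
writeProps.setProperty(TSDBDriver.PROPERTY_KEY_CACHE_SIZE_BY_ROW, "50000");    // 5-10x the batch size
// Transport compression, useful when network bandwidth is the bottleneck
writeProps.setProperty("enableCompression", "true");
Connection writeConn = DriverManager.getConnection(
        "jdbc:TAOS-WS://192.168.1.100:6041/power", writeProps);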
Query Performance Optimization
| Item | Recommendation | Notes |
|---|---|---|
| fetchSize | 1000-5000 | Tune according to the size of a single row |
| Query timeout | Set per business requirements | Keep slow queries from tying up resources |
| Connection reuse | Use a connection pool | Avoid repeatedly creating and destroying connections |
| BI mode | connmode=1 | Metadata queries skip sub-table counts, speeding up BI tools |
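On the query side, fetch size and timeout are set per Statement, and BI mode is enabled through the connmode=1 URL parameter listed in the table. A minimal sketch:
java
String url = "jdbc:TAOS-WS://192.168.1.100:6041/power"; // for BI tools, append "?connmode=1"
try (Connection conn = DriverManager.getConnection(url, "root", "taosdata");
     Statement stmt = conn.createStatement()) {
    stmt.setFetchSize(2000);    // rows pulled per server round trip, tune to row width
    stmt.setQueryTimeout(120);  // seconds; keeps slow queries from holding resources
    try (ResultSet rs = stmt.executeQuery("SELECT ts, current FROM meters LIMIT 100000")) {
        while (rs.next()) {
            // process the row
        }
    }
}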
Connection Management
| Item | Recommendation | Notes |
|---|---|---|
| Pool size | 20-50 | Tune to concurrency and server capacity |
| Auto reconnect | enableAutoReconnect=true | Must be enabled in production |
| Connection validation | wsKeepAlive=300 | Avoid using connections that have already dropped |
| Multiple endpoints | Configure 2-3 endpoints | Enables failover |
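These connection-management recommendations combine naturally with the HikariCP configuration shown earlier. A condensed sketch (endpoint names and pool sizes are illustrative):
java
HikariConfig haConfig = new HikariConfig();
haConfig.setJdbcUrl("jdbc:TAOS-WS://adapter1:6041,adapter2:6041/power"); // 2-3 endpoints for failover
haConfig.setUsername("root");
haConfig.setPassword("taosdata");
haConfig.setMaximumPoolSize(30);                                // 20-50 depending on load
haConfig.setMinimumIdle(10);
haConfig.addDataSourceProperty("enableAutoReconnect", "true");  // required in production
haConfig.addDataSourceProperty("wsKeepAlive", "300");           // value from the table above
HikariDataSource haDataSource = new HikariDataSource(haConfig);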
Troubleshooting Guide
Diagnosing Common Problems
java
public class DiagnosticTool {
/**
* Connection diagnostics
*/
public void diagnoseConnection(String url, Properties props) {
System.out.println("=== Connection Diagnostics ===");
// 1. Check how the URL parses
System.out.println("URL: " + url);
System.out.println("Connection Type: " +
(url.contains("TAOS-WS") ? "WebSocket" : "Native"));
// 2. Try to connect
try (Connection conn = DriverManager.getConnection(url, props)) {
System.out.println("Connection Status: SUCCESS");
// 3. Fetch server information
DatabaseMetaData meta = conn.getMetaData();
System.out.println("Driver Version: " + meta.getDriverVersion());
System.out.println("Database Version: " + meta.getDatabaseProductVersion());
// 4. Run a test query
try (Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SELECT SERVER_STATUS()")) {
if (rs.next()) {
System.out.println("Server Status: " + rs.getString(1));
}
}
// 5. List the databases
try (Statement stmt = conn.createStatement();
ResultSet rs = stmt.executeQuery("SHOW DATABASES")) {
System.out.println("Available Databases:");
while (rs.next()) {
System.out.println(" - " + rs.getString("name"));
}
}
} catch (SQLException e) {
System.err.println("Connection Status: FAILED");
System.err.println("Error Code: 0x" + Integer.toHexString(e.getErrorCode()));
System.err.println("SQL State: " + e.getSQLState());
System.err.println("Message: " + e.getMessage());
// Print suggested fixes
suggestFix(e);
}
}
private void suggestFix(SQLException e) {
System.out.println("\n=== Suggested Fix ===");
int errorCode = e.getErrorCode();
if (errorCode == 0x0200 || errorCode == 0x0204) {
System.out.println("1. Check if TDengine server is running");
System.out.println("2. Verify network connectivity");
System.out.println("3. Check firewall settings (port 6030/6041)");
} else if (errorCode == 0x0357) {
System.out.println("1. Verify username and password");
System.out.println("2. Check user permissions");
} else if (e.getMessage().contains("no taos in java.library.path")) {
System.out.println("1. Install TDengine client driver");
System.out.println("2. Or switch to WebSocket connection");
}
}
/**
* Performance diagnostics
*/
public void diagnosePerformance(Connection conn) throws SQLException {
System.out.println("\n=== Performance Diagnostics ===");
// Test write performance
String testTable = "perf_test_" + System.currentTimeMillis();
try (Statement stmt = conn.createStatement()) {
stmt.execute("CREATE TABLE IF NOT EXISTS " + testTable +
" (ts TIMESTAMP, v1 INT)");
// Single-row INSERT test
long start = System.currentTimeMillis();
for (int i = 0; i < 100; i++) {
stmt.executeUpdate("INSERT INTO " + testTable +
" VALUES(NOW + " + i + "a, " + i + ")");
}
long singleInsertTime = System.currentTimeMillis() - start;
System.out.println("Single INSERT (100 rows): " + singleInsertTime + "ms");
// Batch INSERT test
start = System.currentTimeMillis();
StringBuilder sb = new StringBuilder();
sb.append("INSERT INTO ").append(testTable).append(" VALUES ");
for (int i = 0; i < 1000; i++) {
if (i > 0) sb.append(",");
sb.append("(NOW + ").append(100 + i).append("a, ").append(i).append(")");
}
stmt.executeUpdate(sb.toString());
long batchInsertTime = System.currentTimeMillis() - start;
System.out.println("Batch INSERT (1000 rows): " + batchInsertTime + "ms");
// Clean up
stmt.execute("DROP TABLE " + testTable);
}
System.out.println("\nRecommendations:");
System.out.println("- Use batch INSERT for better performance");
System.out.println("- Consider enabling asyncWrite mode for WebSocket");
System.out.println("- Use PreparedStatement with parameter binding");
}
}
Appendix
Version Compatibility Matrix
| taos-jdbcdriver | TDengine Server | Key features |
|---|---|---|
| 3.7.x | 3.3.x+ | Load balancing, health checks and rebalancing |
| 3.6.x | 3.3.6.0+ | Efficient write mode, Decimal type |
| 3.5.x | 3.3.5.0+ | Parameter binding optimization, time zone setting |
| 3.2.x | 3.0.0.0+ | VARBINARY and GEOMETRY types |
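To pin a driver version that matches the matrix, declare the connector dependency in Maven; the exact version below is only an example, pick one according to the table:
xml
<dependency>
    <groupId>com.taosdata.jdbc</groupId>
    <artifactId>taos-jdbcdriver</artifactId>
    <version>3.7.0</version>
</dependency>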
About TDengine
TDengine is purpose-built for IoT and industrial big data platforms. TDengine TSDB is a high-performance, distributed time-series database (TSDB) with built-in caching, stream processing, and data subscription; TDengine IDMP is an AI-native industrial data management platform that organizes data into a tree-structured catalog, standardizes and contextualizes it, and uses AI to provide real-time analytics, visualization, event management, and alerting.