Adapting Seata to the Kingbase8 (人大金仓) Database
1. Clone the Seata source from GitHub
- The source version adapted in this article is 1.7.0.
```shell
git clone https://github.com/seata/seata.git
```
2. Build the Seata source
Open the downloaded Seata source in IntelliJ IDEA, configure Maven, let it resolve all dependencies, and build the project to confirm it compiles.
3. Adapting Seata to Kingbase8
3.1 Changes in core [seata-core]
Add io.seata.core.store.db.sql.lock.KingbaseLockStoreSql.java.
Note: KingbaseLockStoreSql.java must extend OracleLockStoreSql.java.
```java
package io.seata.core.store.db.sql.lock;
import io.seata.common.loader.LoadLevel;
/**
* @ClassName: KingbaseLockStoreSql
* @Author: zhangsr
* @Date: 2023/12/6 14:41
*/
@LoadLevel(name = "kingbase")
public class KingbaseLockStoreSql extends OracleLockStoreSql {
}
```
Register it in META-INF/services/io.seata.core.store.db.sql.lock.LockStoreSql:
```txt
io.seata.core.store.db.sql.lock.KingbaseLockStoreSql
```
Add io.seata.core.store.db.sql.log.KingBaseLogStoreSqls.java.
Note: KingBaseLogStoreSqls.java must extend OracleLogStoreSqls.java.
```java
package io.seata.core.store.db.sql.log;
import io.seata.common.loader.LoadLevel;
/**
* @ClassName: KingBaseLogStoreSqls
* @Author: zhangsr
* @Date: 2023/12/6 15:19
*/
@LoadLevel(name = "kingbase")
public class KingBaseLogStoreSqls extends OracleLogStoreSqls {
}
```
Register it in META-INF/services/io.seata.core.store.db.sql.log.LogStoreSqls:
```txt
io.seata.core.store.db.sql.log.KingBaseLogStoreSqls
```
Add a KINGBASE entry to io.seata.core.constants.DBType.java; a sketch follows.
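DBType is an enum, so this is a one-line addition. A minimal sketch of the relevant part, assuming the enum keeps its existing name-based lookup: only the KINGBASE entry is new, and the other constants shown are just a subset of what the real enum already declares.
```java
package io.seata.core.constants;

/**
 * Sketch only: the real enum already contains many more database types.
 * The only change needed for this adaptation is the KINGBASE entry.
 */
public enum DBType {

    MYSQL,

    ORACLE,

    POSTGRESQL,

    /**
     * New entry for Kingbase8.
     */
    KINGBASE;

    /**
     * Resolve a DBType from a case-insensitive name, e.g. "kingbase".
     */
    public static DBType valueof(String dbType) {
        for (DBType dt : values()) {
            if (dt.name().equalsIgnoreCase(dbType)) {
                return dt;
            }
        }
        throw new IllegalArgumentException("unknown dbtype: " + dbType);
    }
}
```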
Modify the getValidationQuery() method in io.seata.core.store.db.AbstractDataSourceProvider.java so that it also handles the kingbase type:
```java
protected String getValidationQuery(DBType dbType) {
    if (DBType.ORACLE.equals(dbType) || DBType.KINGBASE.equals(dbType)) {
        return "select sysdate from dual";
    } else {
        return "select 1";
    }
}
```
3.2 Changes in seata-sqlparser-core
Modify io.seata.sqlparser.util.JdbcConstants.java and add a kingbase constant:
```java
String KINGBASE = "kingbase";
```
3.3 Changes in seata-sqlparser-druid
Under io.seata.sqlparser.druid, add a kingbase package and the classes inside it, modeled on the oracle package. Note the places that need changing: (1) the JdbcConstants.KINGBASE constant and (2) the constructors.
Register the new holder in META-INF/services/io.seata.sqlparser.druid.SQLOperateRecognizerHolder; a sketch of the holder follows.
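A minimal sketch of the holder, assuming the new kingbase package mirrors the oracle one and that the Kingbase*Recognizer classes referenced here have already been created by copying their Oracle counterparts. Those recognizer names, and the package io.seata.sqlparser.druid.kingbase, are the ones you create yourself rather than classes shipped with Seata; the service file entry would then be io.seata.sqlparser.druid.kingbase.KingbaseOperateRecognizerHolder.
```java
package io.seata.sqlparser.druid.kingbase;

import com.alibaba.druid.sql.ast.SQLStatement;
import io.seata.common.loader.LoadLevel;
import io.seata.sqlparser.SQLRecognizer;
import io.seata.sqlparser.druid.SQLOperateRecognizerHolder;
import io.seata.sqlparser.util.JdbcConstants;

/**
 * Sketch: resolves the DML recognizers for the kingbase db type,
 * mirroring OracleOperateRecognizerHolder.
 */
@LoadLevel(name = JdbcConstants.KINGBASE)
public class KingbaseOperateRecognizerHolder implements SQLOperateRecognizerHolder {

    @Override
    public SQLRecognizer getDeleteRecognizer(String sql, SQLStatement ast) {
        return new KingbaseDeleteRecognizer(sql, ast);
    }

    @Override
    public SQLRecognizer getInsertRecognizer(String sql, SQLStatement ast) {
        return new KingbaseInsertRecognizer(sql, ast);
    }

    @Override
    public SQLRecognizer getUpdateRecognizer(String sql, SQLStatement ast) {
        return new KingbaseUpdateRecognizer(sql, ast);
    }

    @Override
    public SQLRecognizer getSelectForUpdateRecognizer(String sql, SQLStatement ast) {
        return new KingbaseSelectForUpdateRecognizer(sql, ast);
    }
}
```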
3.4 Changes in seata-rm-datasource
Add io.seata.rm.datasource.exec.kingbase.KingbaseInsertExecutor.java. Note: KingbaseInsertExecutor.java is modeled on OracleInsertExecutor.java.
The code is as follows:
```java
package io.seata.rm.datasource.exec.kingbase;
import io.seata.common.loader.LoadLevel;
import io.seata.common.loader.Scope;
import io.seata.common.util.CollectionUtils;
import io.seata.rm.datasource.StatementProxy;
import io.seata.rm.datasource.exec.BaseInsertExecutor;
import io.seata.rm.datasource.exec.StatementCallback;
import io.seata.sqlparser.SQLInsertRecognizer;
import io.seata.sqlparser.SQLRecognizer;
import io.seata.sqlparser.struct.Null;
import io.seata.sqlparser.struct.Sequenceable;
import io.seata.sqlparser.struct.SqlMethodExpr;
import io.seata.sqlparser.struct.SqlSequenceExpr;
import io.seata.sqlparser.util.ColumnUtils;
import io.seata.sqlparser.util.JdbcConstants;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import java.sql.SQLException;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;
/**
* @ClassName: KingbaseInsertExecutor
* @Author: zhangsr
* @Date: 2023/12/6 15:52
*/
@LoadLevel(name = JdbcConstants.KINGBASE, scope = Scope.PROTOTYPE)
public class KingbaseInsertExecutor extends BaseInsertExecutor implements Sequenceable {
private static final Logger LOGGER = LoggerFactory.getLogger(KingbaseInsertExecutor.class);
/**
* Instantiates a new Abstract dml base executor.
*
* @param statementProxy the statement proxy
* @param statementCallback the statement callback
* @param sqlRecognizer the sql recognizer
*/
public KingbaseInsertExecutor(StatementProxy statementProxy, StatementCallback statementCallback,
SQLRecognizer sqlRecognizer) {
super(statementProxy, statementCallback, sqlRecognizer);
}
/**
* 1. If the insert columns are not empty and do not contain any pk columns,
* it means that there is no pk value in the insert rows, then all the pk values should come from auto-increment.
* <p>
* 2. The pk value exists in insert rows. The possible situations are:
* <ul>
* <li>The insert columns are empty: all pk values can be obtained from insert rows</li>
* <li>The insert columns contain at least one pk column: first obtain the existing pk value from the insert rows, and other from auto-increment</li>
* </ul>
*
* @return {@link Map}<{@link String}, {@link List}<{@link Object}>>
* @throws SQLException the sql exception
*/
@Override
public Map<String, List<Object>> getPkValues() throws SQLException {
List<String> pkColumnNameList = getTableMeta().getPrimaryKeyOnlyName();
Map<String, List<Object>> pkValuesMap = new HashMap<>(pkColumnNameList.size());
// first obtain the existing pk value from the insert rows (if exists)
if (!containsColumns() || containsAnyPk()) {
pkValuesMap.putAll(getPkValuesByColumn());
}
// other from auto-increment
for (String columnName : pkColumnNameList) {
if (!pkValuesMap.containsKey(columnName)) {
pkValuesMap.put(columnName, getGeneratedKeys(columnName));
}
}
return pkValuesMap;
}
/**
* Whether the insert columns contain any pk columns
*
* @return true: contain at least one pk column. false: do not contain any pk columns
*/
public boolean containsAnyPk() {
SQLInsertRecognizer recognizer = (SQLInsertRecognizer)sqlRecognizer;
List<String> insertColumns = recognizer.getInsertColumns();
if (CollectionUtils.isEmpty(insertColumns)) {
return false;
}
List<String> pkColumnNameList = getTableMeta().getPrimaryKeyOnlyName();
if (CollectionUtils.isEmpty(pkColumnNameList)) {
return false;
}
List<String> newColumns = ColumnUtils.delEscape(insertColumns, getDbType());
return pkColumnNameList.stream().anyMatch(pkColumn -> newColumns.contains(pkColumn)
|| CollectionUtils.toUpperList(newColumns).contains(pkColumn.toUpperCase()));
}
@Override
public Map<String, List<Object>> getPkValuesByColumn() throws SQLException {
Map<String, List<Object>> pkValuesMap = parsePkValuesFromStatement();
Set<String> keySet = pkValuesMap.keySet();
for (String pkKey : keySet) {
List<Object> pkValues = pkValuesMap.get(pkKey);
for (int i = 0; i < pkValues.size(); i++) {
if (!pkKey.isEmpty() && pkValues.get(i) instanceof SqlSequenceExpr) {
pkValues.set(i, getPkValuesBySequence((SqlSequenceExpr) pkValues.get(i), pkKey).get(0));
} else if (!pkKey.isEmpty() && pkValues.get(i) instanceof SqlMethodExpr) {
pkValues.set(i, getGeneratedKeys(pkKey).get(0));
} else if (!pkKey.isEmpty() && pkValues.get(i) instanceof Null) {
pkValues.set(i, getGeneratedKeys(pkKey).get(0));
}
}
pkValuesMap.put(pkKey, pkValues);
}
return pkValuesMap;
}
@Override
public String getSequenceSql(SqlSequenceExpr expr) {
return "SELECT " + expr.getSequence() + ".currval FROM DUAL";
}
}
```
Register it in META-INF/services/io.seata.rm.datasource.exec.InsertExecutor:
```txt
io.seata.rm.datasource.exec.kingbase.KingbaseInsertExecutor
```
Add io.seata.rm.datasource.sql.struct.cache.KingbaseTableMetaCache.java. Note: KingbaseTableMetaCache.java is modeled on OracleTableMetaCache.java; the key difference is the load-level annotation:
```java
@LoadLevel(name = JdbcConstants.KINGBASE)
```
The code is as follows:
```java
package io.seata.rm.datasource.sql.struct.cache;
import io.seata.common.exception.NotSupportYetException;
import io.seata.common.exception.ShouldNeverHappenException;
import io.seata.common.loader.LoadLevel;
import io.seata.common.util.StringUtils;
import io.seata.sqlparser.struct.ColumnMeta;
import io.seata.sqlparser.struct.IndexMeta;
import io.seata.sqlparser.struct.IndexType;
import io.seata.sqlparser.struct.TableMeta;
import io.seata.sqlparser.util.JdbcConstants;
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
/**
* @ClassName: KingbaseTableMetaCache
* @Author: zhangsr
* @Date: 2023/12/6 16:00
*/
@LoadLevel(name = JdbcConstants.KINGBASE)
public class KingbaseTableMetaCache extends AbstractTableMetaCache {
@Override
protected String getCacheKey(Connection connection, String tableName, String resourceId) {
StringBuilder cacheKey = new StringBuilder(resourceId);
cacheKey.append(".");
//separate it to schemaName and tableName
String[] tableNameWithSchema = tableName.split("\\.");
String defaultTableName = tableNameWithSchema.length > 1 ? tableNameWithSchema[1] : tableNameWithSchema[0];
//oracle does not implement supportsMixedCaseIdentifiers in DatabaseMetadata
if (defaultTableName.contains("\"")) {
cacheKey.append(defaultTableName.replace("\"", ""));
} else {
// oracle default store in upper case
cacheKey.append(defaultTableName.toUpperCase());
}
return cacheKey.toString();
}
@Override
protected TableMeta fetchSchema(Connection connection, String tableName) throws SQLException {
try {
return resultSetMetaToSchema(connection.getMetaData(), tableName);
} catch (SQLException sqlEx) {
throw sqlEx;
} catch (Exception e) {
throw new SQLException(String.format("Failed to fetch schema of %s", tableName), e);
}
}
private TableMeta resultSetMetaToSchema(DatabaseMetaData dbmd, String tableName) throws SQLException {
TableMeta tm = new TableMeta();
tm.setTableName(tableName);
String[] schemaTable = tableName.split("\\.");
String schemaName = schemaTable.length > 1 ? schemaTable[0] : dbmd.getUserName();
tableName = schemaTable.length > 1 ? schemaTable[1] : tableName;
if (schemaName.contains("\"")) {
schemaName = schemaName.replace("\"", "");
} else {
schemaName = schemaName.toUpperCase();
}
if (tableName.contains("\"")) {
tableName = tableName.replace("\"", "");
} else {
tableName = tableName.toUpperCase();
}
tm.setCaseSensitive(StringUtils.hasLowerCase(tableName));
try (ResultSet rsColumns = dbmd.getColumns("", schemaName, tableName, "%");
ResultSet rsIndex = dbmd.getIndexInfo(null, schemaName, tableName, false, true);
ResultSet rsPrimary = dbmd.getPrimaryKeys(null, schemaName, tableName)) {
while (rsColumns.next()) {
ColumnMeta col = new ColumnMeta();
col.setTableCat(rsColumns.getString("TABLE_CAT"));
col.setTableSchemaName(rsColumns.getString("TABLE_SCHEM"));
col.setTableName(rsColumns.getString("TABLE_NAME"));
col.setColumnName(rsColumns.getString("COLUMN_NAME"));
col.setDataType(rsColumns.getInt("DATA_TYPE"));
col.setDataTypeName(rsColumns.getString("TYPE_NAME"));
col.setColumnSize(rsColumns.getInt("COLUMN_SIZE"));
col.setDecimalDigits(rsColumns.getInt("DECIMAL_DIGITS"));
col.setNumPrecRadix(rsColumns.getInt("NUM_PREC_RADIX"));
col.setNullAble(rsColumns.getInt("NULLABLE"));
col.setRemarks(rsColumns.getString("REMARKS"));
col.setColumnDef(rsColumns.getString("COLUMN_DEF"));
col.setSqlDataType(rsColumns.getInt("SQL_DATA_TYPE"));
col.setSqlDatetimeSub(rsColumns.getInt("SQL_DATETIME_SUB"));
col.setCharOctetLength(rsColumns.getInt("CHAR_OCTET_LENGTH"));
col.setOrdinalPosition(rsColumns.getInt("ORDINAL_POSITION"));
col.setIsNullAble(rsColumns.getString("IS_NULLABLE"));
col.setCaseSensitive(StringUtils.hasLowerCase(col.getColumnName()));
if (tm.getAllColumns().containsKey(col.getColumnName())) {
throw new NotSupportYetException("Not support the table has the same column name with different case yet");
}
tm.getAllColumns().put(col.getColumnName(), col);
}
while (rsIndex.next()) {
String indexName = rsIndex.getString("INDEX_NAME");
if (StringUtils.isNullOrEmpty(indexName)) {
continue;
}
String colName = rsIndex.getString("COLUMN_NAME");
ColumnMeta col = tm.getAllColumns().get(colName);
if (tm.getAllIndexes().containsKey(indexName)) {
IndexMeta index = tm.getAllIndexes().get(indexName);
index.getValues().add(col);
} else {
IndexMeta index = new IndexMeta();
index.setIndexName(indexName);
index.setNonUnique(rsIndex.getBoolean("NON_UNIQUE"));
index.setIndexQualifier(rsIndex.getString("INDEX_QUALIFIER"));
index.setIndexName(rsIndex.getString("INDEX_NAME"));
index.setType(rsIndex.getShort("TYPE"));
index.setOrdinalPosition(rsIndex.getShort("ORDINAL_POSITION"));
index.setAscOrDesc(rsIndex.getString("ASC_OR_DESC"));
index.setCardinality(rsIndex.getInt("CARDINALITY"));
index.getValues().add(col);
if (!index.isNonUnique()) {
index.setIndextype(IndexType.UNIQUE);
} else {
index.setIndextype(IndexType.NORMAL);
}
tm.getAllIndexes().put(indexName, index);
}
}
if (tm.getAllIndexes().isEmpty()) {
throw new ShouldNeverHappenException(String.format("Could not found any index in the table: %s", tableName));
}
// when we create a primary key constraint oracle will uses and existing unique index.
// if we create a unique index before create a primary constraint in the same column will cause the problem
// that primary key constraint name was different from the unique name.
List<String> pkcol = new ArrayList<>();
while (rsPrimary.next()) {
String pkConstraintName = rsPrimary.getString("PK_NAME");
if (tm.getAllIndexes().containsKey(pkConstraintName)) {
IndexMeta index = tm.getAllIndexes().get(pkConstraintName);
index.setIndextype(IndexType.PRIMARY);
} else {
//save the columns that constraint primary key name was different from unique index name
pkcol.add(rsPrimary.getString("COLUMN_NAME"));
}
}
//find the index that belong to the primary key constraint
if (!pkcol.isEmpty()) {
int matchCols = 0;
for (Map.Entry<String, IndexMeta> entry : tm.getAllIndexes().entrySet()) {
IndexMeta index = entry.getValue();
// only the unique index and all the unique index's columes same as primary key columes,
// it belongs to primary key
if (index.getIndextype().value() == IndexType.UNIQUE.value()) {
for (ColumnMeta col : index.getValues()) {
if (pkcol.contains(col.getColumnName())) {
matchCols++;
}
}
if (matchCols == pkcol.size()) {
index.setIndextype(IndexType.PRIMARY);
// each table only has one primary key
break;
} else {
matchCols = 0;
}
}
}
}
}
return tm;
}
}
```
Register it in META-INF/services/io.seata.rm.datasource.sql.struct.TableMetaCache:
```txt
io.seata.rm.datasource.sql.struct.cache.KingbaseTableMetaCache
```
Under io.seata.rm.datasource.undo, add a kingbase package and the classes inside it, modeled on the oracle package. Note the places that need changing: (1) the JdbcConstants.KINGBASE constant and (2) the constructors. A sketch of the executor holder follows the two service entries below.
Register the undo log manager in META-INF/services/io.seata.rm.datasource.undo.UndoLogManager:
```txt
io.seata.rm.datasource.undo.kingbase.KingbaseUndoLogManager
```
Register the executor holder in META-INF/services/io.seata.rm.datasource.undo.UndoExecutorHolder:
```txt
io.seata.rm.datasource.undo.kingbase.KingbaseUndoExecutorHolder
```
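A minimal sketch of the executor holder, assuming KingbaseUndoInsertExecutor, KingbaseUndoUpdateExecutor and KingbaseUndoDeleteExecutor have already been created in the same package by copying the Oracle versions; those class names follow the Oracle naming convention and are created as part of this step, not shipped with Seata. KingbaseUndoLogManager is written the same way, modeled on OracleUndoLogManager and annotated with @LoadLevel(name = JdbcConstants.KINGBASE).
```java
package io.seata.rm.datasource.undo.kingbase;

import io.seata.common.loader.LoadLevel;
import io.seata.rm.datasource.undo.AbstractUndoExecutor;
import io.seata.rm.datasource.undo.SQLUndoLog;
import io.seata.rm.datasource.undo.UndoExecutorHolder;
import io.seata.sqlparser.util.JdbcConstants;

/**
 * Sketch: resolves the undo executors for the kingbase db type,
 * mirroring OracleUndoExecutorHolder.
 */
@LoadLevel(name = JdbcConstants.KINGBASE)
public class KingbaseUndoExecutorHolder implements UndoExecutorHolder {

    @Override
    public AbstractUndoExecutor getInsertExecutor(SQLUndoLog sqlUndoLog) {
        return new KingbaseUndoInsertExecutor(sqlUndoLog);
    }

    @Override
    public AbstractUndoExecutor getUpdateExecutor(SQLUndoLog sqlUndoLog) {
        return new KingbaseUndoUpdateExecutor(sqlUndoLog);
    }

    @Override
    public AbstractUndoExecutor getDeleteExecutor(SQLUndoLog sqlUndoLog) {
        return new KingbaseUndoDeleteExecutor(sqlUndoLog);
    }
}
```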
4. Removing Seata's IP validity check
At startup, Seata validates the "-h ip" argument, and values such as 127.0.0.1 and 0.0.0.0 fail the check. Running in a Docker container, registering an external IP, or registering a domain name all fail for the same reason, so the source is changed here so that Seata does not validate the IP at startup.
Modify the main() method of io.seata.server.Server.java accordingly; a sketch of the idea follows.
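The exact startup code differs between Seata versions, so the snippet below is only a sketch of the intent rather than the literal 1.7.0 source: the original logic keeps the "-h" value only when it passes an IP validity check, while the modified logic trusts whatever host was passed in; the resolved value is then handed to XID.setIpAddress() as before. The helper class and method names here are illustrative only.
```java
/**
 * Sketch of the change only; not the literal io.seata.server.Server code.
 * resolveHostWithValidation() mimics the original behaviour, where values like
 * 127.0.0.1, 0.0.0.0, an unbound external IP or a domain name are rejected;
 * resolveHostWithoutValidation() is the adapted behaviour that skips the check.
 */
public class HostResolveSketch {

    /** Original idea: fall back to a local interface unless the "-h" value passes the validity check. */
    static String resolveHostWithValidation(String hostArg, String localIp) {
        boolean valid = hostArg != null && !hostArg.isEmpty()
                && !"127.0.0.1".equals(hostArg) && !"0.0.0.0".equals(hostArg);
        return valid ? hostArg : localIp;
    }

    /** Modified idea: register whatever was passed via "-h" (Docker, external IP, domain name). */
    static String resolveHostWithoutValidation(String hostArg, String localIp) {
        return (hostArg == null || hostArg.isEmpty()) ? localIp : hostArg;
    }
}
```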
5. Packaging the modified Seata source
From the Seata project root, run in a cmd window: mvn -Prelease-seata -Dmaven.test.skip=true clean install -U
After a successful build, the output is placed under seata-distribution and has the same layout as the server package downloaded from the official site.
Note: the kingbase8 driver jar must be copied into the lib directory manually.
That completes the Kingbase8 adaptation and repackaging of the Seata source.
When Seata runs with Nacos as its registry and configuration center, the following store settings need to change:
```properties
store.db.dbType=kingbase
store.db.driverClassName=com.kingbase8.Driver
store.db.url=jdbc:kingbase8://127.0.0.1:54321/xxxx?currentSchema=xxxx,SYS_CATALOG&clientEncoding=UTF-8
store.db.user=system
store.db.password=123456
```
6. Using Seata in a project
- Add the Seata dependency to the project:
```xml
<seata.version>1.7.0</seata.version>
<dependency>
<groupId>io.seata</groupId>
<artifactId>seata-spring-boot-starter</artifactId>
<version>${seata.version}</version>
</dependency>
```
Configuration file:
```yaml
# Copyright 1999-2019 Seata.io Group.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
server:
  port: 7091
spring:
  application:
    name: seata-server
logging:
  config: classpath:logback-spring.xml
  file:
    path: ${user.home}/logs/seata
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash
console:
  user:
    username: seata
    password: seata
seata:
  config:
    type: nacos
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      group: DEFAULT_GROUP
      namespace: "0ab9428f-32eb-4f86-a9e9-e11f937af1f3"
      cluster: default
      dataId: "seataServer"
      username: "nacos"
      password: "nacos"
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    type: nacos
    nacos:
      application: seata-server
      server-addr: 127.0.0.1:8848
      group: test_zsr
      namespace: "0ab9428f-32eb-4f86-a9e9-e11f937af1f3"
      username: nacos
      password: nacos
      context-path:
  security:
    secretKey: SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
    tokenValidityInMilliseconds: 1800000
    ignore:
      urls: /,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.jpeg,/**/*.ico,/api/v1/auth/login
```
Problems encountered during the adaptation:
Plugin 'org.apache.maven.plugins:maven-javadoc-plugin:3.0.0' not found
These plugins could not be downloaded through Maven and had to be installed into the local repository manually:
```shell
mvn install:install-file -Dfile=maven-gpg-plugin-1.6.jar -DgroupId=org.apache.maven.plugins -DartifactId=maven-gpg-plugin -Dversion=1.6 -Dpackaging=jar
mvn install:install-file -Dfile=nexus-staging-maven-plugin-1.6.7.jar -DgroupId=org.sonatype.plugins -DartifactId=nexus-staging-maven-plugin -Dversion=1.6.7 -Dpackaging=jar
mvn install:install-file -Dfile=maven-javadoc-plugin-3.0.0.jar -DgroupId=org.apache.maven.plugins -DartifactId=maven-javadoc-plugin -Dversion=3.0.0 -Dpackaging=jar
mvn install:install-file -Dfile=license-maven-plugin-1.20.jar -DgroupId=org.codehaus.mojo -DartifactId=license-maven-plugin -Dversion=1.20 -Dpackaging=jar
```
Another failure during the build:
```text
Some Enforcer rules have failed. Look above for specific messages explaining why the rule failed.
```
Install the enforcer plugin manually as well:
```shell
mvn install:install-file -Dfile=maven-enforcer-plugin-3.0.0-M3.jar -DgroupId=org.apache.maven.plugins -DartifactId=maven-enforcer-plugin -Dversion=3.0.0-M3 -Dpackaging=jar
```
The enforcer then reported:
```text
[WARNING] Rule 0: org.apache.maven.plugins.enforcer.RequireMavenVersion failed with message:
Detected Maven Version: 3.5.4 is not in the allowed range [3.6.0,).
```
The build requires Maven 3.6.0 or later; I was on 3.5.4, so Maven had to be upgraded before packaging.
Druid cannot parse some Kingbase-specific statements (for example INSERT ... ON CONFLICT), which surfaces at runtime as:
```text
nested exception is org.apache.ibatis.exceptions.PersistenceException:
```
The following test reproduces the parsing problem:
```java
@Test
public void KingbaseTest() {
    String sql = "INSERT INTO xxx\n" +
            "(xxxx)\n" +
            "VALUES(xxxx)\n" +
            "ON CONFLICT (id) DO UPDATE SET\n" +
            " xxx = xxx,\n" +
            " xxx = xxx,\n" +
            " xxx = xxx;";
    String parameterize = ParameterizedOutputVisitorUtils.parameterize(sql, DbType.kingbase);
}
```