When writing data to Doris from a Spark DataFrame, the job failed with the following error:
Failed to load data on BE: http://192.168.50.10:18040/api/mydb/dwd_virtual_table/_stream_load? node and exceeded the max retry times.
The table was not written. At first this was puzzling; it turned out that the DataFrame's columns did not match the target table's schema.
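Since the root cause is a column mismatch, one simple guard is to project the DataFrame onto exactly the columns the target table expects before writing. A minimal sketch, assuming the target table has the columns listed in the commented-out doris.write.fields value further down (adjust to your own schema):

import org.apache.spark.sql.DataFrame

// Keep only the columns the Doris table expects, in a known order.
// The column list mirrors the doris.write.fields comment in the code below; replace with your table's columns.
val targetColumns = Seq("case_id", "defendant_name", "finance_name", "mediation_name", "mediator_name", "dt")

def alignToTable(df: DataFrame): DataFrame =
  df.select(targetColumns.map(df.col): _*)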
The stream-load error itself gives no hint about this, which is quite unfriendly. Is there a way to get a clearer message? There is: write via JDBC instead, and a failure comes back with the server's concrete error. Reference code:
import org.apache.spark.sql.{DataFrame, SaveMode}

def writeByJDBC(dataframe: DataFrame, dorisTable: String): Unit = {
  dataframe.write.format("jdbc")
    .mode(SaveMode.Append)
    .option("driver", "com.mysql.jdbc.Driver")
    // Doris FE speaks the MySQL protocol on port 9030
    .option("url", "jdbc:mysql://" + DORIS_HOST + ":9030/" + DATABASE_NAME + "?rewriteBatchedStatements=false")
    .option("batchsize", "" + WRITE_BATCH_SIZE)
    .option("user", DORIS_USER)
    .option("password", DORIS_PASSWORD)
    .option("isolationLevel", "NONE")
    // .option("doris.write.fields", "case_id,defendant_name,finance_name,mediation_name,mediator_name,dt")
    .option("dbtable", dorisTable)
    .save()
}
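A hedged usage sketch (the source view name is hypothetical; DORIS_HOST, DATABASE_NAME and the other constants are assumed to be defined elsewhere in the job):

// Align the DataFrame to the table schema first, then write via JDBC.
// A schema mismatch now surfaces as a concrete SQL error from the server
// (unknown column, wrong column count, etc.) instead of a generic stream-load retry failure.
val sourceDf = spark.table("dwd_source_view")   // hypothetical source
writeByJDBC(alignToTable(sourceDf), "dwd_virtual_table")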
That said, JDBC writes are still less efficient than the Spark Doris Connector. A practical workflow is to debug with the JDBC path above and, once the job runs cleanly, switch to the Spark Doris Connector:
def writeByDoris(dataframe: DataFrame, dorisTable: String): Unit = {
  dataframe.write.format("doris")
    .option("doris.table.identifier", dorisTable)
    .option("doris.fenodes", DORIS_HOST + ":" + DORIS_FE_HTTP_PORT)
    .option("user", DORIS_USER)
    .option("password", DORIS_PASSWORD)
    .option("sink.batch.size", WRITE_BATCH_SIZE)            // rows per stream-load batch
    .option("sink.max-retries", 3)                          // retries per failed batch
    .option("doris.request.retries", 6)                     // retries for requests to FE
    .option("doris.request.connect.timeout.ms", 60000)
    .save()
}
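Once the JDBC path confirms the columns line up, switching to the connector is just a call-site change; note that doris.table.identifier expects the fully qualified db.table name, and the doris.write.fields option (commented out earlier) can pin the written columns explicitly if needed. A sketch of the switch, reusing the hypothetical DataFrame from above:

// Same aligned DataFrame, now written through the Spark Doris Connector for better throughput.
writeByDoris(alignToTable(sourceDf), DATABASE_NAME + ".dwd_virtual_table")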