spark-thrift-server reports "Wrong FS"

### Table of contents

  • The error
  • Root cause
  • Inspecting the Hive metadata
  • Fixing the spark-thrift-server configuration
  • Fixing the Hive metadata

### The error

Running a DROP TABLE statement through spark-thrift-server fails with the following error:

```
Error: org.apache.hive.service.cli.HiveSQLException: Error running query: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Wrong FS: hdfs://RMSS02ETL:9000/user/hive/warehouse/meta_data.db/dt_segment, expected: hdfs://hadoopmaster)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:361)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.$anonfun$run$2(SparkExecuteStatementOperation.scala:263)
    at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
    at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties(SparkOperation.scala:78)
    at org.apache.spark.sql.hive.thriftserver.SparkOperation.withLocalProperties$(SparkOperation.scala:62)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.withLocalProperties(SparkExecuteStatementOperation.scala:43)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:263)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2$$anon$3.run(SparkExecuteStatementOperation.scala:258)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1746)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation$$anon$2.run(SparkExecuteStatementOperation.scala:272)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
    at java.util.concurrent.FutureTask.run(FutureTask.java:266)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:748)
Caused by: org.apache.spark.sql.AnalysisException: org.apache.hadoop.hive.ql.metadata.HiveException: MetaException(message:java.lang.IllegalArgumentException: Wrong FS: hdfs://RMSS02ETL:9000/user/hive/warehouse/meta_data.db/dt_segment, expected: hdfs://hadoopmaster)
    at org.apache.spark.sql.hive.HiveExternalCatalog.withClient(HiveExternalCatalog.scala:112)
    at org.apache.spark.sql.hive.HiveExternalCatalog.dropTable(HiveExternalCatalog.scala:517)
    at org.apache.spark.sql.catalyst.catalog.ExternalCatalogWithListener.dropTable(ExternalCatalogWithListener.scala:104)
    at org.apache.spark.sql.catalyst.catalog.SessionCatalog.dropTable(SessionCatalog.scala:778)
    at org.apache.spark.sql.execution.command.DropTableCommand.run(ddl.scala:248)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:70)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:68)
    at org.apache.spark.sql.execution.command.ExecutedCommandExec.executeCollect(commands.scala:79)
    at org.apache.spark.sql.Dataset.$anonfun$logicalPlan$1(Dataset.scala:228)
    at org.apache.spark.sql.Dataset.$anonfun$withAction$1(Dataset.scala:3687)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$5(SQLExecution.scala:103)
    at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:163)
    at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withNewExecutionId$1(SQLExecution.scala:90)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:64)
    at org.apache.spark.sql.Dataset.withAction(Dataset.scala:3685)
    at org.apache.spark.sql.Dataset.<init>(Dataset.scala:228)
    at org.apache.spark.sql.Dataset$.$anonfun$ofRows$2(Dataset.scala:99)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.Dataset$.ofRows(Dataset.scala:96)
    at org.apache.spark.sql.SparkSession.$anonfun$sql$1(SparkSession.scala:615)
    at org.apache.spark.sql.SparkSession.withActive(SparkSession.scala:772)
    at org.apache.spark.sql.SparkSession.sql(SparkSession.scala:610)
    at org.apache.spark.sql.SQLContext.sql(SQLContext.scala:650)
    at org.apache.spark.sql.hive.thriftserver.SparkExecuteStatementOperation.org$apache$spark$sql$hive$thriftserver$SparkExecuteStatementOperation$$execute(SparkExecuteStatementOperation.scala:325)
    ... 16 more
```

### Root cause

  • Hadoop runs in HA mode with two NameNodes. The --conf spark.sql.warehouse.dir setting passed to spark-thrift-server pointed at the address of one specific NameNode; it must point at the nameservice address instead (a way to check both addresses is sketched below).
  • The hive-metastore, however, is configured with the nameservice address, so the Hive metadata written under the wrong warehouse dir is inconsistent with it: creating databases and tables and querying them still works, but dropping a table fails.
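
To confirm which address is the nameservice, the standard HDFS CLI can be queried; a minimal sketch, assuming the nameservice is named hadoopmaster as the error message suggests:

```sh
# fs.defaultFS should print the nameservice URI, e.g. hdfs://hadoopmaster
hdfs getconf -confKey fs.defaultFS
# lists the NameNode ids behind the hadoopmaster nameservice in an HA setup
hdfs getconf -confKey dfs.ha.namenodes.hadoopmaster
```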

### Inspecting the Hive metadata

  • hive.dbs - database-level metadata (one row per Hive database)
  • hive.sds - storage descriptors holding table (and partition) locations; a cross-check joining the two tables is sketched below
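
The two tables are linked through hive.tbls, which is not shown in the outputs below; a hedged sketch of a cross-check listing every table whose location still points at the bare NameNode address:

```sql
-- hive.tbls carries DB_ID and SD_ID, tying databases to storage descriptors
SELECT d.NAME AS db_name, t.TBL_NAME, s.LOCATION
FROM   hive.tbls t
JOIN   hive.dbs  d ON d.DB_ID = t.DB_ID
JOIN   hive.sds  s ON s.SD_ID = t.SD_ID
WHERE  s.LOCATION LIKE '%RMSS02ETL:9000%';
```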

Check the default HDFS path:

```sql
select * from hive.dbs where NAME='default';
```

The default HDFS location goes through the nameservice:

```
+-------+-----------------------+-----------------------------------------+---------+------------+------------+
| DB_ID | DESC                  | DB_LOCATION_URI                         | NAME    | OWNER_NAME | OWNER_TYPE |
+-------+-----------------------+-----------------------------------------+---------+------------+------------+
| 1     | Default Hive database | hdfs://hadoopmaster/user/hive/warehouse | default | public     | ROLE       |
+-------+-----------------------+-----------------------------------------+---------+------------+------------+
```

Check the incorrect HDFS paths:

```sql
select * from hive.dbs where DB_LOCATION_URI like '%RMSS02ETL%';
```

The incorrect location goes through a single NameNode address:

```
+-------+------+--------------------------------------------------------+-----------+------------+------------+
| DB_ID | DESC | DB_LOCATION_URI                                        | NAME      | OWNER_NAME | OWNER_TYPE |
+-------+------+--------------------------------------------------------+-----------+------------+------------+
|    12 |      | hdfs://RMSS02ETL:9000/user/hive/warehouse/meta_data.db | meta_data | hive       | USER       |
+-------+------+--------------------------------------------------------+-----------+------------+------------+
```

Check the Hive table metadata:

```sql
select * from hive.sds where LOCATION like '%RMSS02ETL%' \G
```

The LOCATION field also carries the NameNode address:

```
                    SD_ID: 3768
                    CD_ID: 378
             INPUT_FORMAT: org.apache.hadoop.hive.ql.io.parquet.MapredParquetInputFormat
            IS_COMPRESSED:
IS_STOREDASSUBDIRECTORIES:
                 LOCATION: hdfs://RMSS02ETL:9000/user/hive/warehouse/meta_data.db/dt_segment
              NUM_BUCKETS: -1
            OUTPUT_FORMAT: org.apache.hadoop.hive.ql.io.parquet.MapredParquetOutputFormat
                 SERDE_ID: 3768
```

### Fixing the spark-thrift-server configuration

Change the --conf spark.sql.warehouse.dir parameter to the nameservice address, then restart spark-thrift-server so the change takes effect; a sketch of the restart follows.
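
A minimal sketch, assuming the Thrift server is managed with the stock scripts in $SPARK_HOME/sbin and the warehouse path matches the default database location shown earlier; your deployment's launch command may differ:

```sh
# restart the Thrift server with the nameservice-based warehouse directory
$SPARK_HOME/sbin/stop-thriftserver.sh
$SPARK_HOME/sbin/start-thriftserver.sh \
  --conf spark.sql.warehouse.dir=hdfs://hadoopmaster/user/hive/warehouse
```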

### Fixing the Hive metadata

Fix the database-level metadata:

```sql
update hive.dbs set DB_LOCATION_URI=REPLACE(DB_LOCATION_URI,'RMSS02ETL:9000','hadoopmaster');

```

Fix the table-level metadata:

```sql
update hive.sds set LOCATION=REPLACE(LOCATION,'RMSS02ETL:9000','hadoopmaster');
```
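
As a sanity check (not in the original write-up), re-run the earlier lookups and confirm nothing still references the old address:

```sql
-- both counts should now be zero
select count(*) from hive.dbs where DB_LOCATION_URI like '%RMSS02ETL:9000%';
select count(*) from hive.sds where LOCATION like '%RMSS02ETL:9000%';
```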

Finally, retry the drop, and it succeeds; see the example below.
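
For instance, assuming the statement that originally failed was a plain DROP TABLE on the table named in the error message:

```sql
DROP TABLE meta_data.dt_segment;
```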
