【Hive】—— Installation and Deployment

1 Metadata

Metadata describes the data Hive manages: databases, tables, partitions, columns, and their HDFS locations. Hive keeps it in a relational database (embedded Derby by default; MySQL in most production setups).

2 MetaStore (Metadata Service)

The MetaStore is the service through which Hive clients read and write metadata. It decouples clients from the underlying metadata database: clients only need the MetaStore address, not database credentials.

3 MetaStore Configuration Modes

3.1 Embedded Mode

The default mode: the MetaStore and an embedded Derby database run inside the same JVM as Hive itself. Only one session can connect at a time, so it is suitable only for testing.

3.2 Local Mode

The MetaStore still runs inside the Hive process, but metadata is stored in an external database such as MySQL, so multiple sessions are possible.

3.3 Remote Mode

The MetaStore runs as a standalone service (Thrift port 9083 by default), and clients connect to it over the network. This is the recommended mode for production and the one installed below.


4 Pre-installation Preparation

Add the following proxy-user properties to Hadoop's core-site.xml so that HiveServer2 can impersonate users (here the root user), then restart Hadoop:

```xml
    <!-- Hive integration -->
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
```
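As a sanity check, the property can be read back out of the file. The snippet below is a self-contained sketch that works on a temporary copy; on a real cluster you would point it at $HADOOP_HOME/etc/hadoop/core-site.xml instead (or query the live value with `hdfs getconf -confKey hadoop.proxyuser.root.hosts`):

```shell
# Sketch: extract the <value> that follows the proxyuser property name.
# A temp copy of the fragment is used so the snippet runs anywhere.
conf=$(mktemp)
cat > "$conf" <<'EOF'
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
EOF
# After matching the <name> line, `n` loads the next line (the value)
val=$(sed -n '/hadoop.proxyuser.root.hosts/{n;p;}' "$conf" | tr -d ' ')
echo "$val"
rm -f "$conf"
```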

5 Remote Mode Installation

5.1 Download

Download the release tarball (apache-hive-3.1.2-bin.tar.gz in this guide) from the official site:

https://hive.apache.org/

5.2 Extract and Rename

```shell
tar -zxvf apache-hive-3.1.2-bin.tar.gz -C /opt/module/
cd /opt/module/
mv apache-hive-3.1.2-bin hive
```

5.3 Resolve the guava Version Conflict Between Hadoop and Hive

Hive 3.1.2 ships guava-19.0.jar while Hadoop 3.1.3 ships guava-27.0-jre.jar; keep the newer one, otherwise Hive fails at startup with a guava-related NoSuchMethodError:

```shell
cd /opt/module/hive/lib
rm -f guava-19.0.jar
cp /opt/module/hadoop-3.1.3/share/hadoop/common/lib/guava-27.0-jre.jar ./guava-27.0-jre.jar
```
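The swap can be rehearsed safely before touching the real directories. The sketch below uses throwaway temp dirs standing in for the two lib paths, so it is runnable anywhere:

```shell
# Simulate the guava swap: temp dirs stand in for
# $HADOOP_HOME/share/hadoop/common/lib and $HIVE_HOME/lib.
hadoop_lib=$(mktemp -d); hive_lib=$(mktemp -d)
touch "$hadoop_lib/guava-27.0-jre.jar" "$hive_lib/guava-19.0.jar"

# Remove Hive's older guava, then copy Hadoop's newer one across
rm -f "$hive_lib"/guava-*.jar
cp "$hadoop_lib"/guava-27.0-jre.jar "$hive_lib/"

result=$(ls "$hive_lib")
echo "$result"
```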

5.4 Add Environment Variables

```shell
vi /etc/profile.d/my_env.sh
```

Append the following:

```shell
#HIVE_HOME
export HIVE_HOME=/opt/module/hive
export PATH=$PATH:$HIVE_HOME/bin
```

Reload the profile so the variables take effect:

```shell
source /etc/profile
```
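A quick way to confirm the variables landed correctly. The exports are repeated here so the sketch is self-contained; on the server you would simply `source /etc/profile` first and run the two checks:

```shell
# Repeat the exports from my_env.sh, then verify them
export HIVE_HOME=/opt/module/hive
export PATH=$PATH:$HIVE_HOME/bin

# HIVE_HOME should be non-empty and PATH should contain $HIVE_HOME/bin
[ -n "$HIVE_HOME" ] && echo "HIVE_HOME=$HIVE_HOME"
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "PATH contains \$HIVE_HOME/bin" ;;
esac
```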

5.5 hive-env.sh: Configure Hive Environment Variables

```shell
cd /opt/module/hive/conf
mv hive-env.sh.template hive-env.sh
vim hive-env.sh
```

Set the following variables:

```shell
# Set HADOOP_HOME to point to a specific hadoop install directory
export HADOOP_HOME=/opt/module/hadoop-3.1.3

# Hive Configuration Directory can be controlled by:
export HIVE_CONF_DIR=/opt/module/hive/conf

# Folder containing extra libraries required for hive compilation/execution can be controlled by:
export HIVE_AUX_JARS_PATH=/opt/module/hive/lib
```

5.6 hive-log4j2.properties: Configure Logging

Create a directory for Hive's logs and data, then point the log property at it:

```shell
mkdir -p /opt/module/hive/datas
cd /opt/module/hive/conf
mv hive-log4j2.properties.template hive-log4j2.properties
vim hive-log4j2.properties
```

```shell
property.hive.log.dir = /opt/module/hive/datas
```
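To double-check the edit, the configured log directory can be parsed back out of the properties file; a minimal sketch against a temp copy:

```shell
# Sketch: read the log dir back out of hive-log4j2.properties.
# A temp file stands in for the real properties file.
props=$(mktemp)
echo 'property.hive.log.dir = /opt/module/hive/datas' > "$props"
# Strip the key and the "=" separator, keeping only the path
log_dir=$(sed -n 's/^property\.hive\.log\.dir *= *//p' "$props")
echo "$log_dir"
rm -f "$props"
```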

5.7 hive-site.xml: Configure MetaStore

Because hive.metastore.uris is configured below, the MetaStore service must be started manually before Hive can be used (a startup script is provided below):

```xml
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>

  <!-- MySQL connection for metadata storage -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>
      jdbc:mysql://hadoop102:3306/hive?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;useSSL=false&amp;characterEncoding=utf8</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the
      connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>
  <!-- com.mysql.jdbc.Driver matches the Connector/J 5.1.x jar uploaded in 5.8;
       use com.mysql.cj.jdbc.Driver only with Connector/J 8.x -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
    <description>password to use against metastore database</description>
  </property>
  <!-- Host that the HiveServer2 Thrift service binds to -->
  <property>
    <name>hive.server2.thrift.bind.host</name>
    <value>hadoop102</value>
    <description>Bind host on which to run the HiveServer2 Thrift service.</description>
  </property>
  <!-- MetaStore service address for remote mode -->
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://hadoop102:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote
      metastore.</description>
  </property>
  <!-- Disable authorization on metastore DB notification APIs -->
  <property>
    <name>hive.metastore.event.db.notification.api.auth</name>
    <value>false</value>
    <description>
      Should metastore do authorization against database notification related APIs such as
      get_next_notification.
      If set to true, then only the superusers in proxy settings have the permission
    </description>
  </property>
  <!-- Disable metastore schema version verification -->
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in is compatible with one from Hive jars. Also
      disable automatic
      schema migration attempt. Users are required to manually migrate schema after Hive upgrade
      which ensures
      proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive
      jars.
    </description>
  </property>
</configuration>
```
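Once hive-site.xml is in place, a quick extraction confirms which MetaStore URI clients will use. The sketch below runs against a temp copy of the relevant fragment so it is self-contained:

```shell
# Sketch: pull hive.metastore.uris back out of hive-site.xml with sed.
site=$(mktemp)
cat > "$site" <<'EOF'
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://hadoop102:9083</value>
</property>
EOF
# Match the <name> line, load the next line, keep what is inside <value>…</value>
uri=$(sed -n '/hive.metastore.uris/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' "$site")
echo "$uri"
rm -f "$site"
```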

5.8 Upload mysql-connector-java-5.1.27-bin.jar

Upload the MySQL JDBC connector jar that matches your MySQL server version into Hive's lib directory:

```shell
/opt/module/hive/lib/mysql-connector-java-5.1.27-bin.jar
```

5.9 Initialize the Metastore Schema

```shell
cd /opt/module/hive/bin
./schematool -dbType mysql -initSchema -verbose
```

5.10 MetaStore and HiveServer2 Startup Script

```shell
vim hive_metastore.sh
```

```shell
#!/bin/bash
if [ $# -lt 1 ]; then
    echo "No Args Input..."
    exit 1
fi

case $1 in
"start")
    echo "----------------- MetaStore start -----------------"
    nohup /opt/module/hive/bin/hive --service metastore >> /opt/module/hive/datas/metastore.out 2>&1 &
    echo "----------------- HiveServer2 start -----------------"
    nohup /opt/module/hive/bin/hive --service hiveserver2 >> /opt/module/hive/datas/hiveserver2.out 2>&1 &
    ;;
"stop")
    echo "----------------- MetaStore stop -----------------"
    pidMetaStore=$(ps -ef | grep -v grep | grep "Dproc_metastore" | awk '{printf $2" "}')
    kill -9 ${pidMetaStore}
    echo "----------------- HiveServer2 stop -----------------"
    pidHiveserver2=$(ps -ef | grep -v grep | grep "Dproc_hiveserver2" | awk '{printf $2" "}')
    kill -9 ${pidHiveserver2}
    ;;
*)
    echo "Input Args Error..."
    ;;
esac
```
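The stop branch finds the service PIDs by grepping `ps -ef` for the `-Dproc_metastore` / `-Dproc_hiveserver2` JVM flags that Hive's launcher sets. The pipeline can be exercised on canned output (the sample lines below are fabricated for illustration):

```shell
# Feed the script's pid-extraction pipeline some fabricated `ps -ef` lines.
ps_sample='root  1234     1  0 10:00 ?     00:00:01 java -Dproc_metastore HiveMetaStore
root  5678  4321  0 10:01 pts/0 00:00:00 grep Dproc_metastore'

# grep -v grep drops the grep process itself; awk prints field 2 (the PID)
pid=$(printf '%s\n' "$ps_sample" | grep -v grep | grep "Dproc_metastore" | awk '{printf $2" "}')
echo "pid=[$pid]"
```

To use the script: `chmod +x hive_metastore.sh`, then `./hive_metastore.sh start`. Afterwards, check that port 9083 (MetaStore) and port 10000 (HiveServer2) are listening, e.g. with `ss -lnt`; HiveServer2 can take a minute or two before it accepts connections.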