Hive Automated Installation Script

  • Notes before using this script
  1. Make sure the machine has network access before installing Hive, or point yum at a local mirror; the script installs the pv utility from the EPEL repository.
  2. A MySQL database must already be installed before running this script.
  3. Have the Tomcat package ready. The script was tested with Tomcat 9.x and installs successfully; other versions have not been tested.
  4. The script automatically switches Hive to the Tez execution engine, so the Tez package must also be prepared; tez-0.9.2 was used during testing.
  5. Before installing, adjust the variables defined at the top of the script (software_dir, the directory holding all installation packages; install_dir, the installation target directory; mysql_user; and mysql_password) to match your environment.
  6. Hadoop must be installed and the Hadoop cluster must be running before this script is executed; never run the script while the cluster is stopped. A quick pre-flight check is sketched below.
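A minimal pre-flight check is sketched below. It only covers the prerequisites listed above and assumes the Hadoop daemons and MySQL run on this host; the hostnames, ports, and package directory are illustrative and should be adjusted to your environment.
bash
#!/bin/bash
# Pre-flight sketch: verify the prerequisites above before running the installer.

# 1. Network reachability (needed to install pv from the EPEL repository)
ping -c 1 mirrors.aliyun.com &>/dev/null || echo "WARN: no network access; switch yum to a local mirror"

# 2. MySQL must already be installed and listening on 3306
ss -lntp | grep -q 3306 || echo "WARN: MySQL does not appear to be listening on port 3306"

# 3. The Hadoop cluster must be running (HDFS and YARN daemons)
jps | egrep -q "NameNode|DataNode"           || echo "WARN: HDFS daemons not found"
jps | egrep -q "ResourceManager|NodeManager" || echo "WARN: YARN daemons not found"

# 4. Installation packages must be present in software_dir (path is illustrative)
ls /root/hadoop/*hive* /root/hadoop/*tez* /root/hadoop/*tomcat* &>/dev/null \
  || echo "WARN: hive/tez/tomcat packages missing from the software directory"
The complete installation script follows.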
bash
#!/bin/bash

software_dir=/root/hadoop/
install_dir=/opt/
mysql_user=root
mysql_password=1234
rm_node=node1

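# Make sure the EPEL repo is configured so that pv can be installed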
yum repolist |grep epel &> /dev/null
if [ $? -ne 0 ] 
then
   wget -O /etc/yum.repos.d/epel.repo https://mirrors.aliyun.com/repo/epel-7.repo &>/dev/null
fi

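# Install pv (pipe viewer) if missing; it is used to show progress while extracting the tarballs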
rpm -qa |grep pv &> /dev/null
if [ $? -ne 0 ]
then
  yum install -y pv &>/dev/null
  rpm -qa |grep pv &> /dev/null
  [ $? -ne 0 ] && echo " Please resolve the network problem !!!" && exit 1
fi
 
echo ============ Start Hive Configuration =============
hive_path=${install_dir}hive
hive_conf_path=$hive_path/conf
tez_path=${install_dir}tez

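# Unpack Tez into the install directory under a fixed path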
rm -rf $tez_path
pv ${software_dir}*tez* |tar zxf - -C $install_dir
mv ${install_dir}*tez* $tez_path
chown -R root:root $tez_path

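# Unpack Hive into the install directory under a fixed path and enable its log4j2 config files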
rm -rf $hive_path
pv ${software_dir}*hive* |tar zxf - -C $install_dir
mv ${install_dir}*hive* ${install_dir}hive

if [ -f "$hive_conf_path/hive-log4j2.properties.template" ]
then 
   mv $hive_conf_path/hive-log4j2.properties.template $hive_conf_path/hive-log4j2.properties
fi

if [ -f "$hive_conf_path/hive-exec-log4j2.properties.template" ] 
then
   mv $hive_conf_path/hive-exec-log4j2.properties.template $hive_conf_path/hive-exec-log4j2.properties
fi
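# Copy the MySQL JDBC driver into Hive's lib directory and point the Hive logs at $hive_path/logs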
cp ${software_dir}mysql-connector-java-5.1.44-bin.jar $hive_path/lib
sed -i "/property.hive.log.dir/c\property.hive.log.dir=$hive_path/logs" $hive_conf_path/hive-log4j2.properties
sed -i "/property.hive.log.dir/c\property.hive.log.dir=$hive_path/logs" $hive_conf_path/hive-exec-log4j2.properties

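# Generate hive-site.xml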
cat > $hive_conf_path/hive-site.xml <<EOF
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>

  <!-- HDFS directory where Hive stores its warehouse data -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive/database</value>
  </property>

  <!-- For small data sets, automatically run MR jobs in local mode to speed up execution (default: false) -->
  <property>
    <name>hive.exec.mode.local.auto</name>
    <value>true</value>
  </property>
  
  <!-- Drop the table-name prefix from column names in query results -->
  <property>
    <name>hive.resultset.use.unique.column.names</name>
    <value>false</value>
  </property>
  
  <!-- Switch the execution engine (default is MR) -->
  <property>
    <name>hive.execution.engine</name>
    <value>tez</value>
  </property>

  <!-- Disable metastore schema verification -->
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://${HOSTNAME}:3306/hive?createDatabaseIfNotExist=true&amp;useUnicode=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>$mysql_user</value>
  </property>

  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>$mysql_password</value>
  </property>

  <!-- The following three hook settings fix the issue of Hive Queries showing no data in tez-ui -->
  <property>
    <name>hive.exec.pre.hooks</name>
    <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
  </property>

  <property>
    <name>hive.exec.post.hooks</name>
    <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
  </property>

  <property>
    <name>hive.exec.failure.hooks</name>
    <value>org.apache.hadoop.hive.ql.hooks.ATSHook</value>
  </property>

</configuration>
EOF
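# Remove Hive's bundled SLF4J binding to avoid a conflict with the one shipped with Hadoop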
rm -rf ${hive_path}/lib/log4j-slf4j-impl-*.jar

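# Abort if MySQL is not listening on port 3306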
ss -lntp |grep 3306 &>/dev/null
if [ $? -ne 0 ]
then
  echo -e " \033[31m ================== Install mysql and start it =================== \033[0m"
  exit 1
fi

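# Drop any existing hive metastore database (in a MySQL container or a local MySQL), then initialize the metastore schema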
docker ps | grep mysql &>/dev/null
if [ $? -eq 0 ]
then
   docker exec -it mysql mysql -u $mysql_user -p$mysql_password -e "drop database if exists hive;" &>/dev/null
else
   mysql -u $mysql_user -p$mysql_password -e "drop database if exists hive;" &>/dev/null
fi
schematool -dbType mysql -initSchema

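# Upload the Tez tarball to HDFS and remove Tez's conflicting SLF4J binding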
hdfs dfs -rm -r /tez
hdfs dfs -mkdir /tez
tez_name=$(ls $tez_path/share)
hdfs dfs -put $tez_path/share/$tez_name /tez
rm -rf ${tez_path}/lib/slf4j-log4j12-*.jar

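# Generate tez-site.xml in Hive's conf directory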
cat > $hive_conf_path/tez-site.xml <<EOF
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>

  <property>
    <name>tez.lib.uris</name>
    <value>\${fs.defaultFS}/tez/$tez_name</value>
  </property>

  <property>
    <name>tez.use.cluster.hadoop-libs</name>
    <value>true</value>
  </property>

  <property>
    <name>tez.history.logging.service.class</name>
    <value>org.apache.tez.dag.history.logging.ats.ATSHistoryLoggingService</value>
  </property>

  <property>
    <name>tez.tez-ui.history-url.base</name>
    <value>http://$HOSTNAME:8080/tez-ui/</value>
  </property>  
</configuration>
EOF

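# Point Hive at Tez through hive-env.sh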
mv $hive_conf_path/hive-env.sh.template $hive_conf_path/hive-env.sh

cat >> $hive_conf_path/hive-env.sh <<EOF
export TEZ_HOME=$tez_path
export HADOOP_CLASSPATH=\$HADOOP_CLASSPATH:\$TEZ_HOME/*:\$TEZ_HOME/lib/*
EOF

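# Append the timeline-server settings to yarn-site.xml (removing any block appended by a previous run first)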
hadoop_path=$(which hadoop|sed 's|/bin.*||')
sed -i '/^<\/configuration>$/d' $hadoop_path/etc/hadoop/yarn-site.xml
sed -i '/<!-- Enable the timeline server -->/,$d' $hadoop_path/etc/hadoop/yarn-site.xml
cat >> $hadoop_path/etc/hadoop/yarn-site.xml <<EOF
  <!-- Enable the timeline server -->
  <property>
    <name>yarn.timeline-service.enabled</name>
    <value>true</value>
  </property>
  
  <!-- Run the timeline service on the specified node -->
  <property>
    <name>yarn.timeline-service.hostname</name>
    <value>$HOSTNAME</value>
  </property>
  
  <!-- Enable cross-origin (CORS) support -->
  <property>
    <name>yarn.timeline-service.http-cross-origin.enabled</name>
    <value>true</value>
  </property>
  
  <!-- Publish the ResourceManager's system metrics to the timeline service -->
  <property>
    <name>yarn.resourcemanager.system-metrics-publisher.enabled</name>
    <value>true</value>
  </property>

  <!-- Enable the generic application history service -->
  <property>
    <name>yarn.timeline-service.generic-application-history.enabled</name>
    <value>true</value>
  </property>

  <!-- Number of handler threads for RPC requests (default: 10) -->
  <property>
    <name>yarn.timeline-service.handler-thread-count</name>
    <value>24</value>
  </property>

</configuration>
EOF

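# Distribute yarn-site.xml to the cluster, restart YARN, then (re)start the timeline server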
xsync $hadoop_path/etc/hadoop/yarn-site.xml
jps |egrep "NodeManager|ResourceManager"
if [ $? -eq 0 ]
then
  stop-yarn.sh
  sleep 2
  start-yarn.sh
else
  start-all.sh
fi

jps |grep "ApplicationHistoryServer"
if [ $? -eq 0 ]
then 
  yarn-daemon.sh stop timelineserver
fi
yarn-daemon.sh start timelineserver

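# Install Tomcat and deploy the tez-ui war, pointing it at the timeline server and ResourceManager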
tomcat_path=${install_dir}tomcat
if [ -x $tomcat_path/bin/shutdown.sh ]
then
   $tomcat_path/bin/shutdown.sh &>/dev/null
   rm -rf $tomcat_path
fi

pv ${software_dir}*tomcat* |tar zxf - -C $install_dir
mv ${install_dir}*tomcat* $tomcat_path

mkdir -p $tomcat_path/webapps/tez-ui/
unzip -d $tomcat_path/webapps/tez-ui/ $tez_path/*.war &>/dev/null
sed -i "s|//timeline: \"http://localhost:8188\"|timeline: \"http://$HOSTNAME:8188\"|" $tomcat_path/webapps/tez-ui/config/configs.env
sed -i "s|//rm: \"http://localhost:8088\"|rm: \"http://$rm_node:8088\"|" $tomcat_path/webapps/tez-ui/config/configs.env
sed -i 's|//timeZone: "UTC"|timeZone: "Asia/Shanghai"|' $tomcat_path/webapps/tez-ui/config/configs.env

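# Register Tomcat as a systemd service and enable it at boot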
cat > /usr/lib/systemd/system/tomcat.service <<EOF
[Unit]
Description=tomcat
After=syslog.target network.target remote-fs.target nss-lookup.target
 
[Service]
Type=forking
 
Environment="JAVA_HOME=/opt/jdk1.8.0_201"
 
PIDFile=$tomcat_path/tomcat.pid
ExecStart=$tomcat_path/bin/startup.sh
ExecStop=$tomcat_path/bin/shutdown.sh
ExecReload=/bin/kill -s HUP \$MAINPID
PrivateTmp=true
 
[Install]
WantedBy=multi-user.target

EOF

echo "CATALINA_PID=\$CATALINA_BASE/tomcat.pid"  >  $tomcat_path/bin/setenv.sh

systemctl daemon-reload
systemctl enable --now tomcat

echo -e "\033[32m ================== Installation Hive Complete =================== \033[0m"
  • After the script finishes successfully, open the tez-ui web interface at the address below. master is the server's hostname; replace it with your own hostname or IP.
html
http://master:8080/tez-ui
  • Starting and stopping the timelineserver. The Hive script starts it automatically once it completes successfully.
bash
# Stop
yarn-daemon.sh stop timelineserver 
# Start
yarn-daemon.sh start timelineserver 
  • Tomcat has been registered as a systemd service and is enabled to start at boot by default.
bash
# Stop
systemctl stop tomcat
# Start
systemctl start tomcat
  • Starting beeline
bash
nohup hiveserver2 &>hiveserver2.log &
beeline -u jdbc:hive2://master:10000 -n root 
# Exit the beeline CLI
!q
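Once connected, a quick smoke test can confirm that the installation works and that Tez is the active engine. This is only a sketch: the test_tez table name is illustrative, and it assumes the hiveserver2 started above is reachable at master:10000.
bash
# Smoke test: print the configured engine (should be tez) and run a trivial job
beeline -u jdbc:hive2://master:10000 -n root \
  -e "set hive.execution.engine;" \
  -e "create table if not exists test_tez (id int);" \
  -e "insert into test_tez values (1);" \
  -e "select count(*) from test_tez;" \
  -e "drop table test_tez;"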