Preparation
1. Verify that every server can reach the others over the network and that their clocks are synchronized (see the sketch after this list)
2. Decide which components each node will run:
| IP address | Hostname | Components deployed |
|-----------------|------------|----------------------------------------------------------------------------------------------|
| 192.168.190.130 | h202406131 | NameNode ResourceManager QuorumPeerMain JournalNode DFSZKFailoverController JobHistoryServer |
| 192.168.190.131 | h202406132 | NameNode ResourceManager QuorumPeerMain JournalNode DFSZKFailoverController JobHistoryServer |
| 192.168.190.132 | h202406133 | QuorumPeerMain JournalNode DataNode NodeManager |
| 192.168.190.133 | h202406134 | DataNode NodeManager |
| 192.168.190.134 | h202406135 | DataNode NodeManager |
| 192.168.190.135 | h202406136 | DataNode NodeManager |
| 192.168.190.136 | h202406137 | DataNode NodeManager |
| 192.168.190.137 | h202406138 | DataNode NodeManager |
3. Configure /etc/hosts on every node (sketched below)
4. Set up passwordless SSH for the deployment user (sketched below)
5. Confirm component versions:
| Component | Version |
|-----------|---------|
| Hadoop | 3.3.1 |
| ZooKeeper | 3.7.0 |
| JDK | 1.8 |
6. Agree on the installation directories and configure environment variables (sketched below)
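A minimal sketch of steps 1, 3, 4, and 6, assuming chrony for time sync and /data as the install root (both assumptions); the IPs and hostnames come from the table above:

```bash
# Step 1: check that every node is reachable and that clocks agree
for ip in 192.168.190.{130..137}; do
  ping -c 1 -W 2 "$ip" >/dev/null && echo "$ip reachable" || echo "$ip UNREACHABLE"
done
chronyc tracking   # system-clock offset should be near zero on every node

# Step 3: identical /etc/hosts entries on every node
cat >> /etc/hosts <<'EOF'
192.168.190.130 h202406131
192.168.190.131 h202406132
192.168.190.132 h202406133
192.168.190.133 h202406134
192.168.190.134 h202406135
192.168.190.135 h202406136
192.168.190.136 h202406137
192.168.190.137 h202406138
EOF

# Step 4: passwordless SSH from the node you operate from to every node
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
for h in h2024061{31..38}; do ssh-copy-id "$h"; done

# Step 6: shared install root (assumption: /data, matching the commands below)
mkdir -p /data
```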
Install the JDK
```bash
# Install Java (the -devel package supplies tools.jar and dt.jar, which CLASSPATH references)
yum install -y java-1.8.0-openjdk.x86_64 java-1.8.0-openjdk-devel.x86_64
# Configure environment variables: append the following to /etc/profile
vim /etc/profile
export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.372.b07-1.el7_9.x86_64
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
# Reload the profile and verify the installation
source /etc/profile
java -version
```
Install ZooKeeper
Download from: http://archive.apache.org/dist/zookeeper/
```bash
cd /data
tar -xf apache-zookeeper-3.7.0-bin.tar.gz
mv apache-zookeeper-3.7.0-bin zookeeper
```
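Before it can start, ZooKeeper needs a zoo.cfg and a per-node ID. A minimal sketch, assuming /data/zookeeper/data as the data directory (an assumption) and the three ZooKeeper nodes from the table above:

```bash
mkdir -p /data/zookeeper/data
cat > /data/zookeeper/conf/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeper/data
clientPort=2181
server.1=h202406131:2888:3888
server.2=h202406132:2888:3888
server.3=h202406133:2888:3888
EOF
# myid must match the server.N entry for this host:
# 1 on h202406131, 2 on h202406132, 3 on h202406133
echo 1 > /data/zookeeper/data/myid
```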
Install Hadoop
Download command:
```bash
wget https://archive.apache.org/dist/hadoop/common/hadoop-3.3.1/hadoop-3.3.1.tar.gz
```
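Unpack it next to ZooKeeper and wire up the HA settings. The XML below is a minimal sketch, not a complete configuration: the nameservice ID mycluster, the NameNode IDs nn1/nn2, the JournalNode edits path, and the no-op fencing method are all assumptions, and yarn-site.xml and the workers file are omitted; the hostnames come from the table above:

```bash
cd /data
tar -xf hadoop-3.3.1.tar.gz
mv hadoop-3.3.1 hadoop
# Append to /etc/profile on every node, then `source /etc/profile`
export HADOOP_HOME=/data/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

# Essential HA settings in core-site.xml
cat > /data/hadoop/etc/hadoop/core-site.xml <<'EOF'
<configuration>
  <property><name>fs.defaultFS</name><value>hdfs://mycluster</value></property>
  <property><name>ha.zookeeper.quorum</name>
    <value>h202406131:2181,h202406132:2181,h202406133:2181</value></property>
</configuration>
EOF

# Essential HA settings in hdfs-site.xml
cat > /data/hadoop/etc/hadoop/hdfs-site.xml <<'EOF'
<configuration>
  <property><name>dfs.nameservices</name><value>mycluster</value></property>
  <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>h202406131:8020</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>h202406132:8020</value></property>
  <property><name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://h202406131:8485;h202406132:8485;h202406133:8485/mycluster</value></property>
  <property><name>dfs.journalnode.edits.dir</name><value>/data/hadoop/journal</value></property>
  <property><name>dfs.client.failover.proxy.provider.mycluster</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value></property>
  <property><name>dfs.ha.fencing.methods</name><value>shell(/bin/true)</value></property>
  <property><name>dfs.ha.automatic-failover.enabled</name><value>true</value></property>
</configuration>
EOF
```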
Startup procedure
1. Start ZooKeeper on each ZooKeeper node
Start: ./bin/zkServer.sh start
Check status: ./bin/zkServer.sh status
2. Start the JournalNode on each JournalNode host
Start: hadoop-daemon.sh start journalnode
3. On nn1, format and start the NameNode
Format the NameNode: hdfs namenode -format
Start the NameNode: hadoop-daemon.sh start namenode
4. On nn2, sync the metadata from nn1, then start the NameNode
Sync from nn1: hdfs namenode -bootstrapStandby
Start the NameNode: hadoop-daemon.sh start namenode
5. Stop the NameNodes on nn1 and nn2, initialize the HA state in ZooKeeper, then start both NameNodes again
Stop the NameNode: hadoop-daemon.sh stop namenode
Initialize the ZKFC state: hdfs zkfc -formatZK
Restart the NameNode: hadoop-daemon.sh start namenode
6. Start the history server on nn1
Start the JobHistoryServer: mr-jobhistory-daemon.sh start historyserver
7. Start the ZKFC on nn1 and nn2
Start the ZKFC: hadoop-daemon.sh start zkfc
8. Start the DataNode service on each DataNode host
Start the DataNode: hadoop-daemon.sh start datanode
9. Start the ResourceManager on nn1 and nn2
Start the ResourceManager: yarn-daemon.sh start resourcemanager
10. Start the NodeManager service on each NodeManager host
Start the NodeManager: yarn-daemon.sh start nodemanager
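With everything up, a quick verification pass (a sketch; nn1/nn2 match the NameNode IDs assumed in the config above, and rm1/rm2 assume yarn.resourcemanager.ha.rm-ids=rm1,rm2):

```bash
# Each node should show the processes assigned to it in the component table
jps

# One NameNode should report "active", the other "standby"
hdfs haadmin -getServiceState nn1
hdfs haadmin -getServiceState nn2

# Likewise for the two ResourceManagers
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2

# Expect six live DataNodes, matching the table above
hdfs dfsadmin -report
```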