Preface
Hadoop / ZooKeeper / HBase
Because resources are limited, all three are installed on the same CentOS 7.9 machine,
but through configuration they run logically in distributed mode.
1 Installing Java
1.1 Download Java 11
Extract the tarball to /opt/java/jdk-11.0.5/
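A minimal sketch of the extract step, assuming the JDK tarball has already been downloaded (the archive name is an assumption; adjust it to the file you actually have):
$ mkdir -p /opt/java
$ tar -zxvf jdk-11.0.5_linux-x64_bin.tar.gz -C /opt/java/   # assumed archive name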
1.2 Environment configuration
File: /etc/profile
export JAVA_HOME=/opt/java/jdk-11.0.5/
export CLASSPATH=$JAVA_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin
Run the following to apply the changes:
$ source /etc/profile
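To confirm the environment is picked up, the usual check is:
$ java -version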
2 Installing Hadoop
2.1 Create the hadoop user
$ adduser hadoop
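The original notes do not mention it, but start-dfs.sh later assumes the hadoop user can SSH into localhost without a password. A typical setup (an assumption, not recorded in the original) is:
$ passwd hadoop                                   # set a password for the new user
$ su - hadoop
$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa        # generate a key pair with an empty passphrase
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
$ chmod 600 ~/.ssh/authorized_keys
$ ssh localhost                                   # should log in without prompting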
2.2 Update the hadoop user's .bashrc
/home/hadoop/.bashrc
export HADOOP_HOME=/home/bigdata/hadoop/
export PATH=$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$PATH
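Apply the change (assuming you are working as the hadoop user):
$ source /home/hadoop/.bashrc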
2.3 Download Hadoop
The version used is 3.3.6.
Extract the tarball to /home/bigdata/hadoop/
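A minimal sketch of the extract step, assuming the standard Apache archive name; the rename is only so the paths used below match:
$ tar -zxvf hadoop-3.3.6.tar.gz -C /home/bigdata/
$ mv /home/bigdata/hadoop-3.3.6 /home/bigdata/hadoop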
2.4 Edit the Hadoop configuration
/home/bigdata/hadoop/etc/hadoop/core-site.xml
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/bigdata/hadoopTmp</value>
    <description>A base for other temporary directories.</description>
</property>
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
</property>
/home/bigdata/hadoop/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/java/jdk-11.0.5/
/home/bigdata/hadoop/etc/hadoop/hdfs-site.xml
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/bigdata/hadoopTmp/dfs/name</value>
</property>
<property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:/home/bigdata/hadoopTmp/dfs/namesecondary</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/bigdata/hadoopTmp/dfs/data</value>
</property>
2.5 Initialize (format) HDFS
$ cd /home/bigdata/hadoop
$ hdfs namenode -format
2.6 Start HDFS
$ cd /home/bigdata/hadoop/sbin/
$ ./start-dfs.sh
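If startup succeeded, jps run as the hadoop user should list NameNode, DataNode and SecondaryNameNode, and a quick sanity check against HDFS can be done with:
$ jps
$ hdfs dfs -ls /
For Hadoop 3.x the NameNode web UI is normally reachable on port 9870.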
3 Installing ZooKeeper
3.1 Download ZooKeeper
The version used is 3.8.3.
Extract the tarball to /home/bigdata/zookeeper/
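A minimal sketch of the extract step; ZooKeeper 3.8.x is distributed as a "-bin" archive, and the rename is only so the paths used below match:
$ tar -zxvf apache-zookeeper-3.8.3-bin.tar.gz -C /home/bigdata/
$ mv /home/bigdata/apache-zookeeper-3.8.3-bin /home/bigdata/zookeeper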
3.2 Edit the configuration
3.2.1 Rename the sample config
$ mv /home/bigdata/zookeeper/conf/zoo_sample.cfg /home/bigdata/zookeeper/conf/zoo.cfg
3.2.2 Open the config
$ vi /home/bigdata/zookeeper/conf/zoo.cfg
3.2.3 Modify the config
dataDir=/home/bigdata/zookeeper/dataDir
dataLogDir=/home/bigdata/zookeeper/dataLogDir
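It is safest to create these directories before the first start (an assumption; the original does not mention this step):
$ mkdir -p /home/bigdata/zookeeper/dataDir /home/bigdata/zookeeper/dataLogDir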
3.3 Start ZooKeeper
$ cd /home/bigdata/zookeeper/bin/
$ ./zkServer.sh start
Use jps to confirm the ZooKeeper process (QuorumPeerMain) is running.
3.4 Check the status
$ cd /home/bigdata/zookeeper/bin/
$ ./zkServer.sh status
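With the single ZooKeeper node configured here, the status output should report standalone mode.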
4 Installing HBase
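The original skips the download/extract step for HBase. By analogy with the sections above, download an HBase release and extract it to /home/bigdata/hbase/ (the version is not recorded in the original, so the archive name below is a placeholder):
$ tar -zxvf hbase-<version>-bin.tar.gz -C /home/bigdata/
$ mv /home/bigdata/hbase-<version> /home/bigdata/hbase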
4.1 Update .bashrc
export HBASE_HOME=/home/bigdata/hbase/
export PATH=$PATH:$HBASE_HOME/bin
Apply the changes:
$ source /home/hadoop/.bashrc
4.2 Edit the HBase configuration files
/home/bigdata/hbase/conf/hbase-env.sh
export JAVA_HOME=/opt/java/jdk-11.0.5/
export HBASE_CLASSPATH=/home/bigdata/hbase/conf/
export HBASE_MANAGES_ZK=false
/home/bigdata/hbase/conf/hbase-site.xml
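The original does not record the contents of hbase-site.xml. A minimal sketch for this single-node setup with an externally managed ZooKeeper might look like the following; the property names are standard HBase settings, and the HDFS address and ZooKeeper host are assumed to match the earlier core-site.xml and zoo.cfg:
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://localhost:9000/hbase</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>localhost</value>
</property>
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
</property>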
4.3 Start HBase
$ /home/bigdata/hbase/bin/start-hbase.sh
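After startup, jps should additionally show HMaster and HRegionServer; the HBase master web UI is normally on port 16010.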
4.4 HBase shell
$ /home/bigdata/hbase/bin/hbase shell
4.5 Common shell commands
create / list / describe / put / scan / get / disable / enable / drop / quit
create 'test', 'cf'                     # create table 'test' with one column family 'cf'
list 'test'
describe 'test'
put 'test', 'row1', 'cf:a', 'value1'    # insert cells into rows row1..row3
put 'test', 'row2', 'cf:b', 'value2'
put 'test', 'row3', 'cf:c', 'value3'
scan 'test'                             # read all rows
get 'test', 'row1'                      # read a single row
disable 'test'
enable 'test'
disable 'test'                          # a table must be disabled before it can be dropped
drop 'test'
4.6 Stop HBase
$ /home/bigdata/hbase/bin/hbase-daemon.sh stop master   # stop only the master daemon
$ /home/bigdata/hbase/bin/stop-hbase.sh                 # stop the whole HBase instance
Having gotten this far, I think it is still better to just read the official Apache documentation.