# Installing Single-Node Hadoop on CentOS 7

### 1. Extract the archive

#### (1) Copy the Hadoop tarball to /opt/software

#### (2) Extract Hadoop to /opt/module

```bash
[root@kb135 software]# tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module/
```

![](https://file.jishuzhan.net/article/1696222177730236417/41d8e886a64b4bf3917f9c128552fada.png)

#### (3) Change the owner and group of the Hadoop directory

```bash
[root@kb135 module]# chown -R root:root ./hadoop-3.1.3/
```

![](https://file.jishuzhan.net/article/1696222177730236417/5da0680b6f2e492cb5e480e060d726e6.png)

### 2. Configure the environment variables

```bash
[root@kb135 module]# vim /etc/profile
```

Append the following. HADOOP_HOME must point at the actual install path, /opt/module/hadoop-3.1.3:

```bash
# HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
```

![](https://file.jishuzhan.net/article/1696222177730236417/375624eaebe44733b0c33490ca702264.png)

After saving the file, reload it:

```bash
[root@kb135 module]# source /etc/profile
```

### 3. Create a data directory inside the Hadoop directory

```bash
[root@kb135 module]# cd ./hadoop-3.1.3/
```

Create the data directory:

```bash
[root@kb135 hadoop-3.1.3]# mkdir ./data
```

![](https://file.jishuzhan.net/article/1696222177730236417/31d22e3c684243c29896b8e02c166f33.png)

### 4. Edit the configuration files

Go to /opt/module/hadoop-3.1.3/etc/hadoop, look over the files in that directory, and edit the ones listed below.

![](https://file.jishuzhan.net/article/1696222177730236417/806722a8c04d4423b228bbb56892ae1b.png)

#### (1) Configure core-site.xml

```bash
[root@kb135 hadoop]# vim ./core-site.xml
```

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://kb135:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-3.1.3/data</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131073</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>
```

![](https://file.jishuzhan.net/article/1696222177730236417/360f1d7faabb4e67a23b14e7a5ba1341.png)

#### (2) Configure hadoop-env.sh

```bash
[root@kb135 hadoop]# vim ./hadoop-env.sh
```

Edit line 54:

```bash
export JAVA_HOME=/opt/module/jdk1.8.0_381
```

![](https://file.jishuzhan.net/article/1696222177730236417/52e886996872466bae31072f1f0e11d6.png)

#### (3) Configure hdfs-site.xml

```bash
[root@kb135 hadoop]# vim ./hdfs-site.xml
```

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/module/hadoop-3.1.3/data/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/module/hadoop-3.1.3/data/dfs/data</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
```

![](https://file.jishuzhan.net/article/1696222177730236417/223a3ab6c7d2438d8215f0edca53d61f.png)

#### (4) Configure mapred-site.xml

```bash
[root@kb135 hadoop]# vim ./mapred-site.xml
```

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>kb135:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>kb135:19888</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>/opt/module/hadoop-3.1.3/etc/hadoop:/opt/module/hadoop-3.1.3/share/hadoop/common/*:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/*:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/*:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/yarn/*:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/*</value>
  </property>
</configuration>
```

![](https://file.jishuzhan.net/article/1696222177730236417/cbe688012268452e93bf8673f1d7484b.png)
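Before moving on to the YARN settings, it can be worth confirming that the XML files edited so far are well formed, since a single malformed tag will make the daemons fail at startup with a parse error. A minimal sketch, assuming xmllint (from the libxml2 package that usually ships with CentOS 7) is available:

```bash
cd /opt/module/hadoop-3.1.3/etc/hadoop
# xmllint prints nothing for a valid file and reports syntax errors otherwise
for f in core-site.xml hdfs-site.xml mapred-site.xml; do
  xmllint --noout "$f" && echo "$f OK"
done
```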
#### (5) Configure yarn-site.xml

```bash
[root@kb135 hadoop]# vim ./yarn-site.xml
```

```xml
<configuration>
  <property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>20000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.nodemanager.localizer.address</name>
    <value>kb135:8040</value>
  </property>
  <property>
    <name>yarn.nodemanager.address</name>
    <value>kb135:8050</value>
  </property>
  <property>
    <name>yarn.nodemanager.webapp.address</name>
    <value>kb135:8042</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/opt/module/hadoop-3.1.3/yarndata/yarn</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/opt/module/hadoop-3.1.3/yarndata/log</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
</configuration>
```

![](https://file.jishuzhan.net/article/1696222177730236417/b2fe38c2f66f4a32917c8db02fd20b91.png)

#### (6) Configure workers

```bash
[root@kb135 hadoop]# vim ./workers
```

Change its contents to kb135.

![](https://file.jishuzhan.net/article/1696222177730236417/2c7b8e0c236746e0806b30ac6501d766.png)

### 5. Initialize Hadoop

Go to /opt/module/hadoop-3.1.3/bin and format the NameNode:

```bash
[root@kb135 bin]# hadoop namenode -format
```

### 6. Set up passwordless SSH login

```bash
[root@kb135 ~]# ssh-keygen -t rsa -P ""
[root@kb135 ~]# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
[root@kb135 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub -p22 root@kb135
```

### 7. Start Hadoop

```bash
[root@kb135 ~]# start-all.sh
```

Check the running processes:

```bash
[root@kb135 ~]# jps
```

For this single-node setup, jps should typically list NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager (plus Jps itself).

![](https://file.jishuzhan.net/article/1696222177730236417/b48eeb0481864bb8833a24fbc5a8ceaf.png)

### 8. Test

Open the NameNode web UI in a browser: [http://192.168.142.135:9870/](http://192.168.142.135:9870/ "http://192.168.142.135:9870/")

![](https://file.jishuzhan.net/article/1696222177730236417/3503c809ab9a42d5b0e21e70d0bc3e41.png)
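Beyond the web UI, a short command-line smoke test can confirm that HDFS and YARN work end to end. This is a sketch, assuming the daemons from step 7 are running and the environment variables from step 2 have been loaded; the /tmp/input and /tmp/output HDFS paths are arbitrary examples:

```bash
# Upload a small file into HDFS and list it
hdfs dfs -mkdir -p /tmp/input
hdfs dfs -put $HADOOP_HOME/etc/hadoop/core-site.xml /tmp/input
hdfs dfs -ls /tmp/input

# Run the bundled wordcount example on YARN
# (the output directory must not exist beforehand)
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar \
  wordcount /tmp/input /tmp/output

# Inspect the first few lines of the result
hdfs dfs -cat /tmp/output/part-r-00000 | head
```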
