Installing Hadoop (single-node) on CentOS 7

### 1. Extract the archive

#### (1) Copy the Hadoop tarball to /opt/software
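The article does not show the copy step itself; a minimal sketch, assuming the tarball was downloaded to /root (both the source path and the directory creation are assumptions, not from the original):

```bash
# Make sure the target directories used by this article exist
mkdir -p /opt/software /opt/module

# Copy the downloaded tarball into /opt/software
# (/root/hadoop-3.1.3.tar.gz is an assumed download location)
cp /root/hadoop-3.1.3.tar.gz /opt/software/
```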

#### (2) Extract Hadoop to /opt/module

```bash
[root@kb135 software]# tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module/
```

#### (3) Change the owner and group of the Hadoop directory

```bash
[root@kb135 module]# chown -R root:root ./hadoop-3.1.3/
```

### 2. Configure environment variables

```bash
[root@kb135 module]# vim /etc/profile
```

Append the following (HADOOP_HOME must point to the directory the tarball was extracted to):

```bash
# HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
```

After editing, reload the profile:

```bash
[root@kb135 module]# source /etc/profile
```

### 3. Create a data directory under the Hadoop directory

```bash
[root@kb135 module]# cd ./hadoop-3.1.3/
```

Create the `data` directory:

```bash
[root@kb135 hadoop-3.1.3]# mkdir ./data
```

### 4. Modify the configuration files

Go to the /opt/module/hadoop-3.1.3/etc/hadoop directory, look at the files it contains, and edit the required ones below.

#### (1) Configure core-site.xml

```bash
[root@kb135 hadoop]# vim ./core-site.xml
```

```xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://kb135:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131073</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
</configuration>
```

#### (2) Configure hadoop-env.sh

```bash
[root@kb135 hadoop]# vim ./hadoop-env.sh
```

Edit line 54 to point JAVA_HOME at the installed JDK:

```bash
export JAVA_HOME=/opt/module/jdk1.8.0_381
```

#### (3) Configure hdfs-site.xml

```bash
[root@kb135 hadoop]# vim ./hdfs-site.xml
```

```xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/module/hadoop-3.1.3/data/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/module/hadoop-3.1.3/data/dfs/data</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
```

#### (4) Configure mapred-site.xml

```bash
[root@kb135 hadoop]# vim ./mapred-site.xml
```

```xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>kb135:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>kb135:19888</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>/opt/module/hadoop-3.1.3/etc/hadoop:/opt/module/hadoop-3.1.3/share/hadoop/common/*:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/*:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/*:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/yarn/*:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/*</value>
    </property>
</configuration>
```
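Typing the `mapreduce.application.classpath` value by hand is error-prone. One option, not from the original article, is to take the value from the `hadoop classpath` command, which prints the classpath the Hadoop scripts compute for this installation:

```bash
# Print the effective Hadoop classpath; the output can be pasted into the
# mapreduce.application.classpath value above (requires /etc/profile to be sourced first)
hadoop classpath
```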
#### (5) Configure yarn-site.xml

```bash
[root@kb135 hadoop]# vim ./yarn-site.xml
```

```xml
<configuration>
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>20000</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <property>
        <name>yarn.nodemanager.localizer.address</name>
        <value>kb135:8040</value>
    </property>
    <property>
        <name>yarn.nodemanager.address</name>
        <value>kb135:8050</value>
    </property>
    <property>
        <name>yarn.nodemanager.webapp.address</name>
        <value>kb135:8042</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/opt/module/hadoop-3.1.3/yarndata/yarn</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/opt/module/hadoop-3.1.3/yarndata/log</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
```

#### (6) Configure workers

```bash
[root@kb135 hadoop]# vim ./workers
```

Change its content to `kb135`.

### 5. Initialize Hadoop

Go to the /opt/module/hadoop-3.1.3/bin directory and format the NameNode:

```bash
[root@kb135 bin]# hadoop namenode -format
```

### 6. Set up passwordless SSH login

```bash
[root@kb135 ~]# ssh-keygen -t rsa -P ""
[root@kb135 ~]# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
[root@kb135 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub -p22 root@kb135
```

### 7. Start Hadoop

```bash
[root@kb135 ~]# start-all.sh
```

Check the running processes:

```bash
[root@kb135 ~]# jps
```

### 8. Test

Open [http://192.168.142.135:9870/](http://192.168.142.135:9870/) in a browser to view the NameNode web UI.
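Besides the web UI, a quick functional check can be done from the shell. A minimal smoke test, assuming the install path used above (the HDFS target directory and the example job arguments are arbitrary choices, not from the original):

```bash
# Create a directory in HDFS, upload a file, and list it
hdfs dfs -mkdir -p /tmp/smoke
hdfs dfs -put /etc/profile /tmp/smoke/
hdfs dfs -ls /tmp/smoke

# Run the bundled MapReduce example to exercise YARN end to end
hadoop jar /opt/module/hadoop-3.1.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 2 10
```

While the example job runs, it should also appear in the YARN ResourceManager web UI (port 8088 by default).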
