Installing Hadoop (Single-Node) on CentOS 7

### 1. Extract the archive

#### (1) Copy the Hadoop tarball to /opt/software
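Before extracting, it is worth verifying the tarball's integrity. The following is a sketch demonstrated on a dummy file (the real tarball is not assumed to be present here); on the server you would verify `hadoop-3.1.3.tar.gz` against the `.sha512` file that Apache publishes alongside each release:

```shell
# Sketch of an integrity check, demonstrated on a stand-in file.
# On a real install: download hadoop-3.1.3.tar.gz.sha512 from the Apache
# mirror and run `sha512sum -c` against it in /opt/software.
cd "$(mktemp -d)"
echo "stand-in for hadoop-3.1.3.tar.gz" > hadoop-3.1.3.tar.gz
sha512sum hadoop-3.1.3.tar.gz > hadoop-3.1.3.tar.gz.sha512   # normally provided by Apache
sha512sum -c hadoop-3.1.3.tar.gz.sha512 && echo "checksum OK"
```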

#### (2) Extract Hadoop into /opt/module

```
[root@kb135 software]# tar -zxvf hadoop-3.1.3.tar.gz -C /opt/module/
```

#### (3) Change the owner and group of the Hadoop directory

```
[root@kb135 module]# chown -R root:root ./hadoop-3.1.3/
```

### 2. Configure environment variables

```
[root@kb135 module]# vim /etc/profile
```

Append the following (note that `HADOOP_HOME` must point at the actual install path, `/opt/module/hadoop-3.1.3`):

```
# HADOOP_HOME
export HADOOP_HOME=/opt/module/hadoop-3.1.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
```

After editing, reload the profile:

```
[root@kb135 module]# source /etc/profile
```

### 3. Create a data directory under the Hadoop directory

```
[root@kb135 module]# cd ./hadoop-3.1.3/
[root@kb135 hadoop-3.1.3]# mkdir ./data
```

### 4. Edit the configuration files

Go to the /opt/module/hadoop-3.1.3/etc/hadoop directory, list the files there, and edit the handful that a single-node setup requires.

#### (1) core-site.xml

```
[root@kb135 hadoop]# vim ./core-site.xml
```

```xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://kb135:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-3.1.3/data</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>root</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
  </property>
</configuration>
```
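A malformed `*-site.xml` is a common cause of daemons failing to start, so it can help to check that the file parses before moving on. The following is a minimal sketch, assuming `python3` is available; it writes an abbreviated copy of the core-site.xml above to a scratch directory and confirms it is well-formed:

```shell
# Sketch: confirm a Hadoop config file is well-formed XML before use.
# The file written here is an abbreviated copy of core-site.xml from above;
# on the server, point the parser at the real file under $HADOOP_CONF_DIR.
tmp=$(mktemp -d)
cat > "$tmp/core-site.xml" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://kb135:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/module/hadoop-3.1.3/data</value>
  </property>
</configuration>
EOF
python3 - "$tmp/core-site.xml" <<'PY'
import sys, xml.etree.ElementTree as ET
root = ET.parse(sys.argv[1]).getroot()
# Hadoop expects a top-level <configuration> containing <property> children.
assert root.tag == "configuration"
names = [p.findtext("name") for p in root.findall("property")]
print("properties:", ", ".join(names))
PY
```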
#### (2) hadoop-env.sh

```
[root@kb135 hadoop]# vim ./hadoop-env.sh
```

Set `JAVA_HOME` (around line 54):

```
export JAVA_HOME=/opt/module/jdk1.8.0_381
```

#### (3) hdfs-site.xml

```
[root@kb135 hadoop]# vim ./hdfs-site.xml
```

```xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/opt/module/hadoop-3.1.3/data/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/opt/module/hadoop-3.1.3/data/dfs/data</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
</configuration>
```

#### (4) mapred-site.xml

```
[root@kb135 hadoop]# vim ./mapred-site.xml
```

```xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>kb135:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>kb135:19888</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.reduce.memory.mb</name>
    <value>2048</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>/opt/module/hadoop-3.1.3/etc/hadoop:/opt/module/hadoop-3.1.3/share/hadoop/common/*:/opt/module/hadoop-3.1.3/share/hadoop/common/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/*:/opt/module/hadoop-3.1.3/share/hadoop/hdfs/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/*:/opt/module/hadoop-3.1.3/share/hadoop/mapreduce/lib/*:/opt/module/hadoop-3.1.3/share/hadoop/yarn/*:/opt/module/hadoop-3.1.3/share/hadoop/yarn/lib/*</value>
  </property>
</configuration>
```

#### (5) yarn-site.xml

```
[root@kb135 hadoop]# vim ./yarn-site.xml
```

```xml
<configuration>
  <property>
    <name>yarn.resourcemanager.connect.retry-interval.ms</name>
    <value>20000</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
  </property>
  <property>
    <name>yarn.nodemanager.localizer.address</name>
    <value>kb135:8040</value>
  </property>
  <property>
    <name>yarn.nodemanager.address</name>
    <value>kb135:8050</value>
  </property>
  <property>
    <name>yarn.nodemanager.webapp.address</name>
    <value>kb135:8042</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/opt/module/hadoop-3.1.3/yarndata/yarn</value>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/opt/module/hadoop-3.1.3/yarndata/log</value>
  </property>
  <property>
    <name>yarn.nodemanager.vmem-check-enabled</name>
    <value>false</value>
  </property>
</configuration>
```

#### (6) workers

```
[root@kb135 hadoop]# vim ./workers
```

Change the contents to `kb135`.

### 5. Format the NameNode

Go to /opt/module/hadoop-3.1.3/bin and run (equivalently, `hdfs namenode -format`):

```
[root@kb135 bin]# hadoop namenode -format
```

### 6. Set up passwordless SSH

```
[root@kb135 ~]# ssh-keygen -t rsa -P ""
[root@kb135 ~]# cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
[root@kb135 ~]# ssh-copy-id -i ~/.ssh/id_rsa.pub -p22 root@kb135
```

### 7. Start Hadoop

```
[root@kb135 ~]# start-all.sh
```

Check the running processes:

```
[root@kb135 ~]# jps
```

### 8. Test

Open http://192.168.142.135:9870/ in a browser; the NameNode web UI should load.
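Beyond the web UI, the `jps` check can be scripted. The sketch below runs against a hard-coded sample of `jps` output (the PIDs are made up); on the server you would replace the sample with `jps_output=$(jps)`. On a single-node install, all five daemons should appear:

```shell
# Sketch: verify that the five expected Hadoop daemons appear in jps output.
# The sample output below is illustrative; use jps_output=$(jps) on the server.
expected="NameNode DataNode SecondaryNameNode ResourceManager NodeManager"
jps_output="12033 NameNode
12167 DataNode
12388 SecondaryNameNode
12549 ResourceManager
12678 NodeManager
12990 Jps"
missing=""
for daemon in $expected; do
  # -w matches whole words, so "NameNode" does not match "SecondaryNameNode"
  printf '%s\n' "$jps_output" | grep -qw "$daemon" || missing="$missing $daemon"
done
if [ -z "$missing" ]; then
  echo "all daemons running"
else
  echo "not running:$missing"
fi
```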
