Installing the JDK on Linux and Setting Up a Hadoop Runtime Environment

1. Installing the JDK on Linux

(1) Download the JDK to /opt/install, create the directory /opt/soft, and extract the archive into it:

```bash
tar xvf ./jdk-8u321-linux-x64.tar.gz -C /opt/soft/
```

(2) Rename the extracted directory (here, to jdk180).

(3) Configure the environment variables: vim /etc/profile

```bash
# JAVA_HOME
export JAVA_HOME=/opt/soft/jdk180
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
```

(4) Reload the profile and check that the installation succeeded:

```bash
source /etc/profile
java -version
```
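As an extra sanity check that the profile took effect (a convenience step, not required; paths are the ones used above), the variables can be inspected from a new shell:

```bash
# Verify the JDK environment set in /etc/profile (paths from the steps above)
echo $JAVA_HOME     # should print /opt/soft/jdk180
which java          # should point into /opt/soft/jdk180 if no other JDK precedes it on PATH
java -version       # prints the installed JDK version
```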

2. Setting Up the Hadoop Runtime Environment

2.1 Install the JDK: see section 1 above.
2.2 Download and install Hadoop

Extract the Hadoop archive into /opt/soft, rename the directory to hadoop313, and change its owner to root (a command-level sketch of these steps follows at the end of this subsection). Configure the environment variables with vim /etc/profile, then run source /etc/profile once the edit is done:

```bash
# HADOOP_HOME
export HADOOP_HOME=/opt/soft/hadoop313
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
```

Create the data directory data under hadoop313, then change into the Hadoop configuration directory and list its files to prepare for configuration:

```bash
cd /opt/soft/hadoop313/etc/hadoop
```
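The extract/rename/chown steps at the top of 2.2, spelled out as shell commands. This is only a sketch: the archive name and download directory are assumptions, so adjust them to the actual release you downloaded.

```bash
# Sketch of the download/extract/rename/chown steps (archive name and path are assumptions)
tar xvf /opt/install/hadoop-3.1.3.tar.gz -C /opt/soft/
mv /opt/soft/hadoop-3.1.3 /opt/soft/hadoop313     # rename to hadoop313
chown -R root:root /opt/soft/hadoop313            # change ownership to root
mkdir -p /opt/soft/hadoop313/data                 # data directory referenced by hadoop.tmp.dir
```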
2.3 Configure single-node Hadoop

(1) Configure core-site.xml

```xml
<configuration>
    <!-- Address of the NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://kb129:9000</value>
    </property>
    <!-- Directory where Hadoop stores its data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/soft/hadoop313/data</value>
    </property>
    <!-- Static user for the HDFS web UI is root -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>root</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.hosts</name>
        <value>*</value>
    </property>
    <property>
        <name>hadoop.proxyuser.root.groups</name>
        <value>*</value>
    </property>
</configuration>
```

(2) Configure hdfs-site.xml

1) Edit hadoop-env.sh (a minimal sketch follows after this subsection).

2) Then configure hdfs-site.xml:

```xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/opt/soft/hadoop313/data/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/soft/hadoop313/data/dfs/data</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
</configuration>
```

(3) Configure yarn-site.xml

```xml
<configuration>
    <!-- Site specific YARN configuration properties -->
    <!-- Retry the ResourceManager connection every 20 s -->
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>20000</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <property>
        <name>yarn.nodemanager.localizer.address</name>
        <value>kb129:8040</value>
    </property>
    <property>
        <name>yarn.nodemanager.address</name>
        <value>kb129:8050</value>
    </property>
    <property>
        <name>yarn.nodemanager.webapp.address</name>
        <value>kb129:8042</value>
    </property>
    <!-- MapReduce uses the shuffle auxiliary service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>/opt/soft/hadoop313/yarndata/yarn</value>
    </property>
    <property>
        <name>yarn.nodemanager.log-dirs</name>
        <value>/opt/soft/hadoop313/yarndata/log</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>
</configuration>
```

(4) Configure workers: change its contents to kb129 (the hostname).

(5) Configure mapred-site.xml

```xml
<configuration>
    <!-- Run MapReduce jobs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>kb129:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>kb129:19888</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>/opt/soft/hadoop313/etc/hadoop:/opt/soft/hadoop313/share/hadoop/common/lib/*:/opt/soft/hadoop313/share/hadoop/common/*:/opt/soft/hadoop313/share/hadoop/hdfs/*:/opt/soft/hadoop313/share/hadoop/hdfs/lib/*:/opt/soft/hadoop313/share/hadoop/mapreduce/*:/opt/soft/hadoop313/share/hadoop/mapreduce/lib/*:/opt/soft/hadoop313/share/hadoop/yarn/*:/opt/soft/hadoop313/share/hadoop/yarn/lib/*</value>
    </property>
</configuration>
```
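The hadoop-env.sh edit mentioned in step (2) usually only needs to point Hadoop at the JDK. A minimal sketch, assuming the JDK path from section 1:

```bash
# In /opt/soft/hadoop313/etc/hadoop/hadoop-env.sh: set the JDK installed in section 1
export JAVA_HOME=/opt/soft/jdk180
```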
2.4 Start and test Hadoop

(1) Initialize (format) the cluster from the bin directory:

```bash
hadoop namenode -format
```

(2) Set up passwordless SSH login

Back in the home directory, generate a key pair for kb129:

```bash
ssh-keygen -t rsa -P ""
```

Append the local public key (~/.ssh/id_rsa.pub) to the local authorized_keys file so that SSH logins to this host can authenticate with the key:

```bash
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
```

Add the public key to the authorized key list of the remote host kb128 so it can also be reached via key-based SSH authentication:

```bash
ssh-copy-id -i ~/.ssh/id_rsa.pub -p22 root@kb128
```

(3) Start, stop, and inspect the daemons:

```bash
[root@kb129 hadoop]# start-all.sh
[root@kb129 hadoop]# stop-all.sh
[root@kb129 hadoop]# jps
15089 NodeManager
16241 Jps
14616 DataNode
13801 ResourceManager
14476 NameNode
16110 SecondaryNameNode
```

(4) Web test: open http://192.168.142.129:9870/ in a browser.
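As an optional smoke test after start-all.sh (not part of the original steps; the examples jar name assumes the Hadoop 3.1.3 layout used above), a small file can be written to HDFS and the bundled wordcount job run on YARN:

```bash
# Put a file into HDFS and run the bundled wordcount example (jar name is an assumption)
hdfs dfs -mkdir -p /input
hdfs dfs -put /etc/profile /input/
hdfs dfs -ls /input
yarn jar /opt/soft/hadoop313/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar \
  wordcount /input /output
hdfs dfs -cat /output/part-r-00000 | head
```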
