Hadoop in an IoT Architecture

Modify the /etc/hosts file

192.168.107.197 node1

192.168.107.196 node2

192.168.107.195 node3
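The same three entries should be present in /etc/hosts on every machine. As an optional check that name resolution works from node1 (hostnames as defined above):

ping -c 1 node1    # each should answer from the matching 192.168.107.x address
ping -c 1 node2
ping -c 1 node3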

Create a user and add it to a group

groupadd hadoop

useradd -g hadoop hduser

passwd hduser

vim /etc/sudoers

hduser ALL=(ALL) ALL
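An optional sanity check, assuming the sudoers entry above was saved: switch to the new account and confirm it can escalate (sudo will prompt for hduser's password):

su - hduser
sudo whoami    # should print: root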

Install the JDK

rpm -ivh jdk-8u171-linux-x64.rpm

vim /etc/profile

export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64

export CLASSPATH=$JAVA_HOME/lib:$CLASSPATH

export PATH=$JAVA_HOME/bin:$PATH

source /etc/profile

java -version
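If the RPM installed correctly and /etc/profile was sourced, the version check should report release 1.8.0_171, roughly like:

java version "1.8.0_171"
(followed by the Java(TM) SE Runtime Environment and HotSpot 64-Bit Server VM build lines)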

Configure passwordless SSH login

ssh-keygen -t rsa

ssh-copy-id node1

ssh-copy-id node2

ssh-copy-id node3
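These key steps are normally run as hduser on node1 so that the Hadoop start scripts can reach every node without a password. A quick verification (each command should print the remote hostname without prompting):

ssh node1 hostname
ssh node2 hostname
ssh node3 hostname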

Fully distributed Hadoop installation

cd /home/hduser
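If the hadoop-2.6.5.tar.gz archive is not already in /home/hduser, one way to fetch it is from the Apache release archive (any mirror works):

wget https://archive.apache.org/dist/hadoop/common/hadoop-2.6.5/hadoop-2.6.5.tar.gz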

tar zxf hadoop-2.6.5.tar.gz

mv hadoop-2.6.5 hadoop

Set the Hadoop environment variables

vim /etc/profile

#hadoop

export HADOOP_HOME=/home/hduser/hadoop

export PATH=$HADOOP_HOME/bin:$PATH

source /etc/profile
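A quick check that the new PATH entry took effect; the command should print the Hadoop 2.6.5 version banner:

hadoop version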

Configure Hadoop:

vim /home/hduser/hadoop/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64

vim /home/hduser/hadoop/etc/hadoop/yarn-env.sh

export JAVA_HOME=/usr/java/jdk1.8.0_171-amd64

vim /home/hduser/hadoop/etc/hadoop/slaves

node2

node3

vim /home/hduser/hadoop/etc/hadoop/core-site.xml

<property>
  <name>fs.defaultFS</name>
  <value>hdfs://node1:9000</value>
</property>

<property>
  <name>hadoop.tmp.dir</name>
  <value>file:/home/hduser/hadoop/tmp</value>
</property>

vim /home/hduser/hadoop/etc/hadoop/hdfs-site.xml

<property>
  <name>dfs.namenode.secondary.http-address</name>
  <value>node1:50090</value>
</property>

<property>
  <name>dfs.namenode.name.dir</name>
  <value>file:/home/hduser/hadoop/dfs/name</value>
</property>

<property>
  <name>dfs.datanode.data.dir</name>
  <value>file:/home/hduser/hadoop/dfs/data</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>

<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>
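Note: Hadoop 2.6.5 ships only a mapred-site.xml.template; if mapred-site.xml does not exist yet, create it from the template before editing:

cp /home/hduser/hadoop/etc/hadoop/mapred-site.xml.template /home/hduser/hadoop/etc/hadoop/mapred-site.xml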

vim /home/hduser/hadoop/etc/hadoop/mapred-site.xml

<property>
  <name>mapreduce.framework.name</name>
  <value>yarn</value>
</property>

<property>
  <name>mapreduce.jobhistory.address</name>
  <value>node1:10020</value>
</property>

<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>node1:19888</value>
</property>

vim /home/hduser/hadoop/etc/hadoop/yarn-site.xml

<property>
  <name>yarn.nodemanager.aux-services</name>
  <value>mapreduce_shuffle</value>
</property>

<property>
  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>

<property>
  <name>yarn.resourcemanager.address</name>
  <value>node1:8032</value>
</property>

<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>node1:8030</value>
</property>

<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>node1:8035</value>
</property>

<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>node1:8033</value>
</property>

<property>
  <name>yarn.resourcemanager.webapp.address</name>
  <value>node1:8088</value>
</property>

scp -r /home/hduser/hadoop node2:/home/hduser

scp -r /home/hduser/hadoop node3:/home/hduser
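Note that node2 and node3 also need the JDK, the /etc/hosts entries, and the /etc/profile settings shown earlier. One way to stage the JDK package on the slaves (paths assumed to match node1), then repeat the rpm and profile steps there:

scp jdk-8u171-linux-x64.rpm node2:/home/hduser/
scp jdk-8u171-linux-x64.rpm node3:/home/hduser/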

Verify the installation and configuration:

cd /home/hduser/hadoop

bin/hdfs namenode -format

sbin/start-dfs.sh

jps
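After start-dfs.sh, jps output is expected to look roughly like this (process IDs omitted, and will differ):

# on node1
NameNode
SecondaryNameNode
Jps

# on node2 and node3
DataNode
Jps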

sbin/start-yarn.sh

sbin/start-all.sh    # optional here: equivalent to running start-dfs.sh and start-yarn.sh (deprecated in Hadoop 2.x)

bin/hdfs dfsadmin -report
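If both slaves registered, the report should list two live DataNodes (node2 and node3), with a header roughly like:

Live datanodes (2):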

Open the NameNode web UI in a browser: http://192.168.107.197:50070

sbin/stop-all.sh

cd /home/hduser

mkdir file

cd file

echo "Hello World hi HADOOP" > file1.txt

echo "Hello hadoop hi CHINA" > file2.txt

cd /home/hduser/hadoop

sbin/start-all.sh

bin/hadoop fs -mkdir /input2

bin/hadoop fs -put /home/hduser/file/file* /input2

bin/hadoop fs -ls /input2

bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.5.jar wordcount /input2/ /output2/wordcount1

bin/hadoop fs -cat /output2/wordcount1/*
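For the two sample files above, wordcount is case-sensitive, so the expected output is:

CHINA	1
HADOOP	1
Hello	2
World	1
hadoop	1
hi	2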

Common HDFS commands:

hdfs fsck / -files -blocks

sbin/start-balancer.sh

hadoop fs -mkdir /user

hadoop fs -mkdir -p /user/hadoop/dir1 /user/hadoop/dir2    # -p creates the missing parent /user/hadoop

hadoop fs -ls /input2/file1.txt

hadoop fs -ls /input2/

hadoop fs -cat /input2/file1.txt /input2/file2.txt

Transferring files

hadoop fs -put /home/hduser/file/file1.txt /input2

hadoop fs -put /home/hduser/file/file1.txt /home/hduser/file/file2.txt /input2

hadoop fs -get /input2/file1.txt $HOME/file.txt

hadoop fs -mv /input2/file1.txt /input2/file2.txt /user/hadoop/dir1

hadoop fs -cp /input2/file1.txt /input2/file2.txt /user/hadoop/dir1

hadoop fs -cp file:///file1.txt file:///file2.txt file:///tmp

hadoop fs -rm /input2/file3.txt

hadoop fs -rmr /input2    # deprecated; hadoop fs -rm -r /input2 is now recommended

hadoop fs -test -e /input2/file3.txt

hadoop fs -test -z /input2/file1.txt
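The -test subcommands print nothing and report their result through the exit code, so they are usually checked with $? right afterwards:

hadoop fs -test -e /input2/file3.txt
echo $?    # 0 if the file exists, non-zero otherwise

hadoop fs -test -z /input2/file1.txt
echo $?    # 0 if the file has zero length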
