MapReduce WordCount in Practice (IDEA Edition)

Environment

Linux: Hadoop 2.x

Windows: JDK 1.8, Maven 3, IDEA 2021

Steps

Programming Analysis

The programming analysis covers:

1. Data flow analysis: how the data moves from input to output.

2. Data type analysis: the input and output types of Map, and the input and output types of Reduce.

This analysis determines how we write the code.
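For WordCount, the analysis works out as follows. A sketch of the key/value flow, assuming a single input line "I love Beijing" (Hadoop hands the mapper the byte offset of each line as the key and the line itself as the value):

<k1, v1> = <0, "I love Beijing">                        (Map input: offset, line)
<k2, v2> = <"I", 1>, <"love", 1>, <"Beijing", 1>        (Map output: word, 1)
<k3, v3> = <"Beijing", (1)>, <"I", (1)>, <"love", (1)>  (Reduce input after shuffle: word, list of 1s)
<k4, v4> = <"Beijing", 1>, <"I", 1>, <"love", 1>        (Reduce output: word, total count)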

Create a Maven Project

Open IDEA --> click File --> New --> Project

Select Maven --> click Next

Choose an empty directory as the project directory, for example one named wordcount; it is best if the path contains no Chinese characters or spaces. Click Finish.

Add Dependencies

Edit pom.xml and add the following dependencies:

    <dependencies>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-common</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-hdfs</artifactId>
            <version>2.7.3</version>
        </dependency>
        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-mapreduce-client-core</artifactId>
            <version>2.7.3</version>
        </dependency>
    </dependencies>

Load the Dependencies

Reload the Maven project in IDEA so that the dependencies above are downloaded.

Create a Package

Under the src\main\java directory, create a new package and enter org.example as its name.

Create the Classes

Under the org.example package, create three classes: MyMapper, MyReducer, and MyMain.

Write the Map Program

Edit the MyMapper class; the steps are as follows (a code sketch follows the list):

1. Extend Mapper
2. Override the map() method
3. Write the Map logic:
	1. Convert v1 from Text to String
	2. Split the line into words on spaces with split(" ")
	3. Emit k2, v2
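A minimal sketch of what MyMapper can look like, assuming the standard WordCount types (<k1, v1> = <LongWritable offset, Text line>, <k2, v2> = <Text word, IntWritable 1>):

package org.example;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

// <k1, v1> = <byte offset, line of text>; <k2, v2> = <word, 1>
public class MyMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable k1, Text v1, Context context)
            throws IOException, InterruptedException {
        // 1. Convert v1 from Text to String
        String line = v1.toString();
        // 2. Split the line into words on spaces
        for (String w : line.split(" ")) {
            // 3. Emit <k2, v2> = <word, 1>
            word.set(w);
            context.write(word, ONE);
        }
    }
}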

Write the Reduce Program

Edit the MyReducer class; the steps are as follows (a code sketch follows the list):

1. Extend Reducer
2. Override the reduce() method
3. Write the Reduce logic:
    1. k4 = k3
    2. v4 = the sum of the elements in v3
    3. Emit k4, v4
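A minimal sketch of what MyReducer can look like, matching the mapper above:

package org.example;

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// <k3, v3> = <word, (1, 1, ...)>; <k4, v4> = <word, total count>
public class MyReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text k3, Iterable<IntWritable> v3, Context context)
            throws IOException, InterruptedException {
        // v4 = the sum of the elements in v3
        int total = 0;
        for (IntWritable count : v3) {
            total += count.get();
        }
        // k4 = k3; emit <k4, v4>
        context.write(k3, new IntWritable(total));
    }
}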

Write the Main Program (the Driver)

Edit the MyMain class; the steps are as follows (a code sketch follows the list):

1. Create a job and set the entry point (the main class)
2. Set the job's mapper and its output types <k2, v2>
3. Set the job's reducer and its output types <k4, v4>
4. Set the job's input and output paths
5. Run the job
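A minimal sketch of what MyMain can look like, assuming the input and output paths are passed as the two command-line arguments (as in the hadoop jar command later in this post):

package org.example;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MyMain {

    public static void main(String[] args) throws Exception {
        // 1. Create a job and set the entry point (the main class)
        Job job = Job.getInstance(new Configuration());
        job.setJarByClass(MyMain.class);

        // 2. Set the mapper and its output types <k2, v2>
        job.setMapperClass(MyMapper.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);

        // 3. Set the reducer and its output types <k4, v4>
        job.setReducerClass(MyReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        // 4. Set the input and output paths (passed on the command line)
        FileInputFormat.setInputPaths(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        // 5. Run the job and wait for it to finish
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}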

Think About It

Once the code is written, can it be run locally on Windows first?

Packaging
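You can package from IDEA's Maven tool window or from the command line. A minimal sketch of the command-line route, assuming Maven is on the PATH and the command is run in the project root:

mvn clean package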

When you see BUILD SUCCESS, the packaging has succeeded.

The jar produced by the build is in the project's target directory.

Submit to the Hadoop Cluster and Run

1. Upload the jar built in the previous step to Linux.

2. Start the Hadoop cluster:

start-all.sh

3. Run the jar.

First, upload a file from the local Linux filesystem to HDFS:
hdfs dfs -put 1.txt /input/1.txt

View the input data in HDFS.
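For example, a sketch of the commands, assuming 1.txt was uploaded to /input as above:

hdfs dfs -ls /input
hdfs dfs -cat /input/1.txt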

Run the jar:
hadoop jar wordcount-1.0-SNAPSHOT.jar org.example.MyMain /input/1.txt /output/wordcount

A normal run prints output like the following:
[hadoop@node1 ~]$ hadoop jar wordcount-1.0-SNAPSHOT.jar org.example.MyMain /input/1.txt /output/wordcount
22/03/29 00:23:59 INFO client.RMProxy: Connecting to ResourceManager at node1/192.168.193.140:8032
22/03/29 00:23:59 WARN mapreduce.JobResourceUploader: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
22/03/29 00:24:00 INFO input.FileInputFormat: Total input paths to process : 1
22/03/29 00:24:00 INFO mapreduce.JobSubmitter: number of splits:1
22/03/29 00:24:01 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1648484275192_0001
22/03/29 00:24:01 INFO impl.YarnClientImpl: Submitted application application_1648484275192_0001
22/03/29 00:24:01 INFO mapreduce.Job: The url to track the job: http://node1:8088/proxy/application_1648484275192_0001/
22/03/29 00:24:01 INFO mapreduce.Job: Running job: job_1648484275192_0001
22/03/29 00:24:08 INFO mapreduce.Job: Job job_1648484275192_0001 running in uber mode : false
22/03/29 00:24:08 INFO mapreduce.Job:  map 0% reduce 0%
22/03/29 00:24:12 INFO mapreduce.Job:  map 100% reduce 0%
22/03/29 00:24:17 INFO mapreduce.Job:  map 100% reduce 100%
22/03/29 00:24:19 INFO mapreduce.Job: Job job_1648484275192_0001 completed successfully
22/03/29 00:24:19 INFO mapreduce.Job: Counters: 49
	File System Counters
		FILE: Number of bytes read=55
		FILE: Number of bytes written=237261
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=119
		HDFS: Number of bytes written=25
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters 
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=2290
		Total time spent by all reduces in occupied slots (ms)=2516
		Total time spent by all map tasks (ms)=2290
		Total time spent by all reduce tasks (ms)=2516
		Total vcore-milliseconds taken by all map tasks=2290
		Total vcore-milliseconds taken by all reduce tasks=2516
		Total megabyte-milliseconds taken by all map tasks=2344960
		Total megabyte-milliseconds taken by all reduce tasks=2576384
	Map-Reduce Framework
		Map input records=2
		Map output records=4
		Map output bytes=41
		Map output materialized bytes=55
		Input split bytes=94
		Combine input records=0
		Combine output records=0
		Reduce input groups=3
		Reduce shuffle bytes=55
		Reduce input records=4
		Reduce output records=3
		Spilled Records=8
		Shuffled Maps =1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=103
		CPU time spent (ms)=1200
		Physical memory (bytes) snapshot=425283584
		Virtual memory (bytes) snapshot=4223356928
		Total committed heap usage (bytes)=277348352
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters 
		Bytes Read=25
	File Output Format Counters 
		Bytes Written=25
[hadoop@node1 ~]$ 

View the output.
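A sketch of the command; part-r-00000 is the default name of the output file when the job runs a single reducer:

hdfs dfs -cat /output/wordcount/part-r-00000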

Think About It

  • If an error is reported during the run, how would you resolve it?

  • Can the code be optimized further? How?

Done! Enjoy it!
