4. MapReduce Serialization

Overview

Serialization is a key link in distributed computing: a good serialization format can greatly reduce the amount of data a distributed job sends over the network.

Serialization

Serialization: object → byte sequence, used when storing to disk or transferring over the network.

MapReduce, Spark, and Flink are all distributed execution frameworks, so network transfer is unavoidable.

Java's built-in serialization: Serializable.

Hadoop's serialization was designed to be compact, fast, extensible, and interoperable.

Spark uses a different serialization framework, Kryo.
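Not in the original post: a minimal sketch of what Kryo usage looks like, assuming the Kryo 5 API (com.esotericsoftware:kryo on the classpath); the class names KryoDemo and Point are made up for illustration.

import com.esotericsoftware.kryo.Kryo;
import com.esotericsoftware.kryo.io.Input;
import com.esotericsoftware.kryo.io.Output;

import java.io.FileInputStream;
import java.io.FileNotFoundException;
import java.io.FileOutputStream;

public class KryoDemo {

    // a plain class with a no-arg constructor, which Kryo uses by default
    static class Point {
        int x;
        int y;
    }

    public static void main(String[] args) throws FileNotFoundException {
        Kryo kryo = new Kryo();
        // Kryo 5 requires classes to be registered by default
        kryo.register(Point.class);

        Point p = new Point();
        p.x = 1;
        p.y = 2;

        // serialize (assumes the download/ directory exists)
        Output output = new Output(new FileOutputStream("download/point.kryo"));
        kryo.writeObject(output, p);
        output.close();

        // deserialize
        Input input = new Input(new FileInputStream("download/point.kryo"));
        Point copy = kryo.readObject(input, Point.class);
        input.close();

        System.out.println(copy.x + "," + copy.y);
    }
}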

Deserialization

Deserialization: byte sequence → object.

Java's two built-in approaches

Serializable

Serializable is Java's built-in serialization mechanism. It is simple and convenient, but its output is bulky, which makes it a poor fit for moving large volumes of data across the network.

import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class JavaSerDemo {

    public static void main(String[] args) throws IOException, ClassNotFoundException {
        Person person = new Person(1, "张三", 33);
        // serialize: object -> bytes on disk (assumes the download/ directory exists)
        ObjectOutputStream out = new ObjectOutputStream(new FileOutputStream("download/person.obj"));
        out.writeObject(person);
        out.close();

        // deserialize: bytes -> object
        ObjectInputStream in = new ObjectInputStream(new FileInputStream("download/person.obj"));
        Object o = in.readObject();
        in.close();
        System.out.println(o);
    }


    static class Person implements Serializable {
        private int id;
        private String name;
        private int age;

        public Person(int id, String name, int age) {
            this.id = id;
            this.name = name;
            this.age = age;
        }

        @Override
        public String toString() {
            return "Person{" +
                    "id=" + id +
                    ", name='" + name + '\'' +
                    ", age=" + age +
                    '}';
        }

        public int getId() {
            return id;
        }

        public void setId(int id) {
            this.id = id;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }

        public int getAge() {
            return age;
        }

        public void setAge(int age) {
            this.age = age;
        }
    }
}
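One Serializable detail the demo skips (my addition): without an explicit serialVersionUID, the JVM derives one from the class structure, so later changes to Person can make previously written files unreadable. Pinning it inside the class is cheap:

    // inside Person: pins the stream version so old person.obj files
    // stay readable across compatible changes to the class
    private static final long serialVersionUID = 1L;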

Without Serializable

With DataOutputStream and DataInputStream, fields are written directly as raw primitives, so the class does not need to implement Serializable and the output is far more compact:

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class DataSerDemo {

    public static void main(String[] args) throws IOException {

        Person person = new Person(1, "张三", 33);
        DataOutputStream out = new DataOutputStream(new FileOutputStream("download/person2.obj"));
        // only id and name are written here, so only they can be read back
        out.writeInt(person.getId());
        out.writeUTF(person.getName());
        out.close();

        DataInputStream in = new DataInputStream(new FileInputStream("download/person2.obj"));
        // Note: fields must be read back in exactly the order they were written
        int id = in.readInt();
        String name = in.readUTF();
        in.close();
        System.out.println("id:" + id + " name:" + name);

    }

    /**
     * Note: this Person does NOT implement Serializable.
     */
    static class Person {
        private int id;
        private String name;
        private int age;

        public Person(int id, String name, int age) {
            this.id = id;
            this.name = name;
            this.age = age;
        }

        @Override
        public String toString() {
            return "Person{" +
                    "id=" + id +
                    ", name='" + name + '\'' +
                    ", age=" + age +
                    '}';
        }

        public int getId() {
            return id;
        }

        public void setId(int id) {
            this.id = id;
        }

        public String getName() {
            return name;
        }

        public void setName(String name) {
            this.name = name;
        }

        public int getAge() {
            return age;
        }

        public void setAge(int age) {
            this.age = age;
        }
    }
}
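To make the size gap concrete, a quick check (my addition, assuming both demos above have already written their files):

import java.io.File;

public class SizeCompare {
    public static void main(String[] args) {
        // ObjectOutputStream embeds class metadata in the stream;
        // DataOutputStream wrote only the raw field bytes
        System.out.println("Serializable: " + new File("download/person.obj").length() + " bytes");
        System.out.println("DataOutput:   " + new File("download/person2.obj").length() + " bytes");
    }
}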

Hadoop Serialization

From the official MapReduce tutorial:

The key and value classes have to be serializable by the framework and hence need to implement the Writable interface. Additionally, the key classes have to implement the WritableComparable interface to facilitate sorting by the framework.

Note that Writable declares just two methods, write and readFields:
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface Writable {

  void write(DataOutput out) throws IOException;

  void readFields(DataInput in) throws IOException;
}
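Note the second requirement in the quote above: key classes must also implement WritableComparable so the framework can sort them during the shuffle. A minimal sketch (IdKey is a hypothetical name, not from the original post):

import org.apache.hadoop.io.WritableComparable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class IdKey implements WritableComparable<IdKey> {

    private int id;

    public IdKey() {
    }

    public IdKey(int id) {
        this.id = id;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(id);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        id = in.readInt();
    }

    @Override
    public int compareTo(IdKey other) {
        // defines the sort order applied during the shuffle
        return Integer.compare(id, other.id);
    }

    @Override
    public int hashCode() {
        // the default HashPartitioner routes keys to reducers by hashCode()
        return id;
    }

    @Override
    public boolean equals(Object obj) {
        return obj instanceof IdKey && ((IdKey) obj).id == id;
    }
}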

Hands-on

A custom Writable that carries a person's record:

import org.apache.hadoop.io.Writable;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

public class PersonWritable implements Writable {

    private int id;
    private String name;
    private int age;
    // consumption for this record
    private int consumption;
    // total consumption across all of this person's records
    private long consumptions;


    public PersonWritable() {
    }

    public PersonWritable(int id, String name, int age, int consumption) {
        this.id = id;
        this.name = name;
        this.age = age;
        this.consumption = consumption;
    }

    public PersonWritable(int id, String name, int age, int consumption, long consumptions) {
        this.id = id;
        this.name = name;
        this.age = age;
        this.consumption = consumption;
        this.consumptions = consumptions;
    }

    public int getId() {
        return id;
    }

    public void setId(int id) {
        this.id = id;
    }

    public String getName() {
        return name;
    }

    public void setName(String name) {
        this.name = name;
    }

    public int getAge() {
        return age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    public int getConsumption() {
        return consumption;
    }

    public void setConsumption(int consumption) {
        this.consumption = consumption;
    }

    public long getConsumptions() {
        return consumptions;
    }

    public void setConsumptions(long consumptions) {
        this.consumptions = consumptions;
    }

    @Override
    public String toString() {
        return "id=" + id +
                ", name='" + name + '\'' +
                ", age=" + age +
                ", consumption=" + consumption +
                ", consumptions=" + consumptions;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        // serialize fields in a fixed order
        out.writeInt(id);
        out.writeUTF(name);
        out.writeInt(age);
        out.writeInt(consumption);
        out.writeLong(consumptions);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        // deserialize fields in exactly the order write() produced them
        id = in.readInt();
        name = in.readUTF();
        age = in.readInt();
        consumption = in.readInt();
        consumptions = in.readLong();
    }
}
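Because write and readFields work against plain DataOutput/DataInput, a Writable can be sanity-checked outside Hadoop. A quick round-trip test (my addition, not in the original post):

import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;

public class PersonWritableRoundTrip {

    public static void main(String[] args) throws IOException {
        PersonWritable original = new PersonWritable(1, "张三", 33, 10, 0);

        // serialize with the Writable's own write()
        DataOutputStream out = new DataOutputStream(new FileOutputStream("download/person3.obj"));
        original.write(out);
        out.close();

        // readFields() fills an existing instance, which is why
        // Writables need a no-arg constructor
        PersonWritable copy = new PersonWritable();
        DataInputStream in = new DataInputStream(new FileInputStream("download/person3.obj"));
        copy.readFields(in);
        in.close();

        System.out.println(copy);
    }
}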
The MapReduce job below uses PersonWritable as the map output value, totaling consumption per person id:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

import java.io.IOException;

/**
 * Totals each person's consumption.
 */
public class PersonStatistics {

    static class PersonStatisticsMapper extends Mapper<LongWritable, Text, IntWritable, PersonWritable> {
        @Override
        protected void map(LongWritable key, Text value, Context context) throws IOException, InterruptedException {
            // input line format: id,name,age,consumption
            String[] split = value.toString().split(",");
            int id = Integer.parseInt(split[0]);
            String name = split[1];
            int age = Integer.parseInt(split[2]);
            int consumption = Integer.parseInt(split[3]);
            PersonWritable writable = new PersonWritable(id, name, age, consumption, 0);
            context.write(new IntWritable(id), writable);
        }
    }

    static class PersonStatisticsReducer extends Reducer<IntWritable, PersonWritable, NullWritable, PersonWritable> {
        @Override
        protected void reduce(IntWritable key, Iterable<PersonWritable> values, Context context) throws IOException, InterruptedException {
            long total = 0L;
            PersonWritable person = null;
            for (PersonWritable data : values) {
                if (person == null) {
                    // Hadoop reuses the value instance across iterations,
                    // so copy the fields instead of keeping a reference
                    person = new PersonWritable(data.getId(), data.getName(), data.getAge(), data.getConsumption(), 0);
                }
                total += data.getConsumption();
            }
            person.setConsumptions(total);
            context.write(NullWritable.get(), person);
        }
    }

    public static void main(String[] args) throws IOException, InterruptedException, ClassNotFoundException {
        Configuration configuration = new Configuration();

        String sourcePath = "data/person.data";
        String distPath = "downloadOut/person-out.data";

        // project-local helper (not part of Hadoop): removes the output path if it already exists
        FileUtil.deleteIfExist(configuration, distPath);

        Job job = Job.getInstance(configuration, "person statistics");
        job.setJarByClass(PersonStatistics.class);
        // this reducer cannot double as a combiner: a combiner must emit the map
        // output types (IntWritable, PersonWritable), but it emits (NullWritable, PersonWritable)
        //job.setCombinerClass(PersonStatistics.PersonStatisticsReducer.class);
        job.setMapperClass(PersonStatisticsMapper.class);
        job.setReducerClass(PersonStatisticsReducer.class);
        job.setMapOutputKeyClass(IntWritable.class);
        job.setMapOutputValueClass(PersonWritable.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(PersonWritable.class);

        FileInputFormat.addInputPath(job, new Path(sourcePath));
        FileOutputFormat.setOutputPath(job, new Path(distPath));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
Sample input (data/person.data):
1,张三,30,10
1,张三,30,20
2,李四,25,5

With this input, the job emits one aggregated row per id: the two records for id 1 combine into consumptions=30, while id 2's single record yields consumptions=5.

Splits: InputFormat & InputSplit

See the official javadoc for the two classes involved:

org.apache.hadoop.mapreduce.InputFormat
org.apache.hadoop.mapreduce.InputSplit
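For file-based input, FileInputFormat computes the splits. If you need to influence how many splits (and therefore map tasks) a job gets, the split size can be bounded per job; a sketch using FileInputFormat's static setters, with arbitrary example sizes:

// each InputSplit is handed to one map task, so bounding the split
// size bounds how much data a single mapper reads
FileInputFormat.setMinInputSplitSize(job, 64L * 1024 * 1024);   // 64 MB (example value)
FileInputFormat.setMaxInputSplitSize(job, 128L * 1024 * 1024);  // 128 MB (example value)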

Logs

When running the job above, watch for the following lines. The single small input file produces just one split, and each split is processed by one map task:

# one input file in total, producing a single split
2024-01-06 09:19:42,363 [main] [org.apache.hadoop.mapreduce.lib.input.FileInputFormat] [INFO] - Total input files to process : 1
2024-01-06 09:19:42,487 [main] [org.apache.hadoop.mapreduce.JobSubmitter] [INFO] - number of splits:1

Wrap-up

That wraps up MapReduce serialization. If you have any questions, feel free to leave a comment.
