ViewFs And Federation On HDFS

Preface

ViewFs is built on top of Federation: it lets a client reach multiple namespaces through a single HDFS path. The technique that goes with ViewFs is the client-side mount table; the concrete mapping rules can be placed in core-site.xml or kept in a separate mount-table file.

In short, ViewFs is just a middle layer: it connects to the different NameNodes and hands the results back to our client program. ViewFs therefore has to implement the complete HDFS interface so that it can forward and manage every request. This also brings some problems it cannot solve, such as incompatibilities between different NameNode versions across the namespaces.
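
For concreteness, here is a minimal sketch of a client-side mount table in core-site.xml. The mount-table name ClusterX and the /user and /data link paths are made up for illustration; the NameNode addresses match the federation configured below.

<configuration>
    <!-- The client's default filesystem becomes the viewfs namespace -->
    <property>
        <name>fs.defaultFS</name>
        <value>viewfs://ClusterX</value>
    </property>

    <!-- /user resolves to namespace ns1 on hadoop -->
    <property>
        <name>fs.viewfs.mounttable.ClusterX.link./user</name>
        <value>hdfs://hadoop:9000/user</value>
    </property>

    <!-- /data resolves to namespace ns2 on hadoop1 -->
    <property>
        <name>fs.viewfs.mounttable.ClusterX.link./data</name>
        <value>hdfs://hadoop1:9000/data</value>
    </property>
</configuration>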

Federation Of HDFS

Setting up a plain federation by itself is fairly simple.

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>


    <property>
        <name>hadoop.tmp.dir</name>
        <value>/soft/hadoop/data_hadoop</value>
    </property>


    <!-- Change the host here to the address of the corresponding NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:9000/</value>
    </property>

 

</configuration>

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>


    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>


    <!-- NameNode metadata directory -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/soft/hadoop/data_hadoop/namenode</value>
    </property>

    <!-- DataNode block storage directory -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/soft/hadoop/data_hadoop/datanode</value>
    </property>


    <!-- dfs.permissions is the deprecated name; dfs.permissions.enabled is current -->
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>


    <!-- Federation: declare the logical nameservices; each one gets its own NameNode addresses below -->
    <property>
        <name>dfs.nameservices</name>
        <value>ns1,ns2</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.ns1</name>
        <value>hadoop:9000</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.ns1</name>
        <value>hadoop:50070</value>
    </property>

    <property>
        <name>dfs.namenode.secondary.http-address.ns1</name>
        <value>hadoop:50090</value>
    </property>


    <property>
        <name>dfs.namenode.rpc-address.ns2</name>
        <value>hadoop1:9000</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.ns2</name>
        <value>hadoop1:50070</value>
    </property>

    <property>
        <name>dfs.namenode.secondary.http-address.ns2</name>
        <value>hadoop1:50090</value>
    </property>

</configuration>

Startup

The first step is to format the NameNode, which is routine: hdfs namenode -format

For the federated setup, however, the NameNodes must share a cluster ID, so it has to be specified when formatting each NameNode:

hdfs namenode -format -clusterId cui
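
The full sequence, as a sketch (assuming the two NameNode hosts hadoop and hadoop1 from the configs above, and that start-dfs.sh is run from a node that can reach the whole cluster over SSH):

# On hadoop (ns1): format with the shared cluster ID
hdfs namenode -format -clusterId cui

# On hadoop1 (ns2): format with the SAME cluster ID, otherwise the
# DataNodes cannot register with both namespaces
hdfs namenode -format -clusterId cui

# Start all NameNodes and DataNodes from one node
start-dfs.sh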

Verification

The main thing to verify is that the DataNodes are shared, i.e. both NameNodes report the identical set of DataNodes.
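
One way to check this from the command line, as a sketch (the same report is also visible on each NameNode web UI at hadoop:50070 and hadoop1:50070):

# Ask ns1 for its cluster report
hdfs dfsadmin -fs hdfs://hadoop:9000 -report

# Ask ns2; the list of live DataNodes should be identical
hdfs dfsadmin -fs hdfs://hadoop1:9000 -report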

ViewFs Of HDFS
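
With the federation above and the mount table sketched in the preface (the mount-table name ClusterX is still a made-up example), the client sees both namespaces through a single viewfs path:

# Each listing goes through the same viewfs namespace but is
# forwarded to a different NameNode by the mount table
hadoop fs -ls viewfs://ClusterX/user
hadoop fs -ls viewfs://ClusterX/data

# With fs.defaultFS set to viewfs://ClusterX, the scheme can be dropped
hadoop fs -ls /user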
