ViewFs And Federation On HDFS

Preface

ViewFs builds on Federation: it lets a client access multiple namespaces through a single HDFS path space. The companion technique is the client-side mount table, whose mapping rules can be placed in core-site.xml or in a separate mountTable.xml.

In short, ViewFs is a middle layer that routes requests to the appropriate NameNode and returns the results to the client. To do that forwarding, ViewFs has to implement the full set of HDFS filesystem interfaces. This also brings some problems it cannot solve: for example, incompatibilities between different NameNode versions behind the mount table.
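As a small illustration of what a client-side mount-table rule looks like (the mount-table name "ClusterX" and the paths here are made up for illustration, not taken from this article's setup):

```xml
<!-- Sketch: one mount-table rule in core-site.xml.
     Maps the client path /data under mount table "ClusterX"
     to a directory in the namespace served by nameservice ns1.
     "ClusterX" and both paths are illustrative assumptions. -->
<property>
    <name>fs.viewfs.mounttable.ClusterX.link./data</name>
    <value>hdfs://ns1/data</value>
</property>
```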

Federation Of HDFS

Setting up a plain federation (without ViewFs on top) is actually fairly simple.

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>


    <property>
        <name>hadoop.tmp.dir</name>
        <value>/soft/hadoop/data_hadoop</value>
    </property>


    <!-- Change this hostname to the address of the corresponding NameNode -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop:9000/</value>
    </property>

 

</configuration>

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>


    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>


    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/soft/hadoop/data_hadoop/namenode</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/soft/hadoop/data_hadoop/datanode</value>
    </property>


    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>


    <property>
        <name>dfs.nameservices</name>
        <value>ns1,ns2</value>
    </property>

    <property>
        <name>dfs.namenode.rpc-address.ns1</name>
        <value>hadoop:9000</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.ns1</name>
        <value>hadoop:50070</value>
    </property>

    <property>
        <name>dfs.namenode.secondary.http-address.ns1</name>
        <value>hadoop:50090</value>
    </property>


    <property>
        <name>dfs.namenode.rpc-address.ns2</name>
        <value>hadoop1:9000</value>
    </property>

    <property>
        <name>dfs.namenode.http-address.ns2</name>
        <value>hadoop1:50070</value>
    </property>

    <property>
        <name>dfs.namenode.secondary.http-address.ns2</name>
        <value>hadoop1:50090</value>
    </property>

</configuration>

Startup

First, format the NameNode in the usual way: hdfs namenode -format

For a federation, all NameNodes must join the same cluster, which means they need a shared cluster ID. Specify it when formatting, running the same command with the same ID on every NameNode host:

hdfs namenode -format -clusterId cui

Verification

The key sign of a working federation is that the NameNodes share the DataNodes: every DataNode registers with all NameNodes, so both namespaces report the same DataNode set. You can confirm this in each NameNode's web UI (port 50070) or with hdfs dfsadmin -report.

ViewFs Of HDFS
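
With the two nameservices configured above (ns1 on hadoop, ns2 on hadoop1), a client-side mount table could be sketched in core-site.xml roughly as follows. The mount-table name "cui" and the chosen mount points /ns1 and /ns2 are assumptions for illustration, not a tested configuration:

```xml
<configuration>
    <!-- Point the client's default filesystem at the ViewFs mount table
         named "cui" (illustrative name, matching the clusterId above) -->
    <property>
        <name>fs.defaultFS</name>
        <value>viewfs://cui/</value>
    </property>

    <!-- Client path /ns1 maps to the root of nameservice ns1 (NameNode hadoop:9000) -->
    <property>
        <name>fs.viewfs.mounttable.cui.link./ns1</name>
        <value>hdfs://hadoop:9000/</value>
    </property>

    <!-- Client path /ns2 maps to the root of nameservice ns2 (NameNode hadoop1:9000) -->
    <property>
        <name>fs.viewfs.mounttable.cui.link./ns2</name>
        <value>hdfs://hadoop1:9000/</value>
    </property>
</configuration>
```

With this in place, a command like hdfs dfs -ls /ns1 would be routed to the first NameNode and hdfs dfs -ls /ns2 to the second, while the client sees a single unified namespace.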
