【Hadoop|HDFS】HDFS Shell Operations

1. Basic Syntax

hadoop fs <command> or hdfs dfs <command>.

The two are completely equivalent when operating on HDFS. (hadoop fs works with any filesystem Hadoop supports, while hdfs dfs targets HDFS specifically; the older hadoop dfs form is deprecated.)
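
For example, both of the following list the HDFS root directory:

```bash
hadoop fs -ls /
hdfs dfs -ls /
```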

2. Command Reference

Running hadoop fs with no arguments prints the full usage:

```bash
Usage: hadoop fs [generic options]
	[-appendToFile <localsrc> ... <dst>]
	[-cat [-ignoreCrc] <src> ...]
	[-checksum <src> ...]
	[-chgrp [-R] GROUP PATH...]
	[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
	[-chown [-R] [OWNER][:[GROUP]] PATH...]
	[-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
	[-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
	[-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
	[-createSnapshot <snapshotDir> [<snapshotName>]]
	[-deleteSnapshot <snapshotDir> <snapshotName>]
	[-df [-h] [<path> ...]]
	[-du [-s] [-h] [-v] [-x] <path> ...]
	[-expunge]
	[-find <path> ... <expression> ...]
	[-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-getfacl [-R] <path>]
	[-getfattr [-R] {-n name | -d} [-e en] <path>]
	[-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
	[-head <file>]
	[-help [cmd ...]]
	[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
	[-mkdir [-p] <path> ...]
	[-moveFromLocal <localsrc> ... <dst>]
	[-moveToLocal <src> <localdst>]
	[-mv <src> ... <dst>]
	[-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
	[-renameSnapshot <snapshotDir> <oldName> <newName>]
	[-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
	[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
	[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
	[-setfattr {-n name [-v value] | -x name} <path>]
	[-setrep [-R] [-w] <rep> <path> ...]
	[-stat [format] <path> ...]
	[-tail [-f] [-s <sleep interval>] <file>]
	[-test -[defsz] <path>]
	[-text [-ignoreCrc] <src> ...]
	[-touch [-a] [-m] [-t TIMESTAMP ] [-c] <path> ...]
	[-touchz <path> ...]
	[-truncate [-w] <length> <path> ...]
	[-usage [cmd ...]]

Generic options supported are:
-conf <configuration file>        specify an application configuration file
-D <property=value>               define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>  specify a ResourceManager
-files <file1,...>                specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>               specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>          specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]
```

2.1 The help Command

hadoop fs -help <command> prints detailed usage for a single command.

For example:

```bash
[hexuan@hadoop102 ~]$ hadoop fs -help ls
-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...] :
  List the contents that match the specified file pattern. If path is not
  specified, the contents of /user/<currentUser> will be listed. For a directory a
  list of its direct children is returned (unless -d option is specified).
  
  Directory entries are of the form:
  	permissions - userId groupId sizeOfDirectory(in bytes)
  modificationDate(yyyy-MM-dd HH:mm) directoryName
  
  and file entries are of the form:
  	permissions numberOfReplicas userId groupId sizeOfFile(in bytes)
  modificationDate(yyyy-MM-dd HH:mm) fileName
  
    -C  Display the paths of files and directories only.
    -d  Directories are listed as plain files.
    -h  Formats the sizes of files in a human-readable fashion
        rather than a number of bytes.
    -q  Print ? instead of non-printable characters.
    -R  Recursively list the contents of directories.
    -t  Sort files by modification time (most recent first).
    -S  Sort files by size.
    -r  Reverse the order of the sort.
    -u  Use time of last access instead of modification for
        display and sorting.
    -e  Display the erasure coding policy of files and directories.
```

2.2 Uploading

(1) -moveFromLocal: cut and paste a file from the local filesystem into HDFS.

```bash
[hexuan@hadoop102 ~]$ vim wuguo.txt
[hexuan@hadoop102 ~]$ hadoop fs -moveFromLocal wuguo.txt /sanguo/
2024-09-04 10:39:14,395 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
```

(2) -copyFromLocal: copy a file from the local filesystem into HDFS.

```bash
hadoop fs -copyFromLocal wuguo.txt /sanguo/
```

(3) -put: equivalent to -copyFromLocal; in production, -put is the more common choice.

```bash
hadoop fs -put wuguo.txt /sanguo/
```

(4) -appendToFile: append a local file to the end of a file that already exists in HDFS, as sketched below.
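
A minimal sketch, assuming a local file named shuguo.txt (hypothetical) and the /sanguo/wuguo.txt uploaded above:

```bash
# append the contents of the local shuguo.txt to the existing HDFS file
hadoop fs -appendToFile shuguo.txt /sanguo/wuguo.txt
```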

2.3 Downloading

(1) -copyToLocal: copy a file from HDFS to the local filesystem.
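
For instance, reusing the file uploaded above:

```bash
# copy the HDFS file into the current local directory
hadoop fs -copyToLocal /sanguo/wuguo.txt ./
```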

(2) -get: equivalent to -copyToLocal; -get is more commonly used in production.

```bash
[hexuan@hadoop102 ~]$ hadoop fs -get /sanguo/wuguo.txt ./wuguo_copy.txt
2024-09-04 10:59:00,847 INFO sasl.SaslDataTransferClient: SASL encryption trust check: localHostTrusted = false, remoteHostTrusted = false
```

2.4 Other Commands

(1) -ls: list directory contents.

(2) -cat: print the contents of a file.
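
For example, listing /sanguo and printing one of its files:

```bash
hadoop fs -ls /sanguo
hadoop fs -cat /sanguo/wuguo.txt
```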

(3) -chgrp, -chmod, -chown: change a file's group, permissions, and owner; the usage matches the Linux equivalents.
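
A short sketch (the user and group names here are illustrative):

```bash
hadoop fs -chmod 664 /sanguo/wuguo.txt            # rw-rw-r--
hadoop fs -chown hexuan:hexuan /sanguo/wuguo.txt  # change owner and group
```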

(4) -mkdir: create a directory.

(5) -cp: copy from one HDFS path to another.

(6) -mv: move or rename a file within HDFS.
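
Taken together, a brief sketch using a hypothetical /jinguo directory:

```bash
hadoop fs -mkdir /jinguo                            # create a directory
hadoop fs -cp /sanguo/wuguo.txt /jinguo/            # copy: the source is kept
hadoop fs -mv /jinguo/wuguo.txt /jinguo/weiguo.txt  # move/rename: the source is removed
```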

(7) -tail: show the last 1 KB of a file.

(8) -rm: delete a file or directory.

(9) -rm -r: recursively delete a directory and everything in it.
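
Continuing with the hypothetical /jinguo directory from the sketch above:

```bash
hadoop fs -tail /jinguo/weiguo.txt  # print the last 1 KB of the file
hadoop fs -rm /jinguo/weiguo.txt    # delete a single file
hadoop fs -rm -r /jinguo            # recursively delete the directory
```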

(10) -du: report the size of a directory and its contents.

```bash
[hexuan@hadoop102 ~]$ hadoop fs -du /sanguo
40  120  /sanguo/jiangwei.txt._COPYING_
0   0    /sanguo/simayi.txt
12  36   /sanguo/wuguo.txt
[hexuan@hadoop102 ~]$ hadoop fs -du -s /sanguo
52  156  /sanguo
```

Notes:

The first column (40, 0, 12) is the file size in bytes; the second column is the disk space consumed across all replicas (size × replication factor, e.g. 120 = 40 × 3); /sanguo is the directory being inspected.

Adding the -s option prints an aggregate summary for the whole directory.
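
The -h flag from the usage listing can be combined with -s for human-readable sizes:

```bash
hadoop fs -du -s -h /sanguo
```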

(11) -setrep: set the replication factor of a file in HDFS.

```bash
[hexuan@hadoop102 ~]$ hadoop fs -setrep 10 /sanguo/wuguo.txt
Replication 10 set: /sanguo/wuguo.txt
```

The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. Since this cluster has only three machines, at most three replicas are stored; only when the cluster grows to at least ten nodes can the replica count actually reach ten.
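
One way to check the recorded value is -stat with the %r format specifier:

```bash
# prints the replication factor recorded in the NameNode metadata (10 after the setrep above)
hadoop fs -stat "%r" /sanguo/wuguo.txt
```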
