How Spark Reads SFTP Files via the Hadoop SFTP FileSystem

Gradle Dependencies

gradle
        implementation('org.apache.spark:spark-sql_2.13:3.5.3')
        implementation 'org.apache.hadoop:hadoop-common:3.3.4'

        testImplementation "org.springframework.boot:spring-boot-starter-test"
        testImplementation "org.apache.sshd:sshd-core:2.8.0"
        testImplementation "org.apache.sshd:sshd-sftp:2.8.0"

Set Up a Fake SFTP Server

Apache MINA SSHD (the sshd-core and sshd-sftp test dependencies above) can host an in-process SFTP server for the test, serving files from a local directory.

java
        // GIVEN
        // SETUP Fake SFTP Server
        String host = "127.0.0.1";
        String user = "username";
        String passwd = "password";
        int port = 9188;

        // back the server with a local temp directory; VirtualFileSystemFactory
        // expects a java.nio.file.Path as its SFTP root (assumption: any empty
        // directory works here)
        java.nio.file.Path rootPath = Files.createTempDirectory("sftp-root");

        SshServer sshd = SshServer.setUpDefaultServer();
        sshd.setPort(port);
        sshd.setKeyPairProvider(new SimpleGeneratorHostKeyProvider());
        sshd.setPasswordAuthenticator((username, password, session) -> user.equals(username) && passwd.equals(password));
        sshd.setSubsystemFactories(Collections.singletonList(new SftpSubsystemFactory()));
        sshd.setFileSystemFactory(new VirtualFileSystemFactory(rootPath));

        sshd.start();
        System.out.println("Fake SFTP server started at port " + port);

Generate a Test CSV File with the Hadoop SFTP FileSystem API

Setting fs.sftp.impl binds the sftp:// scheme to Hadoop's built-in SFTPFileSystem, and pointing fs.defaultFS at the server lets relative paths resolve against it.

java
        String sftpURL = String.format("sftp://%s:%s@%s:%d", user, passwd, host, port);
        String testedCsvFile = "test.csv";
        // WHEN
        // Create a CSV file by Hadoop FileSystem api
        Configuration conf = new Configuration();
        conf.set("fs.sftp.impl", "org.apache.hadoop.fs.sftp.SFTPFileSystem");
        conf.set("fs.defaultFS", sftpURL);

        // resolve the FileSystem instance from the root path's URI
        Path path = new Path("/");
        FileSystem sftpFileSystem = FileSystem.get(path.toUri(), conf);
        Assertions.assertTrue(sftpFileSystem instanceof SFTPFileSystem);

        // Create a test csv file and write text contents to it
        try (BufferedWriter writer = new BufferedWriter(new OutputStreamWriter(sftpFileSystem.create(new Path(testedCsvFile), true)))) {
            writer.write("A|B|C|D");
            writer.newLine();
            writer.write("1|2|3|4");
        }

        // check the tested file
        FileStatus[] statuses = sftpFileSystem.listStatus(new Path("/"));
        Assertions.assertEquals(1, statuses.length);
        Assertions.assertTrue(statuses[0].isFile());
        Assertions.assertEquals(testedCsvFile, statuses[0].getPath().getName());
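
As an extra sanity check (not part of the original flow), the file can be read back through the same FileSystem handle before Spark is involved; this sketch assumes java.io.BufferedReader and java.io.InputStreamReader are also imported:

java
        // read the file back over SFTP and verify both lines round-tripped
        try (BufferedReader reader = new BufferedReader(
                new InputStreamReader(sftpFileSystem.open(new Path(testedCsvFile))))) {
            Assertions.assertEquals("A|B|C|D", reader.readLine());
            Assertions.assertEquals("1|2|3|4", reader.readLine());
            Assertions.assertNull(reader.readLine());
        }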

Finally, Read the Test Data from the SFTP Server

Spark forwards any spark.hadoop.* property to its underlying Hadoop Configuration, so the same fs.sftp.impl and fs.defaultFS settings take effect inside Spark.

java
    // THEN
    // Read the test csv file by Spark
    SparkConf sparkConf = new SparkConf()
            .setAppName("spark-test")
            .setMaster("local[2]")
            .set("spark.ui.enabled", "false")
            .set("spark.hadoop.fs.sftp.impl", "org.apache.hadoop.fs.sftp.SFTPFileSystem")
            .set("spark.hadoop.fs.defaultFS", sftpURL);
    SparkSession sparkSession = SparkSession.builder().config(sparkConf).getOrCreate();

    // read the csv file over the sftp connection
    Dataset<Row> dataset = sparkSession.read()
            .option("header", "true").option("delimiter", "|")
            .csv(testedCsvFile);
    dataset.printSchema();
    dataset.show();
        
text
root
 |-- A: string (nullable = true)
 |-- B: string (nullable = true)
 |-- C: string (nullable = true)
 |-- D: string (nullable = true)

+---+---+---+---+
|  A|  B|  C|  D|
+---+---+---+---+
|  1|  2|  3|  4|
+---+---+---+---+
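
The relative path test.csv resolves only because fs.defaultFS points at the SFTP server. A fully qualified URI works without touching defaultFS; a minimal sketch reusing the sftpURL built earlier:

java
    // alternative: address the file with a fully qualified sftp:// URI,
    // so only spark.hadoop.fs.sftp.impl needs to be configured
    Dataset<Row> viaUri = sparkSession.read()
            .option("header", "true").option("delimiter", "|")
            .csv(sftpURL + "/" + testedCsvFile);
    Assertions.assertEquals(1, viaUri.count());

Remember to call sparkSession.stop() at the end of the test so the local Spark context shuts down cleanly.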