Study Notes on Big Data Analysis and In-Memory Computing

I. Introductory Scala Programming Practice

1. Computing a series:

Write a Scala script that computes and prints the sum Sn of the first n terms of the following series, stopping as soon as Sn is greater than or equal to q, where q is an integer greater than 0 entered from the keyboard. (If you do not use script execution, you may write the equivalent Java code and convert it to Scala to run.)

For example, if q is 50, the output should be: Sn=50.416695.

Test samples:

q=1 gives Sn=2;

q=30 gives Sn=30.891459;

q=50 gives Sn=50.416695;
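The formula image for the series is not reproduced in these notes; from the test samples and the code below, the series being summed is

S_n = \sum_{k=1}^{n} \frac{k+1}{k} = \frac{2}{1} + \frac{3}{2} + \frac{4}{3} + \cdots + \frac{n+1}{n}

so for q=1 the very first term already gives Sn=2.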

Code:

Scala:
import scala.io.StdIn.readInt
object MedicalOne {
  def main(args: Array[String]): Unit = {
    var Sn: Float = 0
    var n: Float = 1
    println("please input q:")
    val q = readInt()
    // keep adding terms (n + 1) / n until Sn reaches q
    while (Sn < q) {
      Sn += (n + 1) / n
      n += 1
    }
    // drop the decimal part when Sn is a whole number (e.g. q = 1 gives Sn=2)
    if (Sn == Sn.toInt) {
      println(s"Sn=${Sn.toInt}")
    } else {
      println(s"Sn=$Sn")
    }
  }
}

Run result:
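As an optional, more functional sketch of the same computation (an illustration only, not the assignment's required solution; the object name SeriesSum is made up, and it skips the whole-number formatting of the version above):

Scala:
import scala.io.StdIn.readInt

object SeriesSum {
  def main(args: Array[String]): Unit = {
    println("please input q:")
    val q = readInt()
    val sn = Iterator.from(1)
      .map(n => (n + 1).toFloat / n) // terms 2/1, 3/2, 4/3, ...
      .scanLeft(0f)(_ + _)           // running partial sums 0, 2, 3.5, ...
      .dropWhile(_ < q)              // skip partial sums below q
      .next()                        // first partial sum >= q
    println(s"Sn=$sn")
  }
}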

2. Simulated graphics drawing:

For a drawing program, the various entities are abstracted with the following hierarchy. Define a trait Drawable that contains a draw method whose default implementation prints the object's string representation. Define a class Point representing a point; it mixes in the Drawable trait and provides a shift method for moving the point. The abstract class for all graphic entities is Shape; its constructor takes a Point giving the shape's position (its exact meaning differs between concrete shapes). Shape has a concrete method moveTo and an abstract method zoom: moveTo moves the shape from its current position to a new one (concrete shapes may need to adjust this behaviour), while zoom scales the shape by a floating-point factor, with a different implementation for each concrete shape.

The concrete shapes extending Shape are the line class Line and the circle class Circle. The first parameter of Line is its position and the second is the other endpoint. When a Line is zoomed, its midpoint stays fixed and its length is scaled by the factor (note that both endpoints change); and because moving a Line also affects the other endpoint, moveTo must be overridden. The first parameter of Circle is its centre, which is also its position, and the second is its radius; when a Circle is zoomed, its position stays the same and its radius is scaled by the factor.

Both Line and Circle mix in the Drawable trait and must override draw: Line's draw prints information in the form "Line:coordinates of the first endpoint--coordinates of the second endpoint", and Circle's draw prints information in the form "Circle center:centre coordinates,R=radius". The code below already provides Drawable, Point and the main entry point; complete the definitions of the Shape, Line and Circle classes.

Code

Project layout:

Main object GraphicPlotting:

Scala:
object GraphicPlotting {
  def main(args: Array[String]): Unit = {
    val p = Point(10, 30)
    p.draw()
    val line1 = new Line(Point(0, 0), Point(20, 20))
    line1.draw()
    line1.moveTo(Point(5, 5)) // move the line to a new position
    line1.draw()
    line1.zoom(2) // scale the line up by a factor of 2
    line1.draw()
    val cir = new Circle(Point(10, 10), 5)
    cir.draw()
    cir.moveTo(Point(30, 20))
    cir.draw()
    cir.zoom(0.5)
    cir.draw()
  }
}

The Drawable trait:

Scala:
trait Drawable {
  def draw() {
    println(this.toString) // default implementation: print the object's string representation
  }
}

The Point class:

Scala:
case class Point(var x: Double, var y: Double) extends Drawable {
  def shift(deltaX: Double, deltaY: Double) {
    x += deltaX
    y += deltaY
  }
}

The abstract Shape class:

Scala:
abstract class Shape(var location: Point) {
  def moveTo(newLocation: Point) {
    location = newLocation
  }

  def zoom(scale: Double) // abstract: each concrete shape defines its own scaling
}

The Line class:

Scala:
class Line(beginPoint: Point, var endPoint: Point) extends Shape(beginPoint) with Drawable {
  override def draw() {
    println(s"Line:(${location.x},${location.y})--(${endPoint.x},${endPoint.y})")
  } // override draw with the required output format

  override def moveTo(newLocation: Point) {
    endPoint.shift(newLocation.x - location.x, newLocation.y - location.y) // when moving the line, shift the other endpoint first
    location = newLocation // then update the position
  }

  override def zoom(scale: Double) {
    val midPoint = Point((endPoint.x + location.x) / 2, (endPoint.y + location.y) / 2) // compute the midpoint and scale both endpoints about it
    location.x = midPoint.x + scale * (location.x - midPoint.x)
    location.y = midPoint.y + scale * (location.y - midPoint.y)
    endPoint.x = midPoint.x + scale * (endPoint.x - midPoint.x)
    endPoint.y = midPoint.y + scale * (endPoint.y - midPoint.y)
  }
}

The Circle class:

Scala:
class Circle(center: Point, var radius: Double) extends Shape(center) with Drawable {
  override def draw() { // override draw with the required output format
    println(s"Circle center:(${location.x},${location.y}),R=$radius")
  }

  override def zoom(scale: Double) {
    radius = radius * scale // scaling a circle only changes the radius
  }
}

Compile and run the program; the expected output is as follows:
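Hand-tracing the main method above, the program should print:

Point(10.0,30.0)
Line:(0.0,0.0)--(20.0,20.0)
Line:(5.0,5.0)--(25.0,25.0)
Line:(-5.0,-5.0)--(35.0,35.0)
Circle center:(10.0,10.0),R=5.0
Circle center:(30.0,20.0),R=5.0
Circle center:(30.0,20.0),R=2.5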

3. Student grade statistics:

The student grade roster has the format shown below. The first line is the header; its fields are the student id, gender, course name 1, course name 2, and so on. Each following line describes one student, with fields separated by whitespace.

Id gender Math English Physics

301610 male 80 64 78

301611 female 65 87 58

...

Given any roster in the above format (different rosters may have different numbers of courses), use functional programming as much as possible to compute, for each course, the average, minimum and maximum score; in addition, compute the same per-course statistics separately for male and female students.

Test sample 1:

Id gender Math English Physics

301610 male 80 64 78

301611 female 65 87 58

301612 female 44 71 77

301613 female 66 71 91

301614 female 70 71 100

301615 male 72 77 72

301616 female 73 81 75

301617 female 69 77 75

301618 male 73 61 65

301619 male 74 69 68

301620 male 76 62 76

301621 male 73 69 91

301622 male 55 69 61

301623 male 50 58 75

301624 female 63 83 93

301625 male 72 54 100

301626 male 76 66 73

301627 male 82 87 79

301628 female 62 80 54

301629 male 89 77 72

Test sample 2:

Id gender Math English Physics Science

301610 male 72 39 74 93

301611 male 75 85 93 26

301612 female 85 79 91 57

301613 female 63 89 61 62

301614 male 72 63 58 64

301615 male 99 82 70 31

301616 female 100 81 63 72

301617 male 74 100 81 59

301618 female 68 72 63 100

301619 male 63 39 59 87

301620 female 84 88 48 48

301621 male 71 88 92 46

301622 male 82 49 66 78

301623 male 63 80 83 88

301624 female 86 80 56 69

301625 male 76 69 86 49

301626 male 91 59 93 51

301627 female 92 76 79 100

301628 male 79 89 78 57

301629 male 85 74 78 80

Statistics code and output for sample 1

Code:

Scala:
import scala.io.Source

object Studentgrades_1 {
  def main(args: Array[String]): Unit = {
    val fileName = "D:\\IDEAProjects\\SecondScala\\src\\main\\scala\\grades1.txt"
    val lines = Source.fromFile(fileName).getLines().toList
    val header = lines.head.trim.split("\\s+").map(_.trim)
    val data = lines.tail.map(_.trim.split("\\s+"))

    val courseNames = header.drop(2)
    val statsTotal = calculateStatistics(data, courseNames)
    val statsMales = calculateStatistics(data.filter(_ (1) == "male"), courseNames)
    val statsFemales = calculateStatistics(data.filter(_ (1) == "female"), courseNames)

    printStatistics(statsTotal, "")
    printStatistics(statsMales, " (males)")
    printStatistics(statsFemales, " (females)")
  }

  def calculateStatistics(data: List[Array[String]], courses: Array[String]): List[(String, Double, Double, Double)] = {
    val courseScores = courses.indices.map { i =>
      val scores = data.filter(_(i + 2).matches("-?\\d+(\\.\\d+)?")).map(_(i + 2).toDouble) // Ensure we only have numbers
      if (scores.isEmpty) {
        (courses(i), 0.0, 0.0, 0.0) // Avoid division by zero if there are no scores for a course
      } else {
        val average = scores.sum / scores.length
        val min = scores.min
        val max = scores.max
        (courses(i), average, min, max)
      }
    }
    courseScores.toList
  }

  def printStatistics(stats: List[(String, Double, Double, Double)], title: String): Unit = {
    println(s"course    average   min   max${title}")
    stats.foreach { case (course, average, min, max) =>
      println(f"$course: $average%.2f $min%.2f $max%.2f")
    }
    println()
  }
}

Output:

Statistics code and output for sample 2

Code:

Scala:
import scala.io.Source

object Studentgrades_2 {
  def main(args: Array[String]): Unit = {
    val fileName = "D:\\IDEAProjects\\SecondScala\\src\\main\\scala\\grades2.txt"
    val lines = Source.fromFile(fileName).getLines().toList
    val header = lines.head.trim.split("\\s+").map(_.trim)
    val data = lines.tail.map(_.trim.split("\\s+"))

    val courseNames = header.drop(2)
    val statsTotal = calculateStatistics(data, courseNames)
    val statsMales = calculateStatistics(data.filter(_ (1) == "male"), courseNames)
    val statsFemales = calculateStatistics(data.filter(_ (1) == "female"), courseNames)

    printStatistics(statsTotal, "")
    printStatistics(statsMales, " (males)")
    printStatistics(statsFemales, " (females)")
  }

  def calculateStatistics(data: List[Array[String]], courses: Array[String]): List[(String, Double, Double, Double)] = {
    val courseScores = courses.indices.map { i =>
      val scores = data.filter(_(i + 2).matches("-?\\d+(\\.\\d+)?")).map(_(i + 2).toDouble) // Ensure we only have numbers
      if (scores.isEmpty) {
        (courses(i), 0.0, 0.0, 0.0) // Avoid division by zero if there are no scores for a course
      } else {
        val average = scores.sum / scores.length
        val min = scores.min
        val max = scores.max
        (courses(i), average, min, max)
      }
    }
    courseScores.toList
  }

  def printStatistics(stats: List[(String, Double, Double, Double)], title: String): Unit = {
    println(s"course    average   min   max")
    stats.foreach { case (course, average, min, max) =>
      println(f"$course: $average%.2f $min%.2f $max%.2f")
    }
    println()
  }
}

Output:
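The two programs above differ only in the input file path. As a hedged alternative sketch, the per-gender split can also be done in a single pass with groupBy; the object name StudentGradesCombined and the command-line path argument are illustrative assumptions, not part of the original exercise:

Scala:
import scala.io.Source

object StudentGradesCombined {
  def main(args: Array[String]): Unit = {
    val fileName = args.headOption.getOrElse("grades1.txt") // path passed on the command line (assumption)
    val lines = Source.fromFile(fileName).getLines().toList
    val header = lines.head.trim.split("\\s+")
    val courses = header.drop(2)
    val rows = lines.tail.map(_.trim.split("\\s+")).filter(_.length == header.length)

    // one group for everyone plus one group per gender value found in the file
    val groups: Map[String, List[Array[String]]] = Map("all" -> rows) ++ rows.groupBy(_(1))

    for ((label, data) <- groups) {
      println(s"== $label ==")
      courses.indices.foreach { i =>
        val scores = data.map(_(i + 2).toDouble)
        println(f"${courses(i)}: avg=${scores.sum / scores.length}%.2f min=${scores.min}%.2f max=${scores.max}%.2f")
      }
      println()
    }
  }
}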

II. Introductory RDD Programming Practice

1. Interactive programming in spark-shell

Use chapter5-data1.txt, a dataset of grades from a university computer science department, in the following format:

Tom,DataBase,80
Tom,Algorithm,50
Tom,DataStructure,60
Jim,DataBase,90
Jim,Algorithm,60
Jim,DataStructure,80
......

Using the given data, compute the following in spark-shell:

(1) How many students are in the department;

(2) How many courses the department offers;

(3) Tom's average score across all his courses;

(4) The number of courses each student has taken;

(5) How many students have taken the DataBase course;

(6) The average score of each course;

(7) Using an accumulator, count how many students have taken the DataBase course.

Code:

Scala:
(1)
val lines = sc.textFile("file:///home/qiangzi/chapter5-data1.txt")
val par = lines.map(row => row.split(",")(0))
val distinct_par = par.distinct() // deduplicate student names
distinct_par.count // total number of students
(2)
val lines = sc.textFile("file:///home/qiangzi/chapter5-data1.txt")
val par = lines.map(row => row.split(",")(1))
val distinct_par = par.distinct() // deduplicate course names
distinct_par.count
(3)
val lines = sc.textFile("file:///home/qiangzi/chapter5-data1.txt")
val pare = lines.filter(row => row.split(",")(0) == "Tom")
pare.foreach(println)
// average of Tom's scores; toDouble avoids integer truncation of the average
pare.map(row => (row.split(",")(0), row.split(",")(2).toInt)).mapValues(x => (x, 1)).reduceByKey((x, y) => (x._1 + y._1, x._2 + y._2)).mapValues(x => x._1.toDouble / x._2).collect()
(4)
val lines = sc.textFile("file:///home/qiangzi/chapter5-data1.txt")
val pare = lines.map(row => (row.split(",")(0), row.split(",")(1)))
pare.mapValues(x => (x, 1)).reduceByKey((x, y) => (" ", x._2 + y._2)).mapValues(x => x._2).foreach(println) // number of courses per student
(5)
val lines = sc.textFile("file:///home/qiangzi/chapter5-data1.txt")
val pare = lines.filter(row => row.split(",")(1) == "DataBase")
pare.count
(6)
val lines = sc.textFile("file:///home/qiangzi/chapter5-data1.txt")
val pare = lines.map(row => (row.split(",")(1), row.split(",")(2).toInt))
pare.mapValues(x => (x, 1)).reduceByKey((x, y) => (x._1 + y._1, x._2 + y._2)).mapValues(x => x._1.toDouble / x._2).collect().foreach(x => println(s"${x._1}: ${x._2}"))
(7)
val lines = sc.textFile("file:///home/qiangzi/chapter5-data1.txt")
val pare = lines.filter(row => row.split(",")(1) == "DataBase").map(row => (row.split(",")(1), 1))
val accum = sc.longAccumulator("My Accumulator")
pare.values.foreach(x => accum.add(x))
accum.value

2. A standalone application for removing duplicates

Given two input files A and B, write a standalone Spark application that merges the two files and removes duplicate entries, producing a new file C. A sample of the input and output files is given below for reference.

Sample of input file A:

20170101 x

20170102 y

20170103 x

20170104 y

20170105 z

20170106 z

Sample of input file B:

20170101 y

20170102 y

20170103 x

20170104 z

20170105 y

Sample of the output file C obtained by merging input files A and B:

20170101 x

20170101 y

20170102 y

20170103 x

20170104 y

20170104 z

20170105 y

20170105 z

20170106 z

Code:

Bash:
cd ~  # go to the home directory
mkdir ./remdup
mkdir -p ./remdup/src/main/scala
vim ./remdup/src/main/scala/RemDup.scala

/* RemDup.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.HashPartitioner

object RemDup {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("RemDup")
    val sc = new SparkContext(conf)
    val dataFileA = "file:///home/qiangzi/A.txt"
    val dataFileB = "file:///home/qiangzi/B.txt"
    val dataA = sc.textFile(dataFileA, 2)
    val dataB = sc.textFile(dataFileB, 2)
    val res = dataA.union(dataB).filter(_.trim().length > 0).map(line => (line.trim, "")).partitionBy(new HashPartitioner(1)).groupByKey().sortByKey().keys
    res.saveAsTextFile("file:///home/qiangzi/C.txt")
  }
}

cd ~/remdup
vim simple.sbt

/* simple.sbt*/
name := "Simple Project"
version := "1.0"
scalaVersion := "2.12.18"
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.5.1"

/usr/local/sbt-1.9.0/sbt/sbt package

/usr/local/spark-3.5.1/bin/spark-submit --class "RemDup" --driver-java-options "-Dfile.encoding=UTF-8" ~/remdup/target/scala-2.12/simple-project_2.12-1.0.jar

3. A standalone application for computing average scores

Each input file holds the scores of the class in one subject; each line has two fields, the student's name and the score. Write a standalone Spark application that computes every student's average score over all subjects and writes the results to a new file. A sample of the input and output files is given below for reference.

Algorithm scores:

小明 92

小红 87

小新 82

小丽 90

Database scores:

小明 95

小红 81

小新 89

小丽 85

Python scores:

小明 82

小红 83

小新 94

小丽 91

The average scores are:

(小红,83.67)

(小新,88.33)

(小明,89.67)

(小丽,88.67)

Code:

Bash:
cd ~  # go to the home directory
mkdir ./avgscore
mkdir -p ./avgscore/src/main/scala
vim ./avgscore/src/main/scala/AvgScore.scala

/* AvgScore.scala */
import org.apache.spark.SparkContext
import org.apache.spark.SparkContext._
import org.apache.spark.SparkConf
import org.apache.spark.HashPartitioner

object AvgScore {
  def main(args: Array[String]) {
    val conf = new SparkConf().setAppName("AvgScore")
    val sc = new SparkContext(conf)
    val dataFiles = Array(
      "file:///home/qiangzi/Sparkdata/algorithm.txt",
      "file:///home/qiangzi/Sparkdata/database.txt",
      "file:///home/qiangzi/Sparkdata/python.txt"
    )
    val data = dataFiles.foldLeft(sc.emptyRDD[String]) { (acc, file) =>
      acc.union(sc.textFile(file, 3))
    }

    val res = data.filter(_.trim().length > 0).map(line => {
      val fields = line.split(" ")
      (fields(0).trim(), fields(1).trim().toInt)
    }).partitionBy(new HashPartitioner(1)).groupByKey().mapValues(x => {
      var n = 0
      var sum = 0.0
      for (i <- x) {
        sum += i
        n += 1
      }
      val avg = sum / n
      f"$avg%1.2f"
    })

    res.saveAsTextFile("file:///home/qiangzi/Sparkdata/average.txt")
  }
}

cd ~/avgscore
vim simple.sbt

/* simple.sbt*/
name := "Simple Project"
version := "1.0"
scalaVersion := "2.12.18"
libraryDependencies += "org.apache.spark" %% "spark-core" % "3.5.1"

/usr/local/sbt-1.9.0/sbt/sbt package

/usr/local/spark-3.5.1/bin/spark-submit --class "AvgScore" --driver-java-options "-Dfile.encoding=UTF-8" ~/avgscore/target/scala-2.12/simple-project_2.12-1.0.jar

III. Introductory Spark SQL Programming Practice

1. Basic Spark SQL operations

Copy the JSON data below to your Linux system and save it as employee.json.

{ "id":1 , "name":" Ella" , "age":36 }
{ "id":2, "name":"Bob","age":29 }
{ "id":3 , "name":"Jack","age":29 }
{ "id":4 , "name":"Jim","age":28 }
{ "id":4 , "name":"Jim","age":28 }
{ "id":5 , "name":"Damon" }
{ "id":5 , "name":"Damon" }

Create a DataFrame for employee.json and write Scala statements for the following operations:

  1. Query all the data;
  2. Query all the data with duplicates removed;
  3. Query all the data, omitting the id column;
  4. Select the records with age > 30;
  5. Group the data by age;
  6. Sort the data by name in ascending order;
  7. Take the first 3 rows;
  8. Select the name column of all records, aliased as username;
  9. Compute the average of the age column;
  10. Compute the minimum of the age column.

Code:

Scala:
(1)
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.{avg, min}
val spark = SparkSession.builder().getOrCreate()
import spark.implicits._
val df = spark.read.json("file:///home/qiangzi/employee.json")
df.show()
(2)
df.distinct().show()
(3)
df.drop("id").show()
(4)
df.filter(df("age") > 30).show()
(5)
df.groupBy("age").count().show()
(6)
df.sort(df("name").asc).show()
(7)
df.take(3)
(8)
df.select(df("name").as("username")).show()
(9)
val avgAge = df.agg(avg("age")).first().getDouble(0)
(10)
val minAge = df.agg(min("age")).first().getLong(0).toInt
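With the sample employee.json above, avg("age") and min("age") are computed over the five rows that actually have an age value, so avgAge should evaluate to 30.0 and minAge to 28.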

2. Converting an RDD to a DataFrame

The source file contains the following (fields: id, name, age):

1,Ella,36
2,Bob,29
3,Jack,29

Copy the data to your Linux system and save it as employee.txt. Convert an RDD built from it into a DataFrame and print every row of the DataFrame in the format "id:1,name:Ella,age:36". Write out the program code.

Code:

Bash:
cd ~  # go to the home directory
mkdir ./rddtodf
mkdir -p ./rddtodf/src/main/scala
vim ./rddtodf/src/main/scala/RDDtoDF.scala

/* RDDtoDF.scala */
import org.apache.spark.sql.types._
import org.apache.spark.sql.Encoder
import org.apache.spark.sql.Row
import org.apache.spark.sql.SparkSession

object RDDtoDF {
  def main(args: Array[String]) {
    val spark = SparkSession.builder().appName("RDDtoDF").getOrCreate()
    import spark.implicits._
    val employeeRDD = spark.sparkContext.textFile("file:///home/qiangzi/employee.txt")
    val schemaString = "id name age"
    val fields = schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, nullable = true))
    val schema = StructType(fields)
    val rowRDD = employeeRDD.map(_.split(",")).map(attributes => Row(attributes(0).trim, attributes(1), attributes(2).trim))
    val employeeDF = spark.createDataFrame(rowRDD, schema)
    employeeDF.createOrReplaceTempView("employee")
    val results = spark.sql("SELECT id,name,age FROM employee")
    results.map(t => "id:" + t(0) + "," + "name:" + t(1) + "," + "age:" + t(2)).show()
    spark.stop()
  }
}


cd ~/rddtodf
vim simple.sbt

/*simple.sbt*/
name := "Simple Project"
version := "1.0"
scalaVersion := "2.12.18"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.5.1",
  "org.apache.spark" %% "spark-sql" % "3.5.1"
)

/usr/local/sbt-1.9.0/sbt/sbt package

/usr/local/spark-3.5.1/bin/spark-submit --class "RDDtoDF" --driver-java-options "-Dfile.encoding=UTF-8" ~/rddtodf/target/scala-2.12/simple-project_2.12-1.0.jar

3. Reading and writing MySQL data with DataFrames

(1) In the MySQL database, create a database named sparktest and then a table named employee containing the two rows shown in Table 6-2.

Table 6-2 Original data in the employee table

| id | name  | gender | age |
|----|-------|--------|-----|
| 1  | Alice | F      | 22  |
| 2  | John  | M      | 25  |

(2) Configure Spark to connect to MySQL through JDBC, use a DataFrame to insert the two rows shown in Table 6-3 into MySQL, and finally print the maximum value and the sum of the age column.

Table 6-3 Rows added to the employee table

| id | name | gender | age |
|----|------|--------|-----|
| 3  | Mary | F      | 26  |
| 4  | Tom  | M      | 23  |

Code:

Bash / SQL:
mysql -u root -p

create database sparktest;
use sparktest;
create table employee (id int(4), name char(20), gender char(4), age int(4));
insert into employee values(1,'Alice','F',22);
insert into employee values(2,'John','M',25);

cd ~  # go to the home directory
mkdir ./testmysql
mkdir -p ./testmysql/src/main/scala
vim ./testmysql/src/main/scala/TestMySQL.scala

/* TestMySQL.scala */
import java.util.Properties
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._
import org.apache.spark.sql.types._
import org.apache.spark.sql.Row

object TestMySQL {

  def main(args: Array[String]): Unit = {

    val spark = SparkSession.builder()
      .appName("TestMySQL")
      .getOrCreate()
    val employeeRDD = spark.sparkContext.parallelize(Array("3 Mary F 26", "4 Tom M 23"))
      .map(_.split(" "))


    val schema = StructType(List(
      StructField("id", IntegerType, true),
      StructField("name", StringType, true),
      StructField("gender", StringType, true),
      StructField("age", IntegerType, true)
    ))
    val rowRDD = employeeRDD.map(p => Row(p(0).toInt, p(1).trim, p(2).trim, p(3).toInt))
    val employeeDF = spark.createDataFrame(rowRDD, schema)


    val prop = new Properties()
    prop.put("user", "root")
    prop.put("password", "789456MLq")
    prop.put("driver", "com.mysql.jdbc.Driver")

    employeeDF.write
      .mode("append")
      .jdbc("jdbc:mysql://slave1:3306/sparktest", "sparktest.employee", prop)

    val jdbcDF = spark.read
      .format("jdbc")
      .option("url", "jdbc:mysql://slave1:3306/sparktest")
      .option("driver", "com.mysql.jdbc.Driver")
      .option("dbtable", "employee")
	  .option("user", "root")
      .option("password", "789456MLq")
      .load()

    val aggregatedDF = jdbcDF.agg(
      max("age").alias("max_age"),
      sum("age").alias("total_age")
    )


    aggregatedDF.show()

    spark.stop()
  }
}


cd ~/testmysql
vim simple.sbt

/*simple.sbt*/
name := "Simple Project"
version := "1.0"
scalaVersion := "2.12.18"
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "3.5.1",
  "org.apache.spark" %% "spark-sql" % "3.5.1"
)

/usr/local/sbt-1.9.0/sbt/sbt package

sudo /usr/local/spark-3.5.1/bin/spark-submit --class "TestMySQL" --driver-java-options "-Dfile.encoding=UTF-8" --jars /usr/local/hive-3.1.2/lib/mysql-connector-java-5.1.49.jar ~/testmysql/target/scala-2.12/simple-project_2.12-1.0.jar
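Assuming the employee table holds only the four rows from Tables 6-2 and 6-3, aggregatedDF.show() should print something like:

+-------+---------+
|max_age|total_age|
+-------+---------+
|     26|       96|
+-------+---------+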

