Methods for working with DataFrames in PySpark (3)

PySpark DataFrame

df.foreach — run a function on each row

df.foreach(f) is shorthand for df.rdd.foreach(f)

df.show()
+---+-----+
|age| name|
+---+-----+
|  2|Alice|
|  5|  Bob|
+---+-----+
def func(row):
    print(row.name)

# each Row object is passed into func
df.foreach(func)
Alice
Bob
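
foreach returns nothing, so it is only useful for side effects; in cluster mode the print output above lands in the executor logs rather than on the driver. A minimal sketch of getting a value back to the driver instead, using an accumulator (the count_rows name is just for illustration):

acc = spark.sparkContext.accumulator(0)

def count_rows(row):
    acc.add(1)   # accumulators can be updated on executors and read on the driver

df.foreach(count_rows)
print(acc.value)
2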

foreachPartition — run a function on each partition

df.show()
+---+-----+
|age| name|
+---+-----+
| 14|  Tom|
| 23|Alice|
| 16|  Bob|
+---+-----+
def func(itr):
    for person in itr:
        print(person.name)

df.foreachPartition(func)
Tom
Alice
Bob
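
Compared with foreach, any per-partition setup here runs once per partition instead of once per row. A minimal sketch of the usual pattern, assuming a hypothetical get_connection() helper for some external store:

def save_partition(itr):
    conn = get_connection()   # hypothetical helper: one connection per partition
    for person in itr:
        conn.send(person.name)
    conn.close()

df.foreachPartition(save_partition)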

freqItems — find frequent items in columns

df = spark.createDataFrame([(1, 11), (1, 11), (3, 10), (4, 8), (4, 8)], ["c1", "c2"])
df.show()
+---+---+
| c1| c2|
+---+---+
|  1| 11|
|  1| 11|
|  3| 10|
|  4|  8|
|  4|  8|
+---+---+
df.freqItems(["c1", "c2"]).show()
+------------+------------+
|c1_freqItems|c2_freqItems|
+------------+------------+
|   [1, 3, 4]| [8, 10, 11]|
+------------+------------+
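
freqItems also takes an optional support threshold (the default is 1%), and the underlying single-pass algorithm is approximate, so the result may contain false positives. For example, to report only items appearing in at least 40% of rows:

df.freqItems(["c1"], support=0.4).show()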

groupBy — group rows for aggregation

df.show()
+---+-----+
|age| name|
+---+-----+
|  2|Alice|
|  2|  Bob|
|  2|  Bob|
|  5|  Bob|
+---+-----+

df.groupBy("name").agg({"age": "sum"}).show()
+-----+--------+
| name|sum(age)|
+-----+--------+
|  Bob|       9|
|Alice|       2|
+-----+--------+

df.groupBy("name").agg({"age": "max"}).withColumnRenamed('max(age)','new_age').sort('new_age').show()
+-----+-------+
| name|new_age|
+-----+-------+
|Alice|      2|
|  Bob|      5|
+-----+-------+
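
The rename step can be avoided with the functions module, which names the aggregate column directly via alias() and should produce the same result as above:

from pyspark.sql import functions as F

df.groupBy("name").agg(F.max("age").alias("new_age")).sort("new_age").show()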

head — return the first n rows

df.head(2)
[Row(age=2, name='Alice'), Row(age=2, name='Bob')]
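
Two closely related calls: with no argument, head() returns a single Row (or None for an empty DataFrame) rather than a list, and take(n) behaves like head(n):

df.head()
Row(age=2, name='Alice')
df.take(2)
[Row(age=2, name='Alice'), Row(age=2, name='Bob')]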

hint — give the query optimizer a hint (here, a join strategy)

from pyspark.sql import Row

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
df2 = spark.createDataFrame([Row(height=80, name="Tom"), Row(height=85, name="Bob")])
df.join(df2, "name").explain()  
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Project [name#1641, age#1640L, height#1644L]
   +- SortMergeJoin [name#1641], [name#1645], Inner
      :- Sort [name#1641 ASC NULLS FIRST], false, 0
      :  +- Exchange hashpartitioning(name#1641, 200), ENSURE_REQUIREMENTS, [plan_id=1916]
      :     +- Filter isnotnull(name#1641)
      :        +- Scan ExistingRDD[age#1640L,name#1641]
      +- Sort [name#1645 ASC NULLS FIRST], false, 0
         +- Exchange hashpartitioning(name#1645, 200), ENSURE_REQUIREMENTS, [plan_id=1917]
            +- Filter isnotnull(name#1645)
               +- Scan ExistingRDD[height#1644L,name#1645]
               
df.join(df2.hint("broadcast"), "name").explain()
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Project [name#1641, age#1640L, height#1644L]
   +- BroadcastHashJoin [name#1641], [name#1645], Inner, BuildRight, false
      :- Filter isnotnull(name#1641)
      :  +- Scan ExistingRDD[age#1640L,name#1641]
      +- BroadcastExchange HashedRelationBroadcastMode(List(input[1, string, false]),false), [plan_id=1946]
         +- Filter isnotnull(name#1645)
            +- Scan ExistingRDD[height#1644L,name#1645]
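
The two plans show the effect: without a hint the optimizer picks a shuffle-based SortMergeJoin, while the broadcast hint on df2 switches to a BroadcastHashJoin that ships the small table to every executor. The same hint can also be written with the broadcast function:

from pyspark.sql.functions import broadcast

# equivalent to df2.hint("broadcast")
df.join(broadcast(df2), "name").explain()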