PySpark DataFrame Operations (3)

PySpark DataFrame

df.foreach — apply a function to every row

df.foreach(f) is a shorthand for df.rdd.foreach(f).

df.show()
+---+-----+
|age| name|
+---+-----+
|  2|Alice|
|  5|  Bob|
+---+-----+
def func(row):
    print(row.name)

# each Row object is passed into func
df.foreach(func)
Alice
Bob
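
Note that func runs on the executors, so on a real cluster the print output lands in the executor logs rather than on the driver console; the output above is what local mode shows. A minimal self-contained sketch (the session setup and sample data here are illustrative assumptions):

from pyspark.sql import SparkSession

# local-mode session, so executor print output is visible on this console
spark = SparkSession.builder.master("local[*]").appName("foreach-demo").getOrCreate()
df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], ["age", "name"])

def func(row):
    print(row.name)  # runs once per Row, on the executor holding it

df.foreach(func)  # returns None; useful only for side effects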

foreachPartition — apply a function to each partition's iterator of rows

df.show()
+---+-----+
|age| name|
+---+-----+
| 14|  Tom|
| 23|Alice|
| 16|  Bob|
+---+-----+
def func(itr):
    for person in itr:
        print(person.name)

df.foreachPartition(func)
Tom
Alice
Bob
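
The point of foreachPartition is that per-partition setup cost is paid once per partition instead of once per row, e.g. opening one database connection and reusing it for every row. A hedged sketch; connect_to_store and its save method are hypothetical placeholders, not a real library:

def save_partition(rows):
    conn = connect_to_store()        # hypothetical: one connection per partition
    try:
        for row in rows:
            conn.save(row.asDict())  # hypothetical write; asDict() is a real Row method
    finally:
        conn.close()

df.foreachPartition(save_partition)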

freqItems — find frequent items

df = spark.createDataFrame([(1, 11), (1, 11), (3, 10), (4, 8), (4, 8)], ["c1", "c2"])
df.show()
+---+---+
| c1| c2|
+---+---+
|  1| 11|
|  1| 11|
|  3| 10|
|  4|  8|
|  4|  8|
+---+---+
df.freqItems(["c1", "c2"]).show()
+------------+------------+
|c1_freqItems|c2_freqItems|
+------------+------------+
|   [1, 3, 4]| [8, 10, 11]|
+------------+------------+
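
freqItems uses an approximate frequent-items algorithm: every item whose frequency exceeds the support threshold (default 0.01) is guaranteed to appear in the result, but false positives are possible and the order of items is not meaningful. The optional support argument raises the threshold, for example:

# only items appearing in at least 40% of rows are guaranteed to be returned
df.freqItems(["c1", "c2"], support=0.4).show()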

groupBy — group rows for aggregation

df.show()
+---+-----+
|age| name|
+---+-----+
|  2|Alice|
|  2|  Bob|
|  2|  Bob|
|  5|  Bob|
+---+-----+

df.groupBy("name").agg({"age": "sum"}).show()
+-----+--------+
| name|sum(age)|
+-----+--------+
|  Bob|       9|
|Alice|       2|
+-----+--------+

df.groupBy("name").agg({"age": "max"}).withColumnRenamed('max(age)','new_age').sort('new_age').show()
+-----+-------+
| name|new_age|
+-----+-------+
|Alice|      2|
|  Bob|      5|
+-----+-------+
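
The dict form of agg allows only one aggregate per column and forces the withColumnRenamed step above; the pyspark.sql.functions form gives the same result and lets you alias in place:

from pyspark.sql import functions as F

df.groupBy("name").agg(F.max("age").alias("new_age")).sort("new_age").show()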

head — return the first n rows

df.head(2)
[Row(age=2, name='Alice'), Row(age=2, name='Bob')]
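
Called without an argument, head() returns a single Row (or None on an empty DataFrame) instead of a list, and head(n) is equivalent to take(n):

df.head()   # Row(age=2, name='Alice')
df.take(2)  # same as df.head(2)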

hint — pass a name-based hint to the query optimizer

Without a hint, the join below compiles to a SortMergeJoin; with hint("broadcast") on df2 the optimizer switches to a BroadcastHashJoin.

from pyspark.sql import Row

df = spark.createDataFrame([(2, "Alice"), (5, "Bob")], schema=["age", "name"])
df2 = spark.createDataFrame([Row(height=80, name="Tom"), Row(height=85, name="Bob")])
df.join(df2, "name").explain()  
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Project [name#1641, age#1640L, height#1644L]
   +- SortMergeJoin [name#1641], [name#1645], Inner
      :- Sort [name#1641 ASC NULLS FIRST], false, 0
      :  +- Exchange hashpartitioning(name#1641, 200), ENSURE_REQUIREMENTS, [plan_id=1916]
      :     +- Filter isnotnull(name#1641)
      :        +- Scan ExistingRDD[age#1640L,name#1641]
      +- Sort [name#1645 ASC NULLS FIRST], false, 0
         +- Exchange hashpartitioning(name#1645, 200), ENSURE_REQUIREMENTS, [plan_id=1917]
            +- Filter isnotnull(name#1645)
               +- Scan ExistingRDD[height#1644L,name#1645]
               
df.join(df2.hint("broadcast"), "name").explain()
== Physical Plan ==
AdaptiveSparkPlan isFinalPlan=false
+- Project [name#1641, age#1640L, height#1644L]
   +- BroadcastHashJoin [name#1641], [name#1645], Inner, BuildRight, false
      :- Filter isnotnull(name#1641)
      :  +- Scan ExistingRDD[age#1640L,name#1641]
      +- BroadcastExchange HashedRelationBroadcastMode(List(input[1, string, false]),false), [plan_id=1946]
         +- Filter isnotnull(name#1645)
            +- Scan ExistingRDD[height#1644L,name#1645]
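
The same plan can be requested with the broadcast function from pyspark.sql.functions, which is equivalent to calling .hint("broadcast") on the small side of the join:

from pyspark.sql.functions import broadcast

# marks df2 as small enough to ship whole to every executor
df.join(broadcast(df2), "name").explain()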