Transform Data with Spark

In Azure Cloud Shell (PowerShell), remove any previous copy of the repository, clone the lab files, and run the setup script to provision the environment:

```powershell
rm -r dp-203 -f

git clone https://github.com/MicrosoftLearning/dp-203-azure-data-engineer dp-203

cd dp-203/Allfiles/labs/06

./setup.ps1
```

The notebook files for this exercise are in the lab repository:
https://github.com/MicrosoftLearning/dp-203-azure-data-engineer/tree/master/Allfiles/labs/06/notebooks

Load the sales order data from the CSV files into a DataFrame and preview the first rows:

```python
# Read all CSV files in the /data folder, using the first row as column
# headers and letting Spark infer the column types
order_details = spark.read.csv('/data/*.csv', header=True, inferSchema=True)
display(order_details.limit(5))
```
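Schema inference requires an extra pass over the data, so an explicit schema is often faster on large inputs. A minimal sketch, assuming the column names and types below match the lab's CSV files (they are assumptions here, not taken from the lab):

```python
from pyspark.sql.types import (StructType, StructField, StringType,
                               IntegerType, DateType, FloatType)

# Assumed column layout for the lab CSVs -- verify against your data
order_schema = StructType([
    StructField("SalesOrderNumber", StringType()),
    StructField("SalesOrderLineNumber", IntegerType()),
    StructField("OrderDate", DateType()),
    StructField("CustomerName", StringType()),
    StructField("Email", StringType()),
    StructField("Item", StringType()),
    StructField("Quantity", IntegerType()),
    StructField("UnitPrice", FloatType()),
    StructField("Tax", FloatType())
])

order_details = spark.read.csv('/data/*.csv', header=True, schema=order_schema)
```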
Split the CustomerName column into separate FirstName and LastName columns, then drop the original:

```python
from pyspark.sql.functions import split, col

# Create the new FirstName and LastName fields by splitting CustomerName on the space
transformed_df = order_details \
    .withColumn("FirstName", split(col("CustomerName"), " ").getItem(0)) \
    .withColumn("LastName", split(col("CustomerName"), " ").getItem(1))

# Remove the original CustomerName field
transformed_df = transformed_df.drop("CustomerName")

display(transformed_df.limit(5))
```
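`getItem(1)` takes only the second space-separated token, so anything after a middle name would be lost. A hedged alternative (not part of the lab) that always treats the last token as the surname, using `element_at` with a negative index:

```python
from pyspark.sql.functions import split, col, element_at

# Take the first token as the given name and the last token as the
# surname; a sketch for names with middle parts, not from the lab
name_parts = split(col("CustomerName"), " ")
alt_df = order_details \
    .withColumn("FirstName", element_at(name_parts, 1)) \
    .withColumn("LastName", element_at(name_parts, -1)) \
    .drop("CustomerName")
display(alt_df.limit(5))
```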
Save the transformed data in Parquet format, overwriting any existing data at the path:

```python
transformed_df.write.mode("overwrite").parquet('/transformed_data/orders.parquet')
print("Transformed data saved!")
```
Derive Year and Month columns from the OrderDate field, then write the data partitioned by those columns:

```python
from pyspark.sql.functions import year, month, col

# Add Year and Month columns derived from the OrderDate field
dated_df = transformed_df.withColumn("Year", year(col("OrderDate"))) \
                         .withColumn("Month", month(col("OrderDate")))
display(dated_df.limit(5))

# Write the data as Parquet in a Year=.../Month=... folder hierarchy
dated_df.write.partitionBy("Year", "Month").mode("overwrite").parquet("/partitioned_data")
print("Transformed data saved!")
```
Reading from a specific partition folder loads only the matching data:

```python
# Read only the 2020 data by pointing at its partition folders
orders_2020 = spark.read.parquet('/partitioned_data/Year=2020/Month=*')
display(orders_2020.limit(5))
```
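Because the partition values live in the folder names rather than the files, the Year and Month columns do not appear in the resulting DataFrame. If you need them, one option (a sketch, not from the lab) is to read from the partition root and filter; Spark still prunes the scan to the matching folders:

```python
from pyspark.sql.functions import col

# Reading from the root keeps the Year and Month partition columns,
# and the filter still limits the scan to the matching partitions
orders_2020_cols = spark.read.parquet('/partitioned_data').filter(col("Year") == 2020)
display(orders_2020_cols.limit(5))
```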
Save the original order data as a table in the metastore, storing the underlying Parquet files at an explicit path:

```python
order_details.write.saveAsTable('sales_orders', format='parquet', mode='overwrite', path='/sales_orders_table')
```
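Because an explicit path is supplied, this creates an external (unmanaged) table: dropping it later removes only the metastore entry, not the files. A quick way to confirm, using standard Spark SQL:

```python
# Show the table's metadata, including its Type and Location
spark.sql("DESCRIBE EXTENDED sales_orders").show(truncate=False)
```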
Use SQL to derive the Year and Month columns, then save the result as a new partitioned table:

```python
# Add Year and Month columns with a SQL query over the sales_orders table
sql_transform = spark.sql("SELECT *, YEAR(OrderDate) AS Year, MONTH(OrderDate) AS Month FROM sales_orders")
display(sql_transform.limit(5))

# Save the result as a partitioned external table
sql_transform.write.partitionBy("Year","Month").saveAsTable('transformed_orders', format='parquet', mode='overwrite', path='/transformed_orders_table')
```
Because the data is now registered as a table, you can query a specific partition directly with SQL:

```sql
%%sql

SELECT * FROM transformed_orders
WHERE Year = 2021
    AND Month = 1
```
Finally, drop both tables to clean up:

```sql
%%sql

DROP TABLE transformed_orders;
DROP TABLE sales_orders;
```
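As both tables are external, these statements remove only the metastore entries; the underlying Parquet files remain in storage. A sketch for deleting those files as well, assuming the mssparkutils helpers in Azure Synapse notebooks:

```python
from notebookutils import mssparkutils

# Delete the files left behind by the external tables (recurse=True)
mssparkutils.fs.rm('/sales_orders_table', True)
mssparkutils.fs.rm('/transformed_orders_table', True)
```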