Converting an RDD to a DataFrame
Note: any conversion between an RDD and a DataFrame or Dataset requires import spark.implicits._ (here spark is not a package name but the name of the SparkSession object).
Prerequisite: import the implicit conversions and create an RDD.
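A minimal setup sketch, assuming a text file of comma-separated name,age lines (the file path is an assumption, and the REPL echoes will vary by session):
scala> import spark.implicits._
import spark.implicits._
scala> val peopleRDD = spark.sparkContext.textFile("examples/src/main/resources/people.txt")
peopleRDD: org.apache.spark.rdd.RDD[String] = examples/src/main/resources/people.txt MapPartitionsRDD[1] at textFile at <console>:24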
1) Manual conversion (specifying the columns by hand)
scala> peopleRDD.map{x => val para = x.split(","); (para(0), para(1).trim.toInt)}.toDF("name", "age")
res1: org.apache.spark.sql.DataFrame = [name: string, age: int]
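A quick usage check on the result (the rows shown depend on the contents of the input file):
scala> res1.show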
2) Conversion via reflection (requires a case class)
(1) Define a case class
scala> case class People(name:String, age:Int)
(2) Convert the RDD to a DataFrame using the case class
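The transcript omits this step; a minimal sketch, assuming the peopleRDD from the prerequisite step:
scala> peopleRDD.map{x => val para = x.split(","); People(para(0), para(1).trim.toInt)}.toDF
res2: org.apache.spark.sql.DataFrame = [name: string, age: int]
The schema [name: string, age: int] is derived from the fields of People by reflection, so no column names need to be passed to toDF.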
3) Programmatic conversion (for reference)
(1) Import the required types
scala> import org.apache.spark.sql.types._
import org.apache.spark.sql.types._
(2) Create the schema
scala> val structType: StructType = StructType(StructField("name", StringType) :: StructField("age", IntegerType) :: Nil)
structType: org.apache.spark.sql.types.StructType = StructType(StructField(name,StringType,true), StructField(age,IntegerType,true))
(3) Import the Row type
scala> import org.apache.spark.sql.Row
import org.apache.spark.sql.Row
(4) Create an RDD[Row] matching the schema
scala> val data = peopleRDD.map{ x => val para = x.split(","); Row(para(0), para(1).trim.toInt)}
data: org.apache.spark.rdd.RDD[org.apache.spark.sql.Row] = MapPartitionsRDD[6] at map at <console>:33
(5) Create the DataFrame from the data and the given schema
scala> val dataFrame = spark.createDataFrame(data, structType)
dataFrame: org.apache.spark.sql.DataFrame = [name: string, age: int]
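As a final check, show prints the first rows (a usage sketch; output depends on the input file):
scala> dataFrame.show
The programmatic route is mainly useful when the columns and their types are only known at runtime, since the schema is built as ordinary data rather than inferred from a compile-time case class.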