```python
class model(object):
    def __init__(self):
        # sc is assumed to be an existing SparkContext on the driver
        self.data = sc.textFile('path/to/data.csv')
        # other misc setup

    def run_model(self):
        # self.transformation_function is a bound method, so the closure
        # captures self -- and with it self.data, an RDD
        self.data = self.data.map(self.transformation_function)

    def transformation_function(self, row):
        row = row.split(',')
        return row[0] + row[1]

test = model()
test.run_model()
test.data.take(10)  # fails: the map closure cannot be serialized
```
Calling a class method inside a PySpark transformation like this raises:
```
Could not serialize object: Exception: It appears that you are attempting to broadcast an RDD or reference an RDD from an action or transformation. RDD transformations and actions can only be invoked by the driver, not inside of other transformations; for example, rdd1.map(lambda x: rdd2.values.count() * x) is invalid because the values transformation and count action cannot be performed inside of the rdd1.map transformation. For more information, see SPARK-5063.
```
Cause:

Spark does not allow the SparkContext to be referenced from inside an action or transformation. If the function you pass to a transformation references `self`, Spark serializes the entire object and ships it to the worker nodes, so that every executor running the task can access the method and the instance state it depends on. But serialization has a limit: not every object is serializable. SparkContext, SparkSession, open file handles, database connections, and the like cannot be pickled, because they are bound to a specific runtime environment and cannot be sent over the network or reconstructed on a remote node. Here, serializing the instance pulls in `self.data`, an RDD, which in turn holds a reference to the SparkContext; even though the function never accesses them explicitly, they end up inside the closure, hence the error.
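The trigger is that `self.transformation_function` is a bound method. Here is a minimal sketch, independent of Spark, showing that pickling a bound method pickles the whole instance behind it; the `Holder` class and its `state` attribute are made up for illustration:

```python
import pickle

class Holder(object):
    def __init__(self):
        # Stands in for self.data: in the real case this is an RDD,
        # which cannot be pickled at all.
        self.state = 'pretend this is an RDD'

    def f(self, row):
        return row

h = Holder()
# h.f is a bound method: it carries __self__, so pickling it
# pickles the entire Holder instance, state and all.
restored = pickle.loads(pickle.dumps(h.f))
print(restored.__self__.state)  # -> 'pretend this is an RDD'
```

PySpark uses cloudpickle rather than plain pickle, but the mechanism is the same: the function cannot travel to the executors without the instance it is bound to.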
Solution

Define the method being called as a static method with @staticmethod:
```python
class model(object):
    @staticmethod
    def transformation_function(row):
        row = row.split(',')
        return row[0] + row[1]

    def __init__(self):
        self.data = sc.textFile('some.csv')

    def run_model(self):
        # model.transformation_function is a plain function: the closure
        # no longer captures self, so nothing unserializable is shipped
        self.data = self.data.map(model.transformation_function)
```
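With this change, `test = model()`, `test.run_model()`, `test.data.take(10)` runs without the serialization error. An equivalent fix, if you would rather not change the class, is to move the function to module level; this is a sketch under the same assumptions as above (a driver-side `sc` and the same hypothetical `some.csv`):

```python
def transformation_function(row):
    # A module-level function carries no instance reference,
    # so the closure serializes cleanly.
    row = row.split(',')
    return row[0] + row[1]

class model(object):
    def __init__(self):
        self.data = sc.textFile('some.csv')

    def run_model(self):
        self.data = self.data.map(transformation_function)
```

Either way, the rule is the same: whatever you pass to `map` must be picklable on its own, without dragging in an object that holds an RDD or a SparkContext.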