The SparkSubmit Process Cannot Be Force-Killed
Table of Contents
- 0. Preface
- 1. Main Content
- 2. Notes on an Issue Using Flink with Kafka
0. Preface
- Operating system: Linux (CentOS 7.5)
- Spark version: Spark 3.0.0
- Scala version: Scala 2.12.1
- Flink version: Flink 1.13.1
The "SparkSubmit process cannot be force-killed" situation described in this article appeared after running MLlib programs in the Spark-Shell environment.
1. Main Content
Note: the SparkSubmit process could not be force-killed; even running
kill -9
multiple times did not work!
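The failed attempt looks roughly like the following (a sketch; the PID 2116 is taken from the `/proc` output shown later in this article, and `jps` assumes a JDK on the PATH):

```shell
# Locate the SparkSubmit PID
jps -l | grep -i sparksubmit

# Try to force-kill it (2116 is the PID from the /proc output below)
kill -9 2116

# The process still shows up afterwards
ps -ef | grep 2116 | grep -v grep
```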
- Run the force kill from a new session window: open a new session window and run the force
kill
there; the SparkSubmit process still cannot be killed.
- Check whether the parent of the SparkSubmit PID still exists; if it does, kill that parent process directly. (If this method does not find a parent process, switch to another approach.) To see which parent this SparkSubmit process has, query its `/proc` status, i.e. `cat /proc/2116/status`.
The complete output is shown in the code block below:
```shell
[whybigdata@bd01 ~]$ cat /proc/2116/status
Name: java
State: Z (zombie)
Tgid: 2116
Ngid: 0
Pid: 2116
PPid: 2105
TracerPid: 0
Uid: 1000 1000 1000 1000
Gid: 1000 1000 1000 1000
FDSize: 0
Groups: 10 1000
Threads: 1
SigQ: 2/7804
SigPnd: 0000000000000000
ShdPnd: 0000000000000100
SigBlk: 0000000000000000
SigIgn: 0000000000000000
SigCgt: 2000000181005ccf
CapInh: 0000000000000000
CapPrm: 0000000000000000
CapEff: 0000000000000000
CapBnd: 0000001fffffffff
CapAmb: 0000000000000000
Seccomp: 0
Cpus_allowed: ffffffff,ffffffff,ffffffff,ffffffff
Cpus_allowed_list: 0-127
Mems_allowed: 00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000000,00000001
Mems_allowed_list: 0
voluntary_ctxt_switches: 6
nonvoluntary_ctxt_switches: 5
```
- Explanation:
  - Pid: 2116 --> the SparkSubmit process itself
  - PPid: 2105 --> the parent of that process
  - State: Z (zombie) --> the process is already a zombie, which is why `kill -9` has no effect; a zombie only disappears once its parent reaps it (or the parent itself exits).
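As a quicker alternative to reading the whole status file, the parent PID and process state can also be pulled with `ps` (a sketch; 2116 is the PID from the output above):

```shell
# Print only the parent PID and state of the SparkSubmit process
ps -o ppid=,stat= -p 2116
# Expected output here: "2105 Z" -- parent 2105, state Z (zombie)
```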
- After obtaining the parent PID of the SparkSubmit process, force-kill that parent first, then check again whether the SparkSubmit process is gone. The command is shown below:

```shell
kill -9 2105    # 2105 is the PPid found above
```

You can see that we have now successfully killed the SparkSubmit process.
- Switch back to the old session window, and you can observe the process status shown in the figure below:
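To double-check from the old session, listing the Java processes again should no longer show SparkSubmit (a sketch; `jps` assumes a JDK on the PATH):

```shell
# After the parent (2105) is killed, the zombie SparkSubmit process is reaped
jps -l | grep -i sparksubmit      # no output any more
ps -p 2116 -o pid=,stat=,comm=    # prints nothing once the process is gone
```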
2. Notes on an Issue Using Flink with Kafka
The Flink job reads data from Kafka, groups it, and computes the maximum over a configured window size, consuming records from the producer-side Input topic and writing the results to the Output topic.
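For reference, the two topics can be prepared with the Kafka command-line tools (a sketch; the broker address, partition count, and replication factor are assumptions, and `--bootstrap-server` assumes a Kafka CLI from version 2.2 or later):

```shell
# Create the Input and Output topics (broker address is an assumption)
kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --topic Input --partitions 1 --replication-factor 1
kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --topic Output --partitions 1 --replication-factor 1

# Feed a few test records into the Input topic
kafka-console-producer.sh --bootstrap-server localhost:9092 --topic Input
```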
When the job is submitted, the following exception is thrown:

```shell
Caused by: java.lang.ClassCastException: cannot assign instance of org.apache.commons.collections.map.LinkedMap to field org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumerBase.pendingOffsetsToCommit of type org.apache.commons.collections.map.LinkedMap in instance of org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer
at java.io.ObjectStreamClass$FieldReflector.setObjFieldValues(ObjectStreamClass.java:2287)
at java.io.ObjectStreamClass.setObjFieldValues(ObjectStreamClass.java:1417)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2293)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2287)
at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2211)
at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2069)
at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1573)
at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:615)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:600)
at org.apache.flink.util.InstantiationUtil.deserializeObject(InstantiationUtil.java:587)
at org.apache.flink.util.InstantiationUtil.readObjectFromConfig(InstantiationUtil.java:541)
at org.apache.flink.streaming.api.graph.StreamConfig.getStreamOperatorFactory(StreamConfig.java:322)
... 7 more
```
The cause of the error: the Kafka library is not compatible with Flink's inverted (child-first) class loading. Edit
conf/flink-conf.yaml
under the Flink installation directory, set the option below, and then restart Flink:
> classloader.resolve-order: parent-first
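A minimal way to apply this on a standalone cluster (a sketch; it assumes the standard `bin/stop-cluster.sh` / `bin/start-cluster.sh` scripts of the Flink distribution and that the option is not already set in the file):

```shell
cd /path/to/flink-1.13.1   # Flink installation directory (path is an assumption)

# Switch class loading to parent-first
echo "classloader.resolve-order: parent-first" >> conf/flink-conf.yaml

# Restart the standalone cluster so the setting takes effect
bin/stop-cluster.sh
bin/start-cluster.sh
```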
Note: before running the `bin/flink run --class class_reference your.jar` command, the dependencies that the JAR needs must be placed in the `lib` directory under the Flink installation directory, as sketched below.
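For the Kafka connector used here, that could look roughly like this (a sketch; the exact JAR names, the paths, and the main class `com.example.MaxJob` are assumptions/placeholders):

```shell
cd /path/to/flink-1.13.1   # Flink installation directory (path is an assumption)

# Copy the Kafka connector and client JARs the job depends on into Flink's lib/
# (artifact names below are assumptions for Flink 1.13.1 / Scala 2.12)
cp ~/jars/flink-connector-kafka_2.12-1.13.1.jar lib/
cp ~/jars/kafka-clients-2.4.1.jar lib/

# Restart the cluster so the new JARs are picked up, then submit the job
bin/stop-cluster.sh && bin/start-cluster.sh
bin/flink run --class com.example.MaxJob your.jar
```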
End of the article!