1. Background
A data disk on one of the Kafka brokers failed and its data could not be recovered, so the disk had to be wiped and replaced.
2. Procedure
List the topics:
sh ./kafka-topics --bootstrap-server ***:9092 --list --exclude-internal
Check the data distribution of a specific topic:
sh ./kafka-topics --bootstrap-server ***:9092 --describe --topic sr-event --exclude-internal
This showed that some topics had only one replica.
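For reference, a single-replica topic stands out in the describe output roughly like this (illustrative only; the broker IDs match this cluster, and the exact column layout varies by Kafka version):

Topic: sr-event  PartitionCount: 3  ReplicationFactor: 1  Configs:
    Topic: sr-event  Partition: 0  Leader: 557  Replicas: 557  Isr: 557
    Topic: sr-event  Partition: 1  Leader: 558  Replicas: 558  Isr: 558
    Topic: sr-event  Partition: 2  Leader: 559  Replicas: 559  Isr: 559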
Rebalance the replicas (raise the replication factor):
1. Go to the CDH bin directory: /opt/cloudera/parcels/CDH-6.3.2-1.cdh6.3.2.p0.1605554/bin
2. Upload the increase-replication-factor.json file to that directory.
3. Run: sh ./kafka-reassign-partitions --zookeeper ***:2181 --reassignment-json-file increase-replication-factor.json --execute
4. The JSON file contents are as follows (a generated alternative is sketched after the JSON):
{"version":1,
"partitions":[
{"topic":"ADV_TRACKINGIO","partition":0,"replicas":[557,558,559]},
{"topic":"ADV_TRACKINGIO","partition":1,"replicas":[557,558,559]},
{"topic":"ADV_TRACKINGIO","partition":2,"replicas":[557,558,559]},
{"topic":"User","partition":0,"replicas":[557,558,559]},
{"topic":"User","partition":1,"replicas":[557,558,559]},
{"topic":"User","partition":2,"replicas":[557,558,559]},
{"topic":"ADV","partition":0,"replicas":[557,558,559]},
{"topic":"ADV","partition":1,"replicas":[557,558,559]},
{"topic":"ADV","partition":2,"replicas":[557,558,559]},
]
}
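Rather than writing the assignment JSON by hand, kafka-reassign-partitions can generate a candidate plan. A minimal sketch, assuming the same broker IDs and a hypothetical topics-to-move.json listing the affected topics:

# topics-to-move.json (hypothetical file name) contains:
# {"version":1,"topics":[{"topic":"ADV_TRACKINGIO"},{"topic":"User"},{"topic":"ADV"}]}
sh ./kafka-reassign-partitions --zookeeper ***:2181 \
  --topics-to-move-json-file topics-to-move.json \
  --broker-list "557,558,559" \
  --generate

Note that --generate keeps each topic's current replication factor; to actually go from one replica to three you still have to edit the proposed "replicas" lists, which is why the file above was written by hand.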
Because of the broken disk on that broker, the reassignment could not complete.
Checking the reassignment status confirmed that the replicas had indeed failed to sync:
./kafka-reassign-partitions --zookeeper ***:2181 --reassignment-json-file /backup/increase-replication-factor.json --verify
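Its output looked roughly like the following (paraphrased from memory; the exact wording differs across Kafka versions):

Status of partition reassignment:
Reassignment of partition ADV_TRACKINGIO-0 is still in progress
Reassignment of partition ADV_TRACKINGIO-1 is still in progress
...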
By then the disk was completely dead, so the only option was to take that broker offline. We decommissioned it directly from CDH; this raised alerts about lagging replica counts, but data could still be read and written normally.
Once the disk was replaced, the broker was restarted from CDH, and the alerts gradually cleared until everything was back to normal.
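A handy way to watch the recovery is to list only the under-replicated partitions; once all ISRs have caught up, the command prints nothing:

sh ./kafka-topics --bootstrap-server ***:9092 --describe --under-replicated-partitions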
3. Summary
Had the failed disk not happened to hold that single-replica data, the usual three-replica setup would have made this routine: take the broker offline, repair it, and rejoin it to the cluster, with no replica anomalies along the way.
Instead we hit the worst case: a single-replica partition's data lived on the broken disk. All we could do was consume whatever historical data was still readable, then discard that portion of the data after the disk swap.
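To avoid being caught out like this again, it is cheap to audit for single-replica topics periodically. A minimal sketch (the grep pattern assumes the Kafka 2.x describe output format, with or without a space after the colon):

# flag lines like "ReplicationFactor: 1"; [^0-9] keeps 10, 11, ... from matching
sh ./kafka-topics --bootstrap-server ***:9092 --describe --exclude-internal \
  | grep -E 'ReplicationFactor: ?1[^0-9]'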