Configuring Kafka with SASL_SSL Authentication and Transport Encryption

This article is shared from the Tianyi Cloud Developer Community post "Configuring Kafka with SASL_SSL Authentication and Transport Encryption", by 王****帅.

I. SSL Certificate Configuration

1. Generate the certificate

For example, I ran the command below. The prompts ask, in order, for: password, password confirmation, first and last name, organizational unit, organization name, city, state/province, two-letter country code, key password, key password confirmation. The warning at the end can be ignored. The important point in this step is that the "first and last name" field must be the domain name, e.g. "localhost". Do not enter an arbitrary value: I once tried another string and kept hitting problems later when the client certificate was being authenticated.

```bash
keytool -keystore server.keystore.jks -alias localhost -validity 3650 -genkey
Enter keystore password:
Re-enter new password:
What is your first and last name?
  [Unknown]:  localhost
What is the name of your organizational unit?
  [Unknown]:  CH-kafka
What is the name of your organization?
  [Unknown]:  kafkadev
What is the name of your City or Locality?
  [Unknown]:  shanghai
What is the name of your State or Province?
  [Unknown]:  shanghai
What is the two-letter country code for this unit?
  [Unknown]:  CH
Is CN=localhost, OU=CH-kafka, O=kafkadev, L=shanghai, ST=shanghai, C=CH correct?
  [no]:  yes
Enter key password for <localhost>
        (RETURN if same as keystore password):
Re-enter new password:

Warning:
The JKS keystore uses a proprietary format. It is recommended to migrate to PKCS12 which is an industry standard format using "keytool -importkeystore -srckeystore server.keystore.jks -destkeystore server.keystore.jks -deststoretype pkcs12".
```

After completing the steps above, you can verify the contents of the generated keystore with the command below.
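```bash
keytool -list -v -keystore server.keystore.jks
```

In the output, check that the certificate's CN matches the hostname entered in the previous step (localhost here).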

2. Generate a CA

After step 1, every machine in the cluster has a public/private key pair and a certificate that identifies it. The certificate, however, is unsigned, which means an attacker could create such a certificate and impersonate any machine.

Therefore, we prevent forged certificates by signing the certificate of every machine in the cluster. A certificate authority (CA) is responsible for signing certificates. The CA works like a government that issues passports: the government stamps (signs) each passport so that it is hard to forge, and other governments verify the stamp to confirm the passport is genuine. In the same way, the CA signs certificates, and the cryptography guarantees that a signed certificate is hard to forge. So as long as the CA is a genuine and trusted authority, clients have strong assurance that they are connecting to the real machine. The CA generated below is simply a public/private key pair plus a certificate, used to sign other certificates. The command prompts, in order, for: password, password confirmation, two-letter country code, state/province, city, organization name, organizational unit, common name (the domain name), and email address. This is roughly the reverse of the keytool prompt order in step 1; the values must match those entered in step 1, and the email address can be left blank.

```bash
openssl req -new -x509 -keyout ca-key -out ca-cert -days 3650
Generating a 2048 bit RSA private key
.........................................................................+++
..................+++
writing new private key to 'ca-key'
Enter PEM pass phrase:
Verifying - Enter PEM pass phrase:
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:CH
State or Province Name (full name) []:shanghai
Locality Name (eg, city) [Default City]:shanghai
Organization Name (eg, company) [Default Company Ltd]:kafkadev
Organizational Unit Name (eg, section) []:CH-kafka
Common Name (eg, your name or your server's hostname) []:localhost
Email Address []:
```

Add the generated CA to the clients' truststore so that clients can trust this CA:

```bash
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert
```

3. Sign the certificates

Use the CA generated in step 2 to sign all the certificates generated in step 1. First, export the certificate from the keystore:

```bash
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
```

Then sign it with the CA ({validity} and {ca-password} are placeholders for the certificate validity in days and the CA key passphrase):

```bash
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days {validity} -CAcreateserial -passin pass:{ca-password}
```

Finally, import both the CA certificate and the signed certificate into the keystore:

```bash
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed
```

The parameters above are explained as follows:

keystore: the location of the keystore

ca-cert: the CA's certificate

ca-key: the CA's private key

ca-password: the passphrase of the CA key

cert-file: the exported, unsigned certificate of the server

cert-signed: the signed certificate of the server

The complete script for all of the steps above is shown below. Remember to change the password to your own; to avoid confusion, it is best to use the same password for every step.

```bash
#!/bin/bash
# Step 1
keytool -keystore server.keystore.jks -alias localhost -validity 3650 -keyalg RSA -genkey

# Step 2
openssl req -new -x509 -keyout ca-key -out ca-cert -days 3650
keytool -keystore server.truststore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.truststore.jks -alias CARoot -import -file ca-cert

# Step 3
keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file
openssl x509 -req -CA ca-cert -CAkey ca-key -in cert-file -out cert-signed -days 3650 -CAcreateserial -passin pass:123456
keytool -keystore server.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore server.keystore.jks -alias localhost -import -file cert-signed

# Step 4
keytool -keystore client.keystore.jks -alias localhost -validity 3650 -keyalg RSA -genkey
```
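Note that Step 4 only generates the client keystore; the script above never signs the client's certificate with the CA. If the broker is later configured to require client authentication (ssl.client.auth=required), the client certificate needs the same treatment as the server's. A minimal sketch, mirroring steps 1-3 and assuming the same CA files and the 123456 passphrase (the client-cert-file/client-cert-signed names are illustrative):

```bash
# Export the unsigned client certificate from the client keystore
keytool -keystore client.keystore.jks -alias localhost -certreq -file client-cert-file
# Sign it with the same CA that signed the server certificate
openssl x509 -req -CA ca-cert -CAkey ca-key -in client-cert-file -out client-cert-signed -days 3650 -CAcreateserial -passin pass:123456
# Import the CA certificate and the signed certificate back into the client keystore
keytool -keystore client.keystore.jks -alias CARoot -import -file ca-cert
keytool -keystore client.keystore.jks -alias localhost -import -file client-cert-signed
```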

II. Configure ZooKeeper Security Authentication

1. Create the jaas.conf security configuration file in ZooKeeper's conf directory

This file defines two users, admin and kafka; the value after the equals sign is the corresponding password.

The users defined here are the ones allowed to connect to the ZooKeeper server. The JAAS section must keep its default name, Server (renaming it causes an error).

```bash
Server {
  org.apache.zookeeper.server.auth.DigestLoginModule required
  user_admin="!234Qwer"
  user_kafka="clearwater001";
};
```

2. Add the authentication settings to ZooKeeper's zoo.cfg configuration file, as follows

```bash
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/zookeeper/apache-zookeeper-3.5.9-bin/data
dataLogDir=/usr/local/zookeeper/apache-zookeeper-3.5.9-bin/logs
clientPort=2181
# SASL authentication
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
requireClientAuthScheme=sasl
jaasLoginRenew=3600000
```

3. Add a JVM parameter to the zkEnv.sh startup environment script so that the location of the jaas.conf file is passed to the ZooKeeper JVM as a system property

```bash
  LIBPATH=("${ZOOKEEPER_PREFIX}"/share/zookeeper/*.jar)
else
  #release tarball format
  for i in "$ZOOBINDIR"/../zookeeper-*.jar
  do
    CLASSPATH="$i:$CLASSPATH"
  done
  LIBPATH=("${ZOOBINDIR}"/../lib/*.jar)
fi

for i in "${LIBPATH[@]}"
do
    CLASSPATH="$i:$CLASSPATH"
done

#make it work for developers
for d in "$ZOOBINDIR"/../build/lib/*.jar
do
   CLASSPATH="$d:$CLASSPATH"
done

for d in "$ZOOBINDIR"/../zookeeper-server/target/lib/*.jar
do
   CLASSPATH="$d:$CLASSPATH"
done

#make it work for developers
CLASSPATH="$ZOOBINDIR/../build/classes:$CLASSPATH"

#make it work for developers
CLASSPATH="$ZOOBINDIR/../zookeeper-server/target/classes:$CLASSPATH"

case "`uname`" in
    CYGWIN*|MINGW*) cygwin=true ;;
    *) cygwin=false ;;
esac

if $cygwin
then
    CLASSPATH=`cygpath -wp "$CLASSPATH"`
fi

#echo "CLASSPATH=$CLASSPATH"

# default heap for zookeeper server
ZK_SERVER_HEAP="${ZK_SERVER_HEAP:-1000}"
export SERVER_JVMFLAGS="-Xmx${ZK_SERVER_HEAP}m $SERVER_JVMFLAGS"

# JVM parameter: pass the jaas.conf location to the server JVM
# (appended to SERVER_JVMFLAGS so the heap setting above is preserved)
export SERVER_JVMFLAGS="-Djava.security.auth.login.config=/usr/local/zookeeper/apache-zookeeper-3.5.9-bin/conf/jaas.conf $SERVER_JVMFLAGS"

# default heap for zookeeper client
ZK_CLIENT_HEAP="${ZK_CLIENT_HEAP:-256}"
export CLIENT_JVMFLAGS="-Xmx${ZK_CLIENT_HEAP}m $CLIENT_JVMFLAGS"
```

III. Configure Kafka Security Authentication

1. Create the jaas.conf authentication file in Kafka's config directory

The username and password properties define the user that the Kafka brokers use to communicate with each other.

The user_ entries define the users allowed to connect to the brokers; these users are available to producers and consumers.

Both kinds of users are configured in the default JAAS section KafkaServer.

The user the broker uses to connect to ZooKeeper is configured in the default JAAS section Client; pick one of the users from the ZooKeeper jaas.conf above.

```bash
KafkaServer {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="admin"
        password="clearwater"
        user_admin="clearwater"
        user_kafka="!234Qwer";
};

Client {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="kafka"
        password="clearwater001";
};
```

2. Create the kafka_client_jaas.conf authentication file in Kafka's config directory

```bash
KafkaClient {
        org.apache.kafka.common.security.plain.PlainLoginModule required
        username="kafka"
        password="!234Qwer";
};
```
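If you prefer not to edit the startup scripts in the following steps, kafka-run-class.sh also appends the KAFKA_OPTS environment variable to the JVM options, so the JAAS file can be supplied per shell session instead; a sketch of that alternative:

```bash
# Alternative to editing the scripts: set the JAAS config for the current shell.
# kafka-run-class.sh picks up $KAFKA_OPTS and passes it to the JVM.
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka_2.12-2.8.0/config/kafka_client_jaas.conf"
```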

3. Configure the environment variable in the kafka-server-start.sh startup script in Kafka's bin directory, pointing it at the jaas.conf file

```bash
if [ $# -lt 1 ];
then
    echo "USAGE: $0 [-daemon] server.properties [--override property=value]*"
    exit 1
fi
base_dir=$(dirname $0)

if [ "x$KAFKA_LOG4J_OPTS" = "x" ]; then
    export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:$base_dir/../config/log4j.properties"
fi

# Environment variable: point the broker JVM at the JAAS config
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx1G -Xms1G -Djava.security.auth.login.config=/usr/local/kafka_2.12-2.8.0/config/jaas.conf"
fi

EXTRA_ARGS=${EXTRA_ARGS-'-name kafkaServer -loggc'}

COMMAND=$1
case $COMMAND in
  -daemon)
    EXTRA_ARGS="-daemon "$EXTRA_ARGS
    shift
    ;;
  *)
    ;;
esac

exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka "$@"
```

4. Configure the environment variable in the kafka-console-producer.sh startup script in Kafka's bin directory, pointing it at the kafka_client_jaas.conf file

```bash
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx512M -Djava.security.auth.login.config=/usr/local/kafka_2.12-2.8.0/config/kafka_client_jaas.conf"
fi
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleProducer "$@"
```

5. Configure the environment variable in the kafka-console-consumer.sh startup script in Kafka's bin directory, pointing it at the kafka_client_jaas.conf file

```bash
if [ "x$KAFKA_HEAP_OPTS" = "x" ]; then
    export KAFKA_HEAP_OPTS="-Xmx512M -Djava.security.auth.login.config=/usr/local/kafka_2.12-2.8.0/config/kafka_client_jaas.conf"
fi
exec $(dirname $0)/kafka-run-class.sh kafka.tools.ConsoleConsumer "$@"
```

6. Create the client-ssl.properties authentication file in Kafka's bin directory (passed explicitly when running the producer and consumer commands). Note that leaving ssl.endpoint.identification.algorithm empty disables server hostname verification, which avoids hostname-mismatch failures with the self-signed certificates used here.

```bash
security.protocol=SASL_SSL
ssl.endpoint.identification.algorithm=
sasl.mechanism=PLAIN
group.id=test
ssl.truststore.location=/usr/local/kafka_2.12-2.8.0/ssl/client.truststore.jks
ssl.truststore.password=clearwater001!
```

7. Add the following to Kafka's server.properties configuration file

```bash
#sasl_ssl
listeners=SASL_SSL://172.17.0.53:9093
advertised.listeners=SASL_SSL://172.17.0.53:9093
security.inter.broker.protocol=SASL_SSL
sasl.enabled.mechanisms=PLAIN
sasl.mechanism.inter.broker.protocol=PLAIN
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
ssl.keystore.location=/usr/local/kafka_2.12-2.8.0/ssl/server.keystore.jks
ssl.keystore.password=clearwater001!
ssl.key.password=clearwater001!
ssl.truststore.location=/usr/local/kafka_2.12-2.8.0/ssl/server.truststore.jks
ssl.truststore.password=clearwater001!
ssl.endpoint.identification.algorithm=
```

8. Restart ZooKeeper and Kafka, create a topic (commands in section IV), and grant producer and consumer authorization

```bash
# Producer authorization
./kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:"kafka" --producer --topic "test"
# Consumer authorization
./kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:"kafka" --consumer --topic "test" --group '*'
```
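To confirm the grants took effect, the same tool can list the ACLs now attached to the topic:

```bash
# List the ACLs on the "test" topic
./kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic "test"
```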

IV. Related Startup Commands

1. Start ZooKeeper

```bash
/usr/local/zookeeper/apache-zookeeper-3.5.9-bin/bin/zkServer.sh start
```
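Whether it started correctly can be checked with the status subcommand:

```bash
# Reports the running mode: standalone, leader, or follower
/usr/local/zookeeper/apache-zookeeper-3.5.9-bin/bin/zkServer.sh status
```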

2. Start the Kafka server

```bash
./kafka-server-start.sh -daemon /usr/local/kafka_2.12-2.8.0/config/server.properties
```

3. Create a topic

```bash
./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
# List topics
./kafka-topics.sh --list --zookeeper localhost:2181
```

4. Produce and consume messages before encryption (the new-style commands are generally used)

```bash
### Produce messages ###
# Old style
./kafka-console-producer.sh --broker-list localhost:9092 --topic test
# New style
./kafka-console-producer.sh --bootstrap-server localhost:9092 --topic test

### Consume messages ###
# Old style
./kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
# New style
./kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
```

5. Produce and consume messages after encryption

```bash
# Produce messages
./kafka-console-producer.sh --bootstrap-server 172.17.0.53:9093 --topic test --producer.config client-ssl.properties
# Consume messages
./kafka-console-consumer.sh --bootstrap-server 172.17.0.53:9093 --topic test --from-beginning --consumer.config client-ssl.properties
```