Enabling Kerberos for Kafka

一、Basic environment preparation

  1. Create the Kerberos principals:

Use the kadmin.local or kadmin command to create Kerberos principals for the Zookeeper and Kafka services. For example:

Note: create one principal per host, i.e. as many principals as there are machines in the cluster.

kadmin.local -q "addprinc -randkey zookeeper/dshieldcdh01@HADOOP139.COM"

kadmin.local -q "addprinc -randkey zookeeper/dshieldcdh02@HADOOP139.COM"

kadmin.local -q "addprinc -randkey zookeeper/dshieldcdh03@HADOOP139.COM"

kadmin.local -q "addprinc -randkey kafka/dshieldcdh01@HADOOP139.COM"

kadmin.local -q "addprinc -randkey kafka/dshieldcdh02@HADOOP139.COM"

kadmin.local -q "addprinc -randkey kafka/dshieldcdh03@HADOOP139.COM"

  2. Verify that the principals were created successfully:

kadmin.local -q "listprincs"

[root@dshieldcdh02 ~]# kadmin.local -q "listprincs"
Authenticating as principal root/admin@HADOOP139.COM with password.
K/M@HADOOP139.COM
host/dshieldcdh01@HADOOP139.COM
host/dshieldcdh02@HADOOP139.COM
kadmin/admin@HADOOP139.COM
kadmin/changepw@HADOOP139.COM
kadmin/dshieldcdh02@HADOOP139.COM
kafka/dshieldcdh01@HADOOP139.COM
kafka/dshieldcdh02@HADOOP139.COM
kafka/dshieldcdh03@HADOOP139.COM
kiprop/dshieldcdh02@HADOOP139.COM
krbtgt/HADOOP139.COM@HADOOP139.COM
root/admin@HADOOP139.COM
zookeeper/dshieldcdh01@HADOOP139.COM
zookeeper/dshieldcdh02@HADOOP139.COM
zookeeper/dshieldcdh03@HADOOP139.COM

  3. Create the keytab files. Export the kafka principals into a shared keytab (the zookeeper principals are exported into /etc/security/keytabs/zookeeper.keytab the same way, since that keytab is used below):

mkdir /etc/security/keytabs/

kadmin.local -q "xst -k /etc/security/keytabs/kafka.keytab kafka/dshieldcdh01@HADOOP139.COM"

kadmin.local -q "xst -k /etc/security/keytabs/kafka.keytab kafka/dshieldcdh02@HADOOP139.COM"

kadmin.local -q "xst -k /etc/security/keytabs/kafka.keytab kafka/dshieldcdh03@HADOOP139.COM"

  4. Verify the contents of the keytab files:

klist -kt /etc/security/keytabs/zookeeper.keytab

klist -kt /etc/security/keytabs/kafka.keytab

kinit -kt /etc/security/keytabs/zookeeper.keytab zookeeper/dshieldcdh02@HADOOP139.COM

  5. Copy the keytab files to the other two Zookeeper hosts; the keytabs must be present on each host before they can be used:

scp /etc/security/keytabs/*keytab root@dshieldcdh01:/etc/security/keytabs/

scp /etc/security/keytabs/*keytab root@dshieldcdh03:/etc/security/keytabs/

  6. Verify that the keytab files are usable on the other machines:

kinit -kt /etc/security/keytabs/zookeeper.keytab zookeeper/dshieldcdh01@HADOOP139.COM

二、Configuring Kerberos for Zookeeper

  1. Configure the Zookeeper JAAS file:

In Zookeeper's configuration directory, create a JAAS configuration file (e.g. zookeeper_jaas.conf) with the following content:

Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/zookeeper.keytab"
  principal="zookeeper/dshieldcdh01@HADOOP139.COM"
  useTicketCache=false;
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  keyTab="/etc/security/keytabs/zookeeper.keytab"
  principal="zookeeper/dshieldcdh01@HADOOP139.COM"
  useTicketCache=false;
};

Note: adjust the principal and keyTab path to match your environment.

Add a JVM parameter to the Zookeeper startup script that points at this JAAS configuration file:

export JVMFLAGS="-Djava.security.auth.login.config=/usr/local/apache-zookeeper-3.6.4/conf/zookeeper_jaas.conf"

To enable Kerberos authentication in Zookeeper itself, switch to the configuration directory (cd conf), create zoo.cfg from the sample (cp zoo_sample.cfg zoo.cfg), and add the following settings to zoo.cfg to enable SASL authentication and specify the authentication provider (cat zoo.cfg):

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true

Distribute the JAAS file to the other Zookeeper nodes:

scp /usr/local/apache-zookeeper-3.6.4/conf/zookeeper_jaas.conf root@dshieldcdh02:/usr/local/apache-zookeeper-3.6.4/conf

scp /usr/local/apache-zookeeper-3.6.4/conf/zookeeper_jaas.conf root@dshieldcdh03:/usr/local/apache-zookeeper-3.6.4/conf

[root@dshieldcdh03 ~]# cat /usr/local/apache-zookeeper-3.6.4/conf/zookeeper_jaas.conf
Server {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/etc/security/keytabs/zookeeper.keytab"
  principal="zookeeper/dshieldcdh03@HADOOP139.COM";
};

cat /usr/local/apache-zookeeper-3.6.4/conf/zookeeper_client_jaas.conf
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  useTicketCache=false
  keyTab="/usr/local/apache-zookeeper-3.6.4/conf/zk.service.keytab"
  principal="zookeeper/dshieldcdh03@HADOOP139.COM";
};
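Before moving on to Kafka, it helps to confirm that the Zookeeper SASL setup actually works. A minimal sketch, assuming the install path and zookeeper_client_jaas.conf shown above and the stock zkCli.sh script (which passes CLIENT_JVMFLAGS/JVMFLAGS through to the JVM):

# point the Zookeeper CLI at the client JAAS file (keytab login, no kinit needed)
export CLIENT_JVMFLAGS="-Djava.security.auth.login.config=/usr/local/apache-zookeeper-3.6.4/conf/zookeeper_client_jaas.conf"
# connect to one of the secured Zookeeper nodes
/usr/local/apache-zookeeper-3.6.4/bin/zkCli.sh -server dshieldcdh01:2181
# inside the CLI, a simple "ls /" should succeed and the server log should show a SASL-authenticated session

If the login fails, running klist -kt against the keytab referenced by the Client section is the first thing to check.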
三、Configuring Kerberos for Kafka

Copy the kafka keytab file to the other servers:

scp /etc/security/keytabs/kafka.keytab root@dshieldcdh02:/etc/security/keytabs/kafka.keytab

Configure the Kafka JAAS files. In Kafka's configuration directory, create the JAAS configuration files (e.g. kafka_client_jaas.conf) with the following content:

kafka_client_jaas.conf

KafkaServer {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/kafka.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="kafka/dshieldcdh01@HADOOP139.COM";
};

KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/kafka.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="kafka/dshieldcdh01@HADOOP139.COM";
};

Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/etc/security/keytabs/zookeeper.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="zookeeper"
  principal="zookeeper/dshieldcdh01@HADOOP139.COM";
};

com.sun.security.jgss.krb5.initiate {
  com.sun.security.auth.module.Krb5LoginModule required
  renewTGT=false
  doNotPrompt=true
  useKeyTab=true
  keyTab="/etc/security/keytabs/kafka.keytab"
  storeKey=true
  useTicketCache=false
  serviceName="kafka"
  principal="kafka/dshieldcdh01@HADOOP139.COM";
};

kafka_server_jaas.conf contains the same four sections (KafkaServer, KafkaClient, Client, com.sun.security.jgss.krb5.initiate) with identical content.

Note: adjust the principal, keyTab path, and serviceName to match your environment.

Modify the Kafka startup script: add a JVM parameter that points at the JAAS configuration file (a sketch of one way to do this follows the listings below).

cat kafka_client_jaas.conf
KafkaClient {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  storeKey=true
  serviceName=kafka
  keyTab="/etc/security/keytabs/kafka.keytab"
  principal="kafka/dshieldcdh01@HADOOP139.COM";
};

cat server.properties
broker.id=1
hostname=dshieldcdh01
listeners=SASL_PLAINTEXT://dshieldcdh01:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
zookeeper.connect=dshieldcdh01:2181,dshieldcdh02:2181,dshieldcdh03:2181
zookeeper.set.acl=true
zookeeper.connection.timeout.ms=18000

[kafka@dshieldcdh01 config]$ pwd
/usr/local/kafka/config
[kafka@dshieldcdh01 config]$ scp kafka_jaas.conf dshieldcdh02:/usr/local/kafka/config
scp kafka_jaas.conf dshieldcdh03:/usr/local/kafka/config
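The step above says to add a JVM parameter to the Kafka startup script but does not show the exact line. A minimal sketch of one way to wire it up, assuming the install lives under /usr/local/kafka and the JAAS file names used above; kafka-run-class.sh passes KAFKA_OPTS to the broker JVM:

# point the broker JVM at the server-side JAAS file
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_server_jaas.conf"
# start the broker with the secured server.properties (repeat on every broker)
/usr/local/kafka/bin/kafka-server-start.sh -daemon /usr/local/kafka/config/server.properties
# check the startup log for a successful Kerberos login of kafka/<host>@HADOOP139.COM
tail -f /usr/local/kafka/logs/server.log

Alternatively, the export line can be placed directly in bin/kafka-server-start.sh so the setting does not depend on the current shell.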
The Kerberos-related broker properties, collected for reference:

#kerberos
listeners=SASL_PLAINTEXT://ambarim2:9092
advertised.listeners=SASL_PLAINTEXT://ambarim2:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal
sasl.enabled.mechanisms=GSSAPI
zookeeper.connect=dshieldcdh01:2181,dshieldcdh02:2181,dshieldcdh03:2181
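As a final end-to-end check, the console clients can talk to the secured listener. This is a sketch under the same assumptions (paths as above, the KafkaClient section from kafka_client_jaas.conf, and a hypothetical client.properties file and topic name created just for this test); note that older Kafka versions use --broker-list instead of --bootstrap-server for the console producer:

# hypothetical client config for SASL/GSSAPI
cat > /usr/local/kafka/config/client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
EOF

# point the client JVM at the JAAS file containing the KafkaClient section
export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_client_jaas.conf"

# produce and consume over the SASL_PLAINTEXT listener
# (assumes the topic exists or auto topic creation is enabled)
/usr/local/kafka/bin/kafka-console-producer.sh --bootstrap-server dshieldcdh01:9092 --producer.config /usr/local/kafka/config/client.properties --topic kerberos-test
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server dshieldcdh01:9092 --consumer.config /usr/local/kafka/config/client.properties --topic kerberos-test --from-beginning

Messages typed into the producer should appear in the consumer; authentication errors at this point usually mean the principal, keytab, or sasl.kerberos.service.name does not match the broker configuration.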
