Enabling Kerberos for Kafka

一、Basic Environment Preparation

  1. Create Kerberos principals:

Use the kadmin.local or kadmin command to create Kerberos principals for the Zookeeper and Kafka services. For example:

Note: create one principal per machine in the cluster.

```
kadmin.local -q "addprinc -randkey zookeeper/dshieldcdh01@HADOOP139.COM"
kadmin.local -q "addprinc -randkey zookeeper/dshieldcdh02@HADOOP139.COM"
kadmin.local -q "addprinc -randkey zookeeper/dshieldcdh03@HADOOP139.COM"
kadmin.local -q "addprinc -randkey kafka/dshieldcdh01@HADOOP139.COM"
kadmin.local -q "addprinc -randkey kafka/dshieldcdh02@HADOOP139.COM"
kadmin.local -q "addprinc -randkey kafka/dshieldcdh03@HADOOP139.COM"
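Since these commands follow one fixed pattern per host and service, they can also be generated with a short loop. This is a sketch using this article's host names and realm; it only prints the commands, so you can review them before running them on the KDC:

```shell
# Build one addprinc command per service/host pair (3 hosts x 2 services = 6).
cmds=$(for host in dshieldcdh01 dshieldcdh02 dshieldcdh03; do
  for svc in zookeeper kafka; do
    echo "kadmin.local -q \"addprinc -randkey ${svc}/${host}@HADOOP139.COM\""
  done
done)
echo "$cmds"
```

Piping the output to a root shell on the KDC host (`echo "$cmds" | bash`) would execute all six commands.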

  2. Verify that the principals were created:

```
kadmin.local -q "listprincs"
```

```
[root@dshieldcdh02 ~]# kadmin.local -q "listprincs"
Authenticating as principal root/admin@HADOOP139.COM with password.
K/M@HADOOP139.COM
host/dshieldcdh01@HADOOP139.COM
host/dshieldcdh02@HADOOP139.COM
kadmin/admin@HADOOP139.COM
kadmin/changepw@HADOOP139.COM
kadmin/dshieldcdh02@HADOOP139.COM
kafka/dshieldcdh01@HADOOP139.COM
kafka/dshieldcdh02@HADOOP139.COM
kafka/dshieldcdh03@HADOOP139.COM
kiprop/dshieldcdh02@HADOOP139.COM
krbtgt/HADOOP139.COM@HADOOP139.COM
root/admin@HADOOP139.COM
zookeeper/dshieldcdh01@HADOOP139.COM
zookeeper/dshieldcdh02@HADOOP139.COM
zookeeper/dshieldcdh03@HADOOP139.COM
```

  3. Create the keytab files:

```
mkdir /etc/security/keytabs/
kadmin.local -q "xst -k /etc/security/keytabs/kafka.keytab kafka/dshieldcdh01@HADOOP139.COM"
kadmin.local -q "xst -k /etc/security/keytabs/kafka.keytab kafka/dshieldcdh02@HADOOP139.COM"
kadmin.local -q "xst -k /etc/security/keytabs/kafka.keytab kafka/dshieldcdh03@HADOOP139.COM"
```

The zookeeper principals must be exported to /etc/security/keytabs/zookeeper.keytab the same way, as implied by the verification step below.

  4. Verify the keytab file contents:

```
klist -kt /etc/security/keytabs/zookeeper.keytab
klist -kt /etc/security/keytabs/kafka.keytab
kinit -kt /etc/security/keytabs/zookeeper.keytab zookeeper/dshieldcdh02@HADOOP139.COM
```

  5. Copy the keytab files to the other two Zookeeper hosts; a keytab can only be used on a host after it has been copied there:

```
scp /etc/security/keytabs/*keytab root@dshieldcdh01:/etc/security/keytabs/
scp /etc/security/keytabs/*keytab root@dshieldcdh03:/etc/security/keytabs/
```

  6. Verify that the keytab files work on the other machines:

```
kinit -kt /etc/security/keytabs/zookeeper.keytab zookeeper/dshieldcdh01@HADOOP139.COM
```

二、Configure Kerberos for Zookeeper
  1. Configure Zookeeper's JAAS file. In Zookeeper's configuration directory, create a JAAS configuration file (e.g. zookeeper_jaas.conf) with the following content:

```
Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/zookeeper.keytab"
    principal="zookeeper/dshieldcdh01@HADOOP139.COM"
    useTicketCache=false;
};

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/zookeeper.keytab"
    principal="zookeeper/dshieldcdh01@HADOOP139.COM"
    useTicketCache=false;
};
```

Adjust the principal and keyTab path to match the actual environment; each host uses its own principal.

  2. Configure Zookeeper's Kerberos settings. Switch to the configuration directory (cd conf), create the configuration file from the sample (cp zoo_sample.cfg zoo.cfg), and add the following to zoo.cfg to enable SASL authentication and specify the authentication provider:

```
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true
```

  3. In Zookeeper's startup script, add a JVM parameter pointing at the JAAS configuration file:

```
export JVMFLAGS="-Djava.security.auth.login.config=/usr/local/apache-zookeeper-3.6.4/conf/zookeeper_jaas.conf"
```

  4. Copy the JAAS file to the other nodes:

```
scp /usr/local/apache-zookeeper-3.6.4/conf/zookeeper_jaas.conf root@dshieldcdh02:/usr/local/apache-zookeeper-3.6.4/conf
scp /usr/local/apache-zookeeper-3.6.4/conf/zookeeper_jaas.conf root@dshieldcdh03:/usr/local/apache-zookeeper-3.6.4/conf
```

On dshieldcdh03 the files look like this (note the host-specific principal):

```
[root@dshieldcdh03 ~]# cat /usr/local/apache-zookeeper-3.6.4/conf/zookeeper_jaas.conf
Server {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/etc/security/keytabs/zookeeper.keytab"
    principal="zookeeper/dshieldcdh03@HADOOP139.COM";
};

[root@dshieldcdh03 ~]# cat /usr/local/apache-zookeeper-3.6.4/conf/zookeeper_client_jaas.conf
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    useTicketCache=false
    keyTab="/usr/local/apache-zookeeper-3.6.4/conf/zk.service.keytab"
    principal="zookeeper/dshieldcdh03@HADOOP139.COM";
};
```

三、Configure Kerberos for Kafka

  1. Copy the kafka user's keytab file to the other servers:

```
scp /etc/security/keytabs/kafka.keytab root@dshieldcdh02:/etc/security/keytabs/kafka.keytab
```

  2. Configure Kafka's JAAS files. In Kafka's configuration directory, create kafka_server_jaas.conf and kafka_client_jaas.conf; both carry the same sections here:

```
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="kafka/dshieldcdh01@HADOOP139.COM";
};

KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="kafka/dshieldcdh01@HADOOP139.COM";
};

Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    keyTab="/etc/security/keytabs/zookeeper.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="zookeeper"
    principal="zookeeper/dshieldcdh01@HADOOP139.COM";
};

com.sun.security.jgss.krb5.initiate {
    com.sun.security.auth.module.Krb5LoginModule required
    renewTGT=false
    doNotPrompt=true
    useKeyTab=true
    keyTab="/etc/security/keytabs/kafka.keytab"
    storeKey=true
    useTicketCache=false
    serviceName="kafka"
    principal="kafka/dshieldcdh01@HADOOP139.COM";
};
```

Adjust the principal, keyTab path, and serviceName to match the actual environment.

  3. Modify Kafka's startup script: add a JVM parameter specifying the path to the JAAS configuration file.

A minimal client JAAS file (the login context name must be KafkaClient, capitalized exactly like this):

```
[kafka@dshieldcdh01 config]$ cat kafka_client_jaas.conf
KafkaClient {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    serviceName=kafka
    keyTab="/etc/security/keytabs/kafka.keytab"
    principal="kafka/dshieldcdh01@HADOOP139.COM";
};
```

  4. Configure server.properties:

```
[kafka@dshieldcdh01 config]$ cat server.properties
broker.id=1
hostname=dshieldcdh01
listeners=SASL_PLAINTEXT://dshieldcdh01:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
sasl.kerberos.service.name=kafka
zookeeper.connect=dshieldcdh01:2181,dshieldcdh02:2181,dshieldcdh03:2181
zookeeper.set.acl=true
zookeeper.connection.timeout.ms=18000
```

  5. Distribute the JAAS file to the other brokers:

```
[kafka@dshieldcdh01 config]$ pwd
/usr/local/kafka/config
[kafka@dshieldcdh01 config]$ scp kafka_jaas.conf dshieldcdh02:/usr/local/kafka/config
[kafka@dshieldcdh01 config]$ scp kafka_jaas.conf dshieldcdh03:/usr/local/kafka/config
```

An equivalent server.properties fragment from another environment (host ambarim2):

```
# kerberos
listeners=SASL_PLAINTEXT://ambarim2:9092
advertised.listeners=SASL_PLAINTEXT://ambarim2:9092
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
principal.to.local.class=kafka.security.auth.KerberosPrincipalToLocal
sasl.enabled.mechanisms=GSSAPI
zookeeper.connect=dshieldcdh01:2181,dshieldcdh02:2181,dshieldcdh03:2181
```
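With the brokers configured, a quick client-side smoke test can be sketched as follows. The properties shown are standard Kafka SASL/GSSAPI client settings; the /tmp path and the topic name `test` are assumptions, not taken from this article:

```shell
# Write a minimal Kerberos client config (assumed path /tmp/client.properties).
cat > /tmp/client.properties <<'EOF'
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
EOF

# With the client JAAS file from this section, a console producer could then
# be started on a broker host like this:
#   export KAFKA_OPTS="-Djava.security.auth.login.config=/usr/local/kafka/config/kafka_client_jaas.conf"
#   kafka-console-producer.sh --broker-list dshieldcdh01:9092 \
#       --topic test --producer.config /tmp/client.properties
```

If the Kerberos setup is wrong, the producer fails during the SASL handshake before any message is sent, which makes this a useful first check.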
