Table of Contents
[1. Preface](#1. Preface)
[4. Authentication Using SASL/Kerberos](#4. Authentication Using SASL/Kerberos)
[4.1. Prerequisites](#4.1. Prerequisites)
[4.1.1. Kerberos](#4.1.1. Kerberos)
[4.1.2. Create Kerberos Principals](#4.1.2. Create Kerberos Principals)
[4.1.3. Ensure All Hosts Are Reachable by Hostname](#4.1.3. Ensure All Hosts Are Reachable by Hostname)
[4.2. Configuring Kafka Brokers](#4.2. Configuring Kafka Brokers)
[4.3. Configuring Kafka Clients](#4.3. Configuring Kafka Clients)
1. Preface
Continuing from the previous post, "(1) Kafka Security: Authentication Using SASL - JAAS Configuration and SASL Configuration", this article starts from Section 4.
4. Authentication Using SASL/Kerberos
4.1. Prerequisites
4.1.1. Kerberos
If your organization is already using a Kerberos server (for example, by using Active Directory), there is no need to install a new server just for Kafka. Otherwise you will need to install one; your Linux vendor likely has packages for Kerberos and a short guide on how to install and configure it (Ubuntu, Red Hat). Note that if you are using Oracle Java, you will need to download the JCE policy files for your Java version and copy them to $JAVA_HOME/jre/lib/security.
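As a minimal sketch of that last step for Oracle JDK 8, assuming the JCE policy archive (jce_policy-8.zip, an example file name) has already been downloaded from Oracle:

```bash
# Example for Oracle JDK 8; the archive and folder names are assumptions,
# check the download that matches your Java version.
unzip jce_policy-8.zip
sudo cp UnlimitedJCEPolicyJDK8/local_policy.jar \
        UnlimitedJCEPolicyJDK8/US_export_policy.jar \
        "$JAVA_HOME/jre/lib/security/"
```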
4.1.2. Create Kerberos Principals
If you are using the organization's Kerberos or Active Directory server, ask your Kerberos administrator for a principal for each Kafka broker in your cluster and for every operating system user that will access Kafka with Kerberos authentication (via clients and tools).
If you have installed your own Kerberos, you will need to create these principals yourself using the following commands:
```bash
> sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/{hostname}@{REALM}'
> sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/{keytabname}.keytab kafka/{hostname}@{REALM}"
```
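For example, using the broker principal and keytab path that appear in the JAAS file later in this post (kafka1.hostname.com, EXAMPLE.COM and the keytab path are placeholders for your own values):

```bash
# Create the broker principal and export its keytab; substitute your own
# hostname, realm, and keytab location.
sudo /usr/sbin/kadmin.local -q 'addprinc -randkey kafka/kafka1.hostname.com@EXAMPLE.COM'
sudo /usr/sbin/kadmin.local -q "ktadd -k /etc/security/keytabs/kafka_server.keytab kafka/kafka1.hostname.com@EXAMPLE.COM"
```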
4.1.3. Ensure All Hosts Are Reachable by Hostname
It is a Kerberos requirement that all your hosts can be resolved with their FQDNs.
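A quick way to sanity-check this, assuming a broker host named kafka1.hostname.com (the placeholder used throughout this post):

```bash
# The local host should report its fully qualified name
hostname -f            # e.g. kafka1.hostname.com

# Every broker and client host should resolve by its FQDN
getent hosts kafka1.hostname.com

# Without DNS, an /etc/hosts entry is a common workaround (IP is an example):
#   192.168.1.10  kafka1.hostname.com  kafka1
```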
4.2. Configuring Kafka Brokers
1. Add a suitably modified JAAS file similar to the one below to each Kafka broker's config directory; let's call it kafka_server_jaas.conf for this example (note that each broker should have its own keytab):
```
KafkaServer {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};

// Zookeeper client authentication
Client {
    com.sun.security.auth.module.Krb5LoginModule required
    useKeyTab=true
    storeKey=true
    keyTab="/etc/security/keytabs/kafka_server.keytab"
    principal="kafka/kafka1.hostname.com@EXAMPLE.COM";
};
```
The KafkaServer section in the JAAS file tells the broker which principal to use and the location of the keytab where this principal is stored. It allows the broker to log in using the keytab specified in this section. See the notes for more details on Zookeeper SASL configuration.
2. Pass the JAAS and optionally the krb5 file locations as JVM parameters to each Kafka broker (see here for more details); one common way to pass them on startup is sketched after the snippet:
```
-Djava.security.krb5.conf=/etc/kafka/krb5.conf
-Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf
```
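A common way to supply these flags is the KAFKA_OPTS environment variable, which the Kafka start scripts pick up; the /etc/kafka paths and install layout below are assumptions:

```bash
# Export the JVM flags, then start the broker (paths are example values).
export KAFKA_OPTS="-Djava.security.krb5.conf=/etc/kafka/krb5.conf \
  -Djava.security.auth.login.config=/etc/kafka/kafka_server_jaas.conf"
bin/kafka-server-start.sh config/server.properties
```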
3. Make sure the keytabs configured in the JAAS file are readable by the operating system user who is starting the Kafka broker.
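For example, if the broker runs as an OS user named kafka (an assumption; use whatever account actually starts the broker):

```bash
# Restrict the keytab to the broker's service account (user name is an example).
sudo chown kafka:kafka /etc/security/keytabs/kafka_server.keytab
sudo chmod 400 /etc/security/keytabs/kafka_server.keytab
```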
4. Configure the SASL port and SASL mechanisms in server.properties as described here. For example:
```properties
listeners=SASL_PLAINTEXT://host.name:port
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=GSSAPI
sasl.enabled.mechanisms=GSSAPI
```
We must also configure the service name in server.properties, which should match the principal name of the Kafka brokers. In the above example, the principal is "kafka/kafka1.hostname.com@EXAMPLE.COM", so:
```properties
sasl.kerberos.service.name=kafka
```
4.3. Configuring Kafka Clients
To configure SASL authentication on the clients:
1. Clients (producers, consumers, connect workers, etc.) will authenticate to the cluster with their own principal (usually with the same name as the user running the client), so obtain or create these principals as needed. Then configure the JAAS configuration property for each client. Different clients within a JVM may run as different users by specifying different principals. The property sasl.jaas.config in producer.properties or consumer.properties describes how clients like the producer and consumer connect to the Kafka broker. The following is an example configuration for a client using a keytab (recommended for long-running processes):
```properties
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useKeyTab=true \
    storeKey=true \
    keyTab="/etc/security/keytabs/kafka_client.keytab" \
    principal="kafka-client-1@EXAMPLE.COM";
```
For command-line utilities like kafka-console-consumer or kafka-console-producer, kinit can be used along with "useTicketCache=true", as in:
```properties
sasl.jaas.config=com.sun.security.auth.module.Krb5LoginModule required \
    useTicketCache=true;
```
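With that setting the client picks up credentials from the local ticket cache, so a ticket has to be obtained first; the principal below is the example one used earlier in this post:

```bash
# Obtain a ticket for the client principal (example principal from above)
kinit kafka-client-1@EXAMPLE.COM
# Confirm the ticket is in the cache
klist
```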
JAAS configuration for clients may alternatively be specified as a JVM parameter, similar to brokers, as described here. Clients use the login section named KafkaClient. This option allows only one user for all client connections from a JVM.
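A minimal sketch of that alternative, assuming a file at /etc/kafka/kafka_client_jaas.conf (the path is a placeholder) that reuses the keytab-based client settings from above:

```bash
# /etc/kafka/kafka_client_jaas.conf would contain a KafkaClient section, e.g.:
#
#   KafkaClient {
#       com.sun.security.auth.module.Krb5LoginModule required
#       useKeyTab=true
#       storeKey=true
#       keyTab="/etc/security/keytabs/kafka_client.keytab"
#       principal="kafka-client-1@EXAMPLE.COM";
#   };
#
# Point the client JVM at it, e.g. via KAFKA_OPTS for the console tools:
export KAFKA_OPTS="-Djava.security.auth.login.config=/etc/kafka/kafka_client_jaas.conf"
```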
2. Make sure the keytabs configured in the JAAS configuration are readable by the operating system user who is starting the Kafka client.
3. Optionally pass the krb5 file locations as JVM parameters to each client JVM (see here for more details):
```
-Djava.security.krb5.conf=/etc/kafka/krb5.conf
```
4. Configure the following properties in producer.properties or consumer.properties:
```properties
security.protocol=SASL_PLAINTEXT (or SASL_SSL)
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
```
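Putting the client pieces together, a console-consumer run could look like the sketch below; the topic name, broker address and port, and the assumption that consumer.properties contains the settings above (plus the ticket-cache sasl.jaas.config) are all examples, not fixed values:

```bash
# Get a Kerberos ticket, then consume with the SASL-enabled consumer config.
kinit kafka-client-1@EXAMPLE.COM
bin/kafka-console-consumer.sh \
  --bootstrap-server kafka1.hostname.com:9092 \
  --topic test \
  --consumer.config consumer.properties \
  --from-beginning
```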