(1) Kafka Security: Encryption and Authentication Using SSL

Contents

[1. Introduction](#1-introduction)

[2. Encryption and Authentication Using SSL](#2-encryption-and-authentication-using-ssl)

[2.1. Generating an SSL Key and Certificate for Each Kafka Broker](#21-generating-an-ssl-key-and-certificate-for-each-kafka-broker)

[2.1.1. Host Name Verification](#211-host-name-verification)

[2.1.2. Note](#212-note)


1. Introduction

SSL (Secure Sockets Layer) is a network protocol that provides a way to establish a secure connection between a client and a server. Once SSL is enabled, all data transfer in a Kafka cluster, including the message traffic between producers, consumers, and brokers, is encrypted, ensuring that sensitive information cannot be eavesdropped on or tampered with in transit.
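For orientation, the broker-side settings that the steps in this article ultimately feed into look roughly like the sketch below. This is only an illustration using the standard Kafka configuration names; the paths, port, and passwords are placeholders, not values from this article.

```properties
# server.properties (illustrative sketch; paths and passwords are placeholders)
listeners=SSL://kafka1.example.com:9093
security.inter.broker.protocol=SSL

ssl.keystore.location=/var/private/ssl/server.keystore.p12
ssl.keystore.type=PKCS12
ssl.keystore.password=changeit
ssl.key.password=changeit

ssl.truststore.location=/var/private/ssl/server.truststore.p12
ssl.truststore.type=PKCS12
ssl.truststore.password=changeit
```

The keystore holds the broker's own key and certificate (generated below); the truststore holds the CA certificates the broker trusts when authenticating others.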

2. Encryption and Authentication Using SSL

From the Apache Kafka documentation: Apache Kafka allows clients to use SSL for encryption of traffic as well as authentication. By default, SSL is disabled but can be turned on if needed. The following paragraphs explain in detail how to set up your own PKI infrastructure, use it to create certificates, and configure Kafka to use them.

2.1. Generating an SSL Key and Certificate for Each Kafka Broker

The first step of deploying one or more brokers with SSL support is to generate a public/private keypair for every server. Since Kafka expects all keys and certificates to be stored in keystores, we will use Java's keytool command for this task. The tool supports two different keystore formats: the Java-specific jks format, which has been deprecated by now, and PKCS12. PKCS12 is the default format as of Java 9; to ensure this format is used regardless of the Java version in use, all following commands explicitly specify the PKCS12 format.

```shell
> keytool -keystore {keystorefile} -alias localhost -validity {validity} -genkey -keyalg RSA -storetype pkcs12
```

You need to specify two parameters in the above command:

  1. keystorefile: the keystore file that stores the keys (and later the certificate) for this broker. The keystore file contains the private and public keys of this broker, therefore it needs to be kept safe. Ideally this step is run on the Kafka broker that the key will be used on, as this key should never be transmitted and should never leave the server it is intended for.
  2. validity: the validity of the key in days. Please note that this differs from the validity period of the certificate, which will be determined when the certificate is signed. You can use the same key to request multiple certificates: if your key has a validity of 10 years, but your CA will only sign certificates that are valid for one year, you can use the same key with 10 certificates over time.
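The distinction between key validity and certificate validity can be demonstrated without a CA. The sketch below uses OpenSSL rather than keytool (assuming openssl is installed; the file names are illustrative) to issue two certificates from one and the same private key:

```shell
# Generate one RSA private key (the keytool -genkey step creates such a key
# inside the keystore; here it is a plain file for illustration).
openssl genrsa -out broker.key 2048

# Issue two self-signed certificates from the same key. Each certificate has
# its own validity period, independent of the key itself.
openssl req -new -x509 -key broker.key -days 365 -subj "/CN=kafka-broker-1" -out cert-year1.pem
openssl req -new -x509 -key broker.key -days 365 -subj "/CN=kafka-broker-1" -out cert-year2.pem

# Both certificates carry the same public key:
openssl x509 -in cert-year1.pem -noout -pubkey > pub1.pem
openssl x509 -in cert-year2.pem -noout -pubkey > pub2.pem
diff pub1.pem pub2.pem && echo "same public key"
```

In a real deployment the second certificate would be issued when the first one expires, while the key stays where it was generated.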

To obtain a certificate that can be used with the private key that was just created, a certificate signing request needs to be created. This signing request, when signed by a trusted CA, results in the actual certificate, which can then be installed in the keystore and used for authentication purposes.

To generate certificate signing requests, run the following command for all server keystores created so far.

```shell
> keytool -keystore server.keystore.jks -alias localhost -certreq -file cert-file -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}
```

This command assumes that you want to add hostname information to the certificate. If this is not the case, you can omit the extension parameter -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}. Please see below for more information on this.
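As a cross-check of what such a request contains, the same thing can be produced and inspected with OpenSSL (a sketch assuming OpenSSL 1.1.1 or newer for -addext; the hostname and file names are made up):

```shell
# Create a private key and a certificate signing request carrying a SAN entry,
# analogous to keytool -certreq ... -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}
openssl genrsa -out server.key 2048
openssl req -new -key server.key -subj "/CN=kafka1.example.com" \
  -addext "subjectAltName=DNS:kafka1.example.com,IP:192.0.2.10" \
  -out server.csr

# Confirm the SAN made it into the request before sending it to the CA
openssl req -in server.csr -noout -text | grep -A1 "Subject Alternative Name"
```

Inspecting the request this way is worth doing: a CA will typically only sign what is in the request, so a forgotten SAN surfaces here rather than after deployment.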

2.1.1. Host Name Verification

Host name verification, when enabled, is the process of checking attributes from the certificate presented by the server you are connecting to against the actual hostname or IP address of that server, to ensure that you are indeed connecting to the correct server.

The main reason for this check is to prevent man-in-the-middle attacks. For Kafka, this check was disabled by default for a long time, but as of Kafka 2.0.0, host name verification of servers is enabled by default for client connections as well as inter-broker connections.

Server host name verification may be disabled by setting ssl.endpoint.identification.algorithm to an empty string.

For dynamically configured broker listeners, hostname verification may be disabled using kafka-configs.sh:

```shell
> bin/kafka-configs.sh --bootstrap-server localhost:9093 --entity-type brokers --entity-name 0 --alter --add-config "listener.name.internal.ssl.endpoint.identification.algorithm="
```
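For statically configured clients or brokers, the same effect comes from an empty value in the properties file (shown only for completeness; leaving verification enabled is the safe default):

```properties
# client.properties (or server.properties for a static broker-side setting)
# The default value is "https"; an empty value disables server hostname verification
ssl.endpoint.identification.algorithm=
```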

2.1.2. Note

Normally there is no good reason to disable hostname verification, apart from it being the quickest way to "just get it to work", followed by the promise to "fix it later when there is more time"!

Getting hostname verification right is not that hard when done at the right time, but it gets much harder once the cluster is up and running. Do yourself a favor and do it now!

If host name verification is enabled, clients will verify the server's fully qualified domain name (FQDN) or IP address against one of the following two fields:

  1. Common Name (CN)
  2. Subject Alternative Name (SAN)

While Kafka checks both fields, usage of the common name field for hostname verification has been deprecated since 2000 (RFC 2818) and should be avoided if possible. In addition, the SAN field is much more flexible, allowing multiple DNS and IP entries to be declared in a single certificate.
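This behaviour can be observed directly with OpenSSL's built-in hostname check (a sketch assuming OpenSSL 1.1.1 or newer; the names are made up): when a certificate carries a SAN, verification matches against the SAN entries, and a CN used as a human-readable label does not match on its own.

```shell
# Self-signed certificate: a meaningful CN for authorization purposes,
# while the actual hostnames live in the SAN
openssl req -new -x509 -newkey rsa:2048 -nodes -keyout demo.key \
  -subj "/CN=kafka-broker-team-a" \
  -addext "subjectAltName=DNS:kafka1.example.com,DNS:kafka2.example.com" \
  -days 30 -out demo.pem

# Both SAN hostnames pass the check; the CN value does not
openssl x509 -in demo.pem -noout -checkhost kafka1.example.com
openssl x509 -in demo.pem -noout -checkhost kafka2.example.com
openssl x509 -in demo.pem -noout -checkhost kafka-broker-team-a
```

This mirrors what a Kafka client does on connect when ssl.endpoint.identification.algorithm is left at its default of https.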

Another advantage is that if the SAN field is used for hostname verification, the common name can be set to a more meaningful value for authorization purposes. Since we need the SAN field to be contained in the signed certificate, it will be specified when generating the signing request. It can also be specified when generating the keypair, but it will not automatically be copied into the signing request.

To add a SAN field, append the argument -ext SAN=DNS:{FQDN},IP:{IPADDRESS} to the keytool command:

```shell
> keytool -keystore server.keystore.jks -alias localhost -validity {validity} -genkey -keyalg RSA -storetype pkcs12 -ext SAN=DNS:{FQDN},IP:{IPADDRESS1}
```