Kafka (Part 2): Setting Up Schema Registry on WSL

Contents

  • [1 Avro and Schema Registry](#1 Avro and Schema Registry)
  • [2 Setting Up Schema Registry](#2 Setting Up Schema Registry)
    • [2.1 Download and Extract Confluent](#2.1 Download and Extract Confluent)
    • [2.2 Set Environment Variables](#2.2 Set Environment Variables)
    • [2.3 Edit the Configuration](#2.3 Edit the Configuration)
    • [2.4 Start the Service](#2.4 Start the Service)
  • [3 API List](#3 API List)

1 Avro and Schema Registry

Apache Avro is an efficient data serialization system for transferring and storing data across different applications and platforms. It provides a compact and efficient binary encoding format; compared with other common serialization approaches, Avro achieves faster serialization and a smaller storage footprint.

Confluent Schema Registry is an open-source component from Confluent designed to solve schema evolution and compatibility problems in distributed systems. It is a service built on top of Apache Avro that centrally manages and stores the schemas of Avro data, ensuring data consistency and compatibility across a distributed system. It is widely used with event-streaming platforms such as Kafka, underpinning the reliability and interoperability of data streams.
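As a concrete illustration, here is a minimal Avro schema for a hypothetical product record (the record name and fields are made up for this example, not part of the original setup); saving it as product.avsc makes it reusable in the registration examples later:

bash
# Write an illustrative Avro schema to product.avsc
cat > product.avsc <<'EOF'
{
  "type": "record",
  "name": "Product",
  "namespace": "com.example",
  "fields": [
    {"name": "id",    "type": "long"},
    {"name": "name",  "type": "string"},
    {"name": "price", "type": "double"}
  ]
}
EOF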

2 Setting Up Schema Registry

2.1 Download and Extract Confluent

bash
curl -O https://packages.confluent.io/archive/7.4/confluent-community-7.4.3.tar.gz
sudo tar xvf confluent-community-7.4.3.tar.gz -C /usr/local/bin
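If the extraction succeeded, the launcher scripts should be visible under the install directory (a quick sanity check, assuming the standard 7.4.3 tarball layout):

bash
# bin/ should contain schema-registry-start among other launcher scripts
ls /usr/local/bin/confluent-7.4.3/bin | grep schema-registry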

2.2 Set Environment Variables

bash
vim ~/.bashrc

Add:

export SCHEMA_REG_HOME=/usr/local/bin/confluent-7.4.3
export PATH=$PATH:${SCHEMA_REG_HOME}/bin

bash
source ~/.bashrc
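A quick way to confirm the variables took effect (not part of the original steps, just a sanity check):

bash
echo $SCHEMA_REG_HOME        # expect /usr/local/bin/confluent-7.4.3
which schema-registry-start  # expect ${SCHEMA_REG_HOME}/bin/schema-registry-start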

2.3 Edit the Configuration

Get the local machine's IP address:

bash
ip addr

Edit the configuration file, setting listeners and kafkastore.bootstrap.servers to your WSL IP (172.26.143.96 here):

bash
vim /usr/local/bin/confluent-7.4.3/etc/schema-registry/schema-registry.properties

listeners=http://172.26.143.96:8081

kafkastore.bootstrap.servers=PLAINTEXT://172.26.143.96:9092
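Note that under WSL2 the VM's IP address can change after a restart, so these values may need to be refreshed. Assuming the first address reported belongs to the right interface, the current IP can also be fetched with:

bash
hostname -I | awk '{print $1}'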

2.4 Start the Service

bash
schema-registry-start $SCHEMA_REG_HOME/etc/schema-registry/schema-registry.properties
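This runs in the foreground. Like the other Confluent launcher scripts, schema-registry-start should also accept a -daemon flag to run in the background (worth confirming against your version with --help):

bash
# Start in the background; logs go wherever the bundled log4j config points
schema-registry-start -daemon $SCHEMA_REG_HOME/etc/schema-registry/schema-registry.properties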

Call the API:

bash
curl -X GET http://172.26.143.96:8081/subjects
curl -X GET http://172.26.143.96:8081/subjects/product-value/versions
curl -X GET http://172.26.143.96:8081/schemas/ids/3
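The subject product-value and schema id 3 above come from the author's environment; on a fresh install, /subjects simply returns []. A liveness check that works before any schema is registered (the /config endpoint is documented in the API list below):

bash
# Expect the global compatibility level, e.g. {"compatibilityLevel":"BACKWARD"}
curl -X GET http://172.26.143.96:8081/config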

3 API List

For more information, see the Confluent Schema Registry development guide.

bash
# Register a new version of a schema under the subject "Kafka-key"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://localhost:8081/subjects/Kafka-key/versions
  {"id":1}

# Register a new version of a schema under the subject "Kafka-value"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
     http://localhost:8081/subjects/Kafka-value/versions
  {"id":1}

# List all subjects
$ curl -X GET http://localhost:8081/subjects
  ["Kafka-value","Kafka-key"]

# List all schema versions registered under the subject "Kafka-value"
$ curl -X GET http://localhost:8081/subjects/Kafka-value/versions
  [1]

# Fetch a schema by globally unique id 1
$ curl -X GET http://localhost:8081/schemas/ids/1
  {"schema":"\"string\""}

# Fetch version 1 of the schema registered under subject "Kafka-value"
$ curl -X GET http://localhost:8081/subjects/Kafka-value/versions/1
  {"subject":"Kafka-value","version":1,"id":1,"schema":"\"string\""}

# Fetch the most recently registered schema under subject "Kafka-value"
$ curl -X GET http://localhost:8081/subjects/Kafka-value/versions/latest
  {"subject":"Kafka-value","version":1,"id":1,"schema":"\"string\""}

# Delete version 3 of the schema registered under subject "Kafka-value"
$ curl -X DELETE http://localhost:8081/subjects/Kafka-value/versions/3
  3

# Delete all versions of the schema registered under subject "Kafka-value"
$ curl -X DELETE http://localhost:8081/subjects/Kafka-value
  [1, 2, 3, 4, 5]

# Check whether a schema has been registered under subject "Kafka-key"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://localhost:8081/subjects/Kafka-key
  {"subject":"Kafka-key","version":1,"id":1,"schema":"\"string\""}

# Test compatibility of a schema with the latest schema under subject "Kafka-value"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://localhost:8081/compatibility/subjects/Kafka-value/versions/latest
  {"is_compatible":true}

# Get top level config
$ curl -X GET http://localhost:8081/config
  {"compatibilityLevel":"BACKWARD"}

# Update compatibility requirements globally
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"compatibility": "NONE"}' \
    http://localhost:8081/config
  {"compatibility":"NONE"}

# Update compatibility requirements under the subject "Kafka-value"
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"compatibility": "BACKWARD"}' \
    http://localhost:8081/config/Kafka-value
  {"compatibility":"BACKWARD"}