Kafka (2): Setting Up Schema Registry on WSL

Contents

  • 1 Avro and Schema Registry
  • 2 Setting Up Schema Registry
    • 2.1 Download and Extract Confluent
    • 2.2 Set Environment Variables
    • 2.3 Edit the Configuration
    • 2.4 Start the Service
  • 3 API List

1 Avro and Schema Registry

Apache Avro is an efficient data serialization system for transferring and storing data across different applications and platforms. It provides a compact binary encoding format; compared with other common serialization formats, Avro achieves faster serialization and a smaller storage footprint.

Confluent Schema Registry is an open-source component from Confluent that addresses schema evolution and compatibility in distributed systems. It is a service built on top of Apache Avro that centrally manages and stores the schemas of Avro data, ensuring data consistency and compatibility across a distributed system. It is widely used in event-streaming platforms such as Kafka, supporting reliable and interoperable data flows.
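As a concrete example, here is a minimal Avro schema for a hypothetical product record (the field names are illustrative, not from any particular system). It is written to a file that is reused in section 3 when registering a schema:

# Write an example Avro schema to product.avsc
cat > product.avsc <<'EOF'
{
  "type": "record",
  "name": "Product",
  "namespace": "com.example",
  "fields": [
    {"name": "id",    "type": "long"},
    {"name": "name",  "type": "string"},
    {"name": "price", "type": "double"}
  ]
}
EOF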

2 Setting Up Schema Registry

2.1 Download and Extract Confluent

# Download the Confluent Community edition archive
curl -O https://packages.confluent.io/archive/7.4/confluent-community-7.4.3.tar.gz
# Extract it; this creates /usr/local/bin/confluent-7.4.3
sudo tar xvf confluent-community-7.4.3.tar.gz -C /usr/local/bin
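To confirm the extraction worked, list the target directory; it should contain the usual Confluent layout (bin, etc, share, and so on):

ls /usr/local/bin/confluent-7.4.3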

2.2 Set Environment Variables

vim ~/.bashrc

Add:

export SCHEMA_REG_HOME=/usr/local/bin/confluent-7.4.3
export PATH=$PATH:$SCHEMA_REG_HOME/bin

source ~/.bashrc
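To verify the variables took effect, a quick check:

echo $SCHEMA_REG_HOME
which schema-registry-start   # should resolve to .../confluent-7.4.3/bin/schema-registry-start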

2.3 Edit the Configuration

Get the local IP address:

ip addr   # note the inet address of the WSL network interface (typically eth0)

Edit the configuration file (replace 172.26.143.96 with your own WSL IP):

vim /usr/local/bin/confluent-7.4.3/etc/schema-registry/schema-registry.properties

listeners=http://172.26.143.96:8081

kafkastore.bootstrap.servers=PLAINTEXT://172.26.143.96:9092
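Note that a WSL instance's IP address can change across restarts, so the value in the properties file may need updating. A small sketch for reading it programmatically, assuming the WSL network interface is named eth0 (check the ip addr output if yours differs):

# Extract the IPv4 address of eth0 (interface name is an assumption for typical WSL2 setups)
WSL_IP=$(ip addr show eth0 | grep -oP '(?<=inet )\d+(\.\d+){3}' | head -n 1)
echo $WSL_IP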

2.4 Start the Service

schema-registry-start $SCHEMA_REG_HOME/etc/schema-registry/schema-registry.properties
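The script runs in the foreground by default. To keep it running in the background and confirm it is up, one option (a sketch; the log path is illustrative):

nohup schema-registry-start $SCHEMA_REG_HOME/etc/schema-registry/schema-registry.properties \
  > /tmp/schema-registry.log 2>&1 &
# Health check: should return the global compatibility setting, e.g. {"compatibilityLevel":"BACKWARD"}
curl http://172.26.143.96:8081/config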

Call the API:

# List all registered subjects
curl -X GET http://172.26.143.96:8081/subjects
# List the schema versions registered under the subject "product-value"
curl -X GET http://172.26.143.96:8081/subjects/product-value/versions
# Fetch the schema with global id 3
curl -X GET http://172.26.143.96:8081/schemas/ids/3

3 API List

For more information, see the Confluent Schema Registry Development Guide.

# Register a new version of a schema under the subject "Kafka-key"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://localhost:8081/subjects/Kafka-key/versions
  {"id":1}

# Register a new version of a schema under the subject "Kafka-value"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
     http://localhost:8081/subjects/Kafka-value/versions
  {"id":1}

# List all subjects
$ curl -X GET http://localhost:8081/subjects
  ["Kafka-value","Kafka-key"]

# List all schema versions registered under the subject "Kafka-value"
$ curl -X GET http://localhost:8081/subjects/Kafka-value/versions
  [1]

# Fetch a schema by globally unique id 1
$ curl -X GET http://localhost:8081/schemas/ids/1
  {"schema":"\"string\""}

# Fetch version 1 of the schema registered under subject "Kafka-value"
$ curl -X GET http://localhost:8081/subjects/Kafka-value/versions/1
  {"subject":"Kafka-value","version":1,"id":1,"schema":"\"string\""}

# Fetch the most recently registered schema under subject "Kafka-value"
$ curl -X GET http://localhost:8081/subjects/Kafka-value/versions/latest
  {"subject":"Kafka-value","version":1,"id":1,"schema":"\"string\""}

# Delete version 3 of the schema registered under subject "Kafka-value"
$ curl -X DELETE http://localhost:8081/subjects/Kafka-value/versions/3
  3

# Delete all versions of the schema registered under subject "Kafka-value"
$ curl -X DELETE http://localhost:8081/subjects/Kafka-value
  [1, 2, 3, 4, 5]

# Check whether a schema has been registered under subject "Kafka-key"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://localhost:8081/subjects/Kafka-key
  {"subject":"Kafka-key","version":1,"id":1,"schema":"\"string\""}

# Test compatibility of a schema with the latest schema under subject "Kafka-value"
$ curl -X POST -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"schema": "{\"type\": \"string\"}"}' \
    http://localhost:8081/compatibility/subjects/Kafka-value/versions/latest
  {"is_compatible":true}

# Get top level config
$ curl -X GET http://localhost:8081/config
  {"compatibilityLevel":"BACKWARD"}

# Update compatibility requirements globally
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"compatibility": "NONE"}' \
    http://localhost:8081/config
  {"compatibility":"NONE"}

# Update compatibility requirements under the subject "Kafka-value"
$ curl -X PUT -H "Content-Type: application/vnd.schemaregistry.v1+json" \
    --data '{"compatibility": "BACKWARD"}' \
    http://localhost:8081/config/Kafka-value
  {"compatibility":"BACKWARD"}