Load Testing Kafka with k6

1. Setting up the environment

Kafka environment

See the CSDN blog post "Docker搭建kafka环境" for setting up a Kafka environment with Docker.
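The linked post walks through that setup in detail. For reference, a minimal single-node KRaft broker can be sketched as a compose file like the one below; the image tag, service name, and listener layout are assumptions based on the bitnami/kafka image's documented configuration, so adjust them to your environment:

```yaml
# Sketch of a docker-compose.yml for a single-node Kafka broker in KRaft mode.
# All names, ports, and the image tag are assumptions; adapt as needed.
services:
  kafka:
    image: bitnami/kafka:latest
    ports:
      - "9092:9092"
    environment:
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://localhost:9092
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
```

With this running, `localhost:9092` matches the broker address used in the test script below.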

xk6-kafka environment

Stock k6 does not include Kafka support, so build a custom k6 binary that bundles the xk6-kafka extension (xk6 itself can be installed with `go install go.k6.io/xk6/cmd/xk6@latest`):

./xk6 build --with github.com/mostafa/xk6-kafka@latest

Check the installation: running `./k6 version` should list the xk6-kafka extension in its output.

2. Writing the script

test_kafka.js

// Either import the module object
import * as kafka from "k6/x/kafka";

// Or individual classes and constants
import {
    Writer,
    Reader,
    Connection,
    SchemaRegistry,
    SCHEMA_TYPE_STRING,
} from "k6/x/kafka";

// Creates a new Writer object to produce messages to Kafka
const writer = new Writer({
    // WriterConfig object
    brokers: ["localhost:9092"],
    topic: "my-topic",
});

const reader = new Reader({
    // ReaderConfig object
    brokers: ["localhost:9092"],
    topic: "my-topic",
});

const connection = new Connection({
    // ConnectionConfig object
    address: "localhost:9092",
});

const schemaRegistry = new SchemaRegistry();
// Can accept a SchemaRegistryConfig object

if (__VU == 0) {
    // __VU is 0 in the init context, so the topic is created once,
    // before any iterations start producing messages
    connection.createTopic({
        // TopicConfig object
        topic: "my-topic",
    });
}

export default function () {
    // Fetch the list of all topics
    const topics = connection.listTopics();
    console.log(topics); // list of topics

    // Produce messages to Kafka
    writer.produce({
        // ProduceConfig object
        messages: [
            // Message object(s)
            {
                key: schemaRegistry.serialize({
                    data: "my-key",
                    schemaType: SCHEMA_TYPE_STRING,
                }),
                value: schemaRegistry.serialize({
                    data: "my-value",
                    schemaType: SCHEMA_TYPE_STRING,
                }),
            },
        ],
    });

    // Consume messages from Kafka
    let messages = reader.consume({
        // ConsumeConfig object
        limit: 10,
    });

    // your messages
    console.log(messages);

    // You can use checks to verify the contents,
    // length and other properties of the message(s)

    // To deserialize the data back into a string, use the
    // deserialize method of the Schema Registry client. You
    // can use it inside a check, as shown in the example scripts.
    let deserializedValue = schemaRegistry.deserialize({
        data: messages[0].value,
        schemaType: SCHEMA_TYPE_STRING,
    });
}

export function teardown(data) {
    // Delete the topic
    connection.deleteTopic("my-topic");

    // Close all connections
    writer.close();
    reader.close();
    connection.close();
}
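The comments in the script mention using checks; inside a k6 script you would import `check` from `k6` and pass it predicates over the consumed messages. The predicate logic itself is plain JavaScript, sketched here against hand-made sample data (the sample messages and check names are illustrative, not part of the original script):

```javascript
// Stand-in for messages consumed by reader.consume(), after deserialization.
// In the real script each value would come from schemaRegistry.deserialize().
const messages = [
    { key: "my-key", value: "my-value" },
    { key: "my-key", value: "my-value" },
];

// Predicates in the same shape k6's check(messages, {...}) expects:
// each entry maps a check name to a function returning true or false.
const checks = {
    "received at least one message": (msgs) => msgs.length > 0,
    "every value matches": (msgs) => msgs.every((m) => m.value === "my-value"),
};

// Evaluate each predicate (inside k6, check() does this and records pass/fail rates).
const results = Object.fromEntries(
    Object.entries(checks).map(([name, fn]) => [name, fn(messages)])
);
console.log(results); // both predicates pass for the sample data
```

In the actual test, replace the sample array with the `messages` returned by `reader.consume()` and wrap the predicates in `check(messages, { ... })`.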

3. Running the test

Before running the test, start the Kafka service first, then open a terminal and run:

./k6 run test_kafka.js --vus 50 --duration 10s
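Instead of passing `--vus` and `--duration` on the command line, k6 also lets you declare load options in the script itself. A sketch mirroring the flags above (the threshold entry is an added illustration, not required):

```javascript
export const options = {
    vus: 50,          // same as --vus 50
    duration: "10s",  // same as --duration 10s
    // Optional: fail the run if fewer than 99% of checks pass
    thresholds: {
        checks: ["rate>0.99"],
    },
};
```

With this in `test_kafka.js`, the run reduces to `./k6 run test_kafka.js`; any CLI flags still override the in-script values.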

Test results: k6 prints a summary of metrics when the run completes.
