Kafka - Producer Sending Messages & Consumer Consuming Messages


1. Producer Sending Messages & Consumer Consuming Messages

1.1 Getting the help information for kafka-console-producer.sh

```bash
[root@localhost ~]# kafka-console-producer.sh --help
This tool helps to read data from standard input and publish it to Kafka.
Option                                   Description                            
------                                   -----------                            
--batch-size <Integer: size>             Number of messages to send in a single 
                                           batch if they are not being sent     
                                           synchronously. (default: 200)        
--bootstrap-server <String: server to    REQUIRED unless --broker-list          
  connect to>                              (deprecated) is specified. The server
                                           (s) to connect to. The broker list   
                                           string in the form HOST1:PORT1,HOST2:
                                           PORT2.                               
--broker-list <String: broker-list>      DEPRECATED, use --bootstrap-server     
                                           instead; ignored if --bootstrap-     
                                           server is specified.  The broker     
                                           list string in the form HOST1:PORT1, 
                                           HOST2:PORT2.                         
--compression-codec [String:             The compression codec: either 'none',  
  compression-codec]                       'gzip', 'snappy', 'lz4', or 'zstd'.  
                                           If specified without value, then it  
                                           defaults to 'gzip'                   
--help                                   Print usage information.               
--line-reader <String: reader_class>     The class name of the class to use for 
                                           reading lines from standard in. By   
                                           default each line is read as a       
                                           separate message. (default: kafka.   
                                           tools.                               
                                           ConsoleProducer$LineMessageReader)   
--max-block-ms <Long: max block on       The max time that the producer will    
  send>                                    block for during a send request      
                                           (default: 60000)                     
--max-memory-bytes <Long: total memory   The total memory used by the producer  
  in bytes>                                to buffer records waiting to be sent 
                                           to the server. (default: 33554432)   
--max-partition-memory-bytes <Long:      The buffer size allocated for a        
  memory in bytes per partition>           partition. When records are received 
                                           which are smaller than this size the 
                                           producer will attempt to             
                                           optimistically group them together   
                                           until this size is reached.          
                                           (default: 16384)                     
--message-send-max-retries <Integer>     Brokers can fail receiving the message 
                                           for multiple reasons, and being      
                                           unavailable transiently is just one  
                                           of them. This property specifies the 
                                           number of retries before the         
                                           producer give up and drop this       
                                           message. (default: 3)                
--metadata-expiry-ms <Long: metadata     The period of time in milliseconds     
  expiration interval>                     after which we force a refresh of    
                                           metadata even if we haven't seen any 
                                           leadership changes. (default: 300000)
--producer-property <String:             A mechanism to pass user-defined       
  producer_prop>                           properties in the form key=value to  
                                           the producer.                        
--producer.config <String: config file>  Producer config properties file. Note  
                                           that [producer-property] takes       
                                           precedence over this config.         
--property <String: prop>                A mechanism to pass user-defined       
                                           properties in the form key=value to  
                                           the message reader. This allows      
                                           custom configuration for a user-     
                                           defined message reader. Default      
                                           properties include:                  
                                                parse.key=true|false                  
                                                key.separator=<key.separator>         
                                                ignore.error=true|false               
--request-required-acks <String:         The required acks of the producer      
  request required acks>                   requests (default: 1)                
--request-timeout-ms <Integer: request   The ack timeout of the producer        
  timeout ms>                              requests. Value must be non-negative 
                                           and non-zero (default: 1500)         
--retry-backoff-ms <Integer>             Before each retry, the producer        
                                           refreshes the metadata of relevant   
                                           topics. Since leader election takes  
                                           a bit of time, this property         
                                           specifies the amount of time that    
                                           the producer waits before refreshing 
                                           the metadata. (default: 100)         
--socket-buffer-size <Integer: size>     The size of the tcp RECV size.         
                                           (default: 102400)                    
--sync                                   If set message send requests to the    
                                           brokers are synchronously, one at a  
                                           time as they arrive.                 
--timeout <Integer: timeout_ms>          If set and the producer is running in  
                                           asynchronous mode, this gives the    
                                           maximum amount of time a message     
                                           will queue awaiting sufficient batch 
                                           size. The value is given in ms.      
                                           (default: 1000)                      
--topic <String: topic>                  REQUIRED: The topic id to produce      
                                           messages to.                         
--version                                Display Kafka version.
```
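
Most of these options map directly onto producer configuration. As a quick sketch of how a few of them combine (the broker address and topic are the ones used in the sessions below; the acks and compression values are only illustrative):

```bash
# Send each line synchronously, compress batches with gzip, and require
# acknowledgement from all in-sync replicas via a raw producer property.
# All three options (--sync, --compression-codec, --producer-property)
# appear in the help output above.
kafka-console-producer.sh \
  --bootstrap-server 192.168.74.148:9092 \
  --topic my_topic1 \
  --sync \
  --compression-codec gzip \
  --producer-property acks=all
```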

1.2 Producing messages to a topic

```bash
[root@localhost ~]# kafka-console-producer.sh --bootstrap-server 192.168.74.148:9092 --topic my_topic1
>1
>2
>hello
>3
>4
>5
>a
>b
>c
>
```
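
The `--property` reader options listed in the help output (`parse.key`, `key.separator`) let you send keyed messages instead of plain values. A minimal sketch, assuming ':' as the separator and made-up keys:

```bash
# Each input line is split at the first ':' into a message key and value.
# parse.key and key.separator are the default reader properties documented
# in the help output above.
kafka-console-producer.sh \
  --bootstrap-server 192.168.74.148:9092 \
  --topic my_topic1 \
  --property parse.key=true \
  --property key.separator=:
>user1:hello
>user2:world
```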

1.3 Consuming data from the topic

By default the consumer starts from the LEO (log end offset, the position where the next message will be written), so it only sees messages produced after it starts. To consume from the very beginning of the topic, add --from-beginning:

```bash
[root@localhost ~]# kafka-console-consumer.sh --bootstrap-server 192.168.74.148:9092 --topic my_topic1 --from-beginning
3
5
b
1
hello
a
c
2
4
```
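
Note that the messages do not come back in the order they were produced: Kafka only guarantees ordering within a single partition, so when my_topic1 has more than one partition the console consumer interleaves records from all of them. To read a single partition in order, or to consume as a member of a consumer group and print keys, the console consumer accepts options such as the following (a sketch; the group name my_group1 is made up):

```bash
# Read only partition 0 from the earliest offset, which preserves the
# per-partition ordering of the messages.
kafka-console-consumer.sh \
  --bootstrap-server 192.168.74.148:9092 \
  --topic my_topic1 \
  --partition 0 \
  --offset earliest

# Consume as part of a consumer group (offsets are then committed for the
# group) and print each message key alongside its value.
kafka-console-consumer.sh \
  --bootstrap-server 192.168.74.148:9092 \
  --topic my_topic1 \
  --group my_group1 \
  --from-beginning \
  --property print.key=true \
  --property key.separator=:
```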