RabbitMQ connection breaks when using Python's pika

For example, when we run a very long task, the ack at the end fails, because by the time we try to confirm the message the MQ connection has already been dropped.

This hits anyone consuming with Python's pika (specifically the BlockingConnection), because pika does not send heartbeats from a background thread: heartbeats are only serviced while the connection's I/O loop is running, and a long-running callback blocks that loop, so the broker eventually closes the connection. (The Java client sends heartbeats from a dedicated thread, so it does not have this problem.)
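
A minimal sketch of the failure mode (not from the original post; the host, queue name and timings are placeholders): the callback blocks the I/O loop for longer than the heartbeat window, so the broker drops the connection and the final ack fails.

python
import time
import pika

connection = pika.BlockingConnection(
    pika.ConnectionParameters(host="localhost", heartbeat=30))
channel = connection.channel()
channel.queue_declare(queue="long_tasks", durable=True)


def on_message(ch, method, properties, body):
    time.sleep(120)  # "long task": no heartbeats are sent while this blocks
    # By now the broker has usually closed the connection, so the ack fails
    # with pika.exceptions.StreamLostError and the message is requeued.
    ch.basic_ack(method.delivery_tag)


channel.basic_consume("long_tasks", on_message)
channel.start_consuming()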

Solution 1:

Set heartbeat=0 in the connection parameters. This disables heartbeats entirely, so the broker will never drop the connection for missed heartbeats; the trade-off is that a genuinely dead connection may go undetected for a long time.

python
import pika

credentials = pika.PlainCredentials('xxx', 'xxx')
connection = pika.BlockingConnection(pika.ConnectionParameters(
    host="xxxx", port=5672, credentials=credentials, heartbeat=0
))

Solution 2:

I have tried this myself and it really works.

Rewrite the code as follows (source: the CSDN post "Python RabbitMQ/Pika 长连接断开报错Connection reset by peer和pop from an empty deque_pika.exceptions.streamlosterror: stream connection").

python
"""
@author: Zhigang Jiang
@date: 2022/1/16
@description:
"""
import functools
import pika
import threading
import time


def ack_message(channel, delivery_tag):
    print(f'ack_message thread id: {threading.get_ident()}')
    if channel.is_open:
        channel.basic_ack(delivery_tag)
    else:
        # Channel is already closed, so we can't ACK this message;
        # log and/or do something that makes sense for your app in this case.
        pass


def do_work(channel, delivery_tag, body):
    print(f'do_work thread id: {threading.get_ident()}')
    print(body, "start")
    # Simulate a long task (~200 s), far longer than the heartbeat interval.
    for i in range(10):
        print(i)
        time.sleep(20)
    print(body, "end")

    # basic_ack must be issued from the connection's own thread, so schedule
    # it there instead of calling it directly from this worker thread.
    cb = functools.partial(ack_message, channel, delivery_tag)
    channel.connection.add_callback_threadsafe(cb)


def on_message(channel, method_frame, header_frame, body):
    print(f'on_message thread id: {threading.get_ident()}')
    delivery_tag = method_frame.delivery_tag
    # Hand the long task to a worker thread so start_consuming() keeps the
    # I/O loop running and heartbeats continue to flow.
    t = threading.Thread(target=do_work, args=(channel, delivery_tag, body))
    t.start()


credentials = pika.PlainCredentials('username', 'password')
# Even with a very short heartbeat, the long task survives because the main
# thread keeps servicing the connection while the worker thread does the work.
parameters = pika.ConnectionParameters('test.webapi.username.com', credentials=credentials, heartbeat=5)
connection = pika.BlockingConnection(parameters)
channel = connection.channel()
channel.queue_declare(queue="standard", durable=True)
channel.basic_qos(prefetch_count=1)
channel.basic_consume('standard', on_message)

print(f'main thread id: {threading.get_ident()}')
try:
    channel.start_consuming()
except KeyboardInterrupt:
    channel.stop_consuming()
connection.close()
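
Aside (not from the referenced post): if the long task can be split into chunks, a third option is to keep heartbeats enabled and let the BlockingConnection service them between chunks from inside the callback, e.g. via connection.process_data_events(). A rough sketch, where do_one_chunk() is a hypothetical helper:

python
def on_message(channel, method_frame, header_frame, body):
    # Do the work in slices; between slices let the connection send/receive
    # heartbeats so the broker does not consider the client dead.
    for chunk in range(10):
        do_one_chunk(body, chunk)  # hypothetical helper: one slice of the long task
        channel.connection.process_data_events(time_limit=0)
    channel.basic_ack(method_frame.delivery_tag)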

Over a long run, home-network jitter can still bite: my home connection sometimes drops for ten-odd seconds (enough to get kicked from a game), and then you will see:

pika.exceptions.AMQPHeartbeatTimeout: No activity or too many missed heartbeats in the last xx seconds

When that happens, just bring the consumer back up by wrapping everything in a retry loop:

python
# Replaces the consume section of the script above (reuses its imports and on_message).
while True:
    try:
        # Credentials; skip this if the broker has no auth configured.
        credentials = pika.PlainCredentials('xx', 'xx')
        connection = pika.BlockingConnection(pika.ConnectionParameters(
            host="xxxx", port=5672, credentials=credentials, heartbeat=10
        ))
        channel = connection.channel()
        channel.queue_declare(queue="xxx", durable=True)  # True if the queue is durable

        channel.basic_qos(prefetch_count=1)
        channel.basic_consume("xxx", on_message)
        print(f'main thread id: {threading.get_ident()}')
        print("start consuming")
        channel.start_consuming()
    except KeyboardInterrupt:
        channel.stop_consuming()
        connection.close()
        break
    except pika.exceptions.AMQPConnectionError as e:
        # Covers AMQPHeartbeatTimeout, StreamLostError, etc.
        print(f"connection lost, probably a network issue, restarting: {e}")
        time.sleep(30)