Leveling Up Python for Beginners: Office Dashboard with ERP Cross-System Sync + Automatic Backup + AI Failure Review

Hi everyone! I'm a beginner Python blogger on CSDN~ In the last post we finished the multi-device security work on the dashboard, covering mobile access and data masking. But many enterprise users kept raising the same core pain points: ① data lives in both the dashboard and the ERP/CRM system, so it has to be keyed in and synced by hand, which easily leaves the two sides inconsistent; ② there is no backup mechanism, so an accidental delete or a server failure can mean real data loss; ③ when a cross-system sync fails, someone has to dig through logs by hand to find the cause, which is slow and depends on experience. Today's very hands-on beginner project tackles all three: office dashboard with ERP cross-system data sync + scheduled automatic backup + AI failure review!

Building on the previous "multi-device secure dashboard" code, this post adds three core features: ① ERP cross-system data sync (full and incremental modes, keeping customer and order data flowing between the dashboard and the ERP); ② scheduled automatic backup (full database backups plus key-file backups, stored both locally and in the cloud, with integrity checks and restore tests); ③ AI failure review (feeding sync and backup logs to OpenAI to analyse why a job failed and generate a fix plan). Everything stays on the existing stack (Flask + MySQL + OpenAI + APScheduler); we add an ERP integration module, a backup engine and an AI review service. The code is commented in detail, so beginners only need to fill in the ERP API parameters and the backup rules and follow the steps. The result: office data that is both connected and safe, with faults that can be found and fixed fast~

I. Learning Objectives

1. Learn cross-system API integration: sync data between the ERP and the dashboard, with full sync for initialisation, incremental sync for updates, and handling of data conflicts;

2. Learn to back up the MySQL database and key files automatically: configure the backup schedule, keep copies both locally and in Alibaba Cloud OSS, and verify each backup's integrity after it is written;

3. Understand the AI failure review flow: use OpenAI to analyse sync/backup failure logs and produce actionable fix plans, cutting down manual troubleshooting;

4. Visualise sync and backup status in the dashboard: show sync progress and backup records, and support manually triggering a sync/backup and restoring from a backup;

5. Keep the cross-system sync and backup process compliant: sync logs, backup records and restore operations are all recorded and auditable.

II. Preparation

  1. Install the core dependencies

```bash
# Install the new dependencies (OSS cloud storage, database backup, log parsing)
pip3 install oss2 python-dotenv python-dateutil -i https://pypi.tuna.tsinghua.edu.cn/simple

# Make sure the existing dependencies are present and up to date (Flask, APScheduler, OpenAI, etc.)
pip3 install --upgrade flask flask-login apscheduler openai requests pandas pymysql -i https://pypi.tuna.tsinghua.edu.cn/simple
```

Note: the cloud backup example uses Alibaba Cloud OSS, so register an Alibaba Cloud account and enable OSS in advance; the database backup relies on MySQL's own mysqldump tool, so make sure it is installed on the server and on the PATH; the ERP integration targets common ERP systems (e.g. Yonyou, Kingdee) that expose a standard RESTful API.
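
Before moving on, it is worth a quick sanity check that the tooling is actually available on the server. A minimal check, assuming a Linux host:

```bash
# Check that mysqldump is installed and on the PATH
which mysqldump && mysqldump --version

# Check that the OSS SDK and the MySQL driver import cleanly
python3 -c "import oss2, pymysql; print('ok')"
```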

  2. Third-party services and configuration
  • ERP configuration: log in to the ERP admin console, apply for API access, obtain the base URL, AppKey and AppSecret (or token), and map out the endpoints to sync (customer list, order list, inventory) and their field mappings;

  • Backup rules: decide the schedule (full database backup at 02:00 daily, incremental file backup every 6 hours), the paths (local: /data/backup, OSS: oss://office-backup/), and the retention (local backups kept 7 days, cloud backups kept 30 days);

  • Alibaba Cloud OSS: create a Bucket with private read/write permissions, and note down the AccessKeyId, AccessKeySecret, Bucket name and Endpoint (region node);

  • Security: add the ERP API secrets, OSS keys and backup encryption password to the .env file instead of hard-coding them (see the sample .env sketch right after this list); grant write access to the backup directory (chmod 777 /data/backup) so the server can read and write it.
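
For reference, here is a minimal .env sketch with the variable names that the scripts below read via python-dotenv; every value is a placeholder you must replace with your own:

```bash
# --- ERP API (placeholders) ---
ERP_BASE_URL=https://erp.example.com
ERP_APP_KEY=your_app_key
ERP_APP_SECRET=your_app_secret

# --- Database ---
DB_HOST=47.108.xxx.xxx
DB_USER=office_user
DB_PASSWORD=your_db_password
DB_NAME=office_data

# --- Alibaba Cloud OSS ---
OSS_ACCESS_KEY_ID=your_access_key_id
OSS_ACCESS_KEY_SECRET=your_access_key_secret
OSS_BUCKET_NAME=office-backup
OSS_ENDPOINT=oss-cn-hangzhou.aliyuncs.com

# --- Backup ---
LOCAL_BACKUP_PATH=/data/backup

# --- OpenAI ---
OPENAI_API_KEY=your_openai_api_key
```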

  3. Create and optimise the database tables

```bash
# Connect to MySQL (replace with your own connection info)
mysql -u office_user -p -h 47.108.xxx.xxx office_data
```

```sql
-- Cross-system sync log table (erp_sync_log)
CREATE TABLE erp_sync_log (
  id INT AUTO_INCREMENT PRIMARY KEY,
  sync_type ENUM('customer', 'order', 'inventory') NOT NULL COMMENT 'Sync type: customer/order/inventory',
  sync_mode ENUM('full', 'increment') NOT NULL COMMENT 'Sync mode: full/incremental',
  start_time DATETIME NOT NULL COMMENT 'Start time',
  end_time DATETIME NULL COMMENT 'End time',
  total_count INT DEFAULT 0 COMMENT 'Total rows',
  success_count INT DEFAULT 0 COMMENT 'Rows synced successfully',
  fail_count INT DEFAULT 0 COMMENT 'Rows that failed',
  status ENUM('processing', 'success', 'fail', 'partial_success') NOT NULL COMMENT 'Sync status',
  error_log TEXT NULL COMMENT 'Failure log',
  ai_analysis TEXT NULL COMMENT 'AI failure analysis',
  create_by VARCHAR(50) NOT NULL COMMENT 'Created by (system/username)',
  create_time DATETIME DEFAULT CURRENT_TIMESTAMP COMMENT 'Created at',
  KEY idx_sync_type (sync_type),
  KEY idx_status (status),
  KEY idx_start_time (start_time)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='ERP cross-system sync log';

-- Backup record table (backup_record)
CREATE TABLE backup_record (
  id INT AUTO_INCREMENT PRIMARY KEY,
  backup_type ENUM('database', 'file') NOT NULL COMMENT 'Backup type: database/file',
  backup_mode ENUM('full', 'increment') NOT NULL COMMENT 'Backup mode: full/incremental',
  backup_path VARCHAR(255) NOT NULL COMMENT 'Local backup path',
  oss_path VARCHAR(255) NULL COMMENT 'OSS cloud backup path',
  file_name VARCHAR(100) NOT NULL COMMENT 'Backup file name',
  file_size DECIMAL(10,2) NOT NULL COMMENT 'File size (MB)',
  backup_time DATETIME NOT NULL COMMENT 'Backup time',
  check_result ENUM('success', 'fail') NOT NULL COMMENT 'Integrity check result',
  restore_test ENUM('passed', 'not_test', 'failed') DEFAULT 'not_test' COMMENT 'Restore test result',
  expire_time DATETIME NOT NULL COMMENT 'Expiry time',
  create_by VARCHAR(50) NOT NULL COMMENT 'Created by (system)',
  KEY idx_backup_type (backup_type),
  KEY idx_backup_time (backup_time),
  KEY idx_expire_time (expire_time)
) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COMMENT='Data backup record';
```
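
The sync and backup scripts below import ERPSyncLog and BackupRecord from models.py, which this post does not show. Here is a minimal sketch of what those two models might look like, assuming the project's existing Flask-SQLAlchemy setup (Customer and Order already exist from the earlier posts, so only the two new models are sketched); the fields mirror the tables above:

```python
# models.py (sketch) -- assumed Flask-SQLAlchemy models matching the two tables above
from datetime import datetime
from flask_sqlalchemy import SQLAlchemy

db = SQLAlchemy()  # in the real project this instance already exists

class ERPSyncLog(db.Model):
    __tablename__ = "erp_sync_log"
    id = db.Column(db.Integer, primary_key=True)
    sync_type = db.Column(db.Enum("customer", "order", "inventory"), nullable=False)
    sync_mode = db.Column(db.Enum("full", "increment"), nullable=False)
    start_time = db.Column(db.DateTime, nullable=False)
    end_time = db.Column(db.DateTime)
    total_count = db.Column(db.Integer, default=0)
    success_count = db.Column(db.Integer, default=0)
    fail_count = db.Column(db.Integer, default=0)
    status = db.Column(db.Enum("processing", "success", "fail", "partial_success"), nullable=False)
    error_log = db.Column(db.Text)
    ai_analysis = db.Column(db.Text)
    create_by = db.Column(db.String(50), nullable=False)
    create_time = db.Column(db.DateTime, default=datetime.now)

class BackupRecord(db.Model):
    __tablename__ = "backup_record"
    id = db.Column(db.Integer, primary_key=True)
    backup_type = db.Column(db.Enum("database", "file"), nullable=False)
    backup_mode = db.Column(db.Enum("full", "increment"), nullable=False)
    backup_path = db.Column(db.String(255), nullable=False)
    oss_path = db.Column(db.String(255))
    file_name = db.Column(db.String(100), nullable=False)
    file_size = db.Column(db.Numeric(10, 2), nullable=False)
    backup_time = db.Column(db.DateTime, nullable=False)
    check_result = db.Column(db.Enum("success", "fail"), nullable=False)
    restore_test = db.Column(db.Enum("passed", "not_test", "failed"), default="not_test")
    expire_time = db.Column(db.DateTime, nullable=False)
    create_by = db.Column(db.String(50), nullable=False)
```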

III. Hands-On: Integrating ERP Sync, Automatic Backup and AI Failure Review

  1. Step 1: ERP cross-system data sync, connecting the two systems

```python
# -*- coding: utf-8 -*-
# erp_sync.py  ERP cross-system data sync
import requests
import json
import hashlib
import time
import os
from datetime import datetime, timedelta

from flask import Blueprint, request, jsonify
from flask_login import login_required, current_user
from dotenv import load_dotenv

from models import db, ERPSyncLog, Customer, Order  # existing customer/order models
from auth import permission_required   # role decorator from the earlier articles (adjust the import to your project)
from logger import save_operation_log
from ai_analysis import analyze_sync_error  # AI analysis module (Step 3)

# Load environment variables
load_dotenv()

erp_bp = Blueprint("erp", __name__)

# ====================== ERP configuration (beginners: edit here) ======================
ERP_BASE_URL = os.getenv("ERP_BASE_URL")  # base URL of the ERP API
ERP_APP_KEY = os.getenv("ERP_APP_KEY")
ERP_APP_SECRET = os.getenv("ERP_APP_SECRET")

# Field mapping: ERP field -> dashboard field
FIELD_MAPPING = {
    "customer": {
        "erp_customer_id": "erp_id",
        "customer_name": "name",
        "contact_phone": "phone",
        "contact_email": "email",
        "address": "address",
        "create_time": "create_time"
    },
    "order": {
        "erp_order_id": "erp_id",
        "customer_id": "customer_id",
        "order_amount": "amount",
        "order_status": "status",
        "create_time": "create_time",
        "update_time": "update_time"
    }
}

# Incremental sync window (only sync data updated within the last 24 hours)
INCREMENT_SYNC_INTERVAL = 24  # hours

# ====================== Core: sign ERP API requests ======================
def generate_erp_sign(params):
    """Generate the ERP API signature (anti-tampering, in the format the ERP expects)."""
    # Sort parameters by name in ascending order
    sorted_params = sorted(params.items(), key=lambda x: x[0])
    # Concatenate appKey + key/value pairs + appSecret
    sign_str = f"{ERP_APP_KEY}"
    for key, value in sorted_params:
        if value is not None and value != "":
            sign_str += f"{key}{value}"
    sign_str += f"{ERP_APP_SECRET}"
    # MD5 digest, upper-cased
    return hashlib.md5(sign_str.encode()).hexdigest().upper()

# ====================== Core: fetch data from the ERP API ======================
def get_erp_data(sync_type, sync_mode):
    """Fetch data from the ERP API (full or incremental)."""
    url = f"{ERP_BASE_URL}/api/v1/{sync_type}/list"
    params = {
        "appKey": ERP_APP_KEY,
        "timestamp": int(time.time() * 1000),
        "pageSize": 100,
        "pageNum": 1
    }
    # Incremental sync: add a time filter
    if sync_mode == "increment":
        start_time = (datetime.now() - timedelta(hours=INCREMENT_SYNC_INTERVAL)).strftime("%Y-%m-%d %H:%M:%S")
        params["updateTimeStart"] = start_time
    # Generate the signature
    params["sign"] = generate_erp_sign(params)

    try:
        response = requests.get(url, params=params, timeout=30)
        response.raise_for_status()  # raise on HTTP errors
        result = response.json()
        if result.get("code") != 200:
            raise Exception(f"ERP接口返回错误:{result.get('msg')}")
        # Page through all results
        total_data = result.get("data", {}).get("list", [])
        total_pages = result.get("data", {}).get("totalPages", 1)
        for page in range(2, total_pages + 1):
            params["pageNum"] = page
            params.pop("sign", None)  # drop the previous sign before re-signing
            params["sign"] = generate_erp_sign(params)
            page_response = requests.get(url, params=params, timeout=30)
            page_result = page_response.json()
            total_data.extend(page_result.get("data", {}).get("list", []))
        return total_data
    except Exception as e:
        raise Exception(f"获取ERP{sync_type}数据失败:{str(e)}")

# ====================== Core: sync ERP data into the dashboard database ======================
def sync_erp_to_dashboard(sync_type, sync_mode, operator="system"):
    """Sync ERP data into the dashboard (full or incremental)."""
    # Create the sync log first so the record survives even if the sync fails
    sync_log = ERPSyncLog(
        sync_type=sync_type,
        sync_mode=sync_mode,
        start_time=datetime.now(),
        status="processing",
        create_by=operator
    )
    db.session.add(sync_log)
    db.session.commit()

    total_count = 0
    success_count = 0
    fail_count = 0
    error_log = []

    try:
        # 1. Fetch ERP data
        erp_data = get_erp_data(sync_type, sync_mode)
        total_count = len(erp_data)
        if total_count == 0:
            sync_log.status = "success"
            sync_log.total_count = total_count
            sync_log.success_count = success_count
            sync_log.end_time = datetime.now()
            db.session.commit()
            return {"success": True, "msg": "无数据可同步", "log_id": sync_log.id}

        # 2. Field mapping and per-row sync
        mapping = FIELD_MAPPING.get(sync_type)
        if not mapping:
            raise Exception(f"未配置{sync_type}字段映射规则")

        for item in erp_data:
            try:
                # Convert fields (ERP -> dashboard)
                mapped_data = {}
                for erp_field, dashboard_field in mapping.items():
                    mapped_data[dashboard_field] = item.get(erp_field)

                # Normalise time fields (ERP may return a millisecond timestamp or a string)
                if "create_time" in mapped_data and mapped_data["create_time"]:
                    if isinstance(mapped_data["create_time"], int):
                        mapped_data["create_time"] = datetime.fromtimestamp(mapped_data["create_time"] / 1000)
                    else:
                        mapped_data["create_time"] = datetime.strptime(mapped_data["create_time"], "%Y-%m-%d %H:%M:%S")

                # Full sync: delete any existing row first, then insert below
                if sync_mode == "full":
                    if sync_type == "customer":
                        Customer.query.filter_by(erp_id=mapped_data["erp_id"]).delete()
                    elif sync_type == "order":
                        Order.query.filter_by(erp_id=mapped_data["erp_id"]).delete()

                # Incremental sync: update if the row exists, insert otherwise
                if sync_type == "customer":
                    customer = Customer.query.filter_by(erp_id=mapped_data["erp_id"]).first()
                    if customer:
                        for key, value in mapped_data.items():
                            setattr(customer, key, value)
                    else:
                        customer = Customer(**mapped_data)
                        db.session.add(customer)
                elif sync_type == "order":
                    order = Order.query.filter_by(erp_id=mapped_data["erp_id"]).first()
                    if order:
                        for key, value in mapped_data.items():
                            setattr(order, key, value)
                    else:
                        order = Order(**mapped_data)
                        db.session.add(order)

                success_count += 1
            except Exception as e:
                fail_count += 1
                error_log.append(f"数据ID:{item.get('erp_' + sync_type + '_id')},失败原因:{str(e)}")

        # 3. Update the sync log
        sync_log.status = "success" if fail_count == 0 else "partial_success"
        sync_log.total_count = total_count
        sync_log.success_count = success_count
        sync_log.fail_count = fail_count
        sync_log.error_log = json.dumps(error_log, ensure_ascii=False) if error_log else None
        sync_log.end_time = datetime.now()

        # 4. If any rows failed, trigger the AI failure analysis
        if fail_count > 0:
            ai_result = analyze_sync_error(sync_log.id, error_log)
            sync_log.ai_analysis = ai_result

        db.session.commit()
        return {"success": True, "msg": f"同步完成,成功{success_count}条,失败{fail_count}条", "log_id": sync_log.id}

    except Exception as e:
        # The sync itself failed: update the log
        sync_log.status = "fail"
        sync_log.total_count = total_count
        sync_log.success_count = success_count
        sync_log.fail_count = fail_count
        sync_log.error_log = str(e)
        sync_log.end_time = datetime.now()
        # Ask the AI to analyse the failure
        ai_result = analyze_sync_error(sync_log.id, [str(e)])
        sync_log.ai_analysis = ai_result
        db.session.commit()
        # Write an operation log entry
        save_operation_log({
            "username": operator,
            "user_role": "system" if operator == "system" else "leader",
            "operation_type": "erp_sync",
            "operation_content": {"sync_type": sync_type, "sync_mode": sync_mode},
            "operation_result": "fail",
            "ip_address": "127.0.0.1",
            "user_agent": "system"
        })
        return {"success": False, "msg": f"同步失败:{str(e)}", "log_id": sync_log.id}

# ====================== API: trigger an ERP sync manually ======================
@erp_bp.route("/erp/sync/manual", methods=["POST"])
@login_required
@permission_required("erp_sync")  # admins only
def manual_erp_sync():
    """Manually trigger an ERP data sync."""
    data = request.get_json()
    sync_type = data.get("sync_type")               # customer/order/inventory
    sync_mode = data.get("sync_mode", "increment")  # full/increment
    if not sync_type or sync_type not in ["customer", "order", "inventory"]:
        return jsonify({"success": False, "error": "参数错误,需指定有效同步类型"})

    result = sync_erp_to_dashboard(sync_type, sync_mode, current_user.username)
    return jsonify(result)

# ====================== API: list sync logs ======================
@erp_bp.route("/erp/sync/log", methods=["GET"])
@login_required
def get_erp_sync_log():
    """List ERP sync logs."""
    page = int(request.args.get("page", 1))
    page_size = int(request.args.get("page_size", 10))
    sync_type = request.args.get("sync_type")
    status = request.args.get("status")

    query = ERPSyncLog.query.order_by(ERPSyncLog.start_time.desc())
    if sync_type:
        query = query.filter_by(sync_type=sync_type)
    if status:
        query = query.filter_by(status=status)

    pagination = query.paginate(page=page, per_page=page_size)
    logs = pagination.items

    log_list = []
    for log in logs:
        log_list.append({
            "log_id": log.id,
            "sync_type": {"customer": "客户数据", "order": "订单数据", "inventory": "库存数据"}[log.sync_type],
            "sync_mode": {"full": "全量同步", "increment": "增量同步"}[log.sync_mode],
            "time_range": f"{log.start_time.strftime('%Y-%m-%d %H:%M')} - {log.end_time.strftime('%Y-%m-%d %H:%M') if log.end_time else '未结束'}",
            "total_count": log.total_count,
            "success_count": log.success_count,
            "fail_count": log.fail_count,
            "status": {"processing": "处理中", "success": "成功", "fail": "失败", "partial_success": "部分成功"}[log.status],
            "ai_analysis": log.ai_analysis[:100] + "..." if log.ai_analysis and len(log.ai_analysis) > 100 else log.ai_analysis
        })

    return jsonify({
        "success": True,
        "data": log_list,
        "total": pagination.total,
        "page": page,
        "page_size": page_size
    })

# ====================== Scheduled incremental ERP sync ======================
def init_erp_scheduler():
    """Register the ERP sync cron jobs."""
    from scheduler import scheduler  # reuse the existing APScheduler instance
    # Incremental customer sync at 03:00 every day
    scheduler.add_job(
        func=sync_erp_to_dashboard,
        args=["customer", "increment"],
        trigger="cron",
        hour=3,
        minute=0,
        id="erp_sync_customer",
        name="ERP客户数据增量同步",
        replace_existing=True
    )
    # Incremental order sync at 03:30 every day
    scheduler.add_job(
        func=sync_erp_to_dashboard,
        args=["order", "increment"],
        trigger="cron",
        hour=3,
        minute=30,
        id="erp_sync_order",
        name="ERP订单数据增量同步",
        replace_existing=True
    )
```
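
Once the blueprint is registered in app.py (Step 4), the manual sync endpoint can be exercised from the command line. A hypothetical smoke test, assuming the dashboard runs locally on port 5000 and you already hold a logged-in admin session cookie:

```bash
# Trigger an incremental customer sync by hand (replace the session cookie with your own)
curl -X POST http://127.0.0.1:5000/erp/sync/manual \
  -H "Content-Type: application/json" \
  -b "session=<your-session-cookie>" \
  -d '{"sync_type": "customer", "sync_mode": "increment"}'

# Check the latest sync logs
curl -b "session=<your-session-cookie>" "http://127.0.0.1:5000/erp/sync/log?page=1&page_size=5"
```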

  2. Step 2: Build the automatic backup engine, with local + cloud copies

```python
# -*- coding: utf-8 -*-
# backup_engine.py  Automatic backup engine
import os
import subprocess
import shutil
import oss2
from datetime import datetime, timedelta

from flask import Blueprint, request, jsonify
from flask_login import login_required
from dotenv import load_dotenv

from models import db, BackupRecord
from auth import permission_required  # role decorator from the earlier articles (adjust the import to your project)
from logger import save_operation_log

# Load environment variables
load_dotenv()

backup_bp = Blueprint("backup", __name__)

# ====================== Backup configuration (beginners: edit here) ======================
# Local backup path
LOCAL_BACKUP_PATH = os.getenv("LOCAL_BACKUP_PATH", "/data/backup")
# Database connection
DB_HOST = os.getenv("DB_HOST")
DB_USER = os.getenv("DB_USER")
DB_PASSWORD = os.getenv("DB_PASSWORD")
DB_NAME = os.getenv("DB_NAME")
# OSS configuration
OSS_ACCESS_KEY_ID = os.getenv("OSS_ACCESS_KEY_ID")
OSS_ACCESS_KEY_SECRET = os.getenv("OSS_ACCESS_KEY_SECRET")
OSS_BUCKET_NAME = os.getenv("OSS_BUCKET_NAME")
OSS_ENDPOINT = os.getenv("OSS_ENDPOINT")
# Retention (days)
LOCAL_BACKUP_RETENTION = 7
OSS_BACKUP_RETENTION = 30
# Directories to back up (dashboard static files and exports)
FILE_BACKUP_DIRS = ["/app/static", "/app/exports"]

# Initialise the OSS client
auth = oss2.Auth(OSS_ACCESS_KEY_ID, OSS_ACCESS_KEY_SECRET)
bucket = oss2.Bucket(auth, OSS_ENDPOINT, OSS_BUCKET_NAME)

# ====================== Core: full database backup ======================
def backup_database():
    """Full database backup via mysqldump."""
    # Backup file name, e.g. db_backup_20240520_020000.sql.gz
    backup_time = datetime.now().strftime("%Y%m%d_%H%M%S")
    file_name = f"db_backup_{backup_time}.sql.gz"
    local_path = os.path.join(LOCAL_BACKUP_PATH, "database", file_name)
    # Create the backup directory if it does not exist
    os.makedirs(os.path.dirname(local_path), exist_ok=True)

    # Build the mysqldump command (compressed backup)
    cmd = (
        f"mysqldump -h {DB_HOST} -u {DB_USER} -p{DB_PASSWORD} {DB_NAME} "
        f"--single-transaction --quick --lock-tables=false "
        f"| gzip > {local_path}"
    )

    try:
        # Run the backup command
        subprocess.run(cmd, shell=True, check=True, capture_output=True)
        # Verify the backup file (exists and is not empty)
        if not os.path.exists(local_path) or os.path.getsize(local_path) == 0:
            raise Exception("备份文件为空或不存在")
        file_size = round(os.path.getsize(local_path) / 1024 / 1024, 2)  # MB

        # Upload to OSS
        oss_path = f"database/{file_name}"
        bucket.put_object_from_file(oss_path, local_path)

        # Record the backup
        expire_time = datetime.now() + timedelta(days=LOCAL_BACKUP_RETENTION)
        backup_record = BackupRecord(
            backup_type="database",
            backup_mode="full",
            backup_path=local_path,
            oss_path=oss_path,
            file_name=file_name,
            file_size=file_size,
            backup_time=datetime.now(),
            check_result="success",
            expire_time=expire_time,
            create_by="system"
        )
        db.session.add(backup_record)
        db.session.commit()

        # Remove expired backups (local + OSS)
        clean_expired_backup("database")

        return {"success": True, "msg": "数据库备份成功", "record_id": backup_record.id}
    except Exception as e:
        # Backup failed: clean up and log
        if os.path.exists(local_path):
            os.remove(local_path)  # delete the broken backup file
        save_operation_log({
            "username": "system",
            "user_role": "system",
            "operation_type": "backup",
            "operation_content": {"backup_type": "database"},
            "operation_result": "fail",
            "ip_address": "127.0.0.1",
            "user_agent": "system"
        })
        return {"success": False, "msg": f"数据库备份失败:{str(e)}"}

# ====================== Core: incremental file backup ======================
def backup_files():
    """Incremental file backup based on file modification time."""
    backup_time = datetime.now().strftime("%Y%m%d_%H%M%S")
    file_name = f"file_backup_{backup_time}.zip"
    local_path = os.path.join(LOCAL_BACKUP_PATH, "file", file_name)
    os.makedirs(os.path.dirname(local_path), exist_ok=True)

    # Temporary directory for the incremental files
    temp_dir = os.path.join(LOCAL_BACKUP_PATH, "temp")
    os.makedirs(temp_dir, exist_ok=True)

    try:
        # Select files modified in the last 6 hours (the increment)
        increment_threshold = datetime.now() - timedelta(hours=6)
        for dir_path in FILE_BACKUP_DIRS:
            if not os.path.exists(dir_path):
                continue
            # Walk the directory and copy changed files into the temp directory
            for root, dirs, files in os.walk(dir_path):
                for file in files:
                    file_path = os.path.join(root, file)
                    file_mtime = datetime.fromtimestamp(os.path.getmtime(file_path))
                    if file_mtime > increment_threshold:
                        # Preserve the original directory structure
                        relative_path = os.path.relpath(file_path, dir_path)
                        temp_file_path = os.path.join(temp_dir, relative_path)
                        os.makedirs(os.path.dirname(temp_file_path), exist_ok=True)
                        shutil.copy2(file_path, temp_file_path)

        # Zip the temp directory
        shutil.make_archive(local_path.replace(".zip", ""), "zip", temp_dir)
        # Remove the temp directory
        shutil.rmtree(temp_dir)

        # Verify the backup file
        if not os.path.exists(local_path) or os.path.getsize(local_path) == 0:
            raise Exception("文件备份为空或不存在")
        file_size = round(os.path.getsize(local_path) / 1024 / 1024, 2)

        # Upload to OSS
        oss_path = f"file/{file_name}"
        bucket.put_object_from_file(oss_path, local_path)

        # Record the backup
        expire_time = datetime.now() + timedelta(days=LOCAL_BACKUP_RETENTION)
        backup_record = BackupRecord(
            backup_type="file",
            backup_mode="increment",
            backup_path=local_path,
            oss_path=oss_path,
            file_name=file_name,
            file_size=file_size,
            backup_time=datetime.now(),
            check_result="success",
            expire_time=expire_time,
            create_by="system"
        )
        db.session.add(backup_record)
        db.session.commit()

        # Remove expired backups
        clean_expired_backup("file")

        return {"success": True, "msg": "文件增量备份成功", "record_id": backup_record.id}
    except Exception as e:
        # Backup failed: clean up temp files
        if os.path.exists(temp_dir):
            shutil.rmtree(temp_dir)
        if os.path.exists(local_path):
            os.remove(local_path)
        save_operation_log({
            "username": "system",
            "user_role": "system",
            "operation_type": "backup",
            "operation_content": {"backup_type": "file"},
            "operation_result": "fail",
            "ip_address": "127.0.0.1",
            "user_agent": "system"
        })
        return {"success": False, "msg": f"文件备份失败:{str(e)}"}

# ====================== Core: remove expired backups ======================
def clean_expired_backup(backup_type):
    """Remove expired backups (local files + OSS objects + database records)."""
    # 1. Remove expired local backups
    local_dir = os.path.join(LOCAL_BACKUP_PATH, backup_type)
    if os.path.exists(local_dir):
        for file in os.listdir(local_dir):
            file_path = os.path.join(local_dir, file)
            # Parse the backup time out of the file name (e.g. db_backup_20240520_020000.sql.gz)
            try:
                backup_time_str = file.split("_")[2] + "_" + file.split("_")[3].split(".")[0]
                backup_time = datetime.strptime(backup_time_str, "%Y%m%d_%H%M%S")
                if datetime.now() - backup_time > timedelta(days=LOCAL_BACKUP_RETENTION):
                    os.remove(file_path)
            except Exception:
                continue

    # 2. Remove expired OSS backups
    oss_prefix = f"{backup_type}/"
    for obj in oss2.ObjectIterator(bucket, prefix=oss_prefix):
        # Parse the backup time out of the object key
        try:
            file_name = obj.key.split("/")[-1]
            backup_time_str = file_name.split("_")[2] + "_" + file_name.split("_")[3].split(".")[0]
            backup_time = datetime.strptime(backup_time_str, "%Y%m%d_%H%M%S")
            if datetime.now() - backup_time > timedelta(days=OSS_BACKUP_RETENTION):
                bucket.delete_object(obj.key)
        except Exception:
            continue

    # 3. Remove expired database records
    expired_time = datetime.now() - timedelta(days=LOCAL_BACKUP_RETENTION)
    BackupRecord.query.filter_by(backup_type=backup_type).filter(BackupRecord.backup_time <= expired_time).delete()
    db.session.commit()

# ====================== Core: backup restore test ======================
def test_backup_restore(record_id):
    """Restore test: verify that a backup file is actually usable."""
    record = BackupRecord.query.get(record_id)
    if not record:
        return {"success": False, "msg": "未找到备份记录"}

    try:
        if record.backup_type == "database":
            # Database backup: decompress the dump and sanity-check its contents.
            # (The mysql client has no dry-run mode, so we only verify the dump here;
            # for a full rehearsal, import it into a scratch database.)
            temp_sql_path = os.path.join(LOCAL_BACKUP_PATH, "temp_restore.sql")
            cmd = f"gzip -d -c {record.backup_path} > {temp_sql_path}"
            subprocess.run(cmd, shell=True, check=True)
            if os.path.getsize(temp_sql_path) == 0:
                raise Exception("解压后的SQL文件为空")
            with open(temp_sql_path, "r", encoding="utf-8", errors="ignore") as f:
                head = f.read(1024 * 1024)
            if "CREATE TABLE" not in head:
                raise Exception("备份文件中未找到建表语句")
            # Remove the temporary file
            os.remove(temp_sql_path)
        elif record.backup_type == "file":
            # File backup: unzip and verify the archive is not empty
            temp_unzip_dir = os.path.join(LOCAL_BACKUP_PATH, "temp_restore")
            shutil.unpack_archive(record.backup_path, temp_unzip_dir, "zip")
            if len(os.listdir(temp_unzip_dir)) == 0:
                raise Exception("解压后无文件")
            shutil.rmtree(temp_unzip_dir)

        # Record the restore test result
        record.restore_test = "passed"
        db.session.commit()
        return {"success": True, "msg": "备份恢复测试通过"}
    except Exception as e:
        record.restore_test = "failed"
        db.session.commit()
        return {"success": False, "msg": f"备份恢复测试失败:{str(e)}"}

# ====================== API: trigger a backup manually ======================
@backup_bp.route("/backup/manual", methods=["POST"])
@login_required
@permission_required("backup")  # admins only
def manual_backup():
    """Manually trigger a backup."""
    data = request.get_json()
    backup_type = data.get("backup_type")  # database/file
    if not backup_type or backup_type not in ["database", "file"]:
        return jsonify({"success": False, "error": "参数错误,需指定有效备份类型"})

    if backup_type == "database":
        result = backup_database()
    else:
        result = backup_files()
    return jsonify(result)

# ====================== API: list backup records ======================
@backup_bp.route("/backup/record", methods=["GET"])
@login_required
def get_backup_record():
    """List backup records."""
    page = int(request.args.get("page", 1))
    page_size = int(request.args.get("page_size", 10))
    backup_type = request.args.get("backup_type")

    query = BackupRecord.query.order_by(BackupRecord.backup_time.desc())
    if backup_type:
        query = query.filter_by(backup_type=backup_type)

    pagination = query.paginate(page=page, per_page=page_size)
    records = pagination.items

    record_list = []
    for record in records:
        record_list.append({
            "record_id": record.id,
            "backup_type": {"database": "数据库", "file": "文件"}[record.backup_type],
            "backup_mode": {"full": "全量备份", "increment": "增量备份"}[record.backup_mode],
            "file_name": record.file_name,
            "file_size": f"{record.file_size} MB",
            "backup_time": record.backup_time.strftime("%Y-%m-%d %H:%M:%S"),
            "check_result": {"success": "校验通过", "fail": "校验失败"}[record.check_result],
            "restore_test": {"passed": "测试通过", "not_test": "未测试", "failed": "测试失败"}[record.restore_test],
            "expire_time": record.expire_time.strftime("%Y-%m-%d %H:%M:%S"),
            "local_path": record.backup_path,
            "oss_path": record.oss_path
        })

    return jsonify({
        "success": True,
        "data": record_list,
        "total": pagination.total,
        "page": page,
        "page_size": page_size
    })

# ====================== API: backup restore test ======================
@backup_bp.route("/backup/test/restore", methods=["POST"])
@login_required
@permission_required("backup")
def manual_test_restore():
    """Manually trigger a backup restore test."""
    data = request.get_json()
    record_id = data.get("record_id")
    if not record_id:
        return jsonify({"success": False, "error": "参数错误,需指定备份记录ID"})

    result = test_backup_restore(record_id)
    return jsonify(result)

# ====================== Scheduled backup jobs ======================
def init_backup_scheduler():
    """Register the backup cron jobs."""
    from scheduler import scheduler  # reuse the existing APScheduler instance
    # Full database backup at 02:00 every day
    scheduler.add_job(
        func=backup_database,
        trigger="cron",
        hour=2,
        minute=0,
        id="backup_database",
        name="数据库每日全量备份",
        replace_existing=True
    )
    # Incremental file backup every 6 hours (00:00, 06:00, 12:00, 18:00)
    scheduler.add_job(
        func=backup_files,
        trigger="cron",
        hour="0,6,12,18",
        minute=0,
        id="backup_files",
        name="文件每6小时增量备份",
        replace_existing=True
    )
    # Restore test at 04:00 every day (against the latest database backup)
    scheduler.add_job(
        func=test_latest_backup,
        trigger="cron",
        hour=4,
        minute=0,
        id="test_backup_restore",
        name="备份恢复每日测试",
        replace_existing=True
    )

def test_latest_backup():
    """Run the restore test against the newest successful database backup."""
    latest_backup = BackupRecord.query.filter_by(backup_type="database", check_result="success") \
        .order_by(BackupRecord.backup_time.desc()).first()
    if latest_backup:
        test_backup_restore(latest_backup.id)
```
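
As with the sync module, a hypothetical smoke test from the command line, assuming the app runs locally on port 5000, an admin session cookie, and record_id 1 as an example:

```bash
# Trigger a manual database backup
curl -X POST http://127.0.0.1:5000/backup/manual \
  -H "Content-Type: application/json" \
  -b "session=<your-session-cookie>" \
  -d '{"backup_type": "database"}'

# Run a restore test against backup record 1, then list the records
curl -X POST http://127.0.0.1:5000/backup/test/restore \
  -H "Content-Type: application/json" \
  -b "session=<your-session-cookie>" \
  -d '{"record_id": 1}'
curl -b "session=<your-session-cookie>" "http://127.0.0.1:5000/backup/record?backup_type=database"
```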

  3. Step 3: Add the AI failure review service to analyse sync/backup failures

```python
# -*- coding: utf-8 -*-
# ai_analysis.py  AI failure review service
import os
import json

from openai import OpenAI
from dotenv import load_dotenv

from models import db, ERPSyncLog, BackupRecord

# Load environment variables
load_dotenv()

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
OPENAI_MODEL = "gpt-3.5-turbo"

client = OpenAI(api_key=OPENAI_API_KEY)

# ====================== Core: analyse ERP sync failures ======================
def analyze_sync_error(sync_log_id, error_details):
    """Analyse why an ERP sync failed and generate a fix plan."""
    # Build the prompt
    prompt = f"""
你是办公自动化系统的AI故障分析助手,需分析ERP数据同步失败日志,完成以下任务:
1. 总结失败核心原因(分点说明,简洁明了);
2. 针对每个原因给出可执行的解决方案(步骤清晰,适配新手);
3. 标注问题紧急程度(高/中/低),高紧急需立即处理,低紧急可后续优化。

同步失败详情:{json.dumps(error_details, ensure_ascii=False)[:500]}
系统环境:Flask+MySQL,ERP对接采用RESTful API,同步模式为增量/全量。

请按以下格式输出:
【紧急程度】XXX
【失败原因】
1. ...
2. ...
【解决方案】
1. ...
2. ...
"""
    try:
        response = client.chat.completions.create(
            model=OPENAI_MODEL,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.3  # low randomness for focused, repeatable analysis
        )
        analysis_result = response.choices[0].message.content.strip()
        return analysis_result
    except Exception as e:
        return f"AI分析失败:{str(e)},请手动排查日志"

# ====================== Core: analyse backup failures ======================
def analyze_backup_error(backup_record_id, error_msg):
    """Analyse why a backup failed and generate a fix plan."""
    prompt = f"""
你是办公自动化系统的AI故障分析助手,需分析数据备份失败日志,完成以下任务:
1. 总结失败核心原因(分点说明,简洁明了);
2. 针对每个原因给出可执行的解决方案(步骤清晰,适配新手,涉及服务器操作需标注命令);
3. 标注问题紧急程度(高/中/低),高紧急需立即处理,避免数据丢失。

备份失败详情:{error_msg[:500]}
备份类型:数据库/文件备份,存储方式:本地+阿里云OSS,服务器系统为Linux。

请按以下格式输出:
【紧急程度】XXX
【失败原因】
1. ...
2. ...
【解决方案】
1. ...
2. ...
"""
    try:
        response = client.chat.completions.create(
            model=OPENAI_MODEL,
            messages=[{"role": "user", "content": prompt}],
            temperature=0.3
        )
        analysis_result = response.choices[0].message.content.strip()
        # Store the AI analysis on the backup record
        # (requires extending BackupRecord with an ai_analysis column, see the sketch below)
        backup_record = BackupRecord.query.get(backup_record_id)
        if backup_record:
            backup_record.ai_analysis = analysis_result
            db.session.commit()
        return analysis_result
    except Exception as e:
        return f"AI分析失败:{str(e)},请手动排查日志"

# ====================== Scheduled batch analysis of unprocessed failure logs ======================
def batch_analyze_error_logs():
    """Periodically analyse sync/backup failure logs that have no AI analysis yet."""
    # Analyse ERP sync failures
    unanalyzed_sync_logs = ERPSyncLog.query.filter(
        ERPSyncLog.status.in_(["fail", "partial_success"]),
        ERPSyncLog.ai_analysis.is_(None)
    ).all()
    for log in unanalyzed_sync_logs:
        error_details = log.error_log if log.error_log else "无详细日志"
        if isinstance(error_details, str) and error_details.startswith("["):
            error_details = json.loads(error_details)
        analysis = analyze_sync_error(log.id, error_details)
        log.ai_analysis = analysis
        db.session.commit()

    # Analyse backup failures (requires the ai_analysis column on BackupRecord first)
    # unanalyzed_backup_records = BackupRecord.query.filter(...)
    # ...

# Hook into the scheduler (call this alongside init_erp_scheduler / init_backup_scheduler)
def init_ai_analysis_scheduler():
    from scheduler import scheduler
    # Batch-analyse failure logs every hour
    scheduler.add_job(
        func=batch_analyze_error_logs,
        trigger="interval",
        hours=1,
        id="batch_analyze_error",
        name="批量AI异常分析",
        replace_existing=True
    )
```
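
The backup branch above assumes BackupRecord has an ai_analysis column, which the table in Part II does not define yet. A minimal sketch of the extension, using the table and model names introduced earlier:

```sql
-- Add the AI analysis column to the backup record table
ALTER TABLE backup_record ADD COLUMN ai_analysis TEXT NULL COMMENT 'AI failure analysis';
```

```python
# models.py: add the matching column to the BackupRecord model
ai_analysis = db.Column(db.Text)  # AI failure analysis
```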

  4. Step 4: Wire it into the dashboard to visualise sync and backup status

```python
# Add/modify the following in app.py
from flask import render_template
from flask_login import login_required

from erp_sync import erp_bp, init_erp_scheduler
from backup_engine import backup_bp, init_backup_scheduler
from ai_analysis import init_ai_analysis_scheduler
from models import ERPSyncLog, BackupRecord

# Register the blueprints
app.register_blueprint(erp_bp)
app.register_blueprint(backup_bp)

# Initialise the scheduled jobs on startup (ERP sync, backups, AI analysis)
with app.app_context():
    init_erp_scheduler()
    init_backup_scheduler()
    init_ai_analysis_scheduler()

# New: cross-system sync and backup monitoring page
@app.route("/sync-backup/monitor")
@login_required
def sync_backup_monitor():
    """Sync and backup status monitoring page."""
    # Latest sync log per type
    latest_customer_sync = ERPSyncLog.query.filter_by(sync_type="customer").order_by(ERPSyncLog.start_time.desc()).first()
    latest_order_sync = ERPSyncLog.query.filter_by(sync_type="order").order_by(ERPSyncLog.start_time.desc()).first()
    # Latest backup record (database + file)
    latest_db_backup = BackupRecord.query.filter_by(backup_type="database").order_by(BackupRecord.backup_time.desc()).first()
    latest_file_backup = BackupRecord.query.filter_by(backup_type="file").order_by(BackupRecord.backup_time.desc()).first()

    return render_template(
        "sync_backup_monitor.html",
        latest_customer_sync=latest_customer_sync,
        latest_order_sync=latest_order_sync,
        latest_db_backup=latest_db_backup,
        latest_file_backup=latest_file_backup
    )
```
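
One caveat worth noting: when sync_erp_to_dashboard, backup_database and the other jobs later fire from the APScheduler thread, their SQLAlchemy calls need a Flask application context, otherwise they raise "Working outside of application context". A minimal sketch of one way to handle this, assuming the Flask instance is importable as app (adapt the import path to your project layout):

```python
# scheduler_jobs.py (sketch): wrap scheduled jobs so they run inside an app context
from functools import wraps
from app import app  # assumed import path for the Flask instance

def with_app_context(func):
    """Decorator that pushes a Flask application context around a scheduled job."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        with app.app_context():
            return func(*args, **kwargs)
    return wrapper

# Example: register the wrapped function instead of the raw one
# scheduler.add_job(func=with_app_context(sync_erp_to_dashboard),
#                   args=["customer", "increment"], trigger="cron", hour=3, minute=0, ...)
```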

Finally, the front-end template templates/sync_backup_monitor.html. The excerpt below shows the ERP-customer card; the order-sync and backup cards follow the same pattern using the other variables passed to render_template:

```html
{% extends "base.html" %}

{% block content %}
<div class="card">
  <h3>ERP客户同步</h3>
  <p>状态:{{ {'processing': '处理中', 'success': '成功', 'fail': '失败', 'partial_success': '部分成功'}.get(latest_customer_sync.status, '未知') if latest_customer_sync else '未知' }}</p>
  <p>最近同步:{{ latest_customer_sync.start_time.strftime('%Y-%m-%d %H:%M') if latest_customer_sync else '未同步' }}</p>
</div>
{% endblock %}
```
