Table of Contents
- [1. Introduction](#1-introduction)
- [2. Docker Fundamentals](#2-docker-fundamentals)
  - [2.1 Containers vs. Virtual Machines](#21-containers-vs-virtual-machines)
  - [2.2 Core Docker Components](#22-core-docker-components)
- [3. Setting Up the Development Environment](#3-setting-up-the-development-environment)
  - [3.1 Installing Docker](#31-installing-docker)
  - [3.2 Creating a Sample Python Application](#32-creating-a-sample-python-application)
- [4. Writing the Dockerfile](#4-writing-the-dockerfile)
  - [4.1 A Basic Dockerfile](#41-a-basic-dockerfile)
  - [4.2 Multi-Stage Build Optimization](#42-multi-stage-build-optimization)
  - [4.3 A Development Dockerfile](#43-a-development-dockerfile)
- [5. A Docker Compose Development Environment](#5-a-docker-compose-development-environment)
  - [5.1 Basic Docker Compose Configuration](#51-basic-docker-compose-configuration)
  - [5.2 Development Environment Tweaks](#52-development-environment-tweaks)
  - [5.3 Development Startup Scripts](#53-development-startup-scripts)
- [6. Production Optimization](#6-production-optimization)
  - [6.1 Production Docker Compose Configuration](#61-production-docker-compose-configuration)
  - [6.2 Nginx Configuration](#62-nginx-configuration)
  - [6.3 Environment Variable Configuration](#63-environment-variable-configuration)
- [7. Continuous Integration and Deployment](#7-continuous-integration-and-deployment)
  - [7.1 GitHub Actions CI/CD Pipeline](#71-github-actions-cicd-pipeline)
  - [7.2 Security Scanning](#72-security-scanning)
- [8. Monitoring and Logging](#8-monitoring-and-logging)
  - [8.1 Application Monitoring](#81-application-monitoring)
  - [8.2 Logging Configuration](#82-logging-configuration)
- [9. A Complete Deployment Example](#9-a-complete-deployment-example)
  - [9.1 Deployment Script](#91-deployment-script)
  - [9.2 Health Check Script](#92-health-check-script)
- [10. Conclusion](#10-conclusion)
  - [10.1 Key Takeaways](#101-key-takeaways)
  - [10.2 Best Practices Recap](#102-best-practices-recap)
  - [10.3 Next Steps](#103-next-steps)
Dockerizing Your Python Application: From Development to Production
1. Introduction
In today's fast-moving software industry, deployment complexity and environment consistency are challenges every development team must face. Traditional deployment approaches often run into environment drift, dependency conflicts, and configuration sprawl, producing the familiar "it works on my machine" problem. Docker changed this landscape fundamentally.
Docker is an open-source containerization platform that lets developers package an application and all of its dependencies into a standardized unit called a container. Compared with traditional virtual machines, containers are lighter, start faster, and use resources more efficiently. Docker has reported that containerization can cut deployment time by as much as 65% and infrastructure costs by more than 50%.
For Python developers, the benefits of Dockerizing an application are especially tangible:
- Environment consistency: development, testing, and production environments are identical
- Dependency isolation: each application gets its own dependency environment, avoiding conflicts
- Fast deployment: one command to deploy, with no manual environment setup
- Scalability: horizontal scaling and load balancing become straightforward
- Version control: container images are versioned, making rollbacks and management easy
This article walks through the complete workflow of Dockerizing a Python application, from the development environment all the way to production deployment. Whether you are new to Docker or a seasoned developer looking to streamline an existing pipeline, you will find practical guidance and best practices here.
2. Docker Fundamentals
2.1 Containers vs. Virtual Machines
The first step in understanding Docker is grasping the essential difference between containers and traditional virtual machines:
(Diagram: containers stack each app plus its binaries/libraries directly on the Docker Engine and the host OS, while virtual machines stack each app plus its binaries/libraries on a full guest OS running on a hypervisor.)
Key differences:
- Virtual machines: each VM carries a complete operating system, with significant resource overhead
- Containers: share the host OS kernel and contain only the application and its dependencies, making them lightweight and efficient
2.2 Core Docker Components
The Docker ecosystem is built from a few core components:
- Docker image: a read-only template containing everything needed to run an application
- Docker container: a running instance of an image
- Dockerfile: a script that describes how to build an image
- Docker Compose: a tool for defining and running multi-container applications
- Docker Registry: an image repository, such as Docker Hub
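These components map onto a handful of CLI commands. A quick sketch, assuming a hypothetical image name `myapp` and a Docker Hub account `youruser` (these require a local Docker daemon to actually run):

```shell
# Dockerfile -> image: build from the Dockerfile in the current directory
docker build -t myapp:0.1 .

# Image -> container: start a running instance, mapping port 5000
docker run -d --name myapp-test -p 5000:5000 myapp:0.1

# Image -> registry: tag and push to Docker Hub
docker tag myapp:0.1 youruser/myapp:0.1
docker push youruser/myapp:0.1
```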
3. Setting Up the Development Environment
3.1 Installing Docker
First, install the Docker engine on your development machine:
bash
# Install Docker on Ubuntu
sudo apt update
sudo apt install docker.io
sudo systemctl start docker
sudo systemctl enable docker
# Add the current user to the docker group (avoids needing sudo every time)
sudo usermod -aG docker $USER
# Start a new group session for the change to take effect
newgrp docker
# Verify the installation
docker --version
docker run hello-world
3.2 Creating a Sample Python Application
Let's create a complete Flask web application to use as a running example:
python
# app/__init__.py
from flask import Flask, jsonify, request
import logging
from datetime import datetime
import os
import redis

def create_app():
    """Application factory."""
    app = Flask(__name__)

    # Configuration
    app.config.from_mapping(
        SECRET_KEY=os.environ.get('SECRET_KEY', 'dev-secret-key'),
        REDIS_URL=os.environ.get('REDIS_URL', 'redis://localhost:6379'),
        DEBUG=os.environ.get('DEBUG', 'False').lower() == 'true'
    )

    # Initialize Redis
    redis_client = redis.Redis.from_url(
        app.config['REDIS_URL'],
        decode_responses=True
    )

    # Configure logging
    logging.basicConfig(
        level=logging.INFO,
        format='%(asctime)s %(levelname)s %(name)s %(message)s'
    )
    logger = logging.getLogger(__name__)

    @app.route('/')
    def index():
        """Home page."""
        visitor_count = redis_client.incr('visitor_count')
        return jsonify({
            'message': 'Welcome to the Dockerized Python app!',
            'visitor_count': visitor_count,
            'timestamp': datetime.now().isoformat(),
            'environment': os.environ.get('ENVIRONMENT', 'development')
        })

    @app.route('/health')
    def health_check():
        """Health-check endpoint."""
        try:
            # Check the Redis connection
            redis_client.ping()
            redis_healthy = True
        except redis.ConnectionError:
            redis_healthy = False
            logger.error("Redis connection failed")
        return jsonify({
            'status': 'healthy' if redis_healthy else 'degraded',
            'timestamp': datetime.now().isoformat(),
            'redis': 'connected' if redis_healthy else 'disconnected',
            'environment': os.environ.get('ENVIRONMENT', 'development')
        })

    @app.route('/api/users', methods=['GET', 'POST'])
    def users():
        """Users API."""
        if request.method == 'GET':
            # List users
            users = []
            for key in redis_client.scan_iter('user:*'):
                user_data = redis_client.hgetall(key)
                users.append(user_data)
            return jsonify({'users': users})
        elif request.method == 'POST':
            # Create a new user
            data = request.get_json()
            if not data or 'name' not in data or 'email' not in data:
                return jsonify({'error': 'Missing required fields'}), 400
            user_id = redis_client.incr('user_id_counter')
            user_key = f'user:{user_id}'
            user_data = {
                'id': user_id,
                'name': data['name'],
                'email': data['email'],
                'created_at': datetime.now().isoformat()
            }
            redis_client.hset(user_key, mapping=user_data)
            logger.info(f"Created user: {user_data}")
            return jsonify(user_data), 201

    @app.route('/api/users/<int:user_id>', methods=['GET'])
    def get_user(user_id):
        """Fetch a single user."""
        user_key = f'user:{user_id}'
        user_data = redis_client.hgetall(user_key)
        if not user_data:
            return jsonify({'error': 'User not found'}), 404
        return jsonify(user_data)

    @app.errorhandler(404)
    def not_found(error):
        return jsonify({'error': 'Resource not found'}), 404

    @app.errorhandler(500)
    def internal_error(error):
        logger.error(f"Internal server error: {error}")
        return jsonify({'error': 'Internal server error'}), 500

    return app

# Create the application instance
app = create_app()

if __name__ == '__main__':
    app.run(
        host='0.0.0.0',
        port=5000,
        debug=app.config['DEBUG']
    )
python
# app/config.py
import os

class Config:
    """Base configuration."""
    SECRET_KEY = os.environ.get('SECRET_KEY', 'dev-secret-key')
    DEBUG = os.environ.get('DEBUG', 'False').lower() == 'true'
    REDIS_URL = os.environ.get('REDIS_URL', 'redis://localhost:6379')
    ENVIRONMENT = os.environ.get('ENVIRONMENT', 'development')

class DevelopmentConfig(Config):
    """Development configuration."""
    DEBUG = True
    ENVIRONMENT = 'development'

class ProductionConfig(Config):
    """Production configuration."""
    DEBUG = False
    ENVIRONMENT = 'production'

class TestingConfig(Config):
    """Testing configuration."""
    TESTING = True
    DEBUG = True
    ENVIRONMENT = 'testing'
    REDIS_URL = 'redis://localhost:6379/1'  # use a separate Redis database

def get_config():
    """Pick a configuration class based on the ENVIRONMENT variable."""
    env = os.environ.get('ENVIRONMENT', 'development')
    configs = {
        'development': DevelopmentConfig,
        'production': ProductionConfig,
        'testing': TestingConfig
    }
    return configs.get(env, DevelopmentConfig)
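A quick sanity check of the selection logic. The config classes are inlined in minimal form here so the snippet runs on its own:

```python
import os

# Minimal stand-ins for the config classes defined above
class Config:
    ENVIRONMENT = 'development'

class DevelopmentConfig(Config):
    ENVIRONMENT = 'development'

class ProductionConfig(Config):
    ENVIRONMENT = 'production'

class TestingConfig(Config):
    ENVIRONMENT = 'testing'

def get_config():
    env = os.environ.get('ENVIRONMENT', 'development')
    configs = {
        'development': DevelopmentConfig,
        'production': ProductionConfig,
        'testing': TestingConfig,
    }
    return configs.get(env, DevelopmentConfig)

os.environ['ENVIRONMENT'] = 'production'
assert get_config() is ProductionConfig

# Unknown values fall back to the development config
os.environ['ENVIRONMENT'] = 'staging'
assert get_config() is DevelopmentConfig
```

Note the deliberate fallback: a typo in ENVIRONMENT degrades to the development config rather than crashing, which is a design choice you may or may not want in production.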
python
# app/models.py
from datetime import datetime
from typing import Dict, Any, List, Optional
import redis

class UserModel:
    """User model backed by Redis hashes."""

    def __init__(self, redis_client: redis.Redis):
        self.redis = redis_client

    def create(self, name: str, email: str) -> Dict[str, Any]:
        """Create a user."""
        user_id = self.redis.incr('user_id_counter')
        user_key = f'user:{user_id}'
        user_data = {
            'id': user_id,
            'name': name,
            'email': email,
            'created_at': datetime.now().isoformat()
        }
        self.redis.hset(user_key, mapping=user_data)
        return user_data

    def get(self, user_id: int) -> Optional[Dict[str, Any]]:
        """Fetch a user, or None if it does not exist."""
        user_key = f'user:{user_id}'
        user_data = self.redis.hgetall(user_key)
        return user_data if user_data else None

    def get_all(self) -> List[Dict[str, Any]]:
        """Fetch all users."""
        users = []
        for key in self.redis.scan_iter('user:*'):
            user_data = self.redis.hgetall(key)
            users.append(user_data)
        return users

    def delete(self, user_id: int) -> bool:
        """Delete a user."""
        user_key = f'user:{user_id}'
        return bool(self.redis.delete(user_key))

class VisitorCounter:
    """Visitor counter."""

    def __init__(self, redis_client: redis.Redis):
        self.redis = redis_client

    def increment(self) -> int:
        """Increment the visit count."""
        return self.redis.incr('visitor_count')

    def get_count(self) -> int:
        """Read the visit count."""
        count = self.redis.get('visitor_count')
        return int(count) if count else 0

    def reset(self) -> None:
        """Reset the counter."""
        self.redis.set('visitor_count', 0)
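Because these classes only touch a few Redis commands, they are easy to unit-test without a Redis server. The sketch below substitutes a tiny in-memory stub (`FakeRedis` is our own stand-in, not a real library) for `redis.Redis`; `VisitorCounter` is repeated so the snippet is self-contained:

```python
class FakeRedis:
    """In-memory stand-in implementing just the commands VisitorCounter uses."""
    def __init__(self):
        self.store = {}

    def incr(self, key):
        self.store[key] = int(self.store.get(key, 0)) + 1
        return self.store[key]

    def get(self, key):
        return self.store.get(key)

    def set(self, key, value):
        self.store[key] = value

class VisitorCounter:
    def __init__(self, redis_client):
        self.redis = redis_client

    def increment(self):
        return self.redis.incr('visitor_count')

    def get_count(self):
        count = self.redis.get('visitor_count')
        return int(count) if count else 0

    def reset(self):
        self.redis.set('visitor_count', 0)

counter = VisitorCounter(FakeRedis())
counter.increment()
counter.increment()
assert counter.get_count() == 2
counter.reset()
assert counter.get_count() == 0
```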
text
# requirements.txt
Flask==2.3.3
redis==4.6.0
gunicorn==21.2.0
python-dotenv==1.0.0
blinker==1.6.2
python
# wsgi.py
from app import app

if __name__ == "__main__":
    app.run()
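With the app and a local Redis running (for example `python wsgi.py`), you can exercise the endpoints with curl; the payload below is an arbitrary example (needs the server to be up, so it is not runnable in isolation):

```shell
# Home page and visitor counter
curl http://localhost:5000/

# Health check
curl http://localhost:5000/health

# Create a user, then list users
curl -X POST http://localhost:5000/api/users \
     -H "Content-Type: application/json" \
     -d '{"name": "Alice", "email": "alice@example.com"}'
curl http://localhost:5000/api/users
```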
4. Writing the Dockerfile
4.1 A Basic Dockerfile
Create the basic Dockerfile:
dockerfile
# Dockerfile
# Use an official Python runtime as the base image
FROM python:3.11-slim

# Set the working directory
WORKDIR /app

# Set environment variables
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV FLASK_APP=app
ENV FLASK_ENV=production

# Install system dependencies
RUN apt-get update \
    && apt-get install -y --no-install-recommends \
    gcc \
    curl \
    && rm -rf /var/lib/apt/lists/*

# Copy the requirements file
COPY requirements.txt .

# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt

# Copy the application code
COPY . .

# Create a non-root user
RUN groupadd -r appuser && useradd -r -g appuser appuser
RUN chown -R appuser:appuser /app
USER appuser

# Expose the port
EXPOSE 5000

# Health check
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
    CMD curl -f http://localhost:5000/health || exit 1

# Start command
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "wsgi:app"]
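Because this Dockerfile uses `COPY . .`, it is worth adding a `.dockerignore` so the local virtualenv, git history, and caches don't get pulled into the image; a minimal suggested example:

```text
# .dockerignore (suggested)
.git
__pycache__/
*.pyc
venv/
.env*
backups/
```

Then build and smoke-test with `docker build -t myapp .` followed by `docker run -p 5000:5000 myapp`.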
4.2 Multi-Stage Build Optimization
To slim down the production image, we can use a multi-stage build:
# Dockerfile.multistage
# 第一阶段:构建阶段
FROM python:3.11-slim as builder
WORKDIR /app
# 安装构建依赖
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
&& rm -rf /var/lib/apt/lists/*
# 复制requirements文件
COPY requirements.txt .
# 安装依赖到虚拟环境
RUN python -m venv /opt/venv
ENV PATH="/opt/venv/bin:$PATH"
RUN pip install --no-cache-dir -r requirements.txt
# 第二阶段:运行阶段
FROM python:3.11-slim as runtime
# 安装运行时依赖
RUN apt-get update && apt-get install -y --no-install-recommends \
curl \
&& rm -rf /var/lib/apt/lists/* \
&& apt-get clean
# 创建应用用户
RUN groupadd -r appuser && useradd -r -g appuser appuser
# 从构建阶段复制虚拟环境
COPY --from=builder /opt/venv /opt/venv
# 设置环境变量
ENV PATH="/opt/venv/bin:$PATH"
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV FLASK_ENV=production
WORKDIR /app
# 复制应用代码
COPY --chown=appuser:appuser . .
# 切换到非root用户
USER appuser
# 暴露端口
EXPOSE 5000
# 健康检查
HEALTHCHECK --interval=30s --timeout=30s --start-period=5s --retries=3 \
CMD curl -f http://localhost:5000/health || exit 1
# 启动应用
CMD ["gunicorn", "--bind", "0.0.0.0:5000", "--workers", "4", "wsgi:app"]
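To see what the second stage buys you, build both images and compare sizes; the savings come mostly from leaving `gcc` and the pip build artifacts behind in the builder stage (exact numbers depend on your base image versions, and this needs a Docker daemon to run):

```shell
docker build -t myapp:single -f Dockerfile .
docker build -t myapp:multi -f Dockerfile.multistage .

# Compare the resulting image sizes
docker images myapp
```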
4.3 A Development Dockerfile
Create a dedicated Dockerfile for development:
# Dockerfile.dev
FROM python:3.11-slim
WORKDIR /app
# 设置开发环境变量
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
ENV FLASK_APP=app
ENV FLASK_ENV=development
ENV FLASK_DEBUG=1
# 安装系统依赖(包含开发工具)
RUN apt-get update && apt-get install -y --no-install-recommends \
gcc \
curl \
vim \
&& rm -rf /var/lib/apt/lists/*
# 复制requirements文件
COPY requirements.txt .
# 安装Python依赖
RUN pip install --no-cache-dir -r requirements.txt
# 复制应用代码
COPY . .
# 暴露端口
EXPOSE 5000
# 开发环境使用flask run(支持热重载)
CMD ["flask", "run", "--host=0.0.0.0", "--port=5000"]
5. A Docker Compose Development Environment
5.1 Basic Docker Compose Configuration
Create a complete Docker Compose configuration for development:
yaml
# docker-compose.yml
version: '3.8'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.dev
    ports:
      - "5000:5000"
    environment:
      - FLASK_ENV=development
      - FLASK_DEBUG=1
      - REDIS_URL=redis://redis:6379/0
      - SECRET_KEY=dev-secret-key-change-in-production
      - ENVIRONMENT=development
    volumes:
      - .:/app
      # Mask caches so they aren't shadowed by the bind mount
      - /app/__pycache__
      - /app/.pytest_cache
    depends_on:
      - redis
    networks:
      - app-network
    # Keep stdin/tty open so the container stays interactive in development
    stdin_open: true
    tty: true

  redis:
    image: redis:7-alpine
    ports:
      - "6379:6379"
    volumes:
      - redis_data:/data
    networks:
      - app-network
    command: redis-server --appendonly yes

  # Optional: a Redis admin UI
  redis-commander:
    image: rediscommander/redis-commander:latest
    environment:
      - REDIS_HOSTS=local:redis:6379
    ports:
      - "8081:8081"
    depends_on:
      - redis
    networks:
      - app-network

volumes:
  redis_data:

networks:
  app-network:
    driver: bridge
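Day-to-day development then comes down to a few commands (`docker compose` is the v2 syntax; older installs use `docker-compose` instead, and all of these need a Docker daemon):

```shell
# Start everything in the background and follow the app's logs
docker compose up -d
docker compose logs -f web

# Open a shell inside the running web container
docker compose exec web bash

# Tear down; the named redis_data volume is preserved
docker compose down
```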
5.2 Development Environment Tweaks
Add a few conveniences for the development environment:
# docker-compose.override.yml
# 这个文件在开发环境中自动加载
version: '3.8'
services:
web:
# 开发环境使用不同的entrypoint
entrypoint: ["/app/docker-entrypoint.sh"]
# 开发环境启用热重载
command: ["flask", "run", "--host=0.0.0.0", "--port=5000", "--reload"]
# 挂载源代码用于实时开发
volumes:
- .:/app
# Node.js应用可以挂载node_modules
- /app/venv
# 开发环境可以访问调试器
security_opt:
- seccomp:unconfined
# 开发环境资源限制较宽松
deploy:
resources:
limits:
memory: 1G
reservations:
memory: 512M
# 开发环境添加数据库(如果需要)
postgres:
image: postgres:15-alpine
environment:
- POSTGRES_DB=app_development
- POSTGRES_USER=app_user
- POSTGRES_PASSWORD=dev_password
ports:
- "5432:5432"
volumes:
- postgres_data:/var/lib/postgresql/data
networks:
- app-network
volumes:
postgres_data:
5.3 Development Startup Scripts
Create startup and initialization scripts for the development environment:
#!/bin/bash
# docker-entrypoint.sh
set -e
echo "等待依赖服务启动..."
# 等待Redis可用
/wait-for-it.sh redis:6379 --timeout=30 --strict -- echo "Redis已启动"
# 等待PostgreSQL可用(如果使用)
# /wait-for-it.sh postgres:5432 --timeout=30 --strict -- echo "PostgreSQL已启动"
echo "运行数据库迁移(如果需要)"
# flask db upgrade
echo "启动开发服务器"
exec "$@"
bash
#!/bin/bash
# wait-for-it.sh
# Adapted from https://github.com/vishnubob/wait-for-it
WAITFORIT_cmdname=${0##*/}
echoerr() { if [[ $WAITFORIT_QUIET -ne 1 ]]; then echo "$@" 1>&2; fi }
usage()
{
cat << USAGE >&2
Usage:
$WAITFORIT_cmdname host:port [-s] [-t timeout] [-- command args]
-h HOST | --host=HOST Host or IP under test
-p PORT | --port=PORT TCP port under test
Alternatively, you specify the host and port as host:port
-s | --strict Only execute subcommand if the test succeeds
-q | --quiet Don't output any status messages
-t TIMEOUT | --timeout=TIMEOUT
Timeout in seconds, zero for no timeout
-- COMMAND ARGS Execute command with args after the test finishes
USAGE
exit 1
}
wait_for()
{
if [[ $WAITFORIT_TIMEOUT -gt 0 ]]; then
echoerr "$WAITFORIT_cmdname: waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
else
echoerr "$WAITFORIT_cmdname: waiting for $WAITFORIT_HOST:$WAITFORIT_PORT without a timeout"
fi
WAITFORIT_start_ts=$(date +%s)
while :
do
if [[ $WAITFORIT_ISBUSY -eq 1 ]]; then
nc -z $WAITFORIT_HOST $WAITFORIT_PORT
WAITFORIT_result=$?
else
(echo -n > /dev/tcp/$WAITFORIT_HOST/$WAITFORIT_PORT) >/dev/null 2>&1
WAITFORIT_result=$?
fi
if [[ $WAITFORIT_result -eq 0 ]]; then
WAITFORIT_end_ts=$(date +%s)
echoerr "$WAITFORIT_cmdname: $WAITFORIT_HOST:$WAITFORIT_PORT is available after $((WAITFORIT_end_ts - WAITFORIT_start_ts)) seconds"
break
fi
sleep 1
done
return $WAITFORIT_result
}
wait_for_wrapper()
{
# To support Alpine Linux, this script uses either nc or /dev/tcp
if command -v nc >/dev/null 2>&1; then
WAITFORIT_ISBUSY=1
elif [[ $WAITFORIT_TIMEOUT -gt 0 ]] && command -v timeout >/dev/null 2>&1; then
WAITFORIT_ISBUSY=0
else
echoerr "Error: this script requires either the nc or the timeout command"
exit 1
fi
WAITFORIT_HOST=$1
WAITFORIT_PORT=$2
WAITFORIT_TIMEOUT=$3
wait_for
WAITFORIT_RESULT=$?
if [[ $WAITFORIT_RESULT -ne 0 ]]; then
echoerr "$WAITFORIT_cmdname: timeout occurred after waiting $WAITFORIT_TIMEOUT seconds for $WAITFORIT_HOST:$WAITFORIT_PORT"
fi
return $WAITFORIT_RESULT
}
# Parse arguments
WAITFORIT_HOST=""
WAITFORIT_PORT=""
WAITFORIT_TIMEOUT=15
WAITFORIT_QUIET=0
WAITFORIT_STRICT=0
WAITFORIT_CMD=()
while [[ $# -gt 0 ]]
do
case "$1" in
*:* )
WAITFORIT_hostport=(${1//:/ })
WAITFORIT_HOST=${WAITFORIT_hostport[0]}
WAITFORIT_PORT=${WAITFORIT_hostport[1]}
shift 1
;;
-q | --quiet)
WAITFORIT_QUIET=1
shift 1
;;
-s | --strict)
WAITFORIT_STRICT=1
shift 1
;;
-h)
WAITFORIT_HOST="$2"
if [[ $WAITFORIT_HOST == "" ]]; then break; fi
shift 2
;;
--host=*)
WAITFORIT_HOST="${1#*=}"
shift 1
;;
-p)
WAITFORIT_PORT="$2"
if [[ $WAITFORIT_PORT == "" ]]; then break; fi
shift 2
;;
--port=*)
WAITFORIT_PORT="${1#*=}"
shift 1
;;
-t)
WAITFORIT_TIMEOUT="$2"
if [[ $WAITFORIT_TIMEOUT == "" ]]; then break; fi
shift 2
;;
--timeout=*)
WAITFORIT_TIMEOUT="${1#*=}"
shift 1
;;
--)
shift
WAITFORIT_CMD=("$@")
break
;;
--help)
usage
;;
*)
echoerr "Unknown argument: $1"
usage
;;
esac
done
if [[ "$WAITFORIT_HOST" == "" || "$WAITFORIT_PORT" == "" ]]; then
echoerr "Error: you need to provide a host and port to test"
usage
fi
WAITFORIT_TIMEOUT=${WAITFORIT_TIMEOUT:-15}
WAITFORIT_STRICT=${WAITFORIT_STRICT:-0}
wait_for_wrapper $WAITFORIT_HOST $WAITFORIT_PORT $WAITFORIT_TIMEOUT
WAITFORIT_RESULT=$?
if [[ ${#WAITFORIT_CMD[@]} -gt 0 ]]; then
if [[ $WAITFORIT_RESULT -ne 0 && $WAITFORIT_STRICT -eq 1 ]]; then
echoerr "$WAITFORIT_cmdname: strict mode, refusing to execute subprocess"
exit $WAITFORIT_RESULT
fi
exec "${WAITFORIT_CMD[@]}"
else
exit $WAITFORIT_RESULT
fi
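If you'd rather not vendor a wait script at all, recent Docker Compose can express the same ordering with a healthcheck plus a `depends_on` condition. A sketch of what that might look like in docker-compose.yml:

```yaml
services:
  redis:
    image: redis:7-alpine
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 5
  web:
    depends_on:
      redis:
        condition: service_healthy
```

With this in place, Compose delays starting `web` until the Redis healthcheck passes, which covers the common "dependency not ready yet" race without any extra scripting.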
6. Production Optimization
6.1 Production Docker Compose Configuration
Create a dedicated production configuration:
yaml
# docker-compose.prod.yml
version: '3.8'

services:
  web:
    build:
      context: .
      dockerfile: Dockerfile.multistage
    image: myapp:${TAG:-latest}
    restart: unless-stopped
    environment:
      - FLASK_ENV=production
      # Include the password so the app can reach the password-protected Redis
      - REDIS_URL=redis://:${REDIS_PASSWORD}@redis:6379/0
      - SECRET_KEY=${SECRET_KEY}
      - ENVIRONMENT=production
    expose:
      - "5000"
    depends_on:
      - redis
    networks:
      - app-network
    # Production resource limits
    deploy:
      resources:
        limits:
          memory: 512M
          cpus: '1.0'
        reservations:
          memory: 256M
          cpus: '0.5'
    # Health check configuration
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:5000/health"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s
    # Logging configuration
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    command: redis-server --appendonly yes --requirepass ${REDIS_PASSWORD}
    volumes:
      - redis_data:/data
    networks:
      - app-network
    # Redis resource limits
    deploy:
      resources:
        limits:
          memory: 256M
        reservations:
          memory: 128M

  # Nginx reverse proxy
  nginx:
    image: nginx:1.23-alpine
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
      - ./nginx/conf.d:/etc/nginx/conf.d:ro
      - ssl_certs:/etc/ssl/certs
    depends_on:
      - web
    networks:
      - app-network

volumes:
  redis_data:
  ssl_certs:

networks:
  app-network:
    driver: bridge
6.2 Nginx Configuration
Create the Nginx reverse-proxy configuration:
# nginx/nginx.conf
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;
events {
worker_connections 1024;
use epoll;
multi_accept on;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" '
'rt=$request_time uct="$upstream_connect_time" '
'uht="$upstream_header_time" urt="$upstream_response_time"';
access_log /var/log/nginx/access.log main;
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# Gzip compression
gzip on;
gzip_vary on;
gzip_min_length 1024;
gzip_proxied any;
gzip_comp_level 6;
gzip_types
application/atom+xml
application/javascript
application/json
application/ld+json
application/manifest+json
application/rss+xml
application/vnd.geo+json
application/vnd.ms-fontobject
application/x-font-ttf
application/x-web-app-manifest+json
application/xhtml+xml
application/xml
font/opentype
image/bmp
image/svg+xml
image/x-icon
text/cache-manifest
text/css
text/plain
text/vcard
text/vnd.rim.location.xloc
text/vtt
text/x-component
text/x-cross-domain-policy;
include /etc/nginx/conf.d/*.conf;
}
nginx
# nginx/conf.d/app.conf
upstream app_servers {
server web:5000;
}
server {
listen 80;
server_name _;
# Security headers
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-XSS-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy "no-referrer-when-downgrade" always;
add_header Content-Security-Policy "default-src 'self' http: https: data: blob: 'unsafe-inline'" always;
# Static file caching
location ~* \.(jpg|jpeg|png|gif|ico|css|js)$ {
expires 1y;
add_header Cache-Control "public, immutable";
}
# Health check
location /health {
access_log off;
proxy_pass http://app_servers;
}
# API routes
location /api/ {
proxy_pass http://app_servers;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
proxy_connect_timeout 30s;
proxy_send_timeout 30s;
proxy_read_timeout 30s;
}
# Everything else
location / {
proxy_pass http://app_servers;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection 'upgrade';
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_cache_bypass $http_upgrade;
}
}
6.3 Environment Variable Configuration
Create the production environment variable file:
bash
# .env.production
# Application settings
SECRET_KEY=your-production-secret-key-change-this
FLASK_ENV=production
ENVIRONMENT=production

# Redis settings
REDIS_PASSWORD=your-secure-redis-password

# Database settings (if used)
POSTGRES_DB=app_production
POSTGRES_USER=app_user
POSTGRES_PASSWORD=your-secure-db-password

# Image tag
TAG=latest
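Rather than inventing secret values by hand, generate them with Python's standard-library secrets module:

```python
import secrets

# 64 hex characters = 32 bytes of entropy, plenty for SECRET_KEY
secret_key = secrets.token_hex(32)

# URL-safe token, convenient for passwords passed on a command line
redis_password = secrets.token_urlsafe(24)

print(f"SECRET_KEY={secret_key}")
print(f"REDIS_PASSWORD={redis_password}")
```

Paste the printed lines into .env.production, and keep that file out of version control.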
7. Continuous Integration and Deployment
7.1 GitHub Actions CI/CD Pipeline
Create an automated deployment pipeline:
yaml
# .github/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]

env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        # Quote the versions: unquoted 3.10 would be parsed as the number 3.1
        python-version: ["3.9", "3.10", "3.11"]
    steps:
      - uses: actions/checkout@v4
      - name: Set up Python ${{ matrix.python-version }}
        uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.python-version }}
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install -r requirements.txt
          pip install pytest pytest-cov
      - name: Run tests
        run: |
          pytest --cov=app tests/ --cov-report=xml
      - name: Upload coverage to Codecov
        uses: codecov/codecov-action@v3
        with:
          file: ./coverage.xml
          flags: unittests
          name: codecov-umbrella

  build-and-push:
    needs: test
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Log in to Container Registry
        uses: docker/login-action@v3
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Extract metadata
        id: meta
        uses: docker/metadata-action@v5
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=sha,prefix={{branch}}-
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=semver,pattern={{major}}.{{minor}}
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          context: .
          file: ./Dockerfile.multistage
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

  deploy:
    needs: build-and-push
    runs-on: ubuntu-latest
    if: github.ref == 'refs/heads/main'
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Deploy to production
        uses: appleboy/ssh-action@v1.0.0
        with:
          host: ${{ secrets.PRODUCTION_HOST }}
          username: ${{ secrets.PRODUCTION_USERNAME }}
          key: ${{ secrets.PRODUCTION_SSH_KEY }}
          script: |
            cd /opt/myapp
            docker-compose -f docker-compose.prod.yml pull
            docker-compose -f docker-compose.prod.yml up -d
            docker system prune -f
7.2 Security Scanning
Add security scans to the CI/CD pipeline:
yaml
# .github/workflows/security.yml
name: Security Scan

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main ]

jobs:
  security-scan:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v4
      - name: Run Trivy vulnerability scanner
        uses: aquasecurity/trivy-action@master
        with:
          scan-type: 'fs'
          scan-ref: '.'
          format: 'sarif'
          output: 'trivy-results.sarif'
      - name: Upload Trivy scan results to GitHub Security tab
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: 'trivy-results.sarif'
      - name: Run Hadolint for Dockerfile linting
        uses: hadolint/hadolint-action@v3.1.0
        with:
          # YAML forbids duplicate keys, so lint all Dockerfiles with one glob
          dockerfile: Dockerfile*
          recursive: true
      - name: Run Bandit for Python security issues
        run: |
          pip install bandit[sarif]
          bandit -r app/ -f sarif -o bandit-results.sarif
      - name: Upload Bandit results
        uses: github/codeql-action/upload-sarif@v3
        if: always()
        with:
          sarif_file: bandit-results.sarif
8. Monitoring and Logging
8.1 Application Monitoring
Add application performance monitoring:
python
# app/monitoring.py
import time
from functools import wraps
from prometheus_client import Counter, Histogram, generate_latest, REGISTRY
from flask import request, Response

# Metric definitions
REQUEST_COUNT = Counter(
    'http_requests_total',
    'Total HTTP Requests',
    ['method', 'endpoint', 'status']
)

REQUEST_DURATION = Histogram(
    'http_request_duration_seconds',
    'HTTP Request Duration',
    ['method', 'endpoint']
)

def monitor_requests(f):
    """Decorator that records request metrics."""
    @wraps(f)
    def decorated_function(*args, **kwargs):
        start_time = time.time()
        try:
            response = f(*args, **kwargs)
            status_code = response.status_code if hasattr(response, 'status_code') else 200
        except Exception:
            status_code = 500
            raise  # bare raise preserves the original traceback
        finally:
            duration = time.time() - start_time
            # Record the metrics
            REQUEST_COUNT.labels(
                method=request.method,
                endpoint=request.endpoint or 'unknown',
                status=status_code
            ).inc()
            REQUEST_DURATION.labels(
                method=request.method,
                endpoint=request.endpoint or 'unknown'
            ).observe(duration)
        return response
    return decorated_function

def setup_metrics(app):
    """Register the metrics endpoint."""
    @app.route('/metrics')
    def metrics():
        """Prometheus metrics endpoint."""
        return Response(generate_latest(REGISTRY), mimetype='text/plain')
8.2 Logging Configuration
Create a production logging configuration:
python
# app/logging_config.py
import logging
import sys
from logging.handlers import RotatingFileHandler
import json

class JSONFormatter(logging.Formatter):
    """Formatter that emits one JSON object per log record."""

    def format(self, record):
        log_entry = {
            'timestamp': self.formatTime(record),
            'level': record.levelname,
            'logger': record.name,
            'message': record.getMessage(),
            'module': record.module,
            'function': record.funcName,
            'line': record.lineno,
        }
        # Attach exception details
        if record.exc_info:
            log_entry['exception'] = self.formatException(record.exc_info)
        # Attach any extra fields
        if hasattr(record, 'props'):
            log_entry.update(record.props)
        return json.dumps(log_entry)

def setup_logging(app):
    """Configure logging for the application."""
    # Pick a level based on the environment
    if app.config.get('ENVIRONMENT') == 'production':
        log_level = logging.INFO
    else:
        log_level = logging.DEBUG

    # Remove any existing handlers
    for handler in logging.root.handlers[:]:
        logging.root.removeHandler(handler)

    # Pick a formatter
    if app.config.get('ENVIRONMENT') == 'production':
        formatter = JSONFormatter()
    else:
        formatter = logging.Formatter(
            '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
        )

    # Console handler
    console_handler = logging.StreamHandler(sys.stdout)
    console_handler.setLevel(log_level)
    console_handler.setFormatter(formatter)

    # File handler (production only)
    if app.config.get('ENVIRONMENT') == 'production':
        file_handler = RotatingFileHandler(
            '/var/log/app/app.log',
            maxBytes=10485760,  # 10 MB
            backupCount=5
        )
        file_handler.setLevel(logging.INFO)
        file_handler.setFormatter(formatter)
        logging.root.addHandler(file_handler)

    # Attach the console handler to the root logger
    logging.root.addHandler(console_handler)
    logging.root.setLevel(log_level)

    # Quiet down third-party loggers
    logging.getLogger('werkzeug').setLevel(logging.WARNING)
    logging.getLogger('gunicorn').setLevel(logging.INFO)
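To see the JSON formatter in action on a single record (a trimmed-down JSONFormatter is repeated here so the snippet runs standalone):

```python
import json
import logging

class JSONFormatter(logging.Formatter):
    """Emit one JSON object per record (trimmed to a few fields for the demo)."""
    def format(self, record):
        return json.dumps({
            'timestamp': self.formatTime(record),
            'level': record.levelname,
            'logger': record.name,
            'message': record.getMessage(),
        })

# Build a record by hand, the way a logger would
record = logging.LogRecord(
    name='app', level=logging.WARNING, pathname=__file__,
    lineno=42, msg='disk usage at %d%%', args=(91,), exc_info=None
)

line = JSONFormatter().format(record)
parsed = json.loads(line)
assert parsed['level'] == 'WARNING'
assert parsed['message'] == 'disk usage at 91%'
```

One JSON object per line is exactly the shape log shippers such as Fluentd or Loki expect, which is why the production path uses this formatter.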
9. A Complete Deployment Example
9.1 Deployment Script
Create a complete deployment script:
bash
#!/bin/bash
# deploy.sh
set -e

# Colors
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Logging helpers
log() {
    echo -e "${GREEN}[$(date +'%Y-%m-%d %H:%M:%S')] $1${NC}"
}

warn() {
    echo -e "${YELLOW}[$(date +'%Y-%m-%d %H:%M:%S')] WARNING: $1${NC}"
}

error() {
    echo -e "${RED}[$(date +'%Y-%m-%d %H:%M:%S')] ERROR: $1${NC}"
    exit 1
}

# Check dependencies
check_dependencies() {
    log "Checking system dependencies..."
    if ! command -v docker &> /dev/null; then
        error "Docker is not installed"
    fi
    if ! command -v docker-compose &> /dev/null; then
        error "Docker Compose is not installed"
    fi
    log "✓ All dependencies are installed"
}

# Load environment variables
load_env() {
    local env_file="${1:-.env.production}"
    if [[ -f "$env_file" ]]; then
        log "Loading environment variables from $env_file"
        set -a
        source "$env_file"
        set +a
    else
        warn "Environment file $env_file does not exist"
    fi
}

# Back up the current deployment
backup_current() {
    if [[ -d "backups" ]]; then
        local backup_dir="backups/$(date +'%Y%m%d_%H%M%S')"
        log "Creating backup in $backup_dir"
        mkdir -p "$backup_dir"
        # Back up the important data volumes
        if docker volume ls | grep -q "myapp_redis_data"; then
            docker run --rm \
                -v myapp_redis_data:/data \
                -v "$(pwd)/$backup_dir":/backup \
                alpine tar czf /backup/redis_data.tar.gz -C /data ./
        fi
    fi
}

# Pull the latest images
pull_images() {
    log "Pulling the latest Docker images..."
    docker-compose -f docker-compose.prod.yml pull
}

# Run database migrations
run_migrations() {
    log "Running database migrations..."
    # If there are migrations, run them here
    # docker-compose -f docker-compose.prod.yml run --rm web flask db upgrade
}

# Deploy the application
deploy_app() {
    log "Deploying the application..."
    docker-compose -f docker-compose.prod.yml up -d
    # Wait for the application to start
    log "Waiting for the application to start..."
    sleep 30
    # Check application health
    if curl -f http://localhost/health &> /dev/null; then
        log "✓ Application health check passed"
    else
        error "Application health check failed"
    fi
}

# Remove old images
cleanup() {
    log "Pruning old Docker images..."
    docker image prune -f
}

# Show deployment status
show_status() {
    log "Deployment status:"
    echo ""
    docker-compose -f docker-compose.prod.yml ps
    echo ""
    docker-compose -f docker-compose.prod.yml logs --tail=10 web
}

# Main deployment flow
main() {
    local env_file="${1:-.env.production}"
    log "Starting deployment..."
    check_dependencies
    load_env "$env_file"
    backup_current
    pull_images
    run_migrations
    deploy_app
    cleanup
    show_status
    log "🎉 Deployment complete!"
}

# Run the main function
main "$@"
9.2 Health Check Script
Create a detailed health check script:
bash
#!/bin/bash
# health-check.sh
set -e

# Health check endpoint
HEALTH_URL="http://localhost/health"

# Check application health
check_app_health() {
    echo "Checking application health..."
    local response
    response=$(curl -s -f "$HEALTH_URL" || echo "{}")
    # Feed the response via stdin rather than interpolating it into the
    # Python source, which would break on quotes in the JSON
    if echo "$response" | python3 -c "
import json, sys
try:
    data = json.load(sys.stdin)
    status = data.get('status', 'unknown')
    redis_status = data.get('redis', 'unknown')
    if status == 'healthy' and redis_status == 'connected':
        print('SUCCESS')
        sys.exit(0)
    else:
        print(f'FAILED: status={status}, redis={redis_status}')
        sys.exit(1)
except Exception as e:
    print(f'ERROR: {e}')
    sys.exit(2)
"; then
        echo "✓ Application is healthy"
        return 0
    else
        echo "✗ Application is unhealthy"
        return 1
    fi
}

# Check container status
check_container_health() {
    echo "Checking container status..."
    local containers
    containers=$(docker-compose -f docker-compose.prod.yml ps -q)
    for container in $containers; do
        local status
        status=$(docker inspect --format='{{.State.Status}}' "$container")
        if [[ "$status" == "running" ]]; then
            echo "✓ Container $container is running"
        else
            echo "✗ Container $container is in state: $status"
            return 1
        fi
    done
    return 0
}

# Check resource usage
check_resources() {
    echo "Checking resource usage..."
    # Memory usage
    local memory_usage
    memory_usage=$(docker stats --no-stream --format "table {{.Container}}\t{{.MemUsage}}" | grep -v "CONTAINER" || true)
    echo "Memory usage:"
    echo "$memory_usage"
    # Disk usage
    local disk_usage
    disk_usage=$(df -h / | awk 'NR==2 {print $5 " used (" $3 "/" $2 ")"}')
    echo "Disk usage: $disk_usage"
}

# Main health check flow
main() {
    echo "Starting health checks..."
    if check_app_health && check_container_health; then
        check_resources
        echo "✅ All health checks passed"
        return 0
    else
        echo "❌ Health checks failed"
        return 1
    fi
}

# Run the main function
main "$@"
10. Conclusion
This article has walked through the complete Docker workflow for a Python application, from development to production. Let's recap the key points:
10.1 Key Takeaways
- Environment consistency: Docker gives us identical development, testing, and production environments
- Efficient development: Docker Compose makes spinning up a multi-service development environment trivial
- Production readiness: multi-stage builds, resource limits, and health checks harden the production deployment
- Automated deployment: a CI/CD pipeline automates testing, building, and shipping
- Operations: logging, performance monitoring, and health checks are built in
10.2 Best Practices Recap
- Image optimization: use multi-stage builds to shrink images
- Security hardening: run containers as a non-root user and scan regularly for vulnerabilities
- Configuration management: drive per-environment configuration through environment variables
- Resource management: set sensible resource limits so one container cannot starve the whole host
- Health checks: implement thorough health checks to keep the application available
10.3 Next Steps
As the application grows, consider these further improvements:
- Container orchestration: move to Kubernetes for more sophisticated deployment patterns
- Service mesh: use a service mesh such as Istio to manage inter-service communication
- Observability: integrate distributed tracing and richer monitoring
- Security scanning: add more comprehensive scanning to the CI/CD pipeline
- Multi-cloud deployment: support deploying across multiple cloud platforms
Dockerization is more than a technology choice; it is a shift in how we think about building portable, scalable, maintainable application architectures. By adopting the practices in this article, you can build production-ready Python applications that give your business a solid technical foundation.
Remember, containerization is a journey of continuous improvement. As new tools and techniques emerge, keep refining your Docker strategy so that deployment stays simple and reliable.