Building a Containerised Backend with Docker Compose

Overview

In this exercise, I built a fully containerised backend system using Docker Compose.

The system consists of a Python API and a MySQL database, running in separate containers but working together as a single application.

The goal was to understand how services are connected, started, and verified in a containerised environment.


Step 1: Build a Simple Python API

I first created a minimal Python API that supports:

  • Adding a product (POST)

  • Fetching all products (GET)

Example API routes:

@app.route("/products", methods=["POST"])
def add_product():
    ...

@app.route("/products", methods=["GET"])
def get_products():
    ...

This API is responsible only for handling HTTP requests and database operations.
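
For reference, here is a minimal runnable sketch of such an API. It assumes Flask, the mysql-connector-python driver, and environment-variable-based connection settings (DB_HOST, DB_USER, DB_PASSWORD, DB_NAME); these are illustrative choices, not necessarily the exact code used in this exercise.

import os

import mysql.connector
from flask import Flask, request, jsonify

app = Flask(__name__)

def get_db_connection():
    # Connection settings are read from environment variables set in docker-compose.yml
    # (DB_HOST, DB_USER, DB_PASSWORD, DB_NAME are assumed names, with local defaults).
    return mysql.connector.connect(
        host=os.environ.get("DB_HOST", "db"),
        user=os.environ.get("DB_USER", "root"),
        password=os.environ.get("DB_PASSWORD", "example"),
        database=os.environ.get("DB_NAME", "inventory"),
    )

@app.route("/products", methods=["POST"])
def add_product():
    data = request.get_json()
    conn = get_db_connection()
    cursor = conn.cursor()
    cursor.execute(
        "INSERT INTO products (name, quantity) VALUES (%s, %s)",
        (data["name"], data["quantity"]),
    )
    conn.commit()
    cursor.close()
    conn.close()
    return jsonify({"status": "created"}), 201

@app.route("/products", methods=["GET"])
def get_products():
    conn = get_db_connection()
    cursor = conn.cursor(dictionary=True)  # return rows as dicts for easy JSON output
    cursor.execute("SELECT id, name, quantity FROM products")
    rows = cursor.fetchall()
    cursor.close()
    conn.close()
    return jsonify(rows)

if __name__ == "__main__":
    # Bind to 0.0.0.0 so the API is reachable from outside its container.
    app.run(host="0.0.0.0", port=5000)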


Step 2: Containerise the API with Dockerfile

Next, I containerised the API using a Dockerfile.

FROM python:3.10-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
CMD ["python", "app.py"]

This defines:

  • The runtime environment

  • The dependencies

  • How the API starts inside a container
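
Copying requirements.txt before app.py lets Docker cache the dependency-install layer between builds. For an API like this one, the file would only need the web framework and the database driver; the exact package names below are an assumption:

flask
mysql-connector-python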


Step 3: Prepare Database Initialisation

I created an SQL script that automatically runs when the database container starts for the first time.

CREATE TABLE products (
  id INT AUTO_INCREMENT PRIMARY KEY,
  name VARCHAR(100),
  quantity INT
);

This ensures the database schema is ready without manual setup.
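
The script is picked up automatically because the official mysql image executes any .sql file placed in /docker-entrypoint-initdb.d when the data directory is first initialised. A sketch of the mount, assuming the script is saved as init.sql and the mysql:8.0 image is used:

services:
  db:
    image: mysql:8.0
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql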


Step 4: Orchestrate Services with Docker Compose

Using Docker Compose, I defined and connected the API and database services.

Docker Compose automatically:

  • Creates a shared network

  • Allows services to communicate using service names

  • Manages startup order
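
A minimal docker-compose.yml for this setup might look like the sketch below; the service names (api, db), ports, and credentials are illustrative assumptions:

services:
  api:
    build: .
    ports:
      - "5000:5000"
    environment:
      DB_HOST: db        # the database is reachable by its service name
      DB_USER: root
      DB_PASSWORD: example
      DB_NAME: inventory
    depends_on:
      - db

  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example
      MYSQL_DATABASE: inventory
    volumes:
      - ./init.sql:/docker-entrypoint-initdb.d/init.sql

With such a file in place, the whole stack starts with a single command:

docker compose up --build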


Step 5: Handle Startup Timing Issues

During startup, the API initially failed because the database was not ready.

I solved this by adding a retry mechanism in the API so it waits until the database becomes available.

import time
import mysql.connector

def get_db_connection():
    # Keep retrying until the MySQL container is ready to accept connections.
    while True:
        try:
            return mysql.connector.connect(...)  # connection parameters omitted
        except mysql.connector.Error:
            time.sleep(3)

This made the system stable and resilient during startup.
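
A Compose-level complement to the application-side retry (an option worth noting, not what was used in this exercise) is to declare a healthcheck on the database and have the API wait for it; a sketch assuming the service names from the Compose file above:

services:
  db:
    image: mysql:8.0
    healthcheck:
      test: ["CMD", "mysqladmin", "ping", "-h", "localhost"]
      interval: 5s
      timeout: 3s
      retries: 10

  api:
    build: .
    depends_on:
      db:
        condition: service_healthy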


Step 6: Verify the System

Finally, I verified the system by sending HTTP requests directly from the command line.

curl -X POST http://localhost:5000/products \
  -H "Content-Type: application/json" \
  -d '{"name":"Apple","quantity":10}'

curl http://localhost:5000/products

Both data insertion and retrieval worked as expected.
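
The stored rows can also be inspected directly inside the database container; the credentials and database name below come from the Compose sketch above and are assumptions:

docker compose exec db mysql -uroot -pexample -e "SELECT * FROM inventory.products;"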


Final Outcome

I successfully built and ran a fully containerised backend system, where a Python API and a MySQL database communicate through Docker Compose, and verified that the system works end-to-end.
