Enterprise Microservice Infrastructure | One-Click Deployment Script for 9 Middleware Services with Docker Compose and a Local Private Registry

Preface

In local development and test environments, quickly standing up a full microservice infrastructure stack from a local private registry (127.0.0.1) is a frequent need. This article provides a one-click automated deployment script that integrates 9 production-grade middleware services, works with a local HTTP private registry, and automatically configures permissions, initializes databases, and fixes compatibility issues: zero manual steps, ready out of the box.

Unified password: shhy123#
Deployment directory: /home/appInfrastructure
Local private registry: 127.0.0.1:8081 (the script's REGISTRY variable uses 100.31.200.100:8081 as an example; point it at your own registry address)

I. Deployed Services

  1. Redis 6.2.0 (cache)
  2. RabbitMQ 3.13 (message queue)
  3. MySQL 8.4.0 (relational database)
  4. Elasticsearch 8.17.2 (search engine)
  5. Kibana 8.17.2 (visualization)
  6. MongoDB 8.0.3 (document database)
  7. InfluxDB 2.7.10 (time-series database)
  8. EMQX 5.8.8 (MQTT broker)
  9. Nacos 2.2.3 (configuration / service registry)

II. Core Features

  • ✅ Automatically configures Docker to trust the local HTTP private registry
  • ✅ Automatically creates standardized directories and writes configuration files
  • ✅ Automatically fixes the MySQL 8 authentication-plugin compatibility issue
  • ✅ Automatically initializes the Nacos database and configures authentication
  • ✅ Automatically sets the Kibana password and the ES security configuration
  • ✅ Health checks, resource limits, and restart-on-boot for every service
  • ✅ Normalized directory permissions, eliminating container mount errors
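The first bullet can be spot-checked after a run. A minimal sketch, assuming the daemon.json path and registry address used by the script (adjust both if yours differ):

```shell
#!/bin/bash
# Spot-check that the insecure-registry entry was written to daemon.json.
DAEMON_JSON="${DAEMON_JSON:-/etc/docker/daemon.json}"
REGISTRY="${REGISTRY:-100.31.200.100:8081}"

if [ -f "$DAEMON_JSON" ] && grep -q "\"${REGISTRY}\"" "$DAEMON_JSON"; then
  echo "found ${REGISTRY} in ${DAEMON_JSON}"
else
  echo "missing ${REGISTRY} in ${DAEMON_JSON}"
fi
# For the live daemon's view, `docker info` prints an "Insecure Registries:" section.
```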

III. Complete One-Click Deployment Script (Local Registry Edition)

bash
#!/bin/bash
set -e
echo "============================================="
echo "   Infrastructure One-Click Deployment Script   "
echo "============================================="

# Core configuration
BASE_DIR="/home/appInfrastructure"
REGISTRY="100.31.200.100:8081"
IMAGE_PREFIX="${REGISTRY}/public-docker"

# ===================== Auto-configure Docker to trust the HTTP registry =====================
echo "0. Configuring Docker to trust the private HTTP registry..."
sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json > /dev/null <<EOF
{
  "insecure-registries": ["${REGISTRY}"]
}
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker
echo "✅ Docker HTTP registry trust configured; waiting for the service to restart..."
sleep 5
# ==========================================================================

# 1. Initialize the working directory
echo "1. Initializing the working directory..."
sudo mkdir -p ${BASE_DIR}
cd ${BASE_DIR}

# ===================== Step 1: create all service directories first (prevents mount errors) =====================
echo "2. Creating the service directory structure..."
# Redis
sudo mkdir -p ./redis/data ./redis/logs
# RabbitMQ
sudo mkdir -p ./rabbitmq/data ./rabbitmq/logs
# MySQL
sudo mkdir -p ./mysql/conf ./mysql/data ./mysql/logs
# MongoDB
sudo mkdir -p ./mongodb/data ./mongodb/logs
# ES & Kibana
sudo mkdir -p ./es-kibana/es/master/{conf,data,logs,plugins}
sudo mkdir -p ./es-kibana/kibana
# InfluxDB
sudo mkdir -p ./influxdb/data ./influxdb/config ./influxdb/logs
# EMQX
sudo mkdir -p ./emqx/data ./emqx/log
# Nacos
sudo mkdir -p ./nacos/{conf,data,logs}
# ====================================================================================

# 2. Write configuration files (directories already exist, so no errors)
echo "3. Writing service configuration files..."

# Redis configuration
cat > ./redis/redis.conf <<'EOF'
bind 0.0.0.0
protected-mode no
port 6379
tcp-backlog 511
timeout 0
tcp-keepalive 300
daemonize no
pidfile /var/run/redis_6379.pid
loglevel notice
logfile ""
databases 48
always-show-logo no
set-proc-title yes
proc-title-template "{title} {listen-addr} {server-mode}"
save 3600 1
save 300 100
save 60 10000
stop-writes-on-bgsave-error yes
rdbcompression yes
rdbchecksum yes
dbfilename dump.rdb
rdb-del-sync-files no
dir ./
replica-serve-stale-data yes
replica-read-only yes
repl-diskless-sync no
repl-diskless-sync-delay 5
repl-diskless-load disabled
repl-disable-tcp-nodelay no
replica-priority 100
acllog-max-len 128
requirepass shhy123#
maxclients 10000
lazyfree-lazy-eviction no
lazyfree-lazy-expire no
lazyfree-lazy-server-del no
replica-lazy-flush no
lazyfree-lazy-user-del no
lazyfree-lazy-user-flush no
oom-score-adj no
oom-score-adj-values 0 200 800
disable-thp yes
appendonly yes
appendfilename "append.aof"
appendfsync everysec
no-appendfsync-on-rewrite no
auto-aof-rewrite-percentage 100
auto-aof-rewrite-min-size 64mb
aof-load-truncated yes
aof-use-rdb-preamble yes
lua-time-limit 5000
slowlog-log-slower-than 10000
slowlog-max-len 128
latency-monitor-threshold 0
notify-keyspace-events ""
hash-max-ziplist-entries 512
hash-max-ziplist-value 64
list-max-ziplist-size -2
list-compress-depth 0
set-max-intset-entries 512
zset-max-ziplist-entries 128
zset-max-ziplist-value 64
hll-sparse-max-bytes 3000
stream-node-max-bytes 4096
stream-node-max-entries 100
activerehashing yes
client-output-buffer-limit normal 0 0 0
client-output-buffer-limit replica 256mb 64mb 60
client-output-buffer-limit pubsub 32mb 8mb 60
hz 10
dynamic-hz yes
aof-rewrite-incremental-fsync yes
rdb-save-incremental-fsync yes
jemalloc-bg-thread yes
EOF

# MySQL configuration
cat > ./mysql/conf/my.cnf <<'EOF'
[mysqld]
user=mysql
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
pid-file=/var/lib/mysql/mysqld.pid
character-set-server=utf8mb4
collation-server=utf8mb4_general_ci
default-time-zone=+8:00
lower_case_table_names=1
max_connections=800
max_user_connections=600
wait_timeout=300
interactive_timeout=300
mysql_native_password=ON
local_infile = 1

[client]
socket=/var/lib/mysql/mysql.sock
EOF

# Elasticsearch configuration
cat > ./es-kibana/es/master/conf/elasticsearch.yml <<'EOF'
cluster.name: "elasticsearch"
cluster.max_shards_per_node: 12000
network.host: 0.0.0.0
xpack.security.enabled: true
http.cors.enabled: true
http.cors.allow-origin: "*" 
http.cors.allow-headers: "Authorization, Content-Type, X-Requested-With"
EOF

# Kibana configuration
cat > ./es-kibana/kibana/kibana.yml <<'EOF'
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://es:9200"]
i18n.locale: "zh-CN"
elasticsearch.username: kibana_system
elasticsearch.password: shhy123#
EOF

# Nacos configuration
cat > ./nacos/conf/application.properties <<'EOF'
# Basic standalone configuration
nacos.standalone=true
server.port=8848

# MySQL connection settings (required when the database is reached over the network)
spring.datasource.platform=mysql
db.num=1
db.url.0=jdbc:mysql://mysql:3306/nacos_config?characterEncoding=utf8&useSSL=false&serverTimezone=UTC
db.user.0=root
db.password.0=shhy123#

# ========== Core: enable authentication (for Nacos 2.2.3; login works out of the box) ==========
# 1. Enable authentication (the main switch)
nacos.core.auth.enabled=true
# 2. Enable the default auth plugin (required on 2.2.3)
nacos.core.auth.plugin.enabled=true
# 3. Auth secret key (must be at least 32 characters; 64 is best)
nacos.core.auth.plugin.nacos.token.secret.key=SecretKey012345678901234567890123456789012345678901234567890123456789
# 4. Server identity (required; avoids 403 / identity-check failures)
nacos.core.auth.server.identity.key=custom-auth-key
nacos.core.auth.server.identity.value=custom-auth-value
# 5. Client identity (matches the server identity; optional but recommended)
nacos.core.auth.identity.key=custom-auth-key
nacos.core.auth.identity.value=custom-auth-value
# 6. Disable the user-agent auth whitelist (required in production; forces login)
nacos.core.auth.enable.userAgentAuthWhite=false
# 7. Belt and braces: reject empty-token access
nacos.core.auth.plugin.nacos.token.empty.access=false
EOF

# 3. Generate docker-compose.yml (your configuration, private-registry prefix preserved)
echo "4. Generating the Docker Compose file..."
cat > docker-compose.yml <<EOF
version: "3.8"
networks:
  internal-net:
    driver: bridge

services:
  redis:
    image: ${IMAGE_PREFIX}/redis:6.2.0
    container_name: redis6
    restart: always
    ports:
      - "6379:6379"
    volumes:
      - ./redis/redis.conf:/usr/local/etc/redis/redis.conf:ro
      - ./redis/data:/data:rw
      - ./redis/logs:/var/log/redis:rw
    environment:
      - TZ=Asia/Shanghai
    command: ["redis-server", "/usr/local/etc/redis/redis.conf"]
    healthcheck:
      test: ["CMD", "redis-cli", "ping"]
      interval: 5s
      timeout: 3s
      retries: 3
    networks:
      - internal-net

  rabbitmq:
    image: ${IMAGE_PREFIX}/rabbitmq:3.13-management
    container_name: rabbitmq
    restart: always
    hostname: rabbitmq
    environment:
      - RABBITMQ_DEFAULT_VHOST=my_vhost
      - RABBITMQ_DEFAULT_USER=admin
      - RABBITMQ_DEFAULT_PASS=shhy123#
    volumes:
      - ./rabbitmq/data:/var/lib/rabbitmq:rw
      - ./rabbitmq/logs:/var/log/rabbitmq:rw
    ports:
      - "4369:4369"
      - "5672:5672"
      - "15672:15672"
      - "25672:25672"
    cap_add:
      - CAP_SYS_NICE
    networks:
      - internal-net
    healthcheck:
      test: ["CMD", "rabbitmq-diagnostics", "ping"]
      interval: 10s
      timeout: 5s
      retries: 3

  mysql:
    image: ${IMAGE_PREFIX}/mysql:8.4.0
    container_name: mysql840
    restart: always
    hostname: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=shhy123#
      - TZ=Asia/Shanghai
    volumes:
      - ./mysql/conf/my.cnf:/etc/my.cnf:ro
      - ./mysql/data:/var/lib/mysql:rw
      - ./mysql/logs:/var/log/mysql:rw
    command:
      - --character-set-server=utf8mb4
      - --collation-server=utf8mb4_general_ci
      - --explicit_defaults_for_timestamp=true
      - --default-time-zone=+8:00
      - --lower_case_table_names=1
    ports:
      - "3306:3306"
    cap_add:
      - CAP_SYS_NICE
    networks:
      - internal-net
    healthcheck:
      test: ["CMD-SHELL", "mysqladmin ping -h localhost -uroot -p\$\$MYSQL_ROOT_PASSWORD"]
      interval: 10s
      timeout: 5s
      retries: 3

  es:
    image: ${IMAGE_PREFIX}/elasticsearch:8.17.2
    container_name: es
    restart: always
    hostname: es
    environment:
      - "ES_JAVA_OPTS=-Xms2048m -Xmx2048m"
      - "discovery.type=single-node"
      - "cluster.name=elasticsearch-spc"
      - "ELASTIC_PASSWORD=shhy123#"
      - "TAKE_FILE_OWNERSHIP=true"
      - TZ=Asia/Shanghai
    volumes:
      - ./es-kibana/es/master/conf/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml:ro
      - ./es-kibana/es/master/data:/usr/share/elasticsearch/data:rw
      - ./es-kibana/es/master/logs:/usr/share/elasticsearch/logs:rw
      - ./es-kibana/es/master/plugins:/usr/share/elasticsearch/plugins:rw
    ports:
      - "9200:9200"
      - "9300:9300"
    networks:
      - internal-net
    healthcheck:
      test: ["CMD-SHELL", "curl -s -u elastic:\$\$ELASTIC_PASSWORD http://localhost:9200/_cluster/health"]
      interval: 15s
      timeout: 10s
      retries: 3

  kibana:
    image: ${IMAGE_PREFIX}/kibana:8.17.2
    container_name: kibana
    restart: always
    hostname: kibana
    environment:
      - TZ=Asia/Shanghai
      - ELASTICSEARCH_HOSTS=http://es:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=shhy123#
    volumes:
      - ./es-kibana/kibana/kibana.yml:/usr/share/kibana/config/kibana.yml:ro
    ports:
      - "5601:5601"
    networks:
      - internal-net
    depends_on:
      - es
    healthcheck:
      test: ["CMD", "curl", "-s", "http://localhost:5601/api/status"]
      interval: 15s
      timeout: 10s
      retries: 3

  mongodb:
    image: ${IMAGE_PREFIX}/mongo:8.0.3
    container_name: mongodb
    restart: always
    hostname: mongodb
    environment:
      - TZ=Asia/Shanghai
      - MONGO_INITDB_ROOT_USERNAME=admin
      - MONGO_INITDB_ROOT_PASSWORD=shhy123#
    volumes:
      - ./mongodb/data:/data/db:rw
      - ./mongodb/logs:/var/log/mongodb:rw
    ports:
      - "27017:27017"
    networks:
      - internal-net
    healthcheck:
      test: ["CMD", "mongosh", "--eval", "db.adminCommand('ping')", "-u", "admin", "-p", "shhy123#", "--authenticationDatabase", "admin"]
      interval: 10s
      timeout: 5s
      retries: 3

  influxdb:
    image: ${IMAGE_PREFIX}/influxdb:2.7.10
    container_name: influxdb
    restart: always
    hostname: influxdb
    environment:
      - TZ=Asia/Shanghai
      - DOCKER_INFLUXDB_INIT_MODE=setup
      - DOCKER_INFLUXDB_INIT_USERNAME=admin
      - DOCKER_INFLUXDB_INIT_PASSWORD=shhy123#
      - DOCKER_INFLUXDB_INIT_ORG=my_org
      - DOCKER_INFLUXDB_INIT_BUCKET=default_bucket
      - DOCKER_INFLUXDB_INIT_RETENTION=30d
      - DOCKER_INFLUXDB_INIT_ADMIN_TOKEN=my_secret_token_123
    volumes:
      - ./influxdb/data:/var/lib/influxdb2:rw
      - ./influxdb/config:/etc/influxdb2:rw
      - ./influxdb/logs:/var/log/influxdb2:rw
    ports:
      - "8086:8086"
    networks:
      - internal-net
    healthcheck:
      test: ["CMD", "curl", "-s", "-o", "/dev/null", "http://localhost:8086/health"]
      interval: 10s
      timeout: 5s
      retries: 3

  emqx:
    image: ${IMAGE_PREFIX}/emqx/emqx:5.8.8
    container_name: emqx
    restart: always
    hostname: emqx
    environment:
      - TZ=Asia/Shanghai
      - EMQX_NODE_NAME=emqx@127.0.0.1
      - EMQX_CLUSTER__DISCOVERY_STRATEGY=static
      - EMQX_LISTENER_TCP_DEFAULT_BIND=0.0.0.0:1883
      - EMQX_LISTENER_WS_DEFAULT_BIND=0.0.0.0:8083
      - EMQX_LOG_LEVEL=info
      - EMQX_LOG_CONSOLE_ENABLE=true
      - EMQX_NODE_PROCESS_LIMIT=2097152
      - EMQX_NODE_MAX_PORTS=1048576
      - EMQX_NODE_COOKIE=emqx_secret_cookie_123
    volumes:
      - ./emqx/data:/opt/emqx/data:rw
      - ./emqx/log:/opt/emqx/log:rw
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "1883:1883"
      - "8083:8083"
      - "8084:8084"
      - "8883:8883"
      - "18083:18083"
    networks:
      - internal-net
    healthcheck:
      test: ["CMD", "/opt/emqx/bin/emqx", "ctl", "status"]
      interval: 20s
      timeout: 25s
      retries: 10
      start_period: 60s
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 1G
        reservations:
          cpus: '0.5'
          memory: 512M

  nacos:
    image: ${IMAGE_PREFIX}/nacos/nacos-server:v2.2.3
    container_name: nacos
    restart: always
    hostname: nacos
    privileged: true
    pull_policy: never
    environment:
      - TZ=Asia/Shanghai
      - SPRING_DATASOURCE_PLATFORM=mysql
      - MYSQL_SERVICE_HOST=mysql
      - MYSQL_SERVICE_PORT=3306
      - MYSQL_SERVICE_USER=root
      - MYSQL_SERVICE_PASSWORD=shhy123#
      - MYSQL_SERVICE_DB_NAME=nacos_config
      - NACOS_SERVER_PORT=8848
      - PREFER_HOST_MODE=hostname
      - MODE=standalone
      - JVM_XMS=1024m
      - JVM_XMX=1024m
      - JVM_XMN=512m
      - JVM_MS=128m
      - JVM_MMS=320m
      - NACOS_LOG_BASE_DIR=/home/nacos/logs
      - NACOS_LOG_ROLLING_CONFIG=30
      - NACOS_LOG_MAX_FILE_SIZE=100MB
      - NACOS_WEB_CONTEXT=/nacos
      - NACOS_DEBUG_MODE=false
    volumes:
      - ./nacos/conf/application.properties:/home/nacos/conf/application.properties:ro
      - ./nacos/conf:/home/nacos/conf:rw
      - ./nacos/logs:/home/nacos/logs:rw
      - ./nacos/data:/home/nacos/data:rw
      - /etc/localtime:/etc/localtime:ro
    ports:
      - "8848:8848"
      - "9848:9848"
      - "9849:9849"
    networks:
      - internal-net
    depends_on:
      mysql:
        condition: service_healthy
    deploy:
      resources:
        limits:
          cpus: '1'
          memory: 2G
        reservations:
          cpus: '0.5'
          memory: 1G
    healthcheck:
      test: ["CMD", "curl", "-s", "-u", "nacos:nacos", "http://localhost:8848/nacos/v1/ns/instance?serviceName=nacos-prod-health"]
      interval: 10s
      timeout: 5s
      retries: 5
      start_period: 30s
EOF

# ===================== Permission configuration (critical) =====================
echo "5. Configuring service permissions..."
sudo chown -R 999:999 ./redis/data ./redis/logs
sudo chmod -R 755 ./redis
sudo chmod -R 777 ./redis/logs

sudo chown -R 999:999 ./rabbitmq/data ./rabbitmq/logs
sudo chmod -R 755 ./rabbitmq
sudo chmod -R 700 ./rabbitmq/data
sudo chmod -R 777 ./rabbitmq/logs

sudo chown -R 999:999 ./mysql/data ./mysql/logs
sudo chmod -R 755 ./mysql
sudo chmod -R 777 ./mysql/logs

sudo chown -R 999:999 ./mongodb/data ./mongodb/logs
sudo chmod -R 755 ./mongodb
sudo chmod -R 777 ./mongodb/logs

sudo chown -R 1000:1000 ./es-kibana/es/master/data ./es-kibana/es/master/logs ./es-kibana/es/master/plugins
sudo chmod -R 755 ./es-kibana
sudo chmod -R 777 ./es-kibana/es/master/logs

sudo chown -R 1000:1000 ./influxdb/data ./influxdb/config ./influxdb/logs
sudo chmod -R 755 ./influxdb
sudo chmod -R 777 ./influxdb/logs

sudo chown -R 1000:1000 ./emqx/data ./emqx/log
sudo chmod -R 755 ./emqx
sudo chmod -R 700 ./emqx/data
sudo chmod -R 777 ./emqx/log

sudo chown -R 1000:1000 ./nacos/data ./nacos/logs
sudo chmod -R 755 ./nacos
# ====================================================================================

# Log in to the private registry
echo "6. Logging in to the private image registry..."
sudo docker login ${REGISTRY}

# Pull the images
echo "7. Pulling service images..."
sudo docker compose pull

# Start the services
echo "8. Starting all services..."
sudo docker compose up -d

# MySQL setup
echo "9. Waiting for MySQL to finish starting (100 seconds)..."
sleep 100
echo "10. Switching the MySQL root authentication plugin..."
docker exec -i mysql840 mysql -u root -p'shhy123#' -e "
ALTER USER 'root'@'localhost' IDENTIFIED WITH mysql_native_password BY 'shhy123#';
ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'shhy123#';
FLUSH PRIVILEGES;
"
echo "✅ MySQL authentication configured!"

# Create the Nacos database
echo "11. Creating the Nacos database..."
docker exec -i mysql840 mysql -u root -p'shhy123#' -e "
CREATE DATABASE IF NOT EXISTS nacos_config DEFAULT CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
"
echo "✅ nacos_config database created!"
docker restart nacos
sleep 20

# Set the Kibana password
echo "12. Waiting for ES to finish starting (100 seconds)..."
sleep 100
echo "13. Setting the Kibana password..."
printf 'shhy123#\nshhy123#\n' | docker exec -i es /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system -i -b
echo "✅ Kibana password set to: shhy123#"

echo "============================================="
echo "              Deployment complete! 🎉        "
echo " All service permissions configured as specified"
echo " Check status: docker compose ps"
echo "============================================="

# Remove this script after a successful run
rm -f -- "$0"

IV. Copy-and-Paste Execution Steps

1. Create the script file

bash
vim deploy.sh
  • Press i to enter insert mode
  • Paste the complete script above
  • Press Esc, then type :wq to save and exit
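A truncated paste is the most common failure at this step, so it is worth parsing the file before executing it; `bash -n` checks syntax without running anything:

```shell
# Parse the script without executing it; catches truncated pastes
# and unbalanced heredocs before any destructive step runs.
if bash -n deploy.sh 2>/dev/null; then
  echo "deploy.sh: syntax OK"
else
  echo "deploy.sh: syntax error or file missing"
fi
```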

2. Make the script executable

bash
chmod +x deploy.sh

3. One-click deployment (fully automatic)

bash
sudo ./deploy.sh

4. Verify the MySQL authentication plugin (compatibility check)

bash
docker exec -it mysql840 mysql -u root -pshhy123# -e "SELECT user,host,plugin FROM mysql.user WHERE user='root';"

Success criterion: both root accounts report mysql_native_password in the plugin column.


V. Clean Reinstall (No Leftovers)

bash
# 1. Force-remove all containers (note: this removes every container on the host, not just this stack)
docker rm -f $(docker ps -a -q)

# 2. Wipe the deployment directory (full reset)
cd /home/appInfrastructure
sudo docker compose down -v
sudo rm -rf *

# 3. Recreate deploy.sh (the script deletes itself after a successful run), then deploy again
sudo ./deploy.sh
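Note that `docker rm -f $(docker ps -a -q)` removes every container on the host. A less destructive sketch, assuming only this stack should be reset (with DRY_RUN=1, the default here, it only prints the commands):

```shell
#!/bin/bash
# Scoped teardown sketch: remove only this compose project's containers,
# networks, and volumes. PROJECT_DIR and DRY_RUN are assumptions to adjust.
PROJECT_DIR="${PROJECT_DIR:-/home/appInfrastructure}"
DRY_RUN="${DRY_RUN:-1}"

teardown() {
  local dir="$1"
  # Refuse obviously dangerous paths before any destructive action.
  case "$dir" in
    ""|"/"|"/home"|"/root") echo "refusing to wipe '$dir'" >&2; return 1 ;;
  esac
  if [ "$DRY_RUN" = "1" ]; then
    echo "docker compose --project-directory $dir down -v --remove-orphans"
    echo "rm -rf -- $dir/*"
  else
    sudo docker compose --project-directory "$dir" down -v --remove-orphans
    sudo rm -rf -- "$dir"/*
  fi
}

teardown "$PROJECT_DIR"
```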

VI. Service Ports and Access

Service                    Port    Access
Redis                      6379    TCP connection
MySQL                      3306    database connection
RabbitMQ management UI     15672   HTTP web page
Elasticsearch              9200    HTTP API
Kibana                     5601    web console
MongoDB                    27017   TCP connection
InfluxDB                   8086    HTTP time-series API
EMQX dashboard             18083   web console (MQTT itself on 1883)
Nacos (config/registry)    8848    microservice console
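The port table can be turned into a quick reachability sweep. A sketch assuming everything listens on 127.0.0.1 (set HOST for a remote machine); it only reports, it does not abort the shell:

```shell
#!/bin/bash
# Port reachability sketch using bash's built-in /dev/tcp.
# HOST is an assumption; the ports mirror the table above.
HOST="${HOST:-127.0.0.1}"

check_port() {
  # Returns 0 if host:port accepts a TCP connection.
  local host="$1" port="$2"
  (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null
}

for entry in Redis:6379 MySQL:3306 RabbitMQ:15672 Elasticsearch:9200 \
             Kibana:5601 MongoDB:27017 InfluxDB:8086 EMQX:18083 Nacos:8848; do
  svc="${entry%%:*}"
  port="${entry##*:}"
  if check_port "$HOST" "$port"; then
    echo "OK   ${svc} (${port})"
  else
    echo "DOWN ${svc} (${port})"
  fi
done
```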

VII. Local Registry Notes

  1. Registry setup: make sure a Docker private registry is running at 127.0.0.1:8081 and contains the required images.

  2. Image preparation: push all middleware images to the local registry:

    bash
    # Example: push the Redis image to the local registry
    docker tag redis:6.2.0 127.0.0.1:8081/public-docker/redis:6.2.0
    docker push 127.0.0.1:8081/public-docker/redis:6.2.0

  3. Script compatibility: the script already configures Docker to trust the local HTTP registry, so no manual daemon.json edits are needed.
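Tagging and pushing nine images by hand is tedious. A batch sketch: the source image names, registry address, and DRY_RUN flag are assumptions to edit for your environment (with DRY_RUN=1, the default, it only prints the commands):

```shell
#!/bin/bash
# Batch tag-and-push sketch for the local registry.
# Target names match what the compose file above pulls.
REGISTRY="${REGISTRY:-127.0.0.1:8081}"
PREFIX="public-docker"
DRY_RUN="${DRY_RUN:-1}"

IMAGES=(
  redis:6.2.0
  rabbitmq:3.13-management
  mysql:8.4.0
  elasticsearch:8.17.2
  kibana:8.17.2
  mongo:8.0.3
  influxdb:2.7.10
  emqx/emqx:5.8.8
  nacos/nacos-server:v2.2.3
)

for img in "${IMAGES[@]}"; do
  target="${REGISTRY}/${PREFIX}/${img}"
  if [ "$DRY_RUN" = "1" ]; then
    echo "docker tag ${img} ${target}"
    echo "docker push ${target}"
  else
    docker tag "${img}" "${target}"
    docker push "${target}"
  fi
done
```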
