SafeLine Technical Blog Submission
1. Notes
1.1 The compose.yaml used in this article is modified from the official file; decide for yourself whether to use it.
1.2 The network topology in this article is router → SafeLine → Synology NAS. Since the Synology NAS has its own built-in reverse proxy, tools such as Lucky are not used.
2. Hard Requirements
You must have a stable domain name and a public IP, and make sure you have already requested and deployed an SSL certificate. You can use acme.sh to request SSL certificates; my blog has a post about requesting, deploying, and renewing certificates on a Synology NAS that you can refer to.
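If you have not issued a certificate yet, a minimal sketch using acme.sh with DNS-01 validation looks roughly like this. The Cloudflare provider, the CF_Token variable, the domain, and the output paths are all placeholders; adapt them to your own DNS provider and environment, and see my blog post for the Synology-specific deployment and renewal details.
bash
# Hypothetical example: issue a wildcard certificate with acme.sh via Cloudflare DNS-01.
# Replace the token, domain, and output paths with your own values.
export CF_Token="your_cloudflare_api_token"
acme.sh --issue --dns dns_cf -d example.com -d "*.example.com"

# Install the issued certificate files where your NAS or SafeLine can pick them up
acme.sh --install-cert -d example.com \
  --key-file       /volume1/docker/certs/example.com.key \
  --fullchain-file /volume1/docker/certs/example.com.pem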
3. Preparation
3.1 Create the SafeLine directory
bash
mkdir "/volume1/docker/safeline"
Create the folder according to the actual directory layout of your NAS.
3.2 Configure the Compose environment variables
bash
cd "/volume1/docker/safeline"
touch ".env"
As in the official documentation, open the .env file with a text editor and write in the content below. You must set your own POSTGRES password.
ini
SAFELINE_DIR=/volume1/docker/safeline
IMAGE_TAG=latest
MGT_PORT=9443
POSTGRES_PASSWORD=yourpassword #-------(set your own password: a mix of digits and upper/lowercase letters, no special characters)
SUBNET_PREFIX=172.22.222 #-------(the bridge subnet prefix you want to use)
IMAGE_PREFIX=swr.cn-east-3.myhuaweicloud.com/chaitin-safeline
ARCH_SUFFIX=
RELEASE=
REGION=
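If you want to generate a password that satisfies the digits-plus-letters rule, one quick way from any shell is shown below. This is just a sketch; any equivalent generator works.
bash
# 20 random characters drawn only from letters and digits
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20; echo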
4. Download and Edit the Compose File
4.1 Use the commands below to enter the SafeLine installation directory and download the file
bash
cd "/volume1/docker/safeline"
wget "https://waf-ce.chaitin.cn/release/latest/compose.yaml"
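Optionally, you can check that the downloaded file parses and that the variables in .env resolve before editing anything. This is only a sanity check; on DSM the Compose binary may be docker-compose instead of docker compose, so adjust to whichever your system has.
bash
cd "/volume1/docker/safeline"
# Print the fully rendered configuration; Compose will report an error if a required
# variable such as POSTGRES_PASSWORD or SUBNET_PREFIX is missing from .env.
sudo docker compose config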
4.2 Edit the compose file
The original content is as follows:
yaml
networks:
  safeline-ce:
    name: safeline-ce
    driver: bridge
    ipam:
      driver: default
      config:
        - gateway: ${SUBNET_PREFIX:?SUBNET_PREFIX required}.1
          subnet: ${SUBNET_PREFIX}.0/24
    driver_opts:
      com.docker.network.bridge.name: safeline-ce

services:
  postgres:
    container_name: safeline-pg
    restart: always
    image: ${IMAGE_PREFIX}/safeline-postgres${ARCH_SUFFIX}:15.2
    volumes:
      - ${SAFELINE_DIR}/resources/postgres/data:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    environment:
      - POSTGRES_USER=safeline-ce
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?postgres password required}
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.2
    command: [postgres, -c, max_connections=600]
    healthcheck:
      test: pg_isready -U safeline-ce -d safeline-ce
  mgt:
    container_name: safeline-mgt
    restart: always
    image: ${IMAGE_PREFIX}/safeline-mgt${REGION}${ARCH_SUFFIX}:${IMAGE_TAG:?image tag required}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${SAFELINE_DIR}/resources/mgt:/app/data
      - ${SAFELINE_DIR}/logs/nginx:/app/log/nginx:z
      - ${SAFELINE_DIR}/resources/sock:/app/sock
      - /var/run:/app/run
    ports:
      - ${MGT_PORT:-9443}:1443
    healthcheck:
      test: curl -k -f https://localhost:1443/api/open/health
    environment:
      - MGT_PG=postgres://safeline-ce:${POSTGRES_PASSWORD}@safeline-pg/safeline-ce?sslmode=disable
    depends_on:
      - postgres
      - fvm
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "5"
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.4
  detect:
    container_name: safeline-detector
    restart: always
    image: ${IMAGE_PREFIX}/safeline-detector${REGION}${ARCH_SUFFIX}:${IMAGE_TAG}
    volumes:
      - ${SAFELINE_DIR}/resources/detector:/resources/detector
      - ${SAFELINE_DIR}/logs/detector:/logs/detector
      - /etc/localtime:/etc/localtime:ro
    environment:
      - LOG_DIR=/logs/detector
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.5
  tengine:
    container_name: safeline-tengine
    restart: always
    image: ${IMAGE_PREFIX}/safeline-tengine${REGION}${ARCH_SUFFIX}:${IMAGE_TAG}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/resolv.conf:/etc/resolv.conf:ro
      - ${SAFELINE_DIR}/resources/nginx:/etc/nginx
      - ${SAFELINE_DIR}/resources/detector:/resources/detector
      - ${SAFELINE_DIR}/resources/chaos:/resources/chaos
      - ${SAFELINE_DIR}/logs/nginx:/var/log/nginx:z
      - ${SAFELINE_DIR}/resources/cache:/usr/local/nginx/cache
      - ${SAFELINE_DIR}/resources/sock:/app/sock
    environment:
      - TCD_MGT_API=https://${SUBNET_PREFIX}.4:1443/api/open/publish/server
      - TCD_SNSERVER=${SUBNET_PREFIX}.5:8000
      # deprecated
      - SNSERVER_ADDR=${SUBNET_PREFIX}.5:8000
      - CHAOS_ADDR=${SUBNET_PREFIX}.10
    ulimits:
      nofile: 131072
    network_mode: host
  luigi:
    container_name: safeline-luigi
    restart: always
    image: ${IMAGE_PREFIX}/safeline-luigi${REGION}${ARCH_SUFFIX}:${IMAGE_TAG}
    environment:
      - MGT_IP=${SUBNET_PREFIX}.4
      - LUIGI_PG=postgres://safeline-ce:${POSTGRES_PASSWORD}@safeline-pg/safeline-ce?sslmode=disable
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${SAFELINE_DIR}/resources/luigi:/app/data
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "5"
    depends_on:
      - detect
      - mgt
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.7
  fvm:
    container_name: safeline-fvm
    restart: always
    image: ${IMAGE_PREFIX}/safeline-fvm${REGION}${ARCH_SUFFIX}:${IMAGE_TAG}
    volumes:
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "5"
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.8
  chaos:
    container_name: safeline-chaos
    restart: always
    image: ${IMAGE_PREFIX}/safeline-chaos${REGION}${ARCH_SUFFIX}:${IMAGE_TAG}
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "10"
    environment:
      - DB_ADDR=postgres://safeline-ce:${POSTGRES_PASSWORD}@safeline-pg/safeline-ce?sslmode=disable
    volumes:
      - ${SAFELINE_DIR}/resources/sock:/app/sock
      - ${SAFELINE_DIR}/resources/chaos:/app/chaos
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.10
I made the following changes to the file:
On the safeline-tengine container, the network mode is changed from host to bridge.
A macvlan network is added, and the safeline-tengine container is given its own IP address on the LAN.
The macvlan network is there because, once the container has an independent IP, it no longer conflicts with ports 80 and 443 on the host (the Synology NAS).
The bridge network is kept as well, because a container on a macvlan network cannot talk to its own host directly; traffic between the container and the NAS has to go through this bridge network.
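To make the macvlan addition easier to read, it is roughly equivalent to creating the network by hand with the Docker CLI. This is only an illustrative sketch: Compose creates the network for you from the definition in the file, and ovs_eth2, the subnet, and the gateway are values you must replace with your own.
bash
# Manual equivalent of the macvlan network defined in compose.yaml.
# Do not run this in addition to Compose; it only shows what gets created.
# Replace ovs_eth2, the subnet, and the gateway with your own NIC and LAN values.
docker network create -d macvlan \
  --subnet=192.168.30.0/24 \
  --gateway=192.168.30.1 \
  -o parent=ovs_eth2 \
  macvlan

With the changes above in mind, here is my full modified compose.yaml: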
yaml
networks:
  safeline-ce:
    name: safeline-ce
    driver: bridge
    ipam:
      driver: default
      config:
        - gateway: ${SUBNET_PREFIX:?SUBNET_PREFIX required}.1
          subnet: ${SUBNET_PREFIX}.0/24
    driver_opts:
      com.docker.network.bridge.name: safeline-ce

  macvlan:
    name: macvlan
    driver: macvlan
    driver_opts:
      parent: ovs_eth2 # replace with the actual physical NIC name on your host
    ipam:
      config:
        - subnet: 192.168.30.0/24 # replace with your actual LAN subnet
          gateway: 192.168.30.1 # replace with your actual LAN gateway

services:
  postgres:
    container_name: safeline-pg
    restart: always
    image: ${IMAGE_PREFIX}/safeline-postgres${ARCH_SUFFIX}:15.2
    volumes:
      - ${SAFELINE_DIR}/resources/postgres/data:/var/lib/postgresql/data
      - /etc/localtime:/etc/localtime:ro
    environment:
      - POSTGRES_USER=safeline-ce
      - POSTGRES_PASSWORD=${POSTGRES_PASSWORD:?postgres password required}
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.2
    command: [postgres, -c, max_connections=600]
    healthcheck:
      test: pg_isready -U safeline-ce -d safeline-ce

  mgt:
    container_name: safeline-mgt
    restart: always
    image: ${IMAGE_PREFIX}/safeline-mgt${REGION}${ARCH_SUFFIX}:${IMAGE_TAG:?image tag required}
    volumes:
      - /volume2/docker/etc/hosts:/etc/hosts:ro
      - /etc/localtime:/etc/localtime:ro
      - ${SAFELINE_DIR}/resources/mgt:/app/data
      - ${SAFELINE_DIR}/logs/nginx:/app/log/nginx:z
      - ${SAFELINE_DIR}/resources/sock:/app/sock
      - /var/run:/app/run
    ports:
      - ${MGT_PORT:-9443}:1443
    healthcheck:
      test: curl -k -f https://localhost:1443/api/open/health
    environment:
      - MGT_PG=postgres://safeline-ce:${POSTGRES_PASSWORD}@safeline-pg/safeline-ce?sslmode=disable
    depends_on:
      - postgres
      - fvm
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "5"
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.4

  detect:
    container_name: safeline-detector
    restart: always
    image: ${IMAGE_PREFIX}/safeline-detector${REGION}${ARCH_SUFFIX}:${IMAGE_TAG}
    volumes:
      - ${SAFELINE_DIR}/resources/detector:/resources/detector
      - ${SAFELINE_DIR}/logs/detector:/logs/detector
      - /etc/localtime:/etc/localtime:ro
    environment:
      - LOG_DIR=/logs/detector
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.5

  tengine:
    container_name: safeline-tengine
    restart: always
    image: ${IMAGE_PREFIX}/safeline-tengine${REGION}${ARCH_SUFFIX}:${IMAGE_TAG}
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - /etc/resolv.conf:/etc/resolv.conf:ro
      - /volume1/docker/etc/hosts:/etc/hosts:ro
      - ${SAFELINE_DIR}/resources/nginx:/etc/nginx
      - ${SAFELINE_DIR}/resources/detector:/resources/detector
      - ${SAFELINE_DIR}/resources/chaos:/resources/chaos
      - ${SAFELINE_DIR}/logs/nginx:/var/log/nginx:z
      - ${SAFELINE_DIR}/resources/cache:/usr/local/nginx/cache
      - ${SAFELINE_DIR}/resources/sock:/app/sock
    environment:
      - TCD_MGT_API=https://${SUBNET_PREFIX}.4:1443/api/open/publish/server
      - TCD_SNSERVER=${SUBNET_PREFIX}.5:8000
      - SNSERVER_ADDR=${SUBNET_PREFIX}.5:8000
      - CHAOS_ADDR=${SUBNET_PREFIX}.10
    ulimits:
      nofile: 131072
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.6
      macvlan:
        ipv4_address: 192.168.30.253 # replace with an unused IP in your actual LAN subnet

  luigi:
    container_name: safeline-luigi
    restart: always
    image: ${IMAGE_PREFIX}/safeline-luigi${REGION}${ARCH_SUFFIX}:${IMAGE_TAG}
    environment:
      - MGT_IP=${SUBNET_PREFIX}.4
      - LUIGI_PG=postgres://safeline-ce:${POSTGRES_PASSWORD}@safeline-pg/safeline-ce?sslmode=disable
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - ${SAFELINE_DIR}/resources/luigi:/app/data
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "5"
    depends_on:
      - detect
      - mgt
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.7

  fvm:
    container_name: safeline-fvm
    restart: always
    image: ${IMAGE_PREFIX}/safeline-fvm${REGION}${ARCH_SUFFIX}:${IMAGE_TAG}
    volumes:
      - /etc/localtime:/etc/localtime:ro
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "5"
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.8

  chaos:
    container_name: safeline-chaos
    restart: always
    image: ${IMAGE_PREFIX}/safeline-chaos${REGION}${ARCH_SUFFIX}:${IMAGE_TAG}
    logging:
      driver: "json-file"
      options:
        max-size: "100m"
        max-file: "10"
    environment:
      - DB_ADDR=postgres://safeline-ce:${POSTGRES_PASSWORD}@safeline-pg/safeline-ce?sslmode=disable
    volumes:
      - ${SAFELINE_DIR}/resources/sock:/app/sock
      - ${SAFELINE_DIR}/resources/chaos:/app/chaos
    networks:
      safeline-ce:
        ipv4_address: ${SUBNET_PREFIX}.10
4.3 Modify the hosts file
Careful readers will have noticed that I also added one more file mapping on the safeline-tengine container (you need to create this file yourself):
yaml
- /volume1/docker/etc/hosts:/etc/hosts:ro
The content of the hosts file is as follows:
text
# Any manual change will be lost if the host name is changed or system upgrades.
127.0.0.1    localhost
::1          localhost

# Custom services resolved to 172.22.222.1 (the safeline-ce bridge gateway)
172.22.222.1 blog.huamei-tokyo.com
I added this mapping because, when configuring a protected application in SafeLine, I fill in the upstream server as https://yourdomain.com, so inside the container that domain has to resolve to the gateway address of the bridge network.
That way the cleaned traffic can be forwarded to the NAS through the bridge network.
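To create the file on the NAS, a quick sketch is shown below. The path, the gateway IP, and the domain are the ones used in this article; substitute your own values.
bash
# Create the directory and hosts file that will be mounted into the tengine container.
# 172.22.222.1 is the safeline-ce bridge gateway derived from the .env above;
# blog.huamei-tokyo.com is this article's example domain.
mkdir -p /volume1/docker/etc
cat > /volume1/docker/etc/hosts <<'EOF'
127.0.0.1    localhost
::1          localhost

# Resolve the protected domain to the safeline-ce bridge gateway
172.22.222.1 blog.huamei-tokyo.com
EOF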
What follows is the step everyone already knows: deploying the project on the Synology with Compose, as sketched below.
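A typical run over SSH looks roughly like this. On DSM the Compose binary may be docker-compose rather than docker compose, and the verification commands assume the container names and addresses used above, so treat them as optional sanity checks rather than required steps.
bash
cd "/volume1/docker/safeline"
sudo docker compose up -d

# Confirm all SafeLine containers are running
sudo docker compose ps

# Confirm the tengine container sees the custom hosts mapping
sudo docker exec safeline-tengine cat /etc/hosts

# From another device on the LAN (not the NAS itself; a host cannot reach its own
# macvlan child interfaces directly), the WAF should answer on its macvlan IP:
#   ping 192.168.30.253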
5. Summary
This article walks through deploying the SafeLine WAF with Docker on a Synology NAS and presents a practical solution to the conflict with the NAS's own ports 80/443. Beyond the standard Docker Compose configuration, a macvlan network gives the container its own IP, which avoids the port conflict, while a bridge network keeps the container able to communicate with the host.
This network layout (bridge + macvlan) suits most deployments that need to expose ports 80/443 while coexisting with a Synology NAS: it leaves the NAS's own web services untouched while preserving SafeLine's full protection capabilities.
If you have a public IP, a domain name, and an SSL certificate, and can adapt the configuration files to your own environment, you can build your own security protection stack without much trouble.