1. Accessing host services from inside a container
How can a Docker container, or a service defined in Docker Compose, reach a service running on the host machine?
Docker container
Docker 20.10.0 added support for host.docker.internal on Linux:

```bash
docker run -it --add-host=host.docker.internal:host-gateway alpine cat /etc/hosts
```

```sh
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.1 host.docker.internal # --add-host adds this line to /etc/hosts
172.17.0.3 cb0565ceea26
```
The --add-host flag tells the container to add a hosts entry mapping the name host.docker.internal to host-gateway, a special value that Docker replaces with the host's IP on the Docker bridge network (172.17.0.1 by default).
So when the container accesses host.docker.internal, the traffic is routed to the host-gateway address on the host, which lets the container reach services running on the host.
Docker Compose
Note: port 8080 below is a service provided by the host machine.
```yaml
version: "2.3" # version "3.3" also works
services:
  server:
    image: curlimages/curl
    command: curl http://host.docker.internal:8080
    extra_hosts:
      - "host.docker.internal:host-gateway"
```
2. docker-compose: creating containers from multiple docker-compose.yaml configuration files
For how to use multiple Docker Compose files, see: multiple-compose-files
> Docker Compose lets you merge and override a set of Compose files together to create a composite Compose file.
>
> By default, Compose reads two files, a compose.yml and an optional compose.override.yml file. By convention, the compose.yml contains your base configuration. The override file can contain configuration overrides for existing services or entirely new services.
>
> If a service is defined in both files, Compose merges the configurations using the rules described below and in the Compose Specification.
For how a later Compose configuration is merged into an earlier one, see: compose-file/13-merge
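As a rough sketch of those merge rules (the file contents below are hypothetical examples, not taken from the linked docs): a single-value key such as image in the later file replaces the earlier value, while a multi-value key such as ports is appended to.

```yaml
# compose.yml (hypothetical base file)
services:
  web:
    image: nginx:1.25
    ports:
      - "80:80"

# compose.override.yml (hypothetical override file)
services:
  web:
    image: nginx:1.27    # single-value key: replaces nginx:1.25
    ports:
      - "443:443"        # multi-value key: appended to the base list

# Effective merged configuration (roughly what `docker compose config` would print)
services:
  web:
    image: nginx:1.27
    ports:
      - "80:80"
      - "443:443"
```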
Example of specifying multiple Compose configuration files:
```bash
docker compose -f docker-compose.yaml -f docker-compose.gpu.yaml up -d --build
docker compose -f docker-compose.yaml -f docker-compose.api.yaml up -d --build
```
docker-compose.yaml
```yaml
version: '3.8'

services:
  ollama:
    volumes:
      - ollama:/root/.ollama
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}

  open-webui:
    build:
      context: .
      args:
        OLLAMA_BASE_URL: '/ollama'
      dockerfile: Dockerfile
    image: ghcr.io/open-webui/open-webui:${WEBUI_DOCKER_TAG-main}
    container_name: open-webui
    volumes:
      - open-webui:/app/backend/data
    depends_on:
      - ollama
    ports:
      - ${OPEN_WEBUI_PORT-3000}:8080
    environment:
      - 'OLLAMA_BASE_URL=http://ollama:11434'
      - 'WEBUI_SECRET_KEY='
    extra_hosts:
      - host.docker.internal:host-gateway
    restart: unless-stopped

volumes:
  ollama: {}
  open-webui: {}
```
docker-compose.gpu.yaml
```yaml
version: '3.8'

services:
  ollama:
    # GPU support
    deploy:
      resources:
        reservations:
          devices:
            - driver: ${OLLAMA_GPU_DRIVER-nvidia}
              count: ${OLLAMA_GPU_COUNT-1}
              capabilities:
                - gpu
```
docker-compose.api.yaml
```yaml
version: '3.8'

services:
  ollama:
    # Expose Ollama API outside the container stack
    ports:
      - ${OLLAMA_WEBAPI_PORT-11434}:11434
```
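Combining the base file with docker-compose.api.yaml (the second command above), the effective ollama service would look roughly as follows — a sketch of the merged result that could be previewed with `docker compose -f docker-compose.yaml -f docker-compose.api.yaml config`, not verbatim output:

```yaml
services:
  ollama:
    image: ollama/ollama:${OLLAMA_DOCKER_TAG-latest}
    container_name: ollama
    pull_policy: always
    tty: true
    restart: unless-stopped
    volumes:
      - ollama:/root/.ollama
    ports:                                   # contributed only by docker-compose.api.yaml
      - ${OLLAMA_WEBAPI_PORT-11434}:11434
```

All keys from the base file carry over unchanged; the api file only adds the ports mapping, since the base service defines none.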