Quickly Building a Paddle Detection Object-Detection Docker Image Deployed with Paddle Serving
Reading notes:
(1) The Paddle Serving repository already ships a rich set of Dockerfiles under its tools directory for reference, though the sheer number can be overwhelming.
(2) You should be familiar with Serving deployment: the service port is specified in the internal config.yml, so the Dockerfiles here do not need to declare a port.
(3) This article covers only the four key files; the remaining operational details are routine and not repeated here.
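For reference on point (2), the port is set by a field such as http_port in the service's config.yml. A minimal sketch is shown below; the field names follow the Paddle Serving pipeline convention, while the op name and concrete values are assumptions for this project:

```yaml
http_port: 18080          # HTTP port the web service listens on
worker_num: 2             # number of worker processes
op:
    ppyoloe:              # op name is an assumption for this project
        local_service_conf:
            model_config: serving_server   # path to the exported Serving model
            device_type: 0                 # 0 = CPU, 1 = GPU
            devices: ""                    # GPU card ids, e.g. "0"; empty = CPU
```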
Project overview
This project uses the Paddle Detection framework to build an object-detection service, exposes it as a web service with Paddle Serving, and packages everything with Docker for easy deployment.
The project layout is as follows:
The key files to focus on:
Building the CPU image
Command: docker build -t aep-aiplus-ppyoloe-multiclass:cpu-1.0.0 -f ./Dockerfile-cpu .
The Dockerfile-cpu file
FROM python:3.8.16
COPY . /aep-aiplus-ppyoloe-multiclass
WORKDIR /aep-aiplus-ppyoloe-multiclass
RUN sed -i 's/deb.debian.org/mirrors.ustc.edu.cn/g' /etc/apt/sources.list \
    && sed -i 's/security.debian.org/mirrors.ustc.edu.cn/g' /etc/apt/sources.list \
    && apt-get clean \
    && apt-get update \
    && apt-get install -y libgl1-mesa-dev \
    && pip config set global.index-url https://mirror.baidu.com/pypi/simple \
    && pip install --upgrade setuptools \
    && pip install --upgrade pip \
    && pip install -r requirements.txt
CMD ["python", "web_service.py"]
The requirements.txt file
paddlepaddle==2.2.2
paddle-serving-client==0.9.0
paddle-serving-app==0.9.0
paddle-serving-server==0.9.0
Building the GPU image (CUDA 11.2 + cuDNN 8)
Command: docker build -t aep-aiplus-ppyoloe-multiclass:cu1102-1.0.0 -f ./Dockerfile-cuda1102-cudnn8 .
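Once built, either image can be started with a plain docker run. These commands are a sketch: the published port must match the http_port in your config.yml (18080 here is an assumption), and the GPU image additionally requires the NVIDIA Container Toolkit on the host:

```shell
# CPU image; 18080 is assumed to be the http_port from config.yml
docker run -d -p 18080:18080 aep-aiplus-ppyoloe-multiclass:cpu-1.0.0

# GPU image; requires the NVIDIA Container Toolkit on the host
docker run -d --gpus all -p 18080:18080 aep-aiplus-ppyoloe-multiclass:cu1102-1.0.0
```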
The Dockerfile-cuda1102-cudnn8 file
FROM registry.baidubce.com/paddlepaddle/paddle:2.2.2-gpu-cuda11.2-cudnn8
COPY . /aep-aiplus-ppyoloe-multiclass
WORKDIR /aep-aiplus-ppyoloe-multiclass
RUN sed -i 's/archive.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list
RUN sed -i 's/security.ubuntu.com/mirrors.aliyun.com/g' /etc/apt/sources.list
RUN apt-get clean
RUN rm -f /etc/apt/sources.list.d/cuda.list
RUN rm -f /etc/apt/sources.list.d/nvidia-ml.list
RUN apt-get update
RUN apt-get install -y libgl1-mesa-dev
RUN pip3 config set global.index-url https://mirror.baidu.com/pypi/simple
RUN pip3 install --upgrade pip
RUN pip3 install -r requirements-gpu.txt
CMD ["python", "web_service.py"]
The requirements-gpu.txt file
paddle-serving-client==0.9.0
paddle-serving-app==0.9.0
paddle-serving-server-gpu==0.9.0.post112
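With the container running, a client can call the service over HTTP. The sketch below builds the JSON body in the base64 "key"/"value" shape used by Paddle Serving pipeline services; the endpoint path and field names are assumptions and should be adjusted to your web_service.py and config.yml:

```python
import base64
import json


def build_payload(image_bytes: bytes) -> str:
    """Encode raw image bytes into the JSON body expected by a
    Paddle Serving pipeline HTTP endpoint (base64 in "value")."""
    b64 = base64.b64encode(image_bytes).decode("utf8")
    return json.dumps({"key": ["image"], "value": [b64]})


# Usage (service must be running; the endpoint name is an assumption):
#   payload = build_payload(open("test.jpg", "rb").read())
#   requests.post("http://127.0.0.1:18080/ppyoloe/prediction", data=payload)
```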