A Container Runtime Problem Caused by the Golang Version

The Symptom

A user reported that nodes running a particular containerd version could no longer start containers. The workload runs Docker inside a K8S Pod, and new containers are then started from inside that container with the docker CLI.

The error message:

shell
$ docker run -it ubuntu /bin/bash

docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting "proc" to rootfs at "/proc": mount proc:/proc (via /proc/self/fd/10), flags: 0xe: operation not permitted: unknown.

ERRO[0000] error waiting for container: context canceled

Troubleshooting

The message "flags: 0xe: operation not permitted" tells us the direct cause: mounting the proc filesystem failed with a permission error. The root cause behind that still had to be tracked down.
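As a side note, the flag value in the message can be decoded against the MS_* mount constants: 0xe is MS_NOSUID|MS_NODEV|MS_NOEXEC, a perfectly ordinary flag set for a proc mount, so the flags themselves are not suspicious. A minimal sketch of the decoding (assuming golang.org/x/sys/unix is available):

golang
package main

import (
    "fmt"

    "golang.org/x/sys/unix"
)

func main() {
    // Flags reported in the error message.
    const flags = 0xe

    // Decode against the MS_* mount flag constants.
    names := map[uintptr]string{
        unix.MS_NOSUID: "MS_NOSUID", // 0x2
        unix.MS_NODEV:  "MS_NODEV",  // 0x4
        unix.MS_NOEXEC: "MS_NOEXEC", // 0x8
    }
    for bit, name := range names {
        if flags&bit != 0 {
            fmt.Printf("0x%x -> %s\n", bit, name)
        }
    }
    // 0xe == MS_NOSUID|MS_NODEV|MS_NOEXEC; the EPERM does not come from the flags.
}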

Containerd

The key piece of information from the user was that the problem started with a specific containerd version, which immediately points at changes introduced by containerd.

On inspection, the problematic containerd version adds only two commits on top of the previous one, and the changes are very small. A careful review shows they have nothing to do with the error above, and the new features they add are not enabled by default. So the containerd changes themselves were essentially ruled out.

Pod Configuration

If the containerd changes have no effect, could a difference in the user's Pod configuration be the cause?

Comparing the Pod YAML, the Pods on the nodes running the two containerd versions contain no configuration that would affect permissions differently.

  • Both use privileged containers with identical settings:

    yaml
    securityContext:
      privileged: true
      runAsNonRoot: false
  • Both mount the following host directories (ConfigMap, PVC and similar volumes omitted):

    yaml
    volumes:
      - hostPath:
          path: /usr
          type: Directory
        name: host-usr
      - hostPath:
          path: /var/lib/containerd
          type: Directory
        name: containerd-image
      - hostPath:
          path: /run/containerd
          type: Directory
        name: containerd-dir
      - hostPath:
          path: /var/lib/lxc/lxcfs/proc/cpuinfo
          type: File
        name: lxcfs-proc-cpuinfo
      - hostPath:
          path: /var/lib/lxc/lxcfs/proc/meminfo
          type: File
        name: lxcfs-proc-meminfo
      - hostPath:
          path: /var/lib/lxc/lxcfs/sys/devices/system/cpu/online
          type: File
        name: system-cpu-online

Since the root cause could not be pinned down right away, we provided a script to roll containerd back to the previous version:

shell
# VERSION is the containerd version to roll back to.
TOS_URL=https://xxx/containerd-$VERSION.tar.gz

function downgrade() {
        # -O (capital) saves the download under the given file name.
        wget $TOS_URL -O containerd.tar.gz
        tar -zxvf containerd.tar.gz -C /usr/
        systemctl restart containerd
}

downgrade

Runc

If the containerd changes do not affect permissions, where does this error actually come from?

The error message comes from runc. Let's locate where runc produces it:

golang
// prepareRootfs sets up the devices, mount points, and filesystems for use
// inside a new mount namespace. It doesn't set anything as ro. You must call
// finalizeRootfs after this function to finish setting up the rootfs.
func prepareRootfs(pipe io.ReadWriter, iConfig *initConfig, mountFds []int) (err error) {
    config := iConfig.Config
    if err := prepareRoot(config); err != nil {
        return fmt.Errorf("error preparing rootfs:%w", err)
    }

    if mountFds != nil && len(mountFds) != len(config.Mounts) {
        return fmt.Errorf("malformed mountFds slice. Expected size: %v, got: %v. Slice: %v", len(config.Mounts), len(mountFds), mountFds)
    }

    mountConfig := &mountConfig{
        root:            config.Rootfs,
        label:           config.MountLabel,
        cgroup2Path:     iConfig.Cgroup2Path,
        rootlessCgroups: iConfig.RootlessCgroups,
        cgroupns:        config.Namespaces.Contains(configs.NEWCGROUP),
    }
    setupDev := needsSetupDev(config)
    for i, m := range config.Mounts {
        for _, precmd := range m.PremountCmds {
            if err := mountCmd(precmd); err != nil {
                return fmt.Errorf("error running premount command:%w", err)
            }
        }

        // Just before the loop we checked that if not empty, len(mountFds) == len(config.Mounts).
        // Therefore, we can access mountFds[i] without any concerns.
        if mountFds != nil && mountFds[i] != -1 {
            mountConfig.fd = &mountFds[i]
        } else {
            mountConfig.fd = nil
        }

        if err := mountToRootfs(m, mountConfig); err != nil {
            return fmt.Errorf("error mounting %q to rootfs at %q:%w", m.Source, m.Destination, err)
        }

    // ... rest omitted ...

    return nil
}

prepareRootfs calls mountToRootfs:

golang
func mountToRootfs(m *configs.Mount, c *mountConfig) error {
    rootfs := c.root

    // procfs and sysfs are special because we need to ensure they are actually
    // mounted on a specific path in a container without any funny business.
    switch m.Device {
    case"proc", "sysfs":
        // If the destination already exists and is not a directory, we bail
        // out. This is to avoid mounting through a symlink or similar -- which
        // has been a "fun" attack scenario in the past.
        // TODO: This won't be necessary once we switch to libpathrs and we can
        //       stop all of these symlink-exchange attacks.
        dest := filepath.Clean(m.Destination)
        if !strings.HasPrefix(dest, rootfs) {
            // Do not use securejoin as it resolves symlinks.
            dest = filepath.Join(rootfs, dest)
        }
        if fi, err := os.Lstat(dest); err != nil {
            if !os.IsNotExist(err) {
                return err
            }
        } else if !fi.IsDir() {
            return fmt.Errorf("filesystem %q must be mounted on ordinary directory", m.Device)
        }
        if err := os.MkdirAll(dest, 0o755); err != nil {
            return err
        }
        // Selinux kernels do not support labeling of /proc or /sys.
        return mountPropagate(m, rootfs, "", nil)
    }

    // ... rest omitted ...
}

For the proc and sysfs filesystems, this eventually calls mountPropagate:

golang
// Do the mount operation followed by additional mounts required to take care
// of propagation flags. This will always be scoped inside the container rootfs.
func mountPropagate(m *configs.Mount, rootfs string, mountLabel string, mountFd *int) error {
    var (
        data  = label.FormatMountLabel(m.Data, mountLabel)
        flags = m.Flags
    )
    // Delay mounting the filesystem read-only if we need to do further
    // operations on it. We need to set up files in "/dev", and other tmpfs
    // mounts may need to be chmod-ed after mounting. These mounts will be
    // remounted ro later in finalizeRootfs(), if necessary.
    if m.Device == "tmpfs" || utils.CleanPath(m.Destination) == "/dev" {
        flags &= ^unix.MS_RDONLY
    }

    // Because the destination is inside a container path which might be
    // mutating underneath us, we verify that we are actually going to mount
    // inside the container with WithProcfd() -- mounting through a procfd
    // mounts on the target.
    source := m.Source
    if mountFd != nil {
        source = "/proc/self/fd/" + strconv.Itoa(*mountFd)
    }

    if err := utils.WithProcfd(rootfs, m.Destination, func(procfd string) error {
        return mount(source, m.Destination, procfd, m.Device, uintptr(flags), data)
    }); err != nil {
        return err
    }
    // We have to apply mount propagation flags in a separate WithProcfd() call
    // because the previous call invalidates the passed procfd -- the mount
    // target needs to be re-opened.
    if err := utils.WithProcfd(rootfs, m.Destination, func(procfd string) error {
        for _, pflag := range m.PropagationFlags {
            if err := mount("", m.Destination, procfd, "", uintptr(pflag), ""); err != nil {
                return err
            }
        }
        return nil
    }); err != nil {
        return fmt.Errorf("change mount propagation through procfd: %w", err)
    }
    return nil
}

The chain ends in a thin wrapper around unix.Mount:

golang
// mount is a simple unix.Mount wrapper. If procfd is not empty, it is used
// instead of target (and the target is only used to add context to an error).
func mount(source, target, procfd, fstype string, flags uintptr, data string) error {
    dst := target
    if procfd != "" {
        dst = procfd
    }
    if err := unix.Mount(source, dst, fstype, flags, data); err != nil {
        return &mountError{
            op:     "mount",
            source: source,
            target: target,
            procfd: procfd,
            flags:  flags,
            data:   data,
            err:    err,
        }
    }
    return nil
}

So, from the runc source, the error is exactly what the mount(2) syscall returns while mounting proc. The failure therefore depends on the state runc is in at the moment it issues that mount call.

We patched the runc source to print the relevant context at the moment it mounts proc.
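What follows is not the exact patch, but a minimal sketch of the kind of context dump we added just before the proc mount: uid/gid maps, namespace links, the mount target's mode and owner, and the capability lines from /proc/self/status.

golang
// Sketch of the extra logging added around runc's proc mount; it only reads
// procfs, so it also runs standalone on any Linux machine.
package main

import (
    "fmt"
    "os"
    "syscall"
)

func read(path string) string {
    b, err := os.ReadFile(path)
    if err != nil {
        return "<" + err.Error() + ">"
    }
    return string(b)
}

func main() {
    dest := "/proc" // the mount destination being attempted

    fmt.Printf("Destination: %s\n", dest)
    fmt.Printf("uid: %d\n", os.Getuid())
    fmt.Printf("uid_map: %s", read("/proc/self/uid_map"))
    fmt.Printf("gid_map: %s", read("/proc/self/gid_map"))

    // Namespace identity; the link targets look like "user:[4026534201]".
    for _, ns := range []string{"user", "mnt", "cgroup"} {
        link, _ := os.Readlink("/proc/self/ns/" + ns)
        fmt.Printf("%sns: %s\n", ns, link)
    }

    // Mode and ownership of the mount target.
    if fi, err := os.Stat(dest); err == nil {
        st := fi.Sys().(*syscall.Stat_t)
        fmt.Printf("mode: %s, owner: %d, fileGid: %d\n", fi.Mode(), st.Uid, st.Gid)
    }

    // CapInh/CapPrm/CapEff etc. are part of /proc/self/status.
    fmt.Printf("status: %q\n", read("/proc/self/status"))
}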

Normal (successful) case:

[31017-09+00 17:24:24] Destination: /proc,
uid: 0,
uid_map:          0       1000          1
         1     100000      65536
     65537     165536      65536
,
gid_map:          0       1000          1
         1     100000      65536
     65537     165536      65536
,
userns: user:[4026534201],
mntns: mnt:[4026534601],
cgroupns: cgroup:[4026531835],
mode: dr-xr-xr-x, owner: 65534,
fileGid: 65534
cap: "Name:\trunc:[2:INIT]\nUmask:\t0022\nState:\tR (running)\nTgid:\t749\nNgid:\t0\nPid:\t749\nPPid:\t739\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t749\t1\nNSpid:\t749\t1\nNSpgid:\t749\t1\nNSsid:\t749\t1\nVmPeak:\t 1090280 kB\nVmSize:\t 1090280 kB\nVmLck:\t       0 kB\nVmPin:\t       0 kB\nVmHWM:\t    9664 kB\nVmRSS:\t    9664 kB\nRssAnon:\t    2540 kB\nRssFile:\t    7124 kB\nRssShmem:\t       0 kB\nVmData:\t   86400 kB\nVmStk:\t     132 kB\nVmExe:\t    5208 kB\nVmLib:\t     772 kB\nVmPTE:\t     148 kB\nVmSwap:\t       0 kB\nHugetlbPages:\t       0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t6\nSigQ:\t0/900794\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffdffc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000003fffffffff\nCapEff:\t0000003fffffffff\nCapBnd:\t0000003fffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nCpus_allowed:\t3ffffff,ffffffff\nCpus_allowed_list:\t0-57\nMems_allowed:\t00000000,00000001\nMems_allowed_list:\t0\nvoluntary_ctxt_switches:\t3\nnonvoluntary_ctxt_switches:\t0\n"

Failing case:

[31017-09+00 17:24:24] uid: 0,
uid_map:          0       1000          1
         1     100000      65536
     65537     165536      65536
,
gid_map:          0       1000          1
         1     100000      65536
     65537     165536      65536
,
userns: user:[4026534201],
mntns: mnt:[4026534601],
cgroupns: cgroup:[4026531835],
mode: dr-xr-xr-x, owner: 65534,
fileGid: 65534
cap: "Name:\trunc:[2:INIT]\nUmask:\t0022\nState:\tR (running)\nTgid:\t812\nNgid:\t0\nPid:\t812\nPPid:\t802\nTracerPid:\t0\nUid:\t0\t0\t0\t0\nGid:\t0\t0\t0\t0\nFDSize:\t64\nGroups:\t0 \nNStgid:\t812\t1\nNSpid:\t812\t1\nNSpgid:\t812\t1\nNSsid:\t812\t1\nVmPeak:\t 1090536 kB\nVmSize:\t 1090536 kB\nVmLck:\t       0 kB\nVmPin:\t       0 kB\nVmHWM:\t   10032 kB\nVmRSS:\t   10032 kB\nRssAnon:\t    2988 kB\nRssFile:\t    7044 kB\nRssShmem:\t       0 kB\nVmData:\t   86656 kB\nVmStk:\t     132 kB\nVmExe:\t    5208 kB\nVmLib:\t     772 kB\nVmPTE:\t     160 kB\nVmSwap:\t       0 kB\nHugetlbPages:\t       0 kB\nCoreDumping:\t0\nTHP_enabled:\t1\nThreads:\t6\nSigQ:\t0/900794\nSigPnd:\t0000000000000000\nShdPnd:\t0000000000000000\nSigBlk:\t0000000000000000\nSigIgn:\t0000000000000000\nSigCgt:\tfffffffdffc1feff\nCapInh:\t0000000000000000\nCapPrm:\t0000003fffffffff\nCapEff:\t0000003fffffffff\nCapBnd:\t0000003fffffffff\nCapAmb:\t0000000000000000\nNoNewPrivs:\t0\nSeccomp:\t0\nSpeculation_Store_Bypass:\tthread vulnerable\nCpus_allowed:\t3ffffff,ffffffff\nCpus_allowed_list:\t0-57\nMems_allowed:\t00000000,00000001\nMems_allowed_list:\t0\nvoluntary_ctxt_switches:\t3\nnonvoluntary_ctxt_switches:\t0\n"

Whether the inner docker container comes up or not, runc sits in the same userns, and the owner of the mount target is 65534 in both cases (an unmapped ID). The runc process's capabilities also include SYS_ADMIN in both cases:

shell
# capsh --decode=0000003fffffffff
0x0000003fffffffff=cap_chown,cap_dac_override,cap_dac_read_search,cap_fowner,cap_fsetid,cap_kill,cap_setgid,cap_setuid,cap_setpcap,cap_linux_immutable,cap_net_bind_service,cap_net_broadcast,cap_net_admin,cap_net_raw,cap_ipc_lock,cap_ipc_owner,cap_sys_module,cap_sys_rawio,cap_sys_chroot,cap_sys_ptrace,cap_sys_pacct,cap_sys_admin,cap_sys_boot,cap_sys_nice,cap_sys_resource,cap_sys_time,cap_sys_tty_config,cap_mknod,cap_lease,cap_audit_write,cap_audit_control,cap_setfcap,cap_mac_override,cap_mac_admin,cap_syslog,cap_wake_alarm,cap_block_suspend,cap_audit_read
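The same check can be done without capsh. A small sketch (not part of the original debugging) that parses the CapEff mask and tests the CAP_SYS_ADMIN bit, assuming the CAP_* constants from golang.org/x/sys/unix:

golang
package main

import (
    "fmt"
    "strconv"

    "golang.org/x/sys/unix"
)

func main() {
    // CapEff value copied from /proc/<pid>/status of the runc init process.
    mask, err := strconv.ParseUint("0000003fffffffff", 16, 64)
    if err != nil {
        panic(err)
    }

    // CAP_SYS_ADMIN (bit 21) is what mount(2) requires in the caller's user
    // namespace, and it is present in both the normal and the failing case.
    if mask&(1<<uint(unix.CAP_SYS_ADMIN)) != 0 {
        fmt.Println("CAP_SYS_ADMIN is in the effective set")
    } else {
        fmt.Println("CAP_SYS_ADMIN is missing")
    }
}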

So the runc process environment is identical in the failing and the normal case; why does mounting proc still fail?

We patched runc further so that the process does not exit when the mount fails, keeping the scene intact, then entered runc's user namespace and mount namespace and tried to mount a proc filesystem by hand. It failed as well.

Bind mounts work fine there, but mounting proc fails.

Scenario Analysis

From the analysis, the user's setup looks roughly like this:

docker in container architecture

  • The user mounts several host directories into the outer container.

  • The docker inside the container uses the upstream community runc.

The core question: why does changing the containerd version on the node affect the runc running inside the container?

Comparing the mount points inside the container reveals some differences.

Normal case:

2409 2408 0:308 / /proc rw,nosuid,nodev,noexec,relatime master:461 - proc proc rw
2410 2409 0:235 /proc/cpuinfo /proc/cpuinfo ro,relatime master:462 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount
2411 2409 0:235 /proc/meminfo /proc/meminfo ro,relatime master:463 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount
2633 2632 0:308 / /run/containerd/io.containerd.runtime.v2.task/k8s.io/17c97f222c6a02eb46f170cdf5ed6621bee116836a57c6121de9f3818494182f/rootfs/proc rw,nosuid,nodev,noexec,relatime master:609 - proc proc rw
3010 3009 0:330 / /ebs/docker/165536.165536/fuse-overlayfs/70c276e0d397f4bebfea4239e967132baa2db0746181b0b7dc0ea7b24912f1e3/merged/proc rw,nosuid,nodev,noexec,relatime - proc proc rw

Failing case:

2779 2778 0:308 / /proc rw,nosuid,nodev,noexec,relatime master:461 - proc proc rw
2827 2779 0:235 /proc/meminfo /proc/meminfo ro,relatime master:462 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount
2828 2779 0:235 /proc/cpuinfo /proc/cpuinfo ro,relatime master:463 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount
3074 3073 0:308 / /run/containerd/io.containerd.runtime.v2.task/k8s.io/a33db564c39c8573ea92e3e5327be27ca205684cb965b8f03d9eaa0ae2b8aff7/rootfs/proc rw,nosuid,nodev,noexec,relatime master:608 - proc proc rw
3075 3074 0:235 /proc/meminfo /run/containerd/io.containerd.runtime.v2.task/k8s.io/a33db564c39c8573ea92e3e5327be27ca205684cb965b8f03d9eaa0ae2b8aff7/rootfs/proc/meminfo ro,relatime master:609 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount
3076 3074 0:235 /proc/cpuinfo /run/containerd/io.containerd.runtime.v2.task/k8s.io/a33db564c39c8573ea92e3e5327be27ca205684cb965b8f03d9eaa0ae2b8aff7/rootfs/proc/cpuinfo ro,relatime master:610 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount

Note that the failing case has two extra mount points. The normal case only has:

  • /run/containerd/xxx/rootfs/proc

Besides that one, the failing case additionally has (the sketch after this list shows how to spot them in /proc/self/mountinfo):

  • /run/containerd/xxx/rootfs/proc/meminfo

  • /run/containerd/xxx/rootfs/proc/cpuinfo
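A quick way to find such child mounts is to read /proc/self/mountinfo and print the mount ID, parent ID and mount point of every entry under rootfs/proc; a minimal sketch:

golang
package main

import (
    "bufio"
    "fmt"
    "os"
    "strings"
)

// Print every mount whose mount point contains "rootfs/proc", together with
// its mount ID and parent ID (fields 1 and 2 of /proc/self/mountinfo), so
// child mounts such as rootfs/proc/meminfo stand out immediately.
func main() {
    f, err := os.Open("/proc/self/mountinfo")
    if err != nil {
        panic(err)
    }
    defer f.Close()

    sc := bufio.NewScanner(f)
    for sc.Scan() {
        fields := strings.Fields(sc.Text())
        if len(fields) < 5 {
            continue
        }
        id, parent, mountPoint := fields[0], fields[1], fields[4]
        if strings.Contains(mountPoint, "rootfs/proc") {
            fmt.Printf("id=%s parent=%s %s\n", id, parent, mountPoint)
        }
    }
    if err := sc.Err(); err != nil {
        panic(err)
    }
}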

The Golang Version (a Breakthrough)

Besides the containerd version, we noticed that the build environments also differ: the old version was built with Golang 1.18, the new one with Golang 1.20. Following this lead, we built the same containerd source with different Golang versions.

containerd built with Golang 1.18 works fine.

containerd built with Golang 1.20 reproduces the problem.

This basically confirms that the problem is tied to the Golang version.

We then built Golang from source to find the commit that introduced the behavior, starting from the first commit of go1.19, 936c7fbc1c154964b6e3e8a7523bdf0c29b4e1b3[1].

Bisecting narrows it down to the offending commit:

Step 1:

4a4127bccc go1.19.1
43456202a1 go1.19
ad672e7ce1 go1.19rc2
bac4eb53d6 go1.19rc1
2cfbef4380 go1.19beta1 --> failed

d81dd12906 --> fail
797e8890 --> ok

2ea7d3461b go1.9.2 --> build error
7f40c1214d go1.9.1 --> build error
c8aec4095e go1.9 --> build error
048c9cfaac go1.9rc2 --> build error
65c6c88a94 go1.9rc1 --> build error
936c7fbc1c go1.19 start --> ok

Step 2:

1:* d81dd12906 --> fail
2:* 420a1fb223
88:* e0ae8540ab
735:* 0670afa1b3 --> fail
760:* 2c73f5f32f --> fail
772:* d2552037 --> fail

773:* 72e77a7f41 --> fail
774:* 9298f604f4 --> ok

775:* d65a41329e --> ok
778:* 517781b391 --> ok
785:* d85694ab4f --> ok
839:* 6f6942ef7a --> ok
992:* 1724077b78 --> ok
1136:* 79861be205
1269:* 797e889046 --> ok

In the end, the culprit is commit 72e77a7f41bbf45d466119444307fd3ae996e257[2].

72e77a7f41:

The previous commit, 9298f604f4:

This commit changed the implementation of sort.Sort(): ordering that had in practice been stable became unstable.

To verify the impact on containerd, we replaced every sort.Sort() in containerd with sort.Stable(), i.e. forced a stable sort, and the problem disappeared.

containerd sorts the mounts with sort.Sort(): pkg/cri/opts/spec_linux.go#L120[3]
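To see why the sort matters, consider a comparator in the spirit of containerd's OrderedMounts that orders destinations only by path depth: /proc/meminfo, /proc/cpuinfo and /run/containerd all compare equal, so their relative order is entirely up to the sort algorithm. sort.Stable keeps the input order of equal elements, while sort.Sort makes no such promise. The sketch below is illustrative, not containerd's actual code:

golang
package main

import (
    "fmt"
    "sort"
    "strings"
)

// byDepth orders mount destinations by the number of path components, in the
// spirit of containerd's OrderedMounts; destinations of equal depth compare
// as equal, so their final order depends on the sort algorithm used.
type byDepth []string

func (m byDepth) Len() int      { return len(m) }
func (m byDepth) Swap(i, j int) { m[i], m[j] = m[j], m[i] }
func (m byDepth) Less(i, j int) bool {
    return len(strings.Split(m[i], "/")) < len(strings.Split(m[j], "/"))
}

func main() {
    mounts := []string{"/proc/meminfo", "/proc/cpuinfo", "/run/containerd", "/var/lib/containerd"}

    a := append([]string(nil), mounts...)
    sort.Stable(byDepth(a)) // equal-depth entries keep their input order
    fmt.Println("stable:  ", a)

    b := append([]string(nil), mounts...)
    sort.Sort(byDepth(b)) // no stability guarantee: equal-depth entries may be reordered
    fmt.Println("unstable:", b)
}

In the failing case that is exactly what happened: with the Go 1.19+ sort implementation, /run/containerd ended up after the two /proc/xxx mounts in the list handed to runc.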

We also patched containerd to print the resulting mount list under stable and unstable sorting.

stable sort mounts:

unstable sort mounts:

Why does a different mount order make mounting proc fail?

Mount Restrictions (Root Cause)

The error returned to runc by the mount(2) syscall has its root cause in mount_too_revealing() returning true, which makes the mount fail:

Inside mount_too_revealing(), proc and sysfs go through mnt_already_visible() for this check.

"

参考 cve-2022-0492[4]

The kernel source:

c
static bool mnt_already_visible(struct mnt_namespace *ns,
                                const struct super_block *sb,
                                int *new_mnt_flags)
{
        int new_flags = *new_mnt_flags;
        struct mount *mnt;
        bool visible = false;

        down_read(&namespace_sem);
        lock_ns_list(ns);
        list_for_each_entry(mnt, &ns->list, mnt_list) {
                struct mount *child;
                int mnt_flags;

                ...
                list_for_each_entry(child, &mnt->mnt_mounts, mnt_child) {
                        struct inode *inode = child->mnt_mountpoint->d_inode;
                        /* Only worry about locked mounts */
                        if (!(child->mnt.mnt_flags & MNT_LOCKED))
                                continue;
                        /* Is the directory permanently empty? */
                        if (!is_empty_dir_inode(inode))
                                goto next;
                }
                /* Preserve the locked attributes */
                *new_mnt_flags |= mnt_flags & (MNT_LOCK_READONLY | \
                                               MNT_LOCK_ATIME);
                visible = true;
                goto found;
        next:   ;
        }
found:
        unlock_ns_list(ns);
        up_read(&namespace_sem);
        return visible;
}

mnt_already_visible() iterates over the new mount namespace and checks whether the existing proc/sysfs mount has child mount points. If it has child mounts and is therefore not fully visible to the current mount namespace, a fresh proc or sysfs cannot be mounted. The reason: procfs and sysfs contain a lot of global data and must not be mounted into a container without restriction.

"

mnt_already_visible will iterate the new mount namespace and check whether it has child mountpoint. If it has child mountpoint, it is not fully visible to this mount namespace so the procfs will not be mounted.
"

This reason is as following. The procfs and sysfs contains some global data, so the container should not touch. So mouting procfs and sysfs in new user namespace should be restricted.

This is a restriction imposed by the kernel.

Now look again at the mount information we collected from inside the container.

Normal case:

2409 2408 0:308 / /proc rw,nosuid,nodev,noexec,relatime master:461 - proc proc rw
2410 2409 0:235 /proc/cpuinfo /proc/cpuinfo ro,relatime master:462 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount
2411 2409 0:235 /proc/meminfo /proc/meminfo ro,relatime master:463 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount
2633 2632 0:308 / /run/containerd/io.containerd.runtime.v2.task/k8s.io/17c97f222c6a02eb46f170cdf5ed6621bee116836a57c6121de9f3818494182f/rootfs/proc rw,nosuid,nodev,noexec,relatime master:609 - proc proc rw
3010 3009 0:330 / /ebs/docker/165536.165536/fuse-overlayfs/70c276e0d397f4bebfea4239e967132baa2db0746181b0b7dc0ea7b24912f1e3/merged/proc rw,nosuid,nodev,noexec,relatime - proc proc rw

Failing case:

2779 2778 0:308 / /proc rw,nosuid,nodev,noexec,relatime master:461 - proc proc rw
2827 2779 0:235 /proc/meminfo /proc/meminfo ro,relatime master:462 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount
2828 2779 0:235 /proc/cpuinfo /proc/cpuinfo ro,relatime master:463 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount
3074 3073 0:308 / /run/containerd/io.containerd.runtime.v2.task/k8s.io/a33db564c39c8573ea92e3e5327be27ca205684cb965b8f03d9eaa0ae2b8aff7/rootfs/proc rw,nosuid,nodev,noexec,relatime master:608 - proc proc rw
3075 3074 0:235 /proc/meminfo /run/containerd/io.containerd.runtime.v2.task/k8s.io/a33db564c39c8573ea92e3e5327be27ca205684cb965b8f03d9eaa0ae2b8aff7/rootfs/proc/meminfo ro,relatime master:609 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount
3076 3074 0:235 /proc/cpuinfo /run/containerd/io.containerd.runtime.v2.task/k8s.io/a33db564c39c8573ea92e3e5327be27ca205684cb965b8f03d9eaa0ae2b8aff7/rootfs/proc/cpuinfo ro,relatime master:610 - fuse.lxcfs lxcfs rw,user_id=0,group_id=0,allow_other,force_umount

In the failing case, the parent of the two extra mount points is 3074, which is the rootfs/proc mount.

At this point, the root cause is clear.

runc mounts /proc/cpuinfo and /proc/meminfo first. Inside the outer container these destinations actually live under /run/containerd on the node (the container's rootfs sits below /run/containerd), and by coincidence the user also mounts the /run/containerd directory into the container.

So when /proc/cpuinfo and /proc/meminfo are mounted first, they become child mounts of /run/containerd/xxx/rootfs/proc, and when the /run/containerd directory is mounted afterwards they are propagated into the container along with it. If /run/containerd is mounted first, the problem does not exist.
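To make the propagation part concrete, here is a minimal sketch. It needs root on a scratch Linux machine, and the /tmp/rbind-demo paths are made up for the demo. It shows that a plain bind mount copies only the mount itself, while a recursive bind (MS_BIND|MS_REC, i.e. rbind) also carries along child mounts that already exist under the source; this is how the lxcfs mounts under rootfs/proc end up inside the container:

golang
package main

import (
    "fmt"
    "os"
    "strings"

    "golang.org/x/sys/unix"
)

func mustMkdir(p string) {
    if err := os.MkdirAll(p, 0o755); err != nil {
        panic(err)
    }
}

func mustMount(src, dst, fstype string, flags uintptr) {
    if err := unix.Mount(src, dst, fstype, flags, ""); err != nil {
        panic(err) // needs CAP_SYS_ADMIN
    }
}

func main() {
    // Source tree with a child mount underneath it.
    src := "/tmp/rbind-demo/src"
    mustMkdir(src + "/child")
    mustMount("tmpfs", src+"/child", "tmpfs", 0)

    // Plain bind: only the top-level mount is copied; src/child is not visible.
    bindDst := "/tmp/rbind-demo/bind"
    mustMkdir(bindDst)
    mustMount(src, bindDst, "", unix.MS_BIND)

    // Recursive bind (rbind): the existing child mount comes along with it.
    rbindDst := "/tmp/rbind-demo/rbind"
    mustMkdir(rbindDst)
    mustMount(src, rbindDst, "", unix.MS_BIND|unix.MS_REC)

    // Only the rbind target shows a child mount, just like rootfs/proc did in
    // the failing case. Clean up afterwards with: umount -R /tmp/rbind-demo/*
    mi, _ := os.ReadFile("/proc/self/mountinfo")
    for _, line := range strings.Split(string(mi), "\n") {
        if strings.Contains(line, "rbind-demo") {
            fmt.Println(line)
        }
    }
}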

Reproducing the Problem

  • Install lxcfs on the node:
shell
$ apt-get update && apt-get install lxcfs -y
  • Failing case: create a Pod in which /run/containerd is mounted later than /proc/cpuinfo and /proc/meminfo; docker cannot start a container
yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-mount-sort-stable-test-pod
  namespace: default
spec:
  hostNetwork: true
  containers:
  - image: ghcr.io/sctb512/docker-test:latest
    imagePullPolicy: Always
    name: docker
    securityContext:
      privileged: true
      runAsNonRoot: false
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/containerd
      mountPropagation: HostToContainer
      name: containerd-image
    - mountPath: /proc/meminfo
      name: lxcfs-proc-meminfo
      readOnly: true
    - mountPath: /proc/cpuinfo
      name: lxcfs-proc-cpuinfo
      readOnly: true
    - mountPath: /run/containerd
      name: containerd-dir
  volumes:
  - hostPath:
      path: /var/lib/lxc/lxcfs/proc/meminfo
      type: File
    name: lxcfs-proc-meminfo
  - hostPath:
      path: /var/lib/lxc/lxcfs/proc/cpuinfo
      type: File
    name: lxcfs-proc-cpuinfo
  - hostPath:
      path: /run/containerd
      type: Directory
    name: containerd-dir
  - hostPath:
      path: /var/lib/containerd
      type: Directory
    name: containerd-image

Start a container with docker:

shell
$ docker run --rm -it nginx:latest
  • Normal case: create a Pod in which /run/containerd is mounted earlier than /proc/cpuinfo and /proc/meminfo; docker can start containers
yaml
apiVersion: v1
kind: Pod
metadata:
  name: docker-mount-sort-stable-test-pod
  namespace: default
spec:
  hostNetwork: true
  containers:
  - image: ghcr.io/sctb512/docker-test:latest
    imagePullPolicy: Always
    name: docker
    securityContext:
      privileged: true
      runAsNonRoot: false
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/containerd
      mountPropagation: HostToContainer
      name: containerd-image
    - mountPath: /run/containerd
      name: containerd-dir
    - mountPath: /proc/meminfo
      name: lxcfs-proc-meminfo
      readOnly: true
    - mountPath: /proc/cpuinfo
      name: lxcfs-proc-cpuinfo
      readOnly: true
  volumes:
  - hostPath:
      path: /var/lib/lxc/lxcfs/proc/meminfo
      type: File
    name: lxcfs-proc-meminfo
  - hostPath:
      path: /var/lib/lxc/lxcfs/proc/cpuinfo
      type: File
    name: lxcfs-proc-cpuinfo
  - hostPath:
      path: /run/containerd
      type: Directory
    name: containerd-dir
  - hostPath:
      path: /var/lib/containerd
      type: Directory
    name: containerd-image

Summary

The old containerd binary was built with Golang 1.18, the new one with Golang 1.20. Go 1.19 commit 72e77a7f41bbf45d466119444307fd3ae996e257[5] changed sort.Sort from a (de facto) stable sort to an unstable one. containerd uses sort.Sort to order the mount points, so once sort.Sort became unstable, the order of the mounts containerd passes to runc changed.

The user's setup mounts the /run/containerd directory into the container, and the different mount orders change which child mounts runc sees when it mounts procfs:

  • If /proc/meminfo and /proc/cpuinfo are mounted before /run/containerd: containerd passes the mounts as rbind (like bind, but recursively binding any child mounts that already exist under the mount point), so meminfo and cpuinfo, being child mounts of /run/containerd/xxx/rootfs/proc, are carried along when /run/containerd is mounted. In other words, when runc mounts procfs, /run/containerd/xxx/rootfs/proc already has child mounts, and the mount is refused.

  • If /proc/meminfo and /proc/cpuinfo are mounted after /run/containerd: meminfo and cpuinfo are not mounted yet when runc mounts /run/containerd, so there are no child mounts under it at that point, and procfs can be mounted.

Trigger conditions:

  • The container mounts /proc/xxx paths and also mounts the node's /run/containerd directory

  • sysfs has the same kind of problem

Fixes:

  • On the containerd side: change sort.Sort to sort.Stable

pkg/cri/opts/spec_linux.go#L120[6]

An interesting detail: the containerd community's unit test for OrderedMounts uses sort.Stable, while the production code uses sort.Sort. Presumably sort.Sort was tacitly assumed to behave stably, an assumption that only broke with Golang 1.19, which is what triggered this problem.

pkg/cri/opts/spec_test.go#L44[7]

  • On the user side: in the Pod YAML, make sure the /run/containerd mount comes before any /proc/xxx mounts.

PR submitted upstream:

containerd/containerd/pull/10021[8]

Backported to the 1.6 branch:

containerd/containerd/pull/10045[9]

Official release: v1.6.32[10]

References

[1] 936c7fbc1c154964b6e3e8a7523bdf0c29b4e1b3
[2] 72e77a7f41bbf45d466119444307fd3ae996e257
[3] pkg/cri/opts/spec_linux.go#L120
[4] cve-2022-0492: https://terenceli.github.io/技术/2022/03/06/cve-2022-0492
[5] 72e77a7f41bbf45d466119444307fd3ae996e257
[6] pkg/cri/opts/spec_linux.go#L120
[7] pkg/cri/opts/spec_test.go#L44
[8] containerd/containerd/pull/10021
[9] containerd/containerd/pull/10045
[10] v1.6.32
