Quickly Setting Up Single-Node and Multi-Node MongoDB Replica Sets with Docker

Overview

Our project's MongoDB was recently upgraded to 4.4, which finally lets us use transactions. MongoDB, however, does not support transactions on a standalone node; at minimum a replica set is required.

A test environment does not need a full multi-node replica set, and fortunately MongoDB supports running a single-node replica set. This article summarizes how to set one up quickly with Docker.

I. Single-Node Replica Set

1: Pull the image

bash
docker pull mongo:7.0

2: Prepare the environment

bash
mkdir -p /home/docker/mongo/data    # data mount directory
mkdir -p /home/docker/mongo/logs    # log mount directory
mkdir -p /home/docker/mongo/config  # config mount directory
chmod 777 /home/docker/mongo/*      # grant permissions
docker run -d --name mongo_tmp -p 27017:27017 mongo:7.0
docker cp mongo_tmp:/etc/mongod.conf.orig /home/docker/mongo/config/mongod.conf  # copy the default config file
docker rm -f mongo_tmp  # force-remove the temporary container (it is still running)

3: Adjust the configuration
vim /home/docker/mongo/config/mongod.conf

yaml
# mongod.conf
# Where and how to store data.
storage:
  dbPath: /var/lib/mongodb
#  engine:
#  wiredTiger:

# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1  # change to 0.0.0.0 if you need to connect from outside the container (e.g. from the host)


# how the process runs
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
#security:
#operationProfiling:
replication:           # enable the replica set
  replSetName: rs      # replica set name
#sharding:
## Enterprise-Only Options:
#auditLog:

4: Start the container

bash
docker run -d -p 27017:27017 --name mongo \
-v /home/docker/mongo/data:/var/lib/mongodb \
-v /home/docker/mongo/logs:/var/log/mongodb \
-v /home/docker/mongo/config/mongod.conf:/etc/mongod.conf \
 mongo:7.0 -f /etc/mongod.conf

5: Initialize the replica set and verify

bash
[root@test home]# docker exec mongo mongosh --version
2.2.0
[root@test home]# docker exec -it mongo bash
root@4250323d6d90:/# mongosh
Current Mongosh Log ID: 6606093080fcdce72edb83af
Connecting to:          mongodb://127.0.0.1:27017/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.2.0
Using MongoDB:          7.0.7
Using Mongosh:          2.2.0
test> rs.initiate()  # initialize the replica set
{
  info2: 'no configuration specified. Using a default configuration for the set',
  me: '4250323d6d90:27017',
  ok: 1
}
rs [direct: primary] test> rs.status()  # check the replica set status
{
  set: 'rs',
  date: ISODate('2024-03-28T21:53:44.472Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 1,
  writeMajorityCount: 1,
  votingMembersCount: 1,
  writableVotingMembersCount: 1,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1711662813, i: 23 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-03-28T21:53:33.694Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1711662813, i: 23 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1711662813, i: 23 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1711662813, i: 23 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-03-28T21:53:33.694Z'),
    lastDurableWallTime: ISODate('2024-03-28T21:53:33.694Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1711662813, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-03-28T21:53:33.621Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1711662813, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1711662813, i: 1 }), t: Long('-1') },
    numVotesNeeded: 1,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    newTermStartDate: ISODate('2024-03-28T21:53:33.645Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-03-28T21:53:33.661Z')
  },
  members: [
    {
      _id: 0,
      name: '4250323d6d90:27017',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 381,
      optime: { ts: Timestamp({ t: 1711662813, i: 23 }), t: Long('1') },
      optimeDate: ISODate('2024-03-28T21:53:33.000Z'),
      lastAppliedWallTime: ISODate('2024-03-28T21:53:33.694Z'),
      lastDurableWallTime: ISODate('2024-03-28T21:53:33.694Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1711662813, i: 2 }),
      electionDate: ISODate('2024-03-28T21:53:33.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1711662813, i: 23 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1711662813, i: 23 })
}
rs [direct: primary] test> rs.config()
{
  _id: 'rs',
  version: 1,
  term: 1,
  members: [
    {
      _id: 0,
      host: '4250323d6d90:27017', # note: the host here is the container ID, which is wrong; a single-node replica set still works despite this, but a multi-node set would not
      arbiterOnly: false,
      buildIndexes: true,
      hidden: false,
      priority: 1,
      tags: {},
      secondaryDelaySecs: Long('0'),
      votes: 1
    }
  ],
  protocolVersion: Long('1'),
  writeConcernMajorityJournalDefault: true,
  settings: {
    chainingAllowed: true,
    heartbeatIntervalMillis: 2000,
    heartbeatTimeoutSecs: 10,
    electionTimeoutMillis: 10000,
    catchUpTimeoutMillis: -1,
    catchUpTakeoverDelayMillis: 30000,
    getLastErrorModes: {},
    getLastErrorDefaults: { w: 1, wtimeout: 0 },
    replicaSetId: ObjectId('6605e6dd4e61b074963931a1')
  }
}

6: Connecting from a Go program

The project uses the official MongoDB Go driver. Add connect=direct to the MongoDB URI, e.g. mongodb://127.0.0.1:27017/?connect=direct, so the client connects to it as a single mongod instance; after all, it is not a real multi-member replica set.
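
Below is a minimal sketch of such a connection with the official Go driver (go.mongodb.org/mongo-driver, v1.x API assumed), including a multi-document transaction that only works because the node is a replica set member. The database and collection names are illustrative, and the port must be reachable from wherever the program runs (see the bindIp note above).

go
package main

import (
	"context"
	"fmt"
	"time"

	"go.mongodb.org/mongo-driver/bson"
	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// connect=direct: talk to this single member directly instead of discovering the set.
	client, err := mongo.Connect(ctx, options.Client().ApplyURI("mongodb://127.0.0.1:27017/?connect=direct"))
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(ctx)

	// Transactions need a session and a replica set; a plain standalone mongod would reject this.
	session, err := client.StartSession()
	if err != nil {
		panic(err)
	}
	defer session.EndSession(ctx)

	coll := client.Database("testdb").Collection("orders") // illustrative names
	_, err = session.WithTransaction(ctx, func(sc mongo.SessionContext) (interface{}, error) {
		if _, err := coll.InsertOne(sc, bson.M{"item": "a", "qty": 1}); err != nil {
			return nil, err
		}
		return coll.InsertOne(sc, bson.M{"item": "b", "qty": 2})
	})
	if err != nil {
		panic(err)
	}
	fmt.Println("transaction committed")
}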

II. Multi-Node Replica Set

We will use three nodes as an example.

Start three containers by following the same steps as in Part I.

1: Container one

bash
mkdir -p /home/docker/mongo_rs1/data    # data mount directory
mkdir -p /home/docker/mongo_rs1/logs    # log mount directory
mkdir -p /home/docker/mongo_rs1/config  # config mount directory
chmod 777 /home/docker/mongo_rs1/*      # grant permissions

vim /home/docker/mongo_rs1/config/mongod.conf

yaml
storage:
  dbPath: /var/lib/mongodb
systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log
net:
  port: 27021
  bindIp: 0.0.0.0
processManagement:
  timeZoneInfo: /usr/share/zoneinfo
replication:           # enable the replica set
  replSetName: rs      # replica set name

bash
docker run -d -p 27021:27021 --name mongo-rs1 \
-v /home/docker/mongo_rs1/data:/var/lib/mongodb \
-v /home/docker/mongo_rs1/logs:/var/log/mongodb \
-v /home/docker/mongo_rs1/config/mongod.conf:/etc/mongod.conf \
 mongo:7.0 -f /etc/mongod.conf

2: Container two

Create the mount directories as for container one, and change the listening port in its config file to 27022.

bash
docker run -d -p 27022:27022 --name mongo-rs2 \
-v /home/docker/mongo_rs2/data:/var/lib/mongodb \
-v /home/docker/mongo_rs2/logs:/var/log/mongodb \
-v /home/docker/mongo_rs2/config/mongod.conf:/etc/mongod.conf \
 mongo:7.0 -f /etc/mongod.conf

3: Container three

Create the mount directories as for container one, and change the listening port in its config file to 27023.

bash
docker run -d -p 27023:27023 --name mongo-rs3 \
-v /home/docker/mongo_rs3/data:/var/lib/mongodb \
-v /home/docker/mongo_rs3/logs:/var/log/mongodb \
-v /home/docker/mongo_rs3/config/mongod.conf:/etc/mongod.conf \
 mongo:7.0 -f /etc/mongod.conf

Enter container one:

bash
[root@test home]# docker exec -it mongo-rs1 bash
root@74963372ad8f:/# mongosh --port 27021
Current Mongosh Log ID: 66061117cc4fbc0923db83af
Connecting to:          mongodb://127.0.0.1:27021/?directConnection=true&serverSelectionTimeoutMS=2000&appName=mongosh+2.2.0
Using MongoDB:          7.0.7
Using Mongosh:          2.2.0
test> rs.status()
MongoServerError[NotYetInitialized]: no replset config has been received
test> rs.initiate({_id:"rs",members : [{_id:0, host:"192.168.30.213:27021"},{_id:1, host:"192.168.30.213:27022"},{_id:2, host:"192.168.30.213:27023"}]}) 
{ ok: 1 }  #注意docker下多节点副本集初始化时一定指定配置,且host不能是127.0.0.1,
rs [direct: other] test> rs.status()
{
  set: 'rs',
  date: ISODate('2024-03-29T00:57:45.865Z'),
  myState: 1,
  term: Long('1'),
  syncSourceHost: '',
  syncSourceId: -1,
  heartbeatIntervalMillis: Long('2000'),
  majorityVoteCount: 2,
  writeMajorityCount: 2,
  votingMembersCount: 3,
  writableVotingMembersCount: 3,
  optimes: {
    lastCommittedOpTime: { ts: Timestamp({ t: 1711673864, i: 7 }), t: Long('1') },
    lastCommittedWallTime: ISODate('2024-03-29T00:57:44.102Z'),
    readConcernMajorityOpTime: { ts: Timestamp({ t: 1711673864, i: 7 }), t: Long('1') },
    appliedOpTime: { ts: Timestamp({ t: 1711673864, i: 7 }), t: Long('1') },
    durableOpTime: { ts: Timestamp({ t: 1711673864, i: 7 }), t: Long('1') },
    lastAppliedWallTime: ISODate('2024-03-29T00:57:44.102Z'),
    lastDurableWallTime: ISODate('2024-03-29T00:57:44.102Z')
  },
  lastStableRecoveryTimestamp: Timestamp({ t: 1711673852, i: 1 }),
  electionCandidateMetrics: {
    lastElectionReason: 'electionTimeout',
    lastElectionDate: ISODate('2024-03-29T00:57:43.488Z'),
    electionTerm: Long('1'),
    lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1711673852, i: 1 }), t: Long('-1') },
    lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1711673852, i: 1 }), t: Long('-1') },
    numVotesNeeded: 2,
    priorityAtElection: 1,
    electionTimeoutMillis: Long('10000'),
    numCatchUpOps: Long('0'),
    newTermStartDate: ISODate('2024-03-29T00:57:43.516Z'),
    wMajorityWriteAvailabilityDate: ISODate('2024-03-29T00:57:44.039Z')
  },
  members: [
    {
      _id: 0,
      name: '192.168.30.213:27021',
      health: 1,
      state: 1,
      stateStr: 'PRIMARY',
      uptime: 547,
      optime: { ts: Timestamp({ t: 1711673864, i: 7 }), t: Long('1') },
      optimeDate: ISODate('2024-03-29T00:57:44.000Z'),
      lastAppliedWallTime: ISODate('2024-03-29T00:57:44.102Z'),
      lastDurableWallTime: ISODate('2024-03-29T00:57:44.102Z'),
      syncSourceHost: '',
      syncSourceId: -1,
      infoMessage: 'Could not find member to sync from',
      electionTime: Timestamp({ t: 1711673863, i: 1 }),
      electionDate: ISODate('2024-03-29T00:57:43.000Z'),
      configVersion: 1,
      configTerm: 1,
      self: true,
      lastHeartbeatMessage: ''
    },
    {
      _id: 1,
      name: '192.168.30.213:27022',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 13,
      optime: { ts: Timestamp({ t: 1711673864, i: 7 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1711673864, i: 7 }), t: Long('1') },
      optimeDate: ISODate('2024-03-29T00:57:44.000Z'),
      optimeDurableDate: ISODate('2024-03-29T00:57:44.000Z'),
      lastAppliedWallTime: ISODate('2024-03-29T00:57:44.102Z'),
      lastDurableWallTime: ISODate('2024-03-29T00:57:44.102Z'),
      lastHeartbeat: ISODate('2024-03-29T00:57:45.509Z'),
      lastHeartbeatRecv: ISODate('2024-03-29T00:57:44.514Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.30.213:27021',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    },
    {
      _id: 2,
      name: '192.168.30.213:27023',
      health: 1,
      state: 2,
      stateStr: 'SECONDARY',
      uptime: 13,
      optime: { ts: Timestamp({ t: 1711673864, i: 7 }), t: Long('1') },
      optimeDurable: { ts: Timestamp({ t: 1711673864, i: 7 }), t: Long('1') },
      optimeDate: ISODate('2024-03-29T00:57:44.000Z'),
      optimeDurableDate: ISODate('2024-03-29T00:57:44.000Z'),
      lastAppliedWallTime: ISODate('2024-03-29T00:57:44.102Z'),
      lastDurableWallTime: ISODate('2024-03-29T00:57:44.102Z'),
      lastHeartbeat: ISODate('2024-03-29T00:57:45.509Z'),
      lastHeartbeatRecv: ISODate('2024-03-29T00:57:44.516Z'),
      pingMs: Long('0'),
      lastHeartbeatMessage: '',
      syncSourceHost: '192.168.30.213:27021',
      syncSourceId: 0,
      infoMessage: '',
      configVersion: 1,
      configTerm: 1
    }
  ],
  ok: 1,
  '$clusterTime': {
    clusterTime: Timestamp({ t: 1711673864, i: 7 }),
    signature: {
      hash: Binary.createFromBase64('AAAAAAAAAAAAAAAAAAAAAAAAAAA=', 0),
      keyId: Long('0')
    }
  },
  operationTime: Timestamp({ t: 1711673864, i: 7 })
}

Note: if the replica set was initialized with the wrong configuration, it can be adjusted with the rs.reconfig() command.
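
Once the three-node set is healthy, an application no longer uses connect=direct; instead it lists the members and the replica set name in the URI so the driver can discover the topology and follow the primary. A minimal sketch with the official Go driver (v1.x API assumed), reusing the example host 192.168.30.213 from above:

go
package main

import (
	"context"
	"fmt"
	"time"

	"go.mongodb.org/mongo-driver/mongo"
	"go.mongodb.org/mongo-driver/mongo/options"
	"go.mongodb.org/mongo-driver/mongo/readpref"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// List all members and name the set; the driver discovers the topology
	// and automatically routes writes to whichever node is primary.
	uri := "mongodb://192.168.30.213:27021,192.168.30.213:27022,192.168.30.213:27023/?replicaSet=rs"
	client, err := mongo.Connect(ctx, options.Client().ApplyURI(uri))
	if err != nil {
		panic(err)
	}
	defer client.Disconnect(ctx)

	// Confirm the primary is reachable.
	if err := client.Ping(ctx, readpref.Primary()); err != nil {
		panic(err)
	}
	fmt.Println("connected to replica set 'rs'")
}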
