Table of Contents

- I. Review the Existing MongoDB Sharded Cluster Configuration
  - 1. Review the sharded cluster directory layout on host mongodb01
  - 2. Check the shard1 replica set status
  - 3. Log in to the mongos router and view the shard list
- II. Create a Test Sharded Database and Collection on the Existing Cluster
  - 1. Reduce the chunk size to make sharding easier to observe
  - 2. Create the test sharded database and collection
- III. Install and Initialize the shard4 Shard Service
  - 1. Create the shard4 working directories (all three hosts)
  - 2. Edit the shard4 configuration file shard4.conf (all three hosts)
  - 3. Register shard4 as a systemd service and start it (all three hosts)
  - 4. Initialize the shard4 replica set (any one host)
  - 5. Create the administrator user on the shard4 primary
  - 6. Append security settings to the shard4 configuration file
  - 7. Restart the shard4 service (all three hosts)
  - 8. Log in to any shard4 node to verify authentication (any one host)
- IV. Add shard4 via the mongos Router Console
- V. Test the shard4 Shard
Shard node plan of the original MongoDB sharded cluster:

Host | IP address | CPU | RAM | mongos port | config port | shard1 port | shard2 port | shard3 port | OS and software versions
---|---|---|---|---|---|---|---|---|---
mongodb01 | 192.168.91.61 | 2*4 | 16GB | 27017 | 27019 | 27101 | 27102 | 27103 | CentOS 7.9, mongo 4.4.29, mongos 4.4.29, mongod 4.4.29, mongosh 2.2.29
mongodb02 | 192.168.91.62 | 2*4 | 16GB | 27017 | 27019 | 27101 | 27102 | 27103 | CentOS 7.9, mongo 4.4.29, mongos 4.4.29, mongod 4.4.29, mongosh 2.2.29
mongodb03 | 192.168.91.63 | 2*4 | 16GB | 27017 | 27019 | 27101 | 27102 | 27103 | CentOS 7.9, mongo 4.4.29, mongos 4.4.29, mongod 4.4.29, mongosh 2.2.29
Node plan after adding shard4 to the cluster:

Host | IP address | CPU | RAM | mongos port | config port | shard1 port | shard2 port | shard3 port | shard4 port | OS and software versions
---|---|---|---|---|---|---|---|---|---|---
mongodb01 | 192.168.91.61 | 2*4 | 16GB | 27017 | 27019 | 27101 | 27102 | 27103 | 27104 | CentOS 7.9, mongo 4.4.29, mongos 4.4.29, mongod 4.4.29, mongosh 2.2.29
mongodb02 | 192.168.91.62 | 2*4 | 16GB | 27017 | 27019 | 27101 | 27102 | 27103 | 27104 | CentOS 7.9, mongo 4.4.29, mongos 4.4.29, mongod 4.4.29, mongosh 2.2.29
mongodb03 | 192.168.91.63 | 2*4 | 16GB | 27017 | 27019 | 27101 | 27102 | 27103 | 27104 | CentOS 7.9, mongo 4.4.29, mongos 4.4.29, mongod 4.4.29, mongosh 2.2.29
Note: installing and deploying the MongoDB sharded cluster itself is not covered in this tutorial; please consult the relevant documentation if needed.
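To double-check that a host matches this port plan, you can list the listening sockets. A minimal sketch, assuming the iproute2 `ss` tool that ships with CentOS 7.9:

```bash
# List the MongoDB listeners on the current host and compare with the plan above.
ss -lntp | grep -E ':(27017|27019|2710[1-4])\s'
```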
I. Review the Existing MongoDB Sharded Cluster Configuration
1. Review the sharded cluster directory layout on host mongodb01
```bash
[root@mongodb01 ~]# ls -l /data/mongodb/
total 0
drwxr-xr-x 5 root root 60 Jun 26 08:43 configsvr
drwxr-xr-x 2 root root 21 Jun 25 09:42 keyfile
drwxr-xr-x 5 root root 57 Jun 26 08:44 mongos
drwxr-xr-x 5 root root 57 Jun 26 08:43 shard1
drwxr-xr-x 5 root root 57 Jun 26 08:44 shard2
drwxr-xr-x 5 root root 57 Jun 26 08:44 shard3
[root@mongodb01 ~]# ls -l /data/mongodb/keyfile/
total 4
-rw------- 1 root root 1024 Jun 25 09:42 keyfile
```
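For reference, a MongoDB cluster keyfile like the one shown above is normally generated once with openssl and distributed to every member with owner-only permissions. A sketch of that standard recipe (do not rerun it on a live cluster; all members must keep the same keyfile):

```bash
# Standard MongoDB keyfile recipe: 756 random bytes, base64-encoded,
# readable only by the user running mongod (root here, per the listing above).
openssl rand -base64 756 > /data/mongodb/keyfile/keyfile
chmod 600 /data/mongodb/keyfile/keyfile
```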
2. Check the shard1 replica set status
```bash
[root@mongodb01 ~]# mongosh --host 192.168.91.61 --port 27101 -u root -p 123456
shard1 [direct: primary] test> rs.status()
{
set: 'shard1',
date: ISODate('2024-06-26T03:14:27.798Z'),
myState: 1,
term: Long('20'),
syncSourceHost: '',
syncSourceId: -1,
heartbeatIntervalMillis: Long('2000'),
majorityVoteCount: 2,
writeMajorityCount: 2,
votingMembersCount: 3,
writableVotingMembersCount: 3,
optimes: {
lastCommittedOpTime: { ts: Timestamp({ t: 1719371666, i: 1 }), t: Long('20') },
lastCommittedWallTime: ISODate('2024-06-26T03:14:26.202Z'),
readConcernMajorityOpTime: { ts: Timestamp({ t: 1719371666, i: 1 }), t: Long('20') },
readConcernMajorityWallTime: ISODate('2024-06-26T03:14:26.202Z'),
appliedOpTime: { ts: Timestamp({ t: 1719371666, i: 1 }), t: Long('20') },
durableOpTime: { ts: Timestamp({ t: 1719371666, i: 1 }), t: Long('20') },
lastAppliedWallTime: ISODate('2024-06-26T03:14:26.202Z'),
lastDurableWallTime: ISODate('2024-06-26T03:14:26.202Z')
},
lastStableRecoveryTimestamp: Timestamp({ t: 1719371634, i: 1 }),
electionCandidateMetrics: {
lastElectionReason: 'stepUpRequestSkipDryRun',
lastElectionDate: ISODate('2024-06-26T03:14:06.174Z'),
electionTerm: Long('20'),
lastCommittedOpTimeAtElection: { ts: Timestamp({ t: 1719371643, i: 2 }), t: Long('19') },
lastSeenOpTimeAtElection: { ts: Timestamp({ t: 1719371643, i: 2 }), t: Long('19') },
numVotesNeeded: 2,
priorityAtElection: 1,
electionTimeoutMillis: Long('10000'),
priorPrimaryMemberId: 1,
numCatchUpOps: Long('0'),
newTermStartDate: ISODate('2024-06-26T03:14:06.194Z'),
wMajorityWriteAvailabilityDate: ISODate('2024-06-26T03:14:07.203Z')
},
electionParticipantMetrics: {
votedForCandidate: true,
electionTerm: Long('19'),
lastVoteDate: ISODate('2024-06-26T02:06:43.709Z'),
electionCandidateMemberId: 1,
voteReason: '',
lastAppliedOpTimeAtElection: { ts: Timestamp({ t: 1719367592, i: 1 }), t: Long('18') },
maxAppliedOpTimeInSet: { ts: Timestamp({ t: 1719367592, i: 1 }), t: Long('18') },
priorityAtElection: 1
},
members: [
{
_id: 0,
name: '192.168.91.61:27101',
health: 1,
state: 1,
stateStr: 'PRIMARY',
uptime: 9035,
optime: { ts: Timestamp({ t: 1719371666, i: 1 }), t: Long('20') },
optimeDate: ISODate('2024-06-26T03:14:26.000Z'),
lastAppliedWallTime: ISODate('2024-06-26T03:14:26.202Z'),
lastDurableWallTime: ISODate('2024-06-26T03:14:26.202Z'),
syncSourceHost: '',
syncSourceId: -1,
infoMessage: '',
electionTime: Timestamp({ t: 1719371646, i: 1 }),
electionDate: ISODate('2024-06-26T03:14:06.000Z'),
configVersion: 16,
configTerm: 20,
self: true,
lastHeartbeatMessage: ''
},
{
_id: 1,
name: '192.168.91.62:27101',
health: 1,
state: 2,
stateStr: 'SECONDARY',
uptime: 9022,
optime: { ts: Timestamp({ t: 1719371666, i: 1 }), t: Long('20') },
optimeDurable: { ts: Timestamp({ t: 1719371666, i: 1 }), t: Long('20') },
optimeDate: ISODate('2024-06-26T03:14:26.000Z'),
optimeDurableDate: ISODate('2024-06-26T03:14:26.000Z'),
lastAppliedWallTime: ISODate('2024-06-26T03:14:26.202Z'),
lastDurableWallTime: ISODate('2024-06-26T03:14:26.202Z'),
lastHeartbeat: ISODate('2024-06-26T03:14:26.210Z'),
lastHeartbeatRecv: ISODate('2024-06-26T03:14:27.224Z'),
pingMs: Long('0'),
lastHeartbeatMessage: '',
syncSourceHost: '192.168.91.61:27101',
syncSourceId: 0,
infoMessage: '',
configVersion: 16,
configTerm: 20
},
{
_id: 2,
name: '192.168.91.63:27101',
health: 1,
state: 2,
stateStr: 'SECONDARY',
uptime: 9021,
optime: { ts: Timestamp({ t: 1719371666, i: 1 }), t: Long('20') },
optimeDurable: { ts: Timestamp({ t: 1719371666, i: 1 }), t: Long('20') },
optimeDate: ISODate('2024-06-26T03:14:26.000Z'),
optimeDurableDate: ISODate('2024-06-26T03:14:26.000Z'),
lastAppliedWallTime: ISODate('2024-06-26T03:14:26.202Z'),
lastDurableWallTime: ISODate('2024-06-26T03:14:26.202Z'),
lastHeartbeat: ISODate('2024-06-26T03:14:26.213Z'),
lastHeartbeatRecv: ISODate('2024-06-26T03:14:26.216Z'),
pingMs: Long('0'),
lastHeartbeatMessage: '',
syncSourceHost: '192.168.91.62:27101',
syncSourceId: 1,
infoMessage: '',
configVersion: 16,
configTerm: 20
}
],
ok: 1,
'$gleStats': {
lastOpTime: Timestamp({ t: 0, i: 0 }),
electionId: ObjectId('7fffffff0000000000000014')
},
lastCommittedOpTime: Timestamp({ t: 1719371666, i: 1 }),
'$configServerState': { opTime: { ts: Timestamp({ t: 1719371662, i: 1 }), t: Long('8') } },
'$clusterTime': {
clusterTime: Timestamp({ t: 1719371666, i: 1 }),
signature: {
hash: Binary.createFromBase64('da0qYhTkZsQc4Idi8z3/ohfZto0=', 0),
keyId: Long('7384248065542062092')
}
},
operationTime: Timestamp({ t: 1719371666, i: 1 })
}
```
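The full rs.status() document is long; when you only care about member health, a one-liner in the same mongosh session condenses it (a sketch using plain JavaScript over the status document):

```bash
### Compact member overview: name, state, and health for each member
rs.status().members.map(m => ({ name: m.name, state: m.stateStr, health: m.health }))
```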
3. Log in to the mongos router and view the shard list
```bash
[root@mongodb01 ~]# mongosh --host 192.168.91.61 --port 27017 -u root -p 123456
[direct: mongos] test> sh.status()
shardingVersion
{ _id: 1, clusterId: ObjectId('667a1e7ef49fe55ecd0dac12') }
---
shards
[
{
_id: 'shard1',
host: 'shard1/192.168.91.61:27101,192.168.91.62:27101,192.168.91.63:27101',
state: 1
},
{
_id: 'shard2',
host: 'shard2/192.168.91.61:27102,192.168.91.62:27102,192.168.91.63:27102',
state: 1
},
{
_id: 'shard3',
host: 'shard3/192.168.91.61:27103,192.168.91.62:27103,192.168.91.63:27103',
state: 1
}
]
---
active mongoses
[ { '4.4.29': 3 } ]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
'Currently enabled': 'yes',
'Currently running': 'no',
'Failed balancer rounds in last 5 attempts': 0,
'Migration Results for the last 24 hours': {
'1': "Failed with error 'aborted', from shard3 to shard1",
'2': "Failed with error 'aborted', from shard3 to shard2",
'6': "Failed with error 'aborted', from shard1 to shard3",
'17': "Failed with error 'aborted', from shard2 to shard3",
'19': "Failed with error 'aborted', from shard1 to shard2",
'33': 'Success',
'55': "Failed with error 'aborted', from shard2 to shard1"
}
}
---
databases
[
{
database: { _id: 'config', primary: 'config', partitioned: true },
collections: {
'config.system.sessions': {
shardKey: { _id: 1 },
unique: false,
balancing: true,
chunkMetadata: [
{ shard: 'shard1', nChunks: 342 },
{ shard: 'shard2', nChunks: 341 },
{ shard: 'shard3', nChunks: 341 }
],
chunks: [
'too many chunks to print, use verbose if you want to force print'
],
tags: []
}
}
}
]
```
II. Create a Test Sharded Database and Collection on the Existing Cluster
1. Reduce the chunk size to make sharding easier to observe. The default chunk size is 64MB, and the valid range is an integer from 1MB to 1024MB; "value: 1" below sets each chunk to 1MB.
```bash
mongosh --host 192.168.91.61 --port 27017 -u root -p 123456
use config
db.settings.updateOne( { _id: "chunksize" },{ $set: { _id: "chunksize", value: 1 } },{ upsert: true } )
db.settings.find()
```
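If the upsert worked, the find should return the new chunksize document. Roughly the expected shape (a sketch; mongosh formatting may differ slightly):

```bash
### Expected document in config.settings after the update
[ { _id: 'chunksize', value: 1 } ]
```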
2. Create the test sharded database and collection
```bash
mongosh --host 192.168.91.61 --port 27017 -u root -p 123456
### Switch to the admin database
use admin
### Enable sharding on the school database
db.runCommand( { enablesharding : "school" } )
### Shard the student collection of the school database on the id field, using a hashed shard key. (MongoDB creates the _id primary key automatically; if a collection has no better candidate field, _id is a recommended choice. This example shards on a custom id field.)
db.runCommand( { shardcollection : "school.student", key : { id: "hashed" } } )
### Create the school database by inserting data into the student collection
use school
for (var i = 1; i <= 10000; i++){
db.student.insert({id:i,"001":"xiaoming"});
}
db.student.stats().count;
### Inspect how the chunks are distributed across the shards
[direct: mongos] school> sh.status()
shardingVersion
{ _id: 1, clusterId: ObjectId('667a1e7ef49fe55ecd0dac12') }
---
shards
[
{
_id: 'shard1',
host: 'shard1/192.168.91.61:27101,192.168.91.62:27101,192.168.91.63:27101',
state: 1
},
{
_id: 'shard2',
host: 'shard2/192.168.91.61:27102,192.168.91.62:27102,192.168.91.63:27102',
state: 1
},
{
_id: 'shard3',
host: 'shard3/192.168.91.61:27103,192.168.91.62:27103,192.168.91.63:27103',
state: 1
}
]
---
active mongoses
[ { '4.4.29': 3 } ]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
'Currently enabled': 'yes',
'Currently running': 'no',
'Failed balancer rounds in last 5 attempts': 0,
'Migration Results for the last 24 hours': {
'1': "Failed with error 'aborted', from shard3 to shard1",
'2': "Failed with error 'aborted', from shard3 to shard2",
'6': "Failed with error 'aborted', from shard1 to shard3",
'17': "Failed with error 'aborted', from shard2 to shard3",
'19': "Failed with error 'aborted', from shard1 to shard2",
'33': 'Success',
'55': "Failed with error 'aborted', from shard2 to shard1"
}
}
---
databases
[
{
database: { _id: 'config', primary: 'config', partitioned: true },
collections: {
'config.system.sessions': {
shardKey: { _id: 1 },
unique: false,
balancing: true,
chunkMetadata: [
{ shard: 'shard1', nChunks: 342 },
{ shard: 'shard2', nChunks: 341 },
{ shard: 'shard3', nChunks: 341 }
],
chunks: [
'too many chunks to print, use verbose if you want to force print'
],
tags: []
}
}
},
{
database: {
_id: 'school',
primary: 'shard3',
partitioned: true,
version: {
uuid: UUID('212d6713-cd2b-485e-9eeb-46710a64ebbd'),
lastMod: 1
},
lastMovedTimestamp: Timestamp({ t: 1719372422, i: 2 })
},
collections: {
'school.student': {
shardKey: { id: 'hashed' },
unique: false,
balancing: true,
chunkMetadata: [
{ shard: 'shard1', nChunks: 2 },
{ shard: 'shard2', nChunks: 2 },
{ shard: 'shard3', nChunks: 2 }
],
chunks: [
{ min: { id: MinKey() }, max: { id: Long('-6148914691236517204') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 0 }) },
{ min: { id: Long('-6148914691236517204') }, max: { id: Long('-3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 1 }) },
{ min: { id: Long('-3074457345618258602') }, max: { id: Long('0') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 2 }) },
{ min: { id: Long('0') }, max: { id: Long('3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 3 }) },
{ min: { id: Long('3074457345618258602') }, max: { id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 4 }) },
{ min: { id: Long('6148914691236517204') }, max: { id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 5 }) }
],
tags: []
}
}
}
]
```
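Besides sh.status(), mongosh's getShardDistribution() helper gives a per-shard summary of data size and document counts for a single collection, which is an easier way to eyeball balance:

```bash
### Per-shard data/document distribution for school.student
use school
db.student.getShardDistribution()
```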
III. Install and Initialize the shard4 Shard Service
1. Create the shard4 working directories (run on all three hosts: mongodb01/mongodb02/mongodb03)
```bash
mkdir -p /data/mongodb/shard4/{db,log,conf}
```
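If passwordless SSH is set up between the hosts, you can run the command on all three machines in one loop instead of logging in to each. A sketch, assuming root SSH access and resolvable hostnames:

```bash
# Create the shard4 directory tree on all three hosts in one pass.
for h in mongodb01 mongodb02 mongodb03; do
  ssh "$h" 'mkdir -p /data/mongodb/shard4/{db,log,conf}'
done
```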
2. Edit the shard4 configuration file shard4.conf (run on all three hosts)
Note: compared with the existing shards, only the port and the replSetName change; the replSetName must not duplicate that of any other shard. Security authentication is added later (step 6).
```bash
cat > /data/mongodb/shard4/conf/shard4.conf << EOF
systemLog:
  destination: file
  logAppend: true
  path: /data/mongodb/shard4/log/shard4.log
storage:
  dbPath: /data/mongodb/shard4/db
  journal:
    enabled: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard4/shard4.pid
net:
  port: 27104
  bindIp: 0.0.0.0
replication:
  replSetName: shard4
sharding:
  clusterRole: shardsvr
EOF
```
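Before wiring the service up, mongod can parse the file and print the effective configuration without starting, which catches YAML indentation mistakes early (the --outputConfig option is available since MongoDB 4.2):

```bash
# Parse shard4.conf, print the resolved settings, and exit.
mongod --config /data/mongodb/shard4/conf/shard4.conf --outputConfig
```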
3. Register shard4 as a systemd service and start it (run on all three hosts)
```bash
cat > /usr/lib/systemd/system/shard4.service << EOF
[Unit]
Description=MongoDB Database Server
After=network-online.target
[Service]
Type=forking
PIDFile=/data/mongodb/shard4/shard4.pid
ExecStart=/usr/local/bin/mongod --config /data/mongodb/shard4/conf/shard4.conf
ExecReload=/bin/kill -s HUP \$MAINPID
ExecStop=/usr/local/bin/mongod --config /data/mongodb/shard4/conf/shard4.conf --shutdown
PrivateTmp=true
LimitNOFILE=65535
LimitNPROC=65535
Restart=always
RestartSec=1
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable shard4
systemctl start shard4
systemctl status shard4
```
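Two quick checks after starting: the unit should be enabled for boot, and mongod should be listening on 27104:

```bash
# Confirm boot-time enablement and the listening socket on each host.
systemctl is-enabled shard4
ss -lntp | grep 27104
```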
4. Initialize the shard4 replica set from the service console (run on any one host)
```bash
mongosh --host 192.168.91.61 --port 27104
### Paste the following to initialize the shard4 replica set
rs.initiate(
{
_id: "shard4",
members: [
{_id: 0,host: "192.168.91.61:27104"},
{_id: 1,host: "192.168.91.62:27104"},
{_id: 2,host: "192.168.91.63:27104"}
]
}
)
### Expected success result
{ ok: 1 }
### Check the replica set status
rs.status()
```
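Step 5 below must be run against the primary. Either scan the rs.status() output for stateStr: 'PRIMARY', or ask directly (db.hello() is supported by mongosh against MongoDB 4.4):

```bash
### Prints the host:port of the current shard4 primary
db.hello().primary
```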
5. Create the administrator user on the shard4 primary
Note: the administrator username and password created on shard4 must match those used on the other shards.
```bash
mongosh --host 192.168.91.61 --port 27104
### Check which host currently holds the shard4 primary
rs.status()
### Switch to the admin database
use admin
### Create the administrator user root
db.createUser({user:"root",pwd:"123456",roles:["root"]})
### Verify that the root user can authenticate
db.auth("root","123456")
### List the users that have been created
db.system.users.find();
```
6. Append the following security settings to the shard4 configuration file
```bash
cat >> /data/mongodb/shard4/conf/shard4.conf << EOF
security:
  keyFile: /data/mongodb/keyfile/keyfile
  authorization: enabled
EOF
```
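The keyfile path above must already exist on every host with owner-only permissions (it does here, since it is shared with the original cluster). A quick sanity check before restarting:

```bash
# Expect mode 600 and owner root, matching the listing in section I.
stat -c '%a %U' /data/mongodb/keyfile/keyfile
```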
7. Restart the shard4 service (run on all three hosts)
```bash
systemctl stop shard4
systemctl start shard4
systemctl status shard4
```
8. Log in to any shard4 node to verify authentication (run on any one host)
```bash
mongosh --host 192.168.91.61 --port 27104 -u root -p 123456
show dbs
```
IV. Add shard4 via the mongos Router Console
1. Log in to the mongos console on any host and add the shard4 shard (run on any one host)
```bash
mongosh --host 192.168.91.61 --port 27017 -u root -p 123456
### List the current shard members
use admin
db.runCommand({ listshards : 1})
### Add the shard4 replica set as a new shard
sh.addShard( "shard4/192.168.91.61:27104,192.168.91.62:27104,192.168.91.63:27104")
### List the shard members again to confirm shard4 was added
use admin
db.runCommand({ listshards : 1})
```
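sh.addShard() should acknowledge the new shard. Roughly the expected response (a sketch; fields such as $clusterTime and operationTime are omitted):

```bash
### Abridged sh.addShard() response
{ shardAdded: 'shard4', ok: 1 }
```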
V. Test the shard4 Shard
1. View the cluster's sharding details: shard4 is now a cluster member, but the school.student collection has no chunks on it yet
```bash
mongosh --host 192.168.91.61 --port 27017 -u root -p 123456
[direct: mongos] test> sh.status()
shardingVersion
{ _id: 1, clusterId: ObjectId('667a1e7ef49fe55ecd0dac12') }
---
shards
[
{
_id: 'shard1',
host: 'shard1/192.168.91.61:27101,192.168.91.62:27101,192.168.91.63:27101',
state: 1
},
{
_id: 'shard2',
host: 'shard2/192.168.91.61:27102,192.168.91.62:27102,192.168.91.63:27102',
state: 1
},
{
_id: 'shard3',
host: 'shard3/192.168.91.61:27103,192.168.91.62:27103,192.168.91.63:27103',
state: 1
},
{
_id: 'shard4',
host: 'shard4/192.168.91.61:27104,192.168.91.62:27104,192.168.91.63:27104',
state: 1
}
]
---
active mongoses
[ { '4.4.29': 3 } ]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
'Currently enabled': 'yes',
'Currently running': 'no',
'Failed balancer rounds in last 5 attempts': 0,
'Migration Results for the last 24 hours': {
'1': "Failed with error 'aborted', from shard1 to shard2",
'2': "Failed with error 'aborted', from shard1 to shard3",
'8': "Failed with error 'aborted', from shard2 to shard1",
'17': "Failed with error 'aborted', from shard2 to shard3",
'273': 'Success'
}
}
---
databases
[
{
database: { _id: 'config', primary: 'config', partitioned: true },
collections: {
'config.system.sessions': {
shardKey: { _id: 1 },
unique: false,
balancing: true,
chunkMetadata: [
{ shard: 'shard1', nChunks: 256 },
{ shard: 'shard2', nChunks: 256 },
{ shard: 'shard3', nChunks: 256 },
{ shard: 'shard4', nChunks: 256 }
],
chunks: [
'too many chunks to print, use verbose if you want to force print'
],
tags: []
}
}
},
{
database: {
_id: 'school',
primary: 'shard3',
partitioned: true,
version: {
uuid: UUID('212d6713-cd2b-485e-9eeb-46710a64ebbd'),
lastMod: 1
},
lastMovedTimestamp: Timestamp({ t: 1719372422, i: 2 })
},
collections: {
'school.student': {
shardKey: { id: 'hashed' },
unique: false,
balancing: true,
chunkMetadata: [
{ shard: 'shard1', nChunks: 2 },
{ shard: 'shard2', nChunks: 2 },
{ shard: 'shard3', nChunks: 2 }
],
chunks: [
{ min: { id: MinKey() }, max: { id: Long('-6148914691236517204') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 0 }) },
{ min: { id: Long('-6148914691236517204') }, max: { id: Long('-3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 1 }) },
{ min: { id: Long('-3074457345618258602') }, max: { id: Long('0') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 2 }) },
{ min: { id: Long('0') }, max: { id: Long('3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 1, i: 3 }) },
{ min: { id: Long('3074457345618258602') }, max: { id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 4 }) },
{ min: { id: Long('6148914691236517204') }, max: { id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 5 }) }
],
tags: []
}
}
}
]
```
2. Insert a large batch of data into school.student so that the balancer automatically spreads load onto shard4
```bash
mongosh --host 192.168.91.61 --port 27017 -u root -p 123456
use school
for (var i = 10001; i <= 200000; i++){
db.student.insert({id:i,"002":"zhangshang"});
}
db.student.stats().count;
```
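Inserting 190,000 documents one insert() at a time round-trips to mongos for every document. An equivalent but much faster variant batches the writes with insertMany(); a sketch using the same data shape:

```bash
### Same data, written in batches of 10,000 documents
use school
for (var start = 10001; start <= 200000; start += 10000) {
  var batch = [];
  for (var i = start; i < start + 10000 && i <= 200000; i++) {
    batch.push({ id: i, "002": "zhangshang" });
  }
  db.student.insertMany(batch);
}
db.student.stats().count;
```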
3. View the sharding details again: school.student chunks now extend onto shard4
```bash
mongosh --host 192.168.91.61 --port 27017 -u root -p 123456
[direct: mongos] school> sh.status()
shardingVersion
{ _id: 1, clusterId: ObjectId('667a1e7ef49fe55ecd0dac12') }
---
shards
[
{
_id: 'shard1',
host: 'shard1/192.168.91.61:27101,192.168.91.62:27101,192.168.91.63:27101',
state: 1
},
{
_id: 'shard2',
host: 'shard2/192.168.91.61:27102,192.168.91.62:27102,192.168.91.63:27102',
state: 1
},
{
_id: 'shard3',
host: 'shard3/192.168.91.61:27103,192.168.91.62:27103,192.168.91.63:27103',
state: 1
},
{
_id: 'shard4',
host: 'shard4/192.168.91.61:27104,192.168.91.62:27104,192.168.91.63:27104',
state: 1
}
]
---
active mongoses
[ { '4.4.29': 3 } ]
---
autosplit
{ 'Currently enabled': 'yes' }
---
balancer
{
'Currently enabled': 'yes',
'Failed balancer rounds in last 5 attempts': 0,
'Currently running': 'no',
'Migration Results for the last 24 hours': {
'1': "Failed with error 'aborted', from shard1 to shard2",
'2': "Failed with error 'aborted', from shard1 to shard3",
'8': "Failed with error 'aborted', from shard2 to shard1",
'17': "Failed with error 'aborted', from shard2 to shard3",
'276': 'Success'
}
}
---
databases
[
{
database: { _id: 'config', primary: 'config', partitioned: true },
collections: {
'config.system.sessions': {
shardKey: { _id: 1 },
unique: false,
balancing: true,
chunkMetadata: [
{ shard: 'shard1', nChunks: 256 },
{ shard: 'shard2', nChunks: 256 },
{ shard: 'shard3', nChunks: 256 },
{ shard: 'shard4', nChunks: 256 }
],
chunks: [
'too many chunks to print, use verbose if you want to force print'
],
tags: []
}
}
},
{
database: {
_id: 'school',
primary: 'shard3',
partitioned: true,
version: {
uuid: UUID('212d6713-cd2b-485e-9eeb-46710a64ebbd'),
lastMod: 1
},
lastMovedTimestamp: Timestamp({ t: 1719372422, i: 2 })
},
collections: {
'school.student': {
shardKey: { id: 'hashed' },
unique: false,
balancing: true,
chunkMetadata: [
{ shard: 'shard1', nChunks: 3 },
{ shard: 'shard2', nChunks: 3 },
{ shard: 'shard3', nChunks: 3 },
{ shard: 'shard4', nChunks: 3 }
],
chunks: [
{ min: { id: MinKey() }, max: { id: Long('-7688756510863042004') }, 'on shard': 'shard4', 'last modified': Timestamp({ t: 2, i: 0 }) },
{ min: { id: Long('-7688756510863042004') }, max: { id: Long('-6148914691236517204') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 2, i: 1 }) },
{ min: { id: Long('-6148914691236517204') }, max: { id: Long('-4610244297072658853') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 8 }) },
{ min: { id: Long('-4610244297072658853') }, max: { id: Long('-3074457345618258602') }, 'on shard': 'shard1', 'last modified': Timestamp({ t: 1, i: 9 }) },
{ min: { id: Long('-3074457345618258602') }, max: { id: Long('-1543944560524976222') }, 'on shard': 'shard4', 'last modified': Timestamp({ t: 3, i: 0 }) },
{ min: { id: Long('-1543944560524976222') }, max: { id: Long('0') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 3, i: 1 }) },
{ min: { id: Long('0') }, max: { id: Long('1533922912875085655') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 2, i: 4 }) },
{ min: { id: Long('1533922912875085655') }, max: { id: Long('3074457345618258602') }, 'on shard': 'shard2', 'last modified': Timestamp({ t: 2, i: 5 }) },
{ min: { id: Long('3074457345618258602') }, max: { id: Long('4591276082391765156') }, 'on shard': 'shard4', 'last modified': Timestamp({ t: 4, i: 0 }) },
{ min: { id: Long('4591276082391765156') }, max: { id: Long('6148914691236517204') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 4, i: 1 }) },
{ min: { id: Long('6148914691236517204') }, max: { id: Long('7681229324810461991') }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 12 }) },
{ min: { id: Long('7681229324810461991') }, max: { id: MaxKey() }, 'on shard': 'shard3', 'last modified': Timestamp({ t: 1, i: 13 }) }
],
tags: []
}
}
}
]
```
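As a final check, confirm the balancer has settled and look at the per-shard distribution again, which should now include shard4:

```bash
### Balancer should be enabled but not actively migrating once it settles
sh.getBalancerState()
sh.isBalancerRunning()
### Per-shard distribution for school.student, now spanning four shards
use school
db.student.getShardDistribution()
```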