MongoDB 3.2.7 for RHEL 6.4: Replica Set + Sharded Cluster Deployment

    Today a colleague reported a problem while deploying a MongoDB replica set + sharded cluster: shard initialization appeared to require hostnames (i.e., something equivalent to DNS resolution), which would make the cluster dependent on a DNS server and introduce a single point of failure. To verify this, I deployed a MongoDB 3.2.7 replica set + sharded cluster on RHEL 6.4 using IP addresses only. The result: if the replica sets are initialized with IP addresses, IP addresses must also be used consistently when configuring the shards; if /etc/hosts or DNS resolution is used, hostnames must be used consistently in both places. Either way, the cluster deploys successfully. My personal recommendation is still to use DNS or /etc/hosts name resolution, because a hostname can stay fixed while a host's IP address is quite likely to change.
   The deployment process for the MongoDB 3.2.7 replica set + sharded cluster on RHEL 6.4 is as follows:
   First, make sure the RHEL 6.4 environment supports MongoDB 3.2.7. For the single-instance installation procedure and the issues you may run into, see:
   MongoDB 3.2 for RHEL6.4 installation (http://blog.itpub.net/29357786/viewspace-2119891/)
   This experiment involves three servers:
   Role: replica set arbiter + sharded cluster config server (192.168.144.111)
[root@arbiter ~]# hostname
arbiter
[root@arbiter ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[root@arbiter ~]# 
    Role: firstset replica set primary + shard 1 (192.168.144.130)
[root@mongo2 ~]# hostname
mongo2
[root@mongo2 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[root@mongo2 ~]# 
    Role: secondset replica set primary + shard 2 (192.168.144.120)
[root@mongo1 ~]# hostname
mongo1
[root@mongo1 ~]# cat /etc/redhat-release 
Red Hat Enterprise Linux Server release 6.4 (Santiago)
[root@mongo1 ~]# 
    Directories to create on the arbiter / config server host (192.168.144.111)
# data directories
/opt/mongo/data/dns_arbiter1
/opt/mongo/data/dns_arbiter2
/opt/mongo/data/dns_sdconfig1
/opt/mongo/data/dns_sdconfig2
# log directories
/opt/mongo/logs/dns_aribter1
/opt/mongo/logs/dns_aribter2
/opt/mongo/logs/dns_config1
/opt/mongo/logs/dns_config2
   Directories to create on the firstset primary / shard 1 host (192.168.144.130)
# data directories
/opt/mongo/data/dns_repset1
/opt/mongo/data/dns_repset2
/opt/mongo/data/dns_shard2
# log directories
/opt/mongo/logs/dns_sd2
/opt/mongo/logs/firstset
/opt/mongo/logs/secondset
   Directories to create on the secondset primary / shard 2 host (192.168.144.120)
# data directories
/opt/mongo/data/dns_repset1
/opt/mongo/data/dns_repset2
/opt/mongo/data/dns_shard1
# log directories
/opt/mongo/logs/dns_sd1
/opt/mongo/logs/firstset
/opt/mongo/logs/secondset
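The per-host layouts above can be created in one pass with a short shell script. A minimal sketch for the arbiter/config-server host (BASE is set to /tmp/mongo here purely for illustration; on the real host it would be /opt/mongo, and the directory list changes per host):

```shell
#!/bin/sh
# Create the data and log directories for the arbiter/config-server host.
# BASE=/tmp/mongo for illustration only; use BASE=/opt/mongo on the real host.
BASE=/tmp/mongo
for d in data/dns_arbiter1 data/dns_arbiter2 data/dns_sdconfig1 data/dns_sdconfig2 \
         logs/dns_aribter1 logs/dns_aribter2 logs/dns_config1 logs/dns_config2
do
  mkdir -p "$BASE/$d"
done
ls "$BASE/data"
```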
   Step 1: Initialize replica set 1 (firstset)
# Start the replica set 1 mongod instances; commands for each of the three nodes
On arbiter:
mongod --dbpath /opt/mongo/data/dns_arbiter1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter1/aribter1.log --logappend --nojournal --directoryperdb

On mongo1:
mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb

On mongo2:
mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb

# Startup logs from the three nodes
[mongo@arbiter logs]$ mongod --dbpath /opt/mongo/data/dns_arbiter1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter1/aribter1.log --logappend --nojournal --directoryperdb
2016-11-14T18:53:14.653-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:14.653-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 6566
child process started successfully, parent exiting
[mongo@arbiter logs]$

[mongo@mongo1 logs]$ mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
2016-11-14T18:53:26.838-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:26.838-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 10478
child process started successfully, parent exiting
[mongo@mongo1 logs]$ 

[mongo@mongo2 logs]$ mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
2016-11-14T18:53:43.808-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:43.808-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 6374
child process started successfully, parent exiting
[mongo@mongo2 logs]$
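For reference, the long command lines above can also be expressed as a mongod YAML configuration file and started with mongod -f <file>. A sketch of the firstset member on mongo1/mongo2 (the option names are standard mongod 3.2 settings; the file name is an arbitrary choice):

```yaml
# firstset.conf -- equivalent to the mongod command line used above
processManagement:
  fork: true
storage:
  dbPath: /opt/mongo/data/dns_repset1
  directoryPerDB: true
  journal:
    enabled: false               # --nojournal
net:
  port: 10001
  http:
    RESTInterfaceEnabled: true   # --rest
replication:
  replSetName: firstset
  oplogSizeMB: 512
systemLog:
  destination: file
  path: /opt/mongo/logs/firstset/firstset.log
  logAppend: true
```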
# Initialize replica set 1 (firstset)
# Commands to run on mongo2
config={_id:"firstset",members:[]}
config.members.push({_id:0,host:"192.168.144.120:10001"})
config.members.push({_id:1,host:"192.168.144.130:10001"})
config.members.push({_id:2,host:"192.168.144.111:10001",arbiterOnly:true})
rs.initiate(config);
# mongo2 session log
[mongo@mongo2 logs]$ mongo --port 10001
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:10001/test
Server has startup warnings: 
2016-11-14T18:53:43.808-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T18:53:43.808-0800 I CONTROL  [main] **          enabling http interface
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] 
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] 
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-11-14T18:53:43.921-0800 I CONTROL  [initandlisten] 
> config={_id:"firstset",members:[]}
{ "_id" : "firstset", "members" : [ ] }
> config.members.push({_id:0,host:"192.168.144.120:10001"})
1
> config.members.push({_id:1,host:"192.168.144.130:10001"})
2
> config.members.push({_id:2,host:"192.168.144.111:10001",arbiterOnly:true})
3
> rs.initiate(config);
{ "ok" : 1 }
firstset:OTHER> use dns_testdb
switched to db dns_testdb
firstset:PRIMARY> rs.isMaster()
{
"hosts" : [
"192.168.144.120:10001",
"192.168.144.130:10001"
],
"arbiters" : [
"192.168.144.111:10001"
],
"setName" : "firstset",
"setVersion" : 1,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.144.130:10001",
"me" : "192.168.144.130:10001",
"electionId" : ObjectId("7fffffff0000000000000001"),
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("2016-11-15T03:07:06.392Z"),
"maxWireVersion" : 4,
"minWireVersion" : 0,
"ok" : 1
}
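The config document passed to rs.initiate() above follows a simple pattern: data-bearing members first, then the arbiter. A plain-JavaScript sketch of a helper that builds the same structure (the helper name is hypothetical; only the resulting document matters):

```javascript
// Build a replica set config document like the one assembled interactively above:
// data-bearing members get sequential _id values, the arbiter goes last.
function buildReplSetConfig(setName, memberHosts, arbiterHost) {
  var config = { _id: setName, members: [] };
  memberHosts.forEach(function (host, i) {
    config.members.push({ _id: i, host: host });
  });
  if (arbiterHost) {
    config.members.push({ _id: memberHosts.length, host: arbiterHost, arbiterOnly: true });
  }
  return config;
}

var firstset = buildReplSetConfig(
  "firstset",
  ["192.168.144.120:10001", "192.168.144.130:10001"],
  "192.168.144.111:10001"
);
// In the mongo shell the result would be passed to rs.initiate(firstset).
console.log(JSON.stringify(firstset));
```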
# Insert 100,000 test documents into firstset via mongo2
firstset:OTHER> use dns_testdb
switched to db dns_testdb
firstset:PRIMARY> animal = ["dog", "tiger", "cat", "lion", "elephant", "bird", "horse", "pig", "rabbit", "cow", "dragon", "snake"];
[
"dog",
"tiger",
"cat",
"lion",
"elephant",
"bird",
"horse",
"pig",
"rabbit",
"cow",
"dragon",
"snake"
]
firstset:PRIMARY> for(var i=0; i<100000; i++){
...   name = animal[Math.floor(Math.random()*animal.length)];
...   user_id = i;
...   boolean = [true, false][Math.floor(Math.random()*2)];
...   added_at = new Date();
...   number = Math.floor(Math.random()*10001);
...   db.test_collection.save({"name":name, "user_id":user_id, "boolean": boolean, "added_at":added_at, "number":number });
... }
WriteResult({ "nInserted" : 1 })
firstset:PRIMARY> show collections
test_collection
firstset:PRIMARY> db.test_collection.findOne();
{
"_id" : ObjectId("582a7c095490e553bc98919e"),
"name" : "snake",
"user_id" : 0,
"boolean" : false,
"added_at" : ISODate("2016-11-15T03:07:53.561Z"),
"number" : 746
}
firstset:PRIMARY> db.test_collection.count();
100000
firstset:PRIMARY> show dbs
dns_testdb  0.004GB
local       0.006GB
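The insert loop above builds each document from a random animal name, a sequential user_id, a random boolean, a timestamp, and a random number in 0..10000. The generation logic in isolation, as plain JavaScript (the makeTestDoc helper is hypothetical, and the loop builds 100 documents instead of 100,000 just to keep the sketch fast):

```javascript
var animal = ["dog", "tiger", "cat", "lion", "elephant", "bird",
              "horse", "pig", "rabbit", "cow", "dragon", "snake"];

// Build one test document with the same shape as the insert loop above.
function makeTestDoc(i) {
  return {
    name: animal[Math.floor(Math.random() * animal.length)],
    user_id: i,
    boolean: [true, false][Math.floor(Math.random() * 2)],
    added_at: new Date(),
    number: Math.floor(Math.random() * 10001)   // 0..10000 inclusive
  };
}

// In the mongo shell each document would go through
// db.test_collection.save(makeTestDoc(i)).
var docs = [];
for (var i = 0; i < 100; i++) {
  docs.push(makeTestDoc(i));
}
console.log(docs.length);
```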
   Step 2: Initialize sharding
# Start the config server processes; commands for each of the three nodes
On arbiter:
mongod --configsvr --dbpath /opt/mongo/data/dns_sdconfig1 --port 20001 --fork --logpath /opt/mongo/logs/dns_config1/config1.log --logappend

On mongo1:
mongod --configsvr --dbpath /opt/mongo/data/dns_shard1 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd1/sd1_mymongo1.log --logappend

On mongo2:
mongod --configsvr --dbpath /opt/mongo/data/dns_shard2 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd2/sd1_mymongo2.log --logappend

# Startup logs from the three nodes
[mongo@arbiter dns_config1]$ mongod --configsvr --dbpath /opt/mongo/data/dns_sdconfig1 --port 20001 --fork --logpath /opt/mongo/logs/dns_config1/config1.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 7038
child process started successfully, parent exiting
[mongo@arbiter dns_config1]$

[mongo@mongo1 dns_shard1]$ mongod --configsvr --dbpath /opt/mongo/data/dns_shard1 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd1/sd1_mymongo1.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 11566
child process started successfully, parent exiting
[mongo@mongo1 dns_shard1]$

[mongo@mongo2 logs]$ mongod --configsvr --dbpath /opt/mongo/data/dns_shard2 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd2/sd1_mymongo2.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 6670
child process started successfully, parent exiting
[mongo@mongo2 logs]$

# Start the mongos processes on mongo1 and mongo2; the same command runs on both nodes
mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
# mongos startup logs from both nodes
[mongo@mongo1 logs]$ mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 14689
child process started successfully, parent exiting
[mongo@mongo1 logs]$

[mongo@mongo2 logs]$ mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 7093
child process started successfully, parent exiting
[mongo@mongo2 logs]$
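The mongos command line above can likewise be written as a YAML config file (a sketch; sharding.configDB and sharding.chunkSize are the mongos 3.2 equivalents of --configdb and --chunkSize, and the file name is an arbitrary choice). The 1 MB chunk size is deliberately tiny so the balancer has visible work to do in this test:

```yaml
# mongos.conf -- equivalent to the mongos command line used above
processManagement:
  fork: true
net:
  port: 27017
sharding:
  configDB: 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001
  chunkSize: 1                 # MB; intentionally small for this test
systemLog:
  destination: file
  path: /opt/mongo/logs/dns_sd.log
  logAppend: true
```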

# On mongo1, add firstset to the cluster as a shard
Commands to run on mongo1:
mongo --port 27017
use admin
db.runCommand( { addShard : "firstset/192.168.144.120:10001,192.168.144.130:10001,192.168.144.111:10001" } )
# mongo1 session log

[mongo@mongo1 logs]$ mongo --port 27017
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:27017/test
mongos> use admin
switched to db admin
mongos> db.runCommand( { addShard : "firstset/192.168.144.120:10001,192.168.144.130:10001,192.168.144.111:10001" } )
{ "shardAdded" : "firstset", "ok" : 1 }
mongos> 
   Step 3: Initialize replica set 2 (secondset)
# Start the replica set 2 mongod instances; commands for each of the three nodes
On arbiter:
mongod --dbpath /opt/mongo/data/dns_arbiter2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter2/aribter2.log --logappend --nojournal --directoryperdb

On mongo1:
mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb

On mongo2:
mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb

# Startup logs from the three nodes
[mongo@arbiter dns_aribter2]$ mongod --dbpath /opt/mongo/data/dns_arbiter2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter2/aribter2.log --logappend --nojournal --directoryperdb
2016-11-14T20:32:29.478-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:29.478-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 10192
child process started successfully, parent exiting
[mongo@arbiter dns_aribter2]$ 

[mongo@mongo1 secondset]$ mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
2016-11-14T20:32:38.786-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:38.786-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 15770
child process started successfully, parent exiting
[mongo@mongo1 secondset]$

[mongo@mongo2 secondset]$ mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
2016-11-14T20:32:53.327-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:53.327-0800 I CONTROL  [main] **          enabling http interface
about to fork child process, waiting until server is ready for connections.
forked process: 7344
child process started successfully, parent exiting
[mongo@mongo2 secondset]$
# Initialize replica set 2 (secondset)
Commands to run on mongo1:
mongo 192.168.144.120:30001/admin
config={_id:"secondset",members:[]}
config.members.push({_id:0,host:"192.168.144.120:30001"})
config.members.push({_id:1,host:"192.168.144.130:30001"})
config.members.push({_id:2,host:"192.168.144.111:30001",arbiterOnly:true})
rs.initiate(config);
# mongo1 session log (the stray "firstset:PRIMARY>" text pasted into the prompt below causes two ReferenceErrors, which look odd but do not affect the operation)
[mongo@mongo1 secondset]$ mongo 192.168.144.120:30001/admin
MongoDB shell version: 3.2.7
connecting to: 192.168.144.120:30001/admin
Server has startup warnings: 
2016-11-14T20:32:38.786-0800 I CONTROL  [main] ** WARNING: --rest is specified without --httpinterface,
2016-11-14T20:32:38.786-0800 I CONTROL  [main] **          enabling http interface
2016-11-14T20:32:38.858-0800 I CONTROL  [initandlisten] 
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] 
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2016-11-14T20:32:38.859-0800 I CONTROL  [initandlisten] 
> config={_id:"secondset",members:[]}
{ "_id" : "secondset", "members" : [ ] }
> config.members.push({_id:0,host:"192.168.144.120:30001"})
1
> config.members.push({_id:1,host:"192.168.144.130:30001"})
2
> config.members.push({_id:2,host:"192.168.144.111:30001",arbiterOnly:true})
3
> rs.initiate(config);
{ "ok" : 1 }
secondset:OTHER> firstset:PRIMARY> rs.isMaster()
2016-11-14T20:33:32.215-0800 E QUERY    [thread1] ReferenceError: PRIMARY is not defined :
@(shell):1:10
secondset:SECONDARY> firstset:PRIMARY> rs.isMaster();
2016-11-14T20:33:40.892-0800 E QUERY    [thread1] ReferenceError: PRIMARY is not defined :
@(shell):1:10
secondset:PRIMARY> rs.isMaster();
{
"hosts" : [
"192.168.144.120:30001",
"192.168.144.130:30001"
],
"arbiters" : [
"192.168.144.111:30001"
],
"setName" : "secondset",
"setVersion" : 1,
"ismaster" : true,
"secondary" : false,
"primary" : "192.168.144.120:30001",
"me" : "192.168.144.120:30001",
"electionId" : ObjectId("7fffffff0000000000000001"),
"maxBsonObjectSize" : 16777216,
"maxMessageSizeBytes" : 48000000,
"maxWriteBatchSize" : 1000,
"localTime" : ISODate("2016-11-15T04:34:35.210Z"),
"maxWireVersion" : 4,
"minWireVersion" : 0,
"ok" : 1
}
secondset:PRIMARY> 
   Step 4: Add secondset to the sharded cluster
# Commands to run on mongo1
mongo --port 27017
use admin
db.runCommand( { addShard : "secondset/192.168.144.120:30001,192.168.144.130:30001,192.168.144.111:30001" } )
# mongo1 session log
[mongo@mongo1 logs]$ mongo --port 27017
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:27017/test
mongos> use admin
mongos> db.runCommand( { addShard : "secondset/192.168.144.120:30001,192.168.144.130:30001,192.168.144.111:30001" } )
{ "shardAdded" : "secondset", "ok" : 1 }
mongos> db.runCommand({listShards:1})
{
"shards" : [
{
"_id" : "firstset",
"host" : "firstset/192.168.144.120:10001,192.168.144.130:10001"
},
{
"_id" : "secondset",
"host" : "secondset/192.168.144.120:30001,192.168.144.130:30001"
}
],
"ok" : 1
}
mongos> 
   Step 5: Enable sharding on the test database dns_testdb and shard the collection
# Commands to run on mongo1
mongo --port 27017
use admin
sh.enableSharding("dns_testdb");
db.runCommand({"shardcollection":"dns_testdb.test_collection","key":{"_id":1}});
# mongo1 session log

[mongo@mongo1 logs]$ mongo --port 27017
MongoDB shell version: 3.2.7
connecting to: 127.0.0.1:27017/test
mongos> use admin
switched to db admin
mongos> sh.enableSharding("dns_testdb");
{ "ok" : 1 }
mongos> db.runCommand({"shardcollection":"dns_testdb.test_collection","key":{"_id":1}});
{ "collectionsharded" : "dns_testdb.test_collection", "ok" : 1 }
# Inspect the replica set + sharded cluster metadata
mongos> use config
switched to db config
mongos>  db.shards.find();
{ "_id" : "firstset", "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
{ "_id" : "secondset", "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
mongos>
mongos> use config
switched to db config
mongos> db.printShardingStatus()
--- Sharding Status --- 
  sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("582a888bd617c6f7926f8843")
}
  shards:
{  "_id" : "firstset",  "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
{  "_id" : "secondset",  "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
  active mongoses:
"3.2.7" : 2
  balancer:
Currently enabled:  yes
Currently running:  yes
Balancer lock taken at Mon Nov 14 2016 21:34:45 GMT-0800 (PST) by mongo2:27017:1479182487:368039610:Balancer:970925433
Collections with active migrations: 
dns_testdb.test_collection started at Mon Nov 14 2016 21:34:45 GMT-0800 (PST)
Failed balancer rounds in last 5 attempts:  0
Migration Results for the last 24 hours: 
6 : Success
  databases:
{  "_id" : "dns_testdb",  "primary" : "firstset",  "partitioned" : true }
dns_testdb.test_collection
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
firstset 13
secondset 6
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("582a7c0c5490e553bc98a683") } on : secondset Timestamp(2, 0) 
{ "_id" : ObjectId("582a7c0c5490e553bc98a683") } -->> { "_id" : ObjectId("582a7c0e5490e553bc98bb69") } on : secondset Timestamp(3, 0) 
{ "_id" : ObjectId("582a7c0e5490e553bc98bb69") } -->> { "_id" : ObjectId("582a7c115490e553bc98d04f") } on : secondset Timestamp(4, 0) 
{ "_id" : ObjectId("582a7c115490e553bc98d04f") } -->> { "_id" : ObjectId("582a7c135490e553bc98e535") } on : secondset Timestamp(5, 0) 
{ "_id" : ObjectId("582a7c135490e553bc98e535") } -->> { "_id" : ObjectId("582a7c155490e553bc98fa1b") } on : secondset Timestamp(6, 0) 
{ "_id" : ObjectId("582a7c155490e553bc98fa1b") } -->> { "_id" : ObjectId("582a7c175490e553bc990f01") } on : secondset Timestamp(7, 0) 
{ "_id" : ObjectId("582a7c175490e553bc990f01") } -->> { "_id" : ObjectId("582a7c1a5490e553bc9923e7") } on : firstset Timestamp(7, 1) 
{ "_id" : ObjectId("582a7c1a5490e553bc9923e7") } -->> { "_id" : ObjectId("582a7c1c5490e553bc9938cd") } on : firstset Timestamp(1, 7) 
{ "_id" : ObjectId("582a7c1c5490e553bc9938cd") } -->> { "_id" : ObjectId("582a7c1e5490e553bc994db3") } on : firstset Timestamp(1, 8) 
{ "_id" : ObjectId("582a7c1e5490e553bc994db3") } -->> { "_id" : ObjectId("582a7c215490e553bc996299") } on : firstset Timestamp(1, 9) 
{ "_id" : ObjectId("582a7c215490e553bc996299") } -->> { "_id" : ObjectId("582a7c235490e553bc99777f") } on : firstset Timestamp(1, 10) 
{ "_id" : ObjectId("582a7c235490e553bc99777f") } -->> { "_id" : ObjectId("582a7c255490e553bc998c65") } on : firstset Timestamp(1, 11) 
{ "_id" : ObjectId("582a7c255490e553bc998c65") } -->> { "_id" : ObjectId("582a7c275490e553bc99a14b") } on : firstset Timestamp(1, 12) 
{ "_id" : ObjectId("582a7c275490e553bc99a14b") } -->> { "_id" : ObjectId("582a7c2a5490e553bc99b631") } on : firstset Timestamp(1, 13) 
{ "_id" : ObjectId("582a7c2a5490e553bc99b631") } -->> { "_id" : ObjectId("582a7c2c5490e553bc99cb17") } on : firstset Timestamp(1, 14) 
{ "_id" : ObjectId("582a7c2c5490e553bc99cb17") } -->> { "_id" : ObjectId("582a7c2e5490e553bc99dffd") } on : firstset Timestamp(1, 15) 
{ "_id" : ObjectId("582a7c2e5490e553bc99dffd") } -->> { "_id" : ObjectId("582a7c305490e553bc99f4e3") } on : firstset Timestamp(1, 16) 
{ "_id" : ObjectId("582a7c305490e553bc99f4e3") } -->> { "_id" : ObjectId("582a7c335490e553bc9a09c9") } on : firstset Timestamp(1, 17) 
{ "_id" : ObjectId("582a7c335490e553bc9a09c9") } -->> { "_id" : { "$maxKey" : 1 } } on : firstset Timestamp(1, 18) 
mongos> 
mongos> db.printShardingStatus()
--- Sharding Status --- 
  sharding version: {
"_id" : 1,
"minCompatibleVersion" : 5,
"currentVersion" : 6,
"clusterId" : ObjectId("582a888bd617c6f7926f8843")
}
  shards:
{  "_id" : "firstset",  "host" : "firstset/192.168.144.120:10001,192.168.144.130:10001" }
{  "_id" : "secondset",  "host" : "secondset/192.168.144.120:30001,192.168.144.130:30001" }
  active mongoses:
"3.2.7" : 2
  balancer:
Currently enabled:  yes
Currently running:  no
Failed balancer rounds in last 5 attempts:  0
Migration Results for the last 24 hours: 
9 : Success
  databases:
{  "_id" : "dns_testdb",  "primary" : "firstset",  "partitioned" : true }
dns_testdb.test_collection
shard key: { "_id" : 1 }
unique: false
balancing: true
chunks:
firstset 10
secondset 9
{ "_id" : { "$minKey" : 1 } } -->> { "_id" : ObjectId("582a7c0c5490e553bc98a683") } on : secondset Timestamp(2, 0) 
{ "_id" : ObjectId("582a7c0c5490e553bc98a683") } -->> { "_id" : ObjectId("582a7c0e5490e553bc98bb69") } on : secondset Timestamp(3, 0) 
{ "_id" : ObjectId("582a7c0e5490e553bc98bb69") } -->> { "_id" : ObjectId("582a7c115490e553bc98d04f") } on : secondset Timestamp(4, 0) 
{ "_id" : ObjectId("582a7c115490e553bc98d04f") } -->> { "_id" : ObjectId("582a7c135490e553bc98e535") } on : secondset Timestamp(5, 0) 
{ "_id" : ObjectId("582a7c135490e553bc98e535") } -->> { "_id" : ObjectId("582a7c155490e553bc98fa1b") } on : secondset Timestamp(6, 0) 
{ "_id" : ObjectId("582a7c155490e553bc98fa1b") } -->> { "_id" : ObjectId("582a7c175490e553bc990f01") } on : secondset Timestamp(7, 0) 
{ "_id" : ObjectId("582a7c175490e553bc990f01") } -->> { "_id" : ObjectId("582a7c1a5490e553bc9923e7") } on : secondset Timestamp(8, 0) 
{ "_id" : ObjectId("582a7c1a5490e553bc9923e7") } -->> { "_id" : ObjectId("582a7c1c5490e553bc9938cd") } on : secondset Timestamp(9, 0) 
{ "_id" : ObjectId("582a7c1c5490e553bc9938cd") } -->> { "_id" : ObjectId("582a7c1e5490e553bc994db3") } on : secondset Timestamp(10, 0) 
{ "_id" : ObjectId("582a7c1e5490e553bc994db3") } -->> { "_id" : ObjectId("582a7c215490e553bc996299") } on : firstset Timestamp(10, 1) 
{ "_id" : ObjectId("582a7c215490e553bc996299") } -->> { "_id" : ObjectId("582a7c235490e553bc99777f") } on : firstset Timestamp(1, 10) 
{ "_id" : ObjectId("582a7c235490e553bc99777f") } -->> { "_id" : ObjectId("582a7c255490e553bc998c65") } on : firstset Timestamp(1, 11) 
{ "_id" : ObjectId("582a7c255490e553bc998c65") } -->> { "_id" : ObjectId("582a7c275490e553bc99a14b") } on : firstset Timestamp(1, 12) 
{ "_id" : ObjectId("582a7c275490e553bc99a14b") } -->> { "_id" : ObjectId("582a7c2a5490e553bc99b631") } on : firstset Timestamp(1, 13) 
{ "_id" : ObjectId("582a7c2a5490e553bc99b631") } -->> { "_id" : ObjectId("582a7c2c5490e553bc99cb17") } on : firstset Timestamp(1, 14) 
{ "_id" : ObjectId("582a7c2c5490e553bc99cb17") } -->> { "_id" : ObjectId("582a7c2e5490e553bc99dffd") } on : firstset Timestamp(1, 15) 
{ "_id" : ObjectId("582a7c2e5490e553bc99dffd") } -->> { "_id" : ObjectId("582a7c305490e553bc99f4e3") } on : firstset Timestamp(1, 16) 
{ "_id" : ObjectId("582a7c305490e553bc99f4e3") } -->> { "_id" : ObjectId("582a7c335490e553bc9a09c9") } on : firstset Timestamp(1, 17) 
{ "_id" : ObjectId("582a7c335490e553bc9a09c9") } -->> { "_id" : { "$maxKey" : 1 } } on : firstset Timestamp(1, 18) 
mongos> 

This completes the verification. Summary: a replica set + sharded cluster can be deployed using IP addresses only, but be consistent: if the replica sets are initialized with IP addresses, the shards must be configured with IP addresses as well (and likewise for hostnames).
Appendix: the mongod/mongos processes on the three nodes after deployment:
[mongo@arbiter ~]$ ps -ef|grep mongo
root      6125  5922  0 18:17 pts/5    00:00:00 su - mongo
mongo     6126  6125  0 18:17 pts/5    00:00:00 -bash
mongo     6566     1  0 18:53 ?        00:01:55 mongod --dbpath /opt/mongo/data/dns_arbiter1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter1/aribter1.log --logappend --nojournal --directoryperdb
mongo     7038     1  0 19:22 ?        00:01:47 mongod --configsvr --dbpath /opt/mongo/data/dns_sdconfig1 --port 20001 --fork --logpath /opt/mongo/logs/dns_config1/config1.log --logappend
mongo    10192     1  0 20:32 ?        00:00:56 mongod --dbpath /opt/mongo/data/dns_arbiter2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/dns_aribter2/aribter2.log --logappend --nojournal --directoryperdb
mongo    12503  6126  7 23:03 pts/5    00:00:00 ps -ef
mongo    12504  6126  0 23:03 pts/5    00:00:00 grep mongo
[mongo@arbiter ~]$

[root@mongo1 ~]# ps -ef|grep mongo
root      9467  9040  0 18:20 pts/4    00:00:00 su - mongo
mongo     9468  9467  0 18:20 pts/4    00:00:00 -bash
mongo    10478     1  1 18:53 ?        00:04:20 mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
mongo    11566     1  0 19:24 ?        00:01:29 mongod --configsvr --dbpath /opt/mongo/data/dns_shard1 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd1/sd1_mymongo1.log --logappend
mongo    14689     1  0 20:01 ?        00:00:35 mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
mongo    14793  9468  0 20:02 pts/4    00:00:00 mongo --port 27017
root     15383 15365  0 20:21 pts/0    00:00:00 su - mongo
mongo    15384 15383  0 20:21 pts/0    00:00:00 -bash
mongo    15770     1  1 20:32 ?        00:01:55 mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
mongo    15796 15384  0 20:32 pts/0    00:00:00 mongo 192.168.144.120:30001/admin
root     20392 18350  0 23:03 pts/1    00:00:00 grep mongo
[root@mongo1 ~]# 

[root@mongo2 ~]# ps -ef|grep mongo
root      6187  3834  0 18:18 pts/1    00:00:00 su - mongo
mongo     6188  6187  0 18:18 pts/1    00:00:00 -bash
mongo     6374     1  1 18:53 ?        00:04:26 mongod --dbpath /opt/mongo/data/dns_repset1 --port 10001 --replSet firstset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/firstset/firstset.log --logappend --nojournal --directoryperdb
mongo     6401  6188  0 18:55 pts/1    00:00:28 mongo --port 10001
root      6589  6534  0 19:16 pts/2    00:00:00 su - mongo
mongo     6590  6589  0 19:16 pts/2    00:00:00 -bash
mongo     6670     1  0 19:25 ?        00:01:26 mongod --configsvr --dbpath /opt/mongo/data/dns_shard2 --port 20001 --fork --logpath /opt/mongo/logs/dns_sd2/sd1_mymongo2.log --logappend
mongo     7093     1  0 20:01 ?        00:00:34 mongos --configdb 192.168.144.111:20001,192.168.144.120:20001,192.168.144.130:20001 --port 27017 --chunkSize 1 --fork --logpath /opt/mongo/logs/dns_sd.log --logappend
mongo     7344     1  1 20:32 ?        00:01:53 mongod --dbpath /opt/mongo/data/dns_repset2 --port 30001 --replSet secondset --oplogSize 512 --rest --fork --logpath /opt/mongo/logs/secondset/secondset.log --logappend --nojournal --directoryperdb
mongo     7666  6590  0 21:11 pts/2    00:00:00 mongo --port 27017
root      8253  7909  0 23:04 pts/0    00:00:00 grep mongo
[root@mongo2 ~]# 

