Some MongoDB notes
https://www.percona.com/doc/percona-server-for-mongodb/LATEST/changed_in_34.html
The MongoRocks storage engine is now based on RocksDB 4.13.5.
RocksDB is Facebook's storage engine.
The MySQL variant is called MyRocks;
the MongoDB variant is called MongoRocks.
The MongoDB version used in this article is 3.0.7, which should be the mmapv1 engine.
MongoDB docs: https://docs.mongodb.com/manual/reference/method/rs.remove/
Apart from the local database, everything on the primary is replicated to the secondaries, including users, authentication information and so on.
Percona Server for MongoDB documentation:
https://www.percona.com/doc/percona-server-for-mongodb/LATEST/install/yum.html
https://www.percona.com/doc/percona-server-for-mongodb/LATEST/install/tarball.html
https://www.percona.com/doc/percona-server-for-mongodb/LATEST/index.html
https://www.percona.com/downloads/percona-server-mongodb-3.4/
MongoDB clients for Windows:
Robo 3T, formerly Robomongo (free)
https://robomongo.org/
MongoChef (paid)
mongochef-x64.msi, a MongoDB administration tool
It can connect to both replica sets and sharded clusters, and it does not crash
It needs MongoDB installed on Windows so that it can invoke mongo.exe
Built-in roles and privileges:
https://docs.mongodb.com/manual/reference/built-in-roles/
Database User Roles
    read                    #read-only access to data
    readWrite               #read and write access to data
Database Administration Roles
    dbAdmin                 #perform administrative operations on the current DB
    dbOwner                 #perform any operation on the current DB
    userAdmin               #manage users on the current DB
Cluster Administration Roles
    clusterAdmin            #highest level of cluster administration
    clusterManager          #manage and monitor the cluster
    clusterMonitor          #monitor the cluster; read-only access for monitoring tools
    hostManager             #manage servers
Backup and Restoration Roles
    backup
    restore
All-Database Roles
    readAnyDatabase         #read data in all databases
    readWriteAnyDatabase    #read and write data in all databases
    userAdminAnyDatabase    #manage users in all databases, but not the databases themselves
    dbAdminAnyDatabase      #administer all databases (database administration only)
Internal Roles
    __system                #never assign this role to a user
Superuser Roles
    root                    #full privileges
User access control
All users of an instance are stored in the system.users collection of the admin database.
//no privileges yet
rsshard0:RECOVERING> show dbs;
2015-12-15T16:35:10.504+0800 E QUERY    Error: listDatabases failed:{ "ok" : 0, "errmsg" : "not authorized on admin to execute command { listDatabases: 1.0 }", "code" : 13 }
    at Error (<anonymous>)
    at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
    at shellHelper.show (src/mongo/shell/utils.js:630:33)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
rsshard0:RECOVERING> show dbs
2015-12-15T16:35:13.263+0800 E QUERY    Error: listDatabases failed:{ "ok" : 0, "errmsg" : "not authorized on admin to execute command { listDatabases: 1.0 }", "code" : 13 }
    at Error (<anonymous>)
    at Mongo.getDBs (src/mongo/shell/mongo.js:47:15)
    at shellHelper.show (src/mongo/shell/utils.js:630:33)
    at shellHelper (src/mongo/shell/utils.js:524:36)
    at (shellhelp2):1:1 at src/mongo/shell/mongo.js:47
rsshard0:RECOVERING> exit
//edit the config file
vi /data/replset0/config/rs0.conf
port=27017
dbpath=/data/mongodb/mongodb27017/data
logpath=/data/mongodb/mongodb27017/logs/mongo.log
pidfilepath=/data/mongodb/mongodb27017/logs/mongo.pid
#profile = 1 #enable profiling to analyse slow queries
slowms = 1000 #slow query threshold in milliseconds; the default is 100 ms
fork=true #run in the background; mongod forks a child process after startup
logappend=true #error log mode: true appends to the log file, the default overwrites it
oplogSize=2048 #once mongod has created the oplog, changing oplogSize no longer affects its size
directoryperdb=true
storageEngine=wiredTiger
wiredTigerCacheSizeGB=4 #cache size (roughly the buffer pool)
syncdelay=30 #how often data is flushed to disk via fsync; default 60 seconds
wiredTigerCollectionBlockCompressor=snappy #data compression; snappy is a compression algorithm from Google
journal=true #journal (the redo log)
#replSet=ck1 #replica set name; every host in the same replica set must use the same name
#auth = true #require authentication to access the databases
#shardsvr=true #whether this instance is a shard of a sharded cluster
//create a user with the root role
rsshard0:PRIMARY> use admin
rsshard0:PRIMARY> db.createUser({user:"lyhabc",pwd:"123456",roles:[{role:"root",db:"admin"}]})
Successfully added user: { "user" : "lyhabc", "roles" : [ { "role" : "root", "db" : "admin" } ] }
/usr/local/mongodb/bin/mongod --config /data/replset0/config/rs0.conf
mongo --port 4000 -u lyhabc -p 123456 --authenticationDatabase admin
# cat /data/replset0/log/rs0.log
2015-12-15T17:07:12.388+0800 I COMMAND  [conn38] command admin.$cmd command: replSetHeartbeat { replSetHeartbeat: "rsshard0", pv: 1, v: 2, from: "192.168.14.198:4000", fromId: 2, checkEmpty: false } ntoreturn:1 keyUpdates:0 writeConflicts:0 numYields:0 reslen:142 locks:{} 19ms
2015-12-15T17:07:13.595+0800 I NETWORK  [conn37] end connection 192.168.14.221:43932 (1 connection now open)
2015-12-15T17:07:13.596+0800 I NETWORK  [initandlisten] connection accepted from 192.168.14.221:44114 #39 (2 connections now open)
2015-12-15T17:07:14.393+0800 I NETWORK  [conn38] end connection 192.168.14.198:35566 (1 connection now open)
2015-12-15T17:07:14.394+0800 I NETWORK  [initandlisten] connection accepted from 192.168.14.198:35568 #40 (2 connections now open)
2015-12-15T17:07:15.277+0800 I NETWORK  [initandlisten] connection accepted from 127.0.0.1:46271 #41 (3 connections now open)
2015-12-15T17:07:15.283+0800 I ACCESS   [conn41] SCRAM-SHA-1 authentication failed for lyhabc on admin from client 127.0.0.1 ; UserNotFound Could not find user lyhabc@admin
2015-12-15T17:07:15.291+0800 I NETWORK  [conn41] end connection 127.0.0.1:46271 (2 connections now open)
Enabling the web console
vi /data/replset0/config/rs0.conf
journal=true
rest=true //expose more monitoring metrics in the web console
httpinterface=true //enable the web console
port=4000
replSet=rsshard0
dbpath = /data/replset0/data/rs0
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb0.pid
logpath = /data/replset0/log/rs0.log
logappend = true
profile = 1
slowms = 5
fork = true
Performance monitoring
db.serverStatus() "resident" : 86, //當前所使用的物理內存總量 單位MB 如果超過系統內存表示系統內存過小 "supported" : true, //系統是否支持可擴展內存 "mapped" : 368, //映射數據文件所使用的內存大小 單位MB 如果超過系統內存表示系統內存過小 需要使用swap 映射的空間比內存空間還要大 "extra_info" : { "note" : "fields vary by platform", "heap_usage_bytes" : 63345440, "page_faults" : 63 //缺頁中斷的次數 內存不夠缺頁中斷也會增多 "activeClients" : { "total" : 12, //連接到mongodb實例的連接數 mongodb監控工具 mongostat --port 4000 mongotop --port 4000 --locks mongotop --port 4000 rsshard0:SECONDARY> db.serverStatus() { "host" : "steven:4000", "version" : "3.0.7", "process" : "mongod", "pid" : NumberLong(1796), "uptime" : 63231, "uptimeMillis" : NumberLong(63230882), "uptimeEstimate" : 3033, "localTime" : ISODate("2015-12-15T03:26:39.707Z"), "asserts" : { "regular" : 0, "warning" : 0, "msg" : 0, "user" : 2226, "rollovers" : 0 }, "backgroundFlushing" : { "flushes" : 57, "total_ms" : 61, "average_ms" : 1.0701754385964912, "last_ms" : 0, "last_finished" : ISODate("2015-12-15T03:26:33.960Z") }, "connections" : { "current" : 1, "available" : 818, "totalCreated" : NumberLong(15) }, "cursors" : { "note" : "deprecated, use server status metrics", "clientCursors_size" : 0, "totalOpen" : 0, "pinned" : 0, "totalNoTimeout" : 0, "timedOut" : 0 }, "dur" : { "commits" : 28, "journaledMB" : 0, "writeToDataFilesMB" : 0, "compression" : 0, "commitsInWriteLock" : 0, "earlyCommits" : 0, "timeMs" : { "dt" : 3010, "prepLogBuffer" : 0, "writeToJournal" : 0, "writeToDataFiles" : 0, "remapPrivateView" : 0, "commits" : 33, "commitsInWriteLock" : 0 } }, "extra_info" : { "note" : "fields vary by platform", "heap_usage_bytes" : 63345440, "page_faults" : 63 //缺頁中斷的次數 內存不夠缺頁中斷也會增多 }, "globalLock" : { "totalTime" : NumberLong("63230887000"), "currentQueue" : { "total" : 0, //如果這個值一直很大,表示並發問題 鎖太長時間 "readers" : 0, "writers" : 0 }, "activeClients" : { "total" : 12, //連接到mongodb實例的連接數 "readers" : 0, "writers" : 0 } }, "locks" : { "Global" : { "acquireCount" : { "r" : NumberLong(27371), "w" : NumberLong(21), "R" : NumberLong(1), "W" : NumberLong(5) }, "acquireWaitCount" : { "r" : NumberLong(1) }, "timeAcquiringMicros" : { "r" : NumberLong(135387) } }, "MMAPV1Journal" : { "acquireCount" : { "r" : NumberLong(13668), "w" : NumberLong(45), "R" : NumberLong(31796) }, "acquireWaitCount" : { "w" : NumberLong(4), "R" : NumberLong(5) }, "timeAcquiringMicros" : { "w" : NumberLong(892), "R" : NumberLong(1278323) } }, "Database" : { "acquireCount" : { "r" : NumberLong(13665), "R" : NumberLong(7), "W" : NumberLong(21) }, "acquireWaitCount" : { "W" : NumberLong(1) }, "timeAcquiringMicros" : { "W" : NumberLong(21272) } }, "Collection" : { "acquireCount" : { "R" : NumberLong(13490) } }, "Metadata" : { "acquireCount" : { "R" : NumberLong(1) } }, "oplog" : { "acquireCount" : { "R" : NumberLong(900) } } }, "network" : { "bytesIn" : NumberLong(7646), "bytesOut" : NumberLong(266396), "numRequests" : NumberLong(113) }, "opcounters" : { "insert" : 0, "query" : 7, "update" : 0, "delete" : 0, "getmore" : 0, "command" : 107 }, "opcountersRepl" : { "insert" : 0, "query" : 0, "update" : 0, "delete" : 0, "getmore" : 0, "command" : 0 }, "repl" : { "setName" : "rsshard0", "setVersion" : 2, "ismaster" : false, "secondary" : true, "hosts" : [ "192.168.1.155:4000", "192.168.14.221:4000", "192.168.14.198:4000" ], "me" : "192.168.1.155:4000", "rbid" : 107705010 }, "storageEngine" : { "name" : "mmapv1" }, "writeBacksQueued" : false, "mem" : { "bits" : 64, //跑在64位系統上 "resident" : 86, //當前所使用的物理內存總量 單位MB "virtual" : 1477, //mongodb進程所映射的虛擬內存總量 單位MB "supported" : true, //系統是否支持可擴展內存 "mapped" : 368, 
//映射數據文件所使用的內存大小 單位MB "mappedWithJournal" : 736 //映射Journaling所使用的內存大小 單位MB }, "metrics" : { "commands" : { "count" : { "failed" : NumberLong(0), "total" : NumberLong(6) }, "dbStats" : { "failed" : NumberLong(0), "total" : NumberLong(1) }, "getLog" : { "failed" : NumberLong(0), "total" : NumberLong(4) }, "getnonce" : { "failed" : NumberLong(0), "total" : NumberLong(11) }, "isMaster" : { "failed" : NumberLong(0), "total" : NumberLong(16) }, "listCollections" : { "failed" : NumberLong(0), "total" : NumberLong(2) }, "listDatabases" : { "failed" : NumberLong(0), "total" : NumberLong(1) }, "listIndexes" : { "failed" : NumberLong(0), "total" : NumberLong(2) }, "ping" : { "failed" : NumberLong(0), "total" : NumberLong(18) }, "replSetGetStatus" : { "failed" : NumberLong(0), "total" : NumberLong(15) }, "replSetStepDown" : { "failed" : NumberLong(1), "total" : NumberLong(1) }, "serverStatus" : { "failed" : NumberLong(0), "total" : NumberLong(21) }, "top" : { "failed" : NumberLong(0), "total" : NumberLong(5) }, "whatsmyuri" : { "failed" : NumberLong(0), "total" : NumberLong(4) } }, "cursor" : { "timedOut" : NumberLong(0), "open" : { "noTimeout" : NumberLong(0), "pinned" : NumberLong(0), "total" : NumberLong(0) } }, "document" : { "deleted" : NumberLong(0), "inserted" : NumberLong(0), "returned" : NumberLong(5), "updated" : NumberLong(0) }, "getLastError" : { "wtime" : { "num" : 0, "totalMillis" : 0 }, "wtimeouts" : NumberLong(0) }, "operation" : { "fastmod" : NumberLong(0), "idhack" : NumberLong(0), "scanAndOrder" : NumberLong(0), "writeConflicts" : NumberLong(0) }, "queryExecutor" : { "scanned" : NumberLong(2), "scannedObjects" : NumberLong(5) }, "record" : { "moves" : NumberLong(0) }, "repl" : { "apply" : { "batches" : { "num" : 0, "totalMillis" : 0 }, "ops" : NumberLong(0) }, "buffer" : { "count" : NumberLong(0), "maxSizeBytes" : 268435456, "sizeBytes" : NumberLong(0) }, "network" : { "bytes" : NumberLong(0), "getmores" : { "num" : 0, "totalMillis" : 0 }, "ops" : NumberLong(0), "readersCreated" : NumberLong(1) }, "preload" : { "docs" : { "num" : 0, "totalMillis" : 0 }, "indexes" : { "num" : 0, "totalMillis" : 0 } } }, "storage" : { "freelist" : { "search" : { "bucketExhausted" : NumberLong(0), "requests" : NumberLong(0), "scanned" : NumberLong(0) } } }, "ttl" : { "deletedDocuments" : NumberLong(0), "passes" : NumberLong(56) } }, "ok" : 1 rsshard0:SECONDARY> use aaaa switched to db aaaa rsshard0:SECONDARY> db.stats() { "db" : "aaaa", "collections" : 4, "objects" : 7, "avgObjSize" : 149.71428571428572, "dataSize" : 1048, "storageSize" : 1069056, "numExtents" : 4, "indexes" : 1, "indexSize" : 8176, "fileSize" : 67108864, "nsSizeMB" : 16, "extentFreeList" : { "num" : 0, "totalSize" : 0 }, "dataFileVersion" : { "major" : 4, "minor" : 22 }, "ok" : 1 }
Backup
tail /data/replset1/log/rs1.log
jobs
bg %1
netstat -lnp |grep mongo
w
top
--export a plain-text backup
mongoexport --port 4000 --db aaaa --collection testaaa --out :/tmp/aaaa.csv
sz :/tmp/aaaa.csv
--export a binary backup
mongodump --port 4000 --db aaaa --out /tmp/aaaa.bak
ls /tmp
cd /tmp/aaaa.bak/
--pack the backup
tar zcvf aaaa.tar.gz aaaa/
ls
sz aaaa.tar.gz
cd aaaa
---C prints hex alongside the corresponding characters
hexdump -C testaaa.bson
history
Resynchronizing a MongoDB replica set member (internally MongoDB has an initial sync process that performs this synchronization)
http://www.linuxidc.com/Linux/2015-06/118981.htm?utm_source=tuicool&utm_medium=referral
Shut down the mongod process cleanly, either with the db.shutdownServer() command in the mongo shell or with the mongod --shutdown option on Linux
use admin;
db.shutdownServer() ;
mongod --shutdown
_id and ObjectId in MongoDB
http://blog.csdn.net/magneto7/article/details/23842941?utm_source=tuicool&utm_medium=referral
Copy a collection: db.runCommand({cloneCollection:"db.collection",from:"198.61.104.31:27017"});
Copy a database: db.copyDatabase("sourceDB","targetDB","198.61.104.31:27017");
Flush to disk: write any data still held in memory to disk and lock the database against updates (reads remain possible), using runCommand
Format: db.runCommand({fsync:1,async:true})
async: whether to run the flush asynchronously
lock:1 locks the database
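As a sketch (assuming you want a consistent file-system snapshot), the lock variant of the same command plus its release might look like this:
db.runCommand({fsync: 1, lock: true})   // flush to disk and block further writes
// ... take the file-system snapshot or copy the data files here ...
db.fsyncUnlock()                         // release the lock so writes resume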
Query Translator
http://www.querymongo.com/
{ "_id" : ObjectId("56341908c4393e7396b20594"), "id" : 2 }
{ "_id" : ObjectId("56551e020be4a0c3355a5ba7"), "id" : 1 }
{ "_id" : 121, "age" : 22, "Attribute" : 33 }
You do not need to create a collection before the first insert; inserting data creates it automatically.
If an insert does not specify an _id field, the system generates one of type ObjectId, which suits distributed storage better. An ObjectId is 12 bytes: a 4-byte timestamp, a 3-byte machine identifier, a 2-byte process id and a 3-byte random counter.
Every document must have an _id field, whether auto-generated or supplied explicitly, and its value must be unique within the collection.
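A small sketch of both cases (the users collection here is just for illustration):
db.users.insert({id: 2})              // _id is generated automatically as an ObjectId
db.users.insert({_id: 121, age: 22})  // explicit _id; inserting the same _id again fails with a duplicate key error
db.users.find()                       // shows both styles of _id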
Insert
db.users.insert({id:1, class:1})
Update
db.people.update({country:"JP"},{$set:{country:"DDDDDDD"}},{multi:true})
Delete
db.people.remove({country:"DDDDDDD"}) //removes documents but keeps indexes
db.people.drop() //drops the data and the indexes
db.people.dropIndexes() //drops all indexes
db.people.dropIndex("indexName") //drops one specific index
db.system.indexes.find()
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 2, _id一個隱藏索引加db.people.ensureIndex({name:1},{unique:true}) 總共兩個索引
"numIndexesAfter" : 3,
"ok" : 1
}
Addressing fields inside MongoDB documents
Field holding an array: field.arrayIndex
Field holding an embedded document: field.keyOfEmbeddedDocument
Field holding an array of embedded documents: field.arrayIndex.keyOfEmbeddedDocument
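A sketch of all three forms against a made-up document (collection and field names are illustrative):
db.people.insert({name: "a", tags: ["x", "y"], addr: {city: "GZ"}, orders: [{sku: 1}, {sku: 2}]})
db.people.find({"tags.0": "x"})        // array element by index
db.people.find({"addr.city": "GZ"})    // key inside an embedded document
db.people.find({"orders.1.sku": 2})    // key inside an embedded document that sits in an array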
The table below lists the data types commonly used in MongoDB.
Data type            Description
String               Character strings, the most common data type. Only UTF-8 encoded strings are valid in MongoDB.
Integer              Integer values, 32-bit or 64-bit depending on the server you use.
Boolean              Boolean values (true/false).
Double               Double-precision floating-point values.
Min/Max keys         Compare a value against the lowest and highest BSON (binary JSON) elements.
Array                Store an array, list or multiple values under one key.
Timestamp            Timestamps, recording when a document was modified or added.
Object               Embedded documents.
Null                 Null values.
Symbol               Essentially the same as a string, but generally used by languages that have a dedicated symbol type.
Date                 Date/time stored in UNIX time format. You can build your own date by creating a Date object and passing year, month and day.
Object ID            Object IDs, used for document _id values.
Binary Data          Binary data.
Code                 JavaScript code stored inside a document.
Regular expression   Regular expressions.
MongoDB's two 100 ms intervals
1. Every 100 ms the journal file is written to disk (the journal commit interval)
2. Queries that take longer than 100 ms are recorded in the slow query log (the default slowms threshold)
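A sketch of adjusting that slow-query threshold at runtime through the profiler (the 200 ms value is just an example):
db.setProfilingLevel(1, 200)                       // level 1: profile operations slower than 200 ms
db.getProfilingStatus()                            // { "was" : 1, "slowms" : 200 }
db.system.profile.find().sort({ts:-1}).limit(5)    // the most recent slow operations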
MongoDB logs
cat /data/mongodb/logs//mongo.log
One directory per database (directoryperdb)
2015-10-30T05:59:12.386+0800 I JOURNAL [initandlisten] journal dir=/data/mongodb/data/journal 2015-10-30T05:59:12.386+0800 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed 2015-10-30T05:59:12.518+0800 I JOURNAL [durability] Durability thread started 2015-10-30T05:59:12.518+0800 I JOURNAL [journal writer] Journal writer thread started 2015-10-30T05:59:12.521+0800 I CONTROL [initandlisten] MongoDB starting : pid=4479 port=27017 dbpath=/data/mongodb/data/ 64-bit host=steven 2015-10-30T05:59:12.521+0800 I CONTROL [initandlisten] 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never' 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never' 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files. 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] db version v3.0.7 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] git version: 6ce7cbe8c6b899552dadd907604559806aa2e9bd 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] allocator: tcmalloc 2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, processManagement: { fork: true, pidFilePath: "/usr/local/mongodb/mongo.pid" }, replication: { oplogSizeMB: 2048 }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/data/", directoryPerDB: true }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/logs/mongo.log" } } 2015-10-30T05:59:12.536+0800 I INDEX [initandlisten] allocating new ns file /data/mongodb/data/local/local.ns, filling with zeroes... 2015-10-30T05:59:12.858+0800 I STORAGE [FileAllocator] allocating new datafile /data/mongodb/data/local/local.0, filling with zeroes... //填0初始化 數據文件 2015-10-30T05:59:12.858+0800 I STORAGE [FileAllocator] creating directory /data/mongodb/data/local/_tmp 2015-10-30T05:59:12.866+0800 I STORAGE [FileAllocator] done allocating datafile /data/mongodb/data/local/local.0, size: 64MB, took 0.001 secs 2015-10-30T05:59:12.876+0800 I NETWORK [initandlisten] waiting for connections on port 27017 2015-10-30T05:59:14.325+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:40766 #1 (1 connection now open) 2015-10-30T05:59:14.328+0800 I NETWORK [conn1] end connection 192.168.1.106:40766 (0 connections now open) 2015-10-30T05:59:24.339+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:40769 #2 (1 connection now open) //接受192.168.1.106的連接 2015-10-30T06:00:20.348+0800 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends 2015-10-30T06:00:20.348+0800 I CONTROL [signalProcessingThread] now exiting 2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets... 
2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] closing listening socket: 6 2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] closing listening socket: 7 2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock //socket方式通信 2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog... 2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] shutdown: going to close sockets... 2015-10-30T06:00:20.348+0800 I STORAGE [signalProcessingThread] shutdown: waiting for fs preallocator... 2015-10-30T06:00:20.348+0800 I STORAGE [signalProcessingThread] shutdown: final commit... 2015-10-30T06:00:20.349+0800 I JOURNAL [signalProcessingThread] journalCleanup... 2015-10-30T06:00:20.349+0800 I JOURNAL [signalProcessingThread] removeJournalFiles 2015-10-30T06:00:20.349+0800 I NETWORK [conn2] end connection 192.168.1.106:40769 (0 connections now open) 2015-10-30T06:00:20.356+0800 I JOURNAL [signalProcessingThread] Terminating durability thread ... 2015-10-30T06:00:20.453+0800 I JOURNAL [journal writer] Journal writer thread stopped 2015-10-30T06:00:20.454+0800 I JOURNAL [durability] Durability thread stopped 2015-10-30T06:00:20.455+0800 I STORAGE [signalProcessingThread] shutdown: closing all files... 2015-10-30T06:00:20.457+0800 I STORAGE [signalProcessingThread] closeAllFiles() finished 2015-10-30T06:00:20.457+0800 I STORAGE [signalProcessingThread] shutdown: removing fs lock... 2015-10-30T06:00:20.457+0800 I CONTROL [signalProcessingThread] dbexit: rc: 0 2015-10-30T06:01:20.259+0800 I CONTROL ***** SERVER RESTARTED ***** 2015-10-30T06:01:20.290+0800 I JOURNAL [initandlisten] journal dir=/data/mongodb/data/journal 2015-10-30T06:01:20.291+0800 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed 2015-10-30T06:01:20.439+0800 I JOURNAL [initandlisten] preallocateIsFaster=true 2.36 2015-10-30T06:01:20.544+0800 I JOURNAL [durability] Durability thread started 2015-10-30T06:01:20.546+0800 I JOURNAL [journal writer] Journal writer thread started 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] MongoDB starting : pid=4557 port=27017 dbpath=/data/mongodb/data/ 64-bit host=steven 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never' 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never' 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files. 
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] db version v3.0.7 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] git version: 6ce7cbe8c6b899552dadd907604559806aa2e9bd 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] allocator: tcmalloc 2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, processManagement: { fork: true, pidFilePath: "/usr/local/mongodb/mongo.pid" }, replication: { oplogSizeMB: 2048 }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/data/", directoryPerDB: true }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/logs/mongo.log" } } 2015-10-30T06:01:20.582+0800 I NETWORK [initandlisten] waiting for connections on port 27017 2015-10-30T06:01:28.390+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:40798 #1 (1 connection now open) 2015-10-30T06:01:28.398+0800 I NETWORK [conn1] end connection 192.168.1.106:40798 (0 connections now open) 2015-10-30T06:01:38.394+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:40800 #2 (1 connection now open) 2015-10-30T07:01:39.383+0800 I NETWORK [conn2] end connection 192.168.1.106:40800 (0 connections now open) 2015-10-30T07:01:39.384+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:42327 #3 (1 connection now open) 2015-10-30T07:32:40.910+0800 I NETWORK [conn3] end connection 192.168.1.106:42327 (0 connections now open) 2015-10-30T07:32:40.910+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:43130 #4 (2 connections now open) 2015-10-30T08:32:43.957+0800 I NETWORK [conn4] end connection 192.168.1.106:43130 (0 connections now open) 2015-10-30T08:32:43.957+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:46481 #5 (2 connections now open) 2015-10-31T04:27:00.240+0800 I CONTROL ***** SERVER RESTARTED ***** //服務器非法關機,需要recover 鳳勝踢了機器電源 2015-10-31T04:27:00.703+0800 W - [initandlisten] Detected unclean shutdown - /data/mongodb/data/mongod.lock is not empty. 
//檢測到不是clean shutdown 2015-10-31T04:27:00.812+0800 I JOURNAL [initandlisten] journal dir=/data/mongodb/data/journal 2015-10-31T04:27:00.812+0800 I JOURNAL [initandlisten] recover begin //mongodb開始還原 記錄lsn 2015-10-31T04:27:01.048+0800 I JOURNAL [initandlisten] recover lsn: 6254831 2015-10-31T04:27:01.048+0800 I JOURNAL [initandlisten] recover /data/mongodb/data/journal/j._0 2015-10-31T04:27:01.089+0800 I JOURNAL [initandlisten] recover skipping application of section seq:0 < lsn:6254831 2015-10-31T04:27:01.631+0800 I JOURNAL [initandlisten] recover cleaning up 2015-10-31T04:27:01.632+0800 I JOURNAL [initandlisten] removeJournalFiles 2015-10-31T04:27:01.680+0800 I JOURNAL [initandlisten] recover done 2015-10-31T04:27:03.006+0800 I JOURNAL [initandlisten] preallocateIsFaster=true 25.68 2015-10-31T04:27:04.076+0800 I JOURNAL [initandlisten] preallocateIsFaster=true 19.9 2015-10-31T04:27:06.896+0800 I JOURNAL [initandlisten] preallocateIsFaster=true 35.5 2015-10-31T04:27:06.896+0800 I JOURNAL [initandlisten] preallocateIsFaster check took 5.215 secs 2015-10-31T04:27:06.896+0800 I JOURNAL [initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.0 2015-10-31T04:27:09.005+0800 I - [initandlisten] File Preallocator Progress: 325058560/1073741824 30% 2015-10-31T04:27:12.236+0800 I - [initandlisten] File Preallocator Progress: 440401920/1073741824 41% 2015-10-31T04:27:15.006+0800 I - [initandlisten] File Preallocator Progress: 713031680/1073741824 66% 2015-10-31T04:27:18.146+0800 I - [initandlisten] File Preallocator Progress: 817889280/1073741824 76% 2015-10-31T04:27:21.130+0800 I - [initandlisten] File Preallocator Progress: 912261120/1073741824 84% 2015-10-31T04:27:24.477+0800 I - [initandlisten] File Preallocator Progress: 1017118720/1073741824 94% 2015-10-31T04:28:08.132+0800 I JOURNAL [initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.1 2015-10-31T04:28:11.904+0800 I - [initandlisten] File Preallocator Progress: 629145600/1073741824 58% 2015-10-31T04:28:14.260+0800 I - [initandlisten] File Preallocator Progress: 692060160/1073741824 64% 2015-10-31T04:28:17.335+0800 I - [initandlisten] File Preallocator Progress: 796917760/1073741824 74% 2015-10-31T04:28:20.440+0800 I - [initandlisten] File Preallocator Progress: 859832320/1073741824 80% 2015-10-31T04:28:23.274+0800 I - [initandlisten] File Preallocator Progress: 922746880/1073741824 85% 2015-10-31T04:28:26.638+0800 I - [initandlisten] File Preallocator Progress: 1017118720/1073741824 94% 2015-10-31T04:29:01.643+0800 I JOURNAL [initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.2 2015-10-31T04:29:04.032+0800 I - [initandlisten] File Preallocator Progress: 450887680/1073741824 41% 2015-10-31T04:29:09.015+0800 I - [initandlisten] File Preallocator Progress: 566231040/1073741824 52% 2015-10-31T04:29:12.181+0800 I - [initandlisten] File Preallocator Progress: 828375040/1073741824 77% 2015-10-31T04:29:15.125+0800 I - [initandlisten] File Preallocator Progress: 964689920/1073741824 89% 2015-10-31T04:29:34.755+0800 I JOURNAL [durability] Durability thread started 2015-10-31T04:29:34.755+0800 I JOURNAL [journal writer] Journal writer thread started 2015-10-31T04:29:35.029+0800 I CONTROL [initandlisten] MongoDB starting : pid=1672 port=27017 dbpath=/data/mongodb/data/ 64-bit host=steven 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'. 
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never' 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'. 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never' 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files. 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] db version v3.0.7 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] git version: 6ce7cbe8c6b899552dadd907604559806aa2e9bd 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] allocator: tcmalloc 2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, processManagement: { fork: true, pidFilePath: "/usr/local/mongodb/mongo.pid" }, replication: { oplogSizeMB: 2048 }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/data/", directoryPerDB: true }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/logs/mongo.log" } } 2015-10-31T04:29:36.869+0800 I NETWORK [initandlisten] waiting for connections on port 27017 2015-10-31T04:39:39.671+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:3134 #1 (1 connection now open) 2015-10-31T04:39:40.042+0800 I COMMAND [conn1] command admin.$cmd command: isMaster { isMaster: true } keyUpdates:0 writeConflicts:0 numYields:0 reslen:178 locks:{} 229ms 2015-10-31T04:39:40.379+0800 I NETWORK [conn1] end connection 192.168.1.106:3134 (0 connections now open) 2015-10-31T04:40:10.117+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:3137 #2 (1 connection now open) 2015-10-31T04:40:13.357+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:3138 #3 (2 connections now open) 2015-10-31T04:40:13.805+0800 I COMMAND [conn3] command local.$cmd command: usersInfo { usersInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:49 locks:{ Global: { acquireCount: { r: 2 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } 304ms 2015-10-31T04:49:30.223+0800 I NETWORK [conn2] end connection 192.168.1.106:3137 (1 connection now open) 2015-10-31T04:49:30.223+0800 I NETWORK [conn3] end connection 192.168.1.106:3138 (0 connections now open) 2015-10-31T04:56:27.271+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:4335 #4 (1 connection now open) 2015-10-31T04:56:29.449+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:4336 #5 (2 connections now open) 2015-10-31T04:58:17.514+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:4356 #6 (3 connections now open) 2015-10-31T05:02:55.219+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:4902 #7 (4 connections now open) 2015-10-31T05:03:57.954+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:4907 #8 (5 connections now open) 2015-10-31T05:10:25.905+0800 I NETWORK [initandlisten] connection accepted from 
192.168.1.106:5064 #9 (6 connections now open) 2015-10-31T05:16:00.026+0800 I NETWORK [conn7] end connection 192.168.1.106:4902 (5 connections now open) 2015-10-31T05:16:00.101+0800 I NETWORK [conn8] end connection 192.168.1.106:4907 (4 connections now open) 2015-10-31T05:16:00.163+0800 I NETWORK [conn9] end connection 192.168.1.106:5064 (3 connections now open) 2015-10-31T05:26:28.837+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:5654 #10 (4 connections now open) 2015-10-31T05:26:28.837+0800 I NETWORK [conn4] end connection 192.168.1.106:4335 (2 connections now open) 2015-10-31T05:26:30.969+0800 I NETWORK [conn5] end connection 192.168.1.106:4336 (2 connections now open) 2015-10-31T05:26:30.973+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:5655 #11 (3 connections now open) 2015-10-31T05:56:30.336+0800 I NETWORK [conn10] end connection 192.168.1.106:5654 (2 connections now open) 2015-10-31T05:56:30.337+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:6153 #12 (3 connections now open) 2015-10-31T05:56:32.457+0800 I NETWORK [conn11] end connection 192.168.1.106:5655 (2 connections now open) 2015-10-31T05:56:32.458+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:6154 #13 (4 connections now open) 2015-10-31T06:26:31.837+0800 I NETWORK [conn12] end connection 192.168.1.106:6153 (2 connections now open) 2015-10-31T06:26:31.838+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:6514 #14 (3 connections now open) 2015-10-31T06:26:33.961+0800 I NETWORK [conn13] end connection 192.168.1.106:6154 (2 connections now open) 2015-10-31T06:26:33.962+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:6515 #15 (4 connections now open) 2015-10-31T06:27:09.518+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:6563 #16 (4 connections now open) 2015-10-31T06:29:57.407+0800 I INDEX [conn16] allocating new ns file /data/mongodb/data/testlyh/testlyh.ns, filling with zeroes... 2015-10-31T06:29:57.846+0800 I STORAGE [FileAllocator] allocating new datafile /data/mongodb/data/testlyh/testlyh.0, filling with zeroes... 
2015-10-31T06:29:57.847+0800 I STORAGE [FileAllocator] creating directory /data/mongodb/data/testlyh/_tmp 2015-10-31T06:29:57.871+0800 I STORAGE [FileAllocator] done allocating datafile /data/mongodb/data/testlyh/testlyh.0, size: 64MB, took 0.003 secs 2015-10-31T06:29:57.890+0800 I COMMAND [conn16] command testlyh.$cmd command: create { create: "temporary" } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, MMAPV1Journal: { acquireCount: { w: 6 } }, Database: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 483ms 2015-10-31T06:29:57.894+0800 I COMMAND [conn16] CMD: drop testlyh.temporary 2015-10-31T06:45:06.955+0800 I NETWORK [conn16] end connection 192.168.1.106:6563 (3 connections now open) 2015-10-31T06:56:33.323+0800 I NETWORK [conn14] end connection 192.168.1.106:6514 (2 connections now open) 2015-10-31T06:56:33.324+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:7692 #17 (3 connections now open) 2015-10-31T06:56:35.461+0800 I NETWORK [conn15] end connection 192.168.1.106:6515 (2 connections now open) 2015-10-31T06:56:35.462+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:7693 #18 (4 connections now open) 2015-10-31T07:13:30.230+0800 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51696 #19 (4 connections now open) 2015-10-31T07:21:06.715+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:8237 #20 (5 connections now open) 2015-10-31T07:21:32.193+0800 I INDEX [conn6] build index on: local.people properties: { v: 1, unique: true, key: { name: 1.0 }, name: "name_1", ns: "local.people" } //創建索引 2015-10-31T07:21:32.193+0800 I INDEX [conn6] building index using bulk method //bulk insert方式建立索引 2015-10-31T07:21:32.194+0800 I INDEX [conn6] build index done. scanned 36 total records. 
0 secs 2015-10-31T07:26:34.826+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:8328 #21 (6 connections now open) 2015-10-31T07:26:34.827+0800 I NETWORK [conn17] end connection 192.168.1.106:7692 (4 connections now open) 2015-10-31T07:26:36.962+0800 I NETWORK [conn18] end connection 192.168.1.106:7693 (4 connections now open) 2015-10-31T07:26:36.963+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:8329 #22 (6 connections now open) 2015-10-31T07:51:08.214+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9202 #23 (6 connections now open) 2015-10-31T07:51:08.214+0800 I NETWORK [conn20] end connection 192.168.1.106:8237 (4 connections now open) 2015-10-31T07:56:36.327+0800 I NETWORK [conn21] end connection 192.168.1.106:8328 (4 connections now open) 2015-10-31T07:56:36.328+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9310 #24 (6 connections now open) 2015-10-31T07:56:38.450+0800 I NETWORK [conn22] end connection 192.168.1.106:8329 (4 connections now open) 2015-10-31T07:56:38.452+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9313 #25 (5 connections now open) 2015-10-31T08:03:56.823+0800 I NETWORK [conn25] end connection 192.168.1.106:9313 (4 connections now open) 2015-10-31T08:03:58.309+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9470 #26 (5 connections now open) 2015-10-31T08:03:58.309+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9471 #27 (6 connections now open) 2015-10-31T08:03:58.313+0800 I NETWORK [conn26] end connection 192.168.1.106:9470 (5 connections now open) 2015-10-31T08:03:58.314+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9469 #28 (6 connections now open) 2015-10-31T08:03:58.315+0800 I NETWORK [conn27] end connection 192.168.1.106:9471 (5 connections now open) 2015-10-31T08:03:58.317+0800 I NETWORK [conn28] end connection 192.168.1.106:9469 (4 connections now open) 2015-10-31T08:04:04.852+0800 I NETWORK [conn19] end connection 127.0.0.1:51696 (3 connections now open) 2015-10-31T08:04:05.944+0800 I NETWORK [conn23] end connection 192.168.1.106:9202 (2 connections now open) 2015-10-31T08:04:06.215+0800 I NETWORK [conn24] end connection 192.168.1.106:9310 (1 connection now open) 2015-10-31T08:04:09.233+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9531 #29 (2 connections now open) 2015-10-31T08:04:09.233+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9530 #30 (3 connections now open) 2015-10-31T08:04:09.233+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9532 #31 (4 connections now open) 2015-10-31T08:34:18.767+0800 I NETWORK [conn29] end connection 192.168.1.106:9531 (3 connections now open) 2015-10-31T08:34:18.767+0800 I NETWORK [conn30] end connection 192.168.1.106:9530 (3 connections now open) 2015-10-31T08:34:18.769+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10157 #32 (3 connections now open) 2015-10-31T08:34:18.769+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10158 #33 (4 connections now open) 2015-10-31T08:34:18.771+0800 I NETWORK [conn31] end connection 192.168.1.106:9532 (3 connections now open) 2015-10-31T08:34:18.774+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10159 #34 (4 connections now open) 2015-10-31T08:36:23.662+0800 I NETWORK [conn33] end connection 192.168.1.106:10158 (3 connections now open) 2015-10-31T08:36:23.933+0800 I NETWORK 
[conn6] end connection 192.168.1.106:4356 (2 connections now open) 2015-10-31T08:36:24.840+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10238 #35 (3 connections now open) 2015-10-31T08:36:24.840+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10239 #36 (4 connections now open) 2015-10-31T08:36:24.844+0800 I NETWORK [conn36] end connection 192.168.1.106:10239 (3 connections now open) 2015-10-31T08:36:24.845+0800 I NETWORK [conn35] end connection 192.168.1.106:10238 (2 connections now open) 2015-10-31T08:36:28.000+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10279 #37 (3 connections now open) 2015-10-31T08:36:28.004+0800 I NETWORK [conn37] end connection 192.168.1.106:10279 (2 connections now open) 2015-10-31T08:36:32.751+0800 I NETWORK [conn32] end connection 192.168.1.106:10157 (1 connection now open) 2015-10-31T08:36:32.756+0800 I NETWORK [conn34] end connection 192.168.1.106:10159 (0 connections now open) 2015-10-31T08:36:35.835+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10339 #38 (1 connection now open) 2015-10-31T08:36:35.837+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10341 #39 (2 connections now open) 2015-10-31T08:36:35.837+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10340 #40 (3 connections now open) 2015-10-31T09:06:45.368+0800 I NETWORK [conn39] end connection 192.168.1.106:10341 (2 connections now open) 2015-10-31T09:06:45.370+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:12600 #41 (3 connections now open) 2015-10-31T09:06:45.371+0800 I NETWORK [conn40] end connection 192.168.1.106:10340 (2 connections now open) 2015-10-31T09:06:45.371+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:12601 #42 (4 connections now open) 2015-10-31T09:06:45.380+0800 I NETWORK [conn38] end connection 192.168.1.106:10339 (2 connections now open) 2015-10-31T09:06:45.381+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:12602 #43 (4 connections now open) 2015-10-31T09:23:54.705+0800 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51697 #44 (4 connections now open) 2015-10-31T09:25:07.727+0800 I INDEX [conn44] allocating new ns file /data/mongodb/data/test/test.ns, filling with zeroes... 2015-10-31T09:25:08.375+0800 I STORAGE [FileAllocator] allocating new datafile /data/mongodb/data/test/test.0, filling with zeroes... 
2015-10-31T09:25:08.375+0800 I STORAGE [FileAllocator] creating directory /data/mongodb/data/test/_tmp 2015-10-31T09:25:08.378+0800 I STORAGE [FileAllocator] done allocating datafile /data/mongodb/data/test/test.0, size: 64MB, took 0.001 secs 2015-10-31T09:25:08.386+0800 I WRITE [conn44] insert test.users query: { _id: ObjectId('56341873c4393e7396b20592'), id: 1.0 } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, MMAPV1Journal: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 659ms 2015-10-31T09:25:08.386+0800 I COMMAND [conn44] command test.$cmd command: insert { insert: "users", documents: [ { _id: ObjectId('56341873c4393e7396b20592'), id: 1.0 } ], ordered: true } keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, MMAPV1Journal: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 660ms 2015-10-31T09:26:09.405+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:13220 #45 (5 connections now open) 2015-10-31T09:36:46.873+0800 I NETWORK [conn41] end connection 192.168.1.106:12600 (4 connections now open) 2015-10-31T09:36:46.874+0800 I NETWORK [conn42] end connection 192.168.1.106:12601 (3 connections now open) 2015-10-31T09:36:46.875+0800 I NETWORK [conn43] end connection 192.168.1.106:12602 (2 connections now open) 2015-10-31T09:36:46.875+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:13498 #46 (3 connections now open) 2015-10-31T09:36:46.876+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:13499 #47 (4 connections now open) 2015-10-31T09:36:46.876+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:13500 #48 (5 connections now open) 2015-10-31T09:43:52.490+0800 I INDEX [conn45] build index on: local.people properties: { v: 1, key: { country: 1.0 }, name: "country_1", ns: "local.people" } 2015-10-31T09:43:52.490+0800 I INDEX [conn45] building index using bulk method 2015-10-31T09:43:52.491+0800 I INDEX [conn45] build index done. scanned 36 total records. 0 secs 2015-10-31T09:51:32.977+0800 I INDEX [conn45] build index on: local.people properties: { v: 1, key: { country: 1.0, name: 1.0 }, name: "country_1_name_1", ns: "local.people" } //建立復合索引 2015-10-31T09:51:32.977+0800 I INDEX [conn45] building index using bulk method 2015-10-31T09:51:32.977+0800 I INDEX [conn45] build index done. scanned 36 total records. 
0 secs 2015-10-31T09:59:49.802+0800 I NETWORK [conn44] end connection 127.0.0.1:51697 (4 connections now open) 2015-10-31T10:06:48.357+0800 I NETWORK [conn47] end connection 192.168.1.106:13499 (3 connections now open) 2015-10-31T10:06:48.358+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:14438 #49 (5 connections now open) 2015-10-31T10:06:48.358+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:14439 #50 (5 connections now open) 2015-10-31T10:06:48.358+0800 I NETWORK [conn48] end connection 192.168.1.106:13500 (4 connections now open) 2015-10-31T10:06:48.358+0800 I NETWORK [conn46] end connection 192.168.1.106:13498 (4 connections now open) 2015-10-31T10:06:48.359+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:14440 #51 (5 connections now open) 2015-10-31T10:12:15.409+0800 I INDEX [conn45] build index on: local.users properties: { v: 1, key: { Attribute: 1.0 }, name: "Attribute_1", ns: "local.users" } 2015-10-31T10:12:15.409+0800 I INDEX [conn45] building index using bulk method 2015-10-31T10:12:15.409+0800 I INDEX [conn45] build index done. scanned 35 total records. 0 secs 2015-10-31T10:28:27.422+0800 I COMMAND [conn45] CMD: dropIndexes local.people //刪除索引 2015-11-25T15:25:23.248+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:23227 #76 (4 connections now open) 2015-11-25T15:25:23.247+0800 I NETWORK [conn73] end connection 192.168.1.106:21648 (2 connections now open) 2015-11-25T15:25:36.226+0800 I NETWORK [conn75] end connection 192.168.1.106:21659 (2 connections now open) 2015-11-25T15:25:36.227+0800 I NETWORK [conn74] end connection 192.168.1.106:21658 (1 connection now open) 2015-11-25T15:25:36.227+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:23236 #77 (2 connections now open) 2015-11-25T15:25:36.227+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:23237 #78 (3 connections now open)
Replica set setup steps
1. Install MongoDB on every machine: http://www.cnblogs.com/lyhabc/p/8529966.html
2. Set up the keyfile: generate it on the primary, then copy it to every secondary
openssl rand -base64 741 > /data/mongodb/mongodb27017/data/mongodb-keyfile
chown -R mongodb.mongodb /data/mongodb/mongodb27017/data/
chmod 600 /data/mongodb/mongodb27017/data/mongodb-keyfile
chown mongodb.mongodb /data/mongodb/mongodb27017/data/mongodb-keyfile
3. In mongod.conf on all three machines set
replSet=<replica set name>
keyFile=/data/mongodb/mongodb27017/data/mongodb-keyfile
then restart MongoDB on every machine
4. Run the following on the primary. The _id:"dbset" here must match replSet in the config file above. Secondaries do not replicate from hidden or slaveDelay members
use admin
config = { _id:"dbset", members:[
  {_id:0,host:"192.168.1.155:27017",priority:3,votes:1},
  {_id:1,host:"192.168.1.156:27017",priority:2,votes:1},
  {_id:2,host:"192.168.1.157:27017",arbiterOnly:true,hidden:true,priority:1,slaveDelay:10,votes:1}]
}
#initialize the replica set configuration; the mongod instance where you run this becomes the primary
rs.initiate(config);
5. By default secondaries do not allow reads; enable them by running this on each secondary
db.getMongo().setSlaveOk();
6. Check the state of the cluster members
rs.status();
rs.conf() //view the configuration
7. Test: insert data on the primary and confirm it has been replicated to the secondaries
db.testdb.find();
8. Set the default write concern to majority
cfg = rs.conf()
cfg.settings = {}
cfg.settings.getLastErrorDefaults = {w: "majority"}
rs.reconfig(cfg)
---------------------------------------------------------------------------------
#initialize the replica set configuration; the mongod instance where you run this becomes the primary
rs.initiate(config);
Run in the admin database on the primary
#define the replica set configuration variable; _id:"dbset" must match replSet in the config file
config = { _id:"dbset", members:[
  {_id:0,host:"192.168.1.155:27017"},
  {_id:1,host:"192.168.14.221:27017"},
  {_id:2,host:"192.168.14.198:27017"}]
}
rs.initiate(config);
or use rs.add()/rs.remove()
Syntax:
rs.add({
  _id: <int>,
  host: <string>,        // required
  arbiterOnly: <boolean>,
  buildIndexes: <boolean>,
  hidden: <boolean>,
  priority: <number>,
  tags: <document>,
  slaveDelay: <int>,     // in seconds
  votes: <number>
})
rs.remove("<host:port>")
#building the replica set for the first time
rs.initiate()                 #run on the primary
rs.add("192.168.245.131")     #run on the primary
rs.add("192.168.245.132")     #run on the primary
#adding members later, once the replica set is already running in production
rs.add( { _id:2, host: "192.168.1.6:27017", priority: 0, votes: 0 } )
#for existing members use rs.reconfig() instead of rs.add()
#setting weights: a member with a priority must also be able to vote, so if priority is not 0, votes must not be 0 either
cfg = rs.conf()
cfg.members[0].priority = 2
cfg.members[1].priority = 1
cfg.members[0].votes = 1     #give members[0] a vote
rs.reconfig(cfg)             #this triggers a new election, so preferably add secondaries during a maintenance window
-------------------------------------------------------------------------
Fixing "has data already, cannot initiate set" when configuring a replica set:
1. comment out replSet= in the secondary's config file
2. restart the mongodb service
3. on the secondary run db.dropDatabase() against every database until show dbs is empty
Replica set member states (as reported by rs.status()):
Possible stateStr values:
STARTUP    //when a member starts, mongod loads the replica set configuration and then moves to STARTUP2
STARTUP2   //after loading the configuration it decides whether an initial sync is needed; if so it stays in STARTUP2, otherwise it moves to RECOVERING
RECOVERING //cannot serve reads or writes; mainly while catching up on incremental data after the initial sync
Member states:
STARTUP    Not yet an active member of any set. All members start up in this state. The mongod parses the replica set configuration document while in STARTUP.
PRIMARY    The member in state primary is the only member that can accept write operations. Eligible to vote.
SECONDARY  A member in state secondary is replicating the data store. Eligible to vote.
RECOVERING Members either perform startup self-checks, or transition from completing a rollback or resync. Eligible to vote.
STARTUP2   The member has joined the set and is running an initial sync.
UNKNOWN    The member's state, as seen from another member of the set, is not yet known.
ARBITER    Arbiters do not replicate data and exist solely to participate in elections.
DOWN       The member, as seen from another member of the set, is unreachable.
ROLLBACK   This member is actively performing a rollback. Data is not available for reads.
REMOVED    This member was once in a replica set but was subsequently removed.
A member changes its sync source when:
it cannot reach (ping) its current sync source;
its sync source changes role;
its sync source lags more than 30 s behind any other member of the replica set.
What triggers a MongoDB primary/secondary switch:
1. A replica set is newly initialized
2. The secondaries cannot reach the primary (for more than 10 s by default, tunable via heartbeatTimeoutSecs), so a secondary calls an election
3. The primary voluntarily gives up the primary role:
    rs.stepDown() is run explicitly (see the sketch after this list)
    the primary cannot communicate with the majority of the members
    the replica set configuration is changed with rs.reconfig()
4. A secondary is removed from the set
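A sketch of the manual variants mentioned above (the numbers are illustrative):
rs.stepDown(60)                            // the primary steps down and will not stand for election again for 60 s
cfg = rs.conf()
cfg.settings.heartbeatTimeoutSecs = 15     // widen the 10 s default heartbeat timeout
rs.reconfig(cfg)                           // may trigger an election, so do this in a maintenance window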
Rollback is abandoned when:
comparing the old primary's optime with the sync source's optime shows a gap of more than 30 minutes;
a single oplog entry encountered during the rollback is larger than 512 MB;
a dropDatabase operation is involved;
the rollback data generated would exceed 300 MB.
arbiter
dbset:PRIMARY> db.system.replset.find(); //the information rs.conf() returns comes from db.system.replset
{
"_id": "dbset",
"version": 1,
"members": [
{
"_id": 0,
"host": "192.168.1.155:27017",
"arbiterOnly": false,
"buildIndexes": true,
"hidden": false,
"priority": 1,
"tags": { },
"slaveDelay": 0,
"votes": 1
},
{
"_id": 1,
"host": "192.168.14.221:27017",
"arbiterOnly": false,
"buildIndexes": true,
"hidden": false,
"priority": 1,
"tags": { },
"slaveDelay": 0,
"votes": 1
},
{
"_id": 2,
"host": "192.168.14.198:27017",
"arbiterOnly": false,
"buildIndexes": true,
"hidden": false,
"priority": 1,
"tags": { },
"slaveDelay": 0,
"votes": 1
}
],
"settings": {
"chainingAllowed": true,
"heartbeatTimeoutSecs": 10, //心跳超時10秒
"getLastErrorModes": { },
"getLastErrorDefaults": {
"w": 1,
"wtimeout": 0
}
}
}
db.oplog.rs.find(); //every replica set member has local.oplog.rs
{ "ts" : Timestamp(1448617001, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
{ "ts" : Timestamp(1448619771, 1), "h" : NumberLong("-4910297248929153005"), "v" : 2, "op" : "c", "ns" : "foobar.$cmd", "o" : { "create" : "persons" } }
{ "ts" : Timestamp(1448619771, 2), "h" : NumberLong("-1223034904388786835"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad4811121921c"), "num" : 0 } }
{ "ts" : Timestamp(1448619771, 3), "h" : NumberLong("1509093586256204652"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad4811121921d"), "num" : 1 } }
{ "ts" : Timestamp(1448619771, 4), "h" : NumberLong("-1466302071499787062"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad4811121921e"), "num" : 2 } }
{ "ts" : Timestamp(1448619771, 5), "h" : NumberLong("-5291309432364303979"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad4811121921f"), "num" : 3 } }
{ "ts" : Timestamp(1448619771, 6), "h" : NumberLong("-1186940023830631529"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad48111219220"), "num" : 4 } }
{ "ts" : Timestamp(1448619771, 7), "h" : NumberLong("8105416294429864718"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad48111219221"), "num" : 5 } }
{ "ts" : Timestamp(1448619771, 8), "h" : NumberLong("4936086358438093652"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad48111219222"), "num" : 6 } }
{ "ts" : Timestamp(1448619771, 9), "h" : NumberLong("-6505444938187353001"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad48111219223"), "num" : 7 } }
{ "ts" : Timestamp(1448619771, 10), "h" : NumberLong("6604667343543284097"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad48111219224"), "num" : 8 } }
{ "ts" : Timestamp(1448619771, 11), "h" : NumberLong("1628850075451893232"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad48111219225"), "num" : 9 } }
{ "ts" : Timestamp(1448619771, 12), "h" : NumberLong("6976982335364958110"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad48111219226"), "num" : 10 } }
{ "ts" : Timestamp(1448619771, 13), "h" : NumberLong("670853545390097497"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad48111219227"), "num" : 11 } }
{ "ts" : Timestamp(1448619771, 14), "h" : NumberLong("-5105721635655707861"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad48111219228"), "num" : 12 } }
{ "ts" : Timestamp(1448619771, 15), "h" : NumberLong("6288713624602787858"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad48111219229"), "num" : 13 } }
{ "ts" : Timestamp(1448619771, 16), "h" : NumberLong("-1023807204070269528"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad4811121922a"), "num" : 14 } }
{ "ts" : Timestamp(1448619771, 17), "h" : NumberLong("2467324426565008795"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad4811121922b"), "num" : 15 } }
{ "ts" : Timestamp(1448619771, 18), "h" : NumberLong("-7308533254100947819"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad4811121922c"), "num" : 16 } }
{ "ts" : Timestamp(1448619771, 19), "h" : NumberLong("7461162953794131316"), "v" : 2, "op" : "i", "ns" : "foobar.persons", "o" : { "_id" : ObjectId("56582ef8041ad4811121922d"), "num" : 17 } }
Collections that exist only in the local database:
oplog.rs (50 MB on 32-bit systems; 5% of free disk space on 64-bit systems; set explicitly at startup with --oplogSize)
me
minvalid
startup_log
system.indexes
system.replset
Allowing reads on secondaries
db.getMongo().setSlaveOk();
getLastError configuration
db.system.replset.find();
"settings": {
"chainingAllowed": true,
"heartbeatTimeoutSecs": 10, //心跳超時10秒
"getLastErrorModes": { },
"getLastErrorDefaults": {
"w": 1,
"wtimeout": 0
}
}
w:
-1: the driver does not use write concern and ignores all network or socket errors
0: the driver does not use write concern and only reports network or socket errors
1: the driver uses write concern against the primary only; this is the default for replica sets and standalone instances
>1: the write concern applies to n members of the replica set; the command only returns to the client once those members have acknowledged it
wtimeout: how long the write concern waits before returning; if not specified, an unexpected failure can leave the write operation blocked indefinitely
Use an odd number of members. The official recommendation is at most 12 members per replica set, with at most 7 of them voting in elections. The 12-member cap exists because there is no need to keep that many copies of one data set; extra copies only add network load and slow the cluster down. The 7-voter cap exists because with too many voting members the internal election might not choose a primary within a minute. Moderation in all things.
Related articles
http://www.lanceyan.com/tech/mongodb/mongodb_repset1.html
http://www.lanceyan.com/tech/arch/mongodb_shard1.html
http://www.lanceyan.com/tech/mongodb_repset2.html
http://blog.nosqlfan.com/html/4139.html (the Bully algorithm)
Heartbeats
As described above, the members of the cluster must keep communicating to know which nodes are alive and which have failed. Each MongoDB node pings the other replica set members every two seconds;
a member that does not respond within 10 seconds is marked as unreachable. Every node maintains an internal state map recording each member's current role, oplog timestamp and other key information.
The primary additionally checks whether it can still reach the majority of the cluster; if it cannot, it demotes itself to a read-only secondary.
Synchronization
Replica set synchronization consists of initial sync and incremental sync. Initial sync copies all data from the sync source, which can take a long time if the primary holds a lot of data. After the initial sync, members stay up to date through incremental replication.
Initial sync is not only triggered the first time; it happens in two cases:
a secondary joins the set for the first time (the obvious case);
a secondary falls further behind than the oplog can cover, which also forces a full copy.
So how big is the oplog? As mentioned earlier, the oplog records data operations; a secondary copies the oplog and replays those operations locally. The oplog is itself a MongoDB collection, stored in local.oplog.rs, and it is a capped collection of fixed size: once full, new entries overwrite the oldest ones. So take care, especially with cross-IDC replication, to set an oplogSize large enough that full resyncs do not keep happening in production. It can be set with --oplogSize.
On 64-bit Linux and Windows the default oplog size is 5% of free disk space.
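A quick sketch of checking whether the configured oplog window is long enough (the output depends on your data):
db.printReplicationInfo()          // configured oplog size and the time span it currently covers
rs.printSlaveReplicationInfo()     // how far each secondary lags behind the primary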
Synchronization does not have to come from the primary either. Suppose a three-member set where member 1 (the primary) is in IDC1 and members 2 and 3 are in IDC2: during initialization, members 2 and 3 sync from member 1.
Afterwards members 2 and 3 follow the nearest-source rule and replicate within their own IDC, as long as at least one member still replicates from member 1 in IDC1.
A few more things to note about synchronization (see the sketch after this list):
secondaries do not replicate from delayed or hidden members; MongoDB uses chained replication by default, so a member does not necessarily pick the primary as its sync source;
for two members to sync from each other, their buildIndexes settings must match, whether true or false; buildIndexes controls whether the member's data is used for queries and defaults to true;
if a sync operation gets no response for 30 seconds, the member picks another sync source.
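As a sketch (the host is illustrative), the sync source can also be steered by hand, or chaining disabled altogether:
rs.syncFrom("192.168.1.155:27017")     // run on the member whose sync source you want to change
cfg = rs.conf()
cfg.settings.chainingAllowed = false   // force secondaries to sync from the primary
rs.reconfig(cfg)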
Adding and removing shard nodes: two background daemons handle chunk splitting and balancing chunks across shards.
When removing a shard, the balancer migrates all chunks from that shard to the other shards. After migrating all data and updating the metadata, you can safely remove the shard (in other words, you must wait for the migration to finish, otherwise data would be lost).
This comes down to the shard key: the key in that article was chosen only for the demo and is not a good choice. Avoid using an auto-increment id as the shard key, because it creates a write hotspot; an ObjectId is a better option.
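A sketch of one common way to avoid that hotspot, a hashed shard key (database and collection names are illustrative):
sh.enableSharding("testdb")
sh.shardCollection("testdb.books", { _id: "hashed" })   // hashing spreads monotonically increasing ids across shards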
Related articles
http://www.lanceyan.com/tech/arch/mongodb_shard1.html
MongoDB transaction mechanics and data safety
Common causes of data loss
The two key parameters:
1. Write concern
w: 0 | 1 | n | majority | tag
wtimeout: millis (milliseconds)
2. Journaling
j: 1
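A sketch of combining the two on a single write (the collection and values are illustrative):
db.orders.insert(
    { item: "abc", qty: 1 },
    { writeConcern: { w: "majority", j: true, wtimeout: 5000 } }   // acknowledged by a majority and journaled
)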
Data safety summary
w: majority  #write concern, for the replica set
j: 1         #journaling, for a single node; two ways to set it: 1. in the connection string (every write flushes the journal, with a large performance cost) — to match MySQL's "double 1" setting use j:1 in the connection string; 2. in mongod.conf (the journal is flushed every 100 ms)
Write concern: there are currently two ways to set it.
1. Override writeConcern per operation:
db.products.insert(
    { item: "envelopes", qty : 100, type: "Clasp" },
    { writeConcern: { w: "majority", wtimeout: 5000 } }
)
Overriding writeConcern on the insert and setting w to "majority" means the write must be acknowledged by a majority of the cluster, including the primary.
writeConcern parameters: { w: <value>, j: <boolean>, wtimeout: <number> }
w: how many mongod instances (or which tagged instances) must acknowledge the write. Possible values:
0: no write acknowledgement;
1: acknowledgement from a single mongod; in a replica set that means the primary;
greater than 1: acknowledgement from that many members; it must not exceed the number of members, otherwise the write blocks forever;
majority: in v3.2, acknowledgement from a majority of members, including the primary, and the write must be in the on-disk journal before it counts as successful;
<tag set>: acknowledgement from the instances carrying the given tag.
j: whether the write must already be in the journal file; boolean.
wtimeout: timeout for the acknowledgement; for example, if w is 10 but the cluster only has 9 members, the write would block forever, so set a timeout to avoid that.
2. Change the replica set configuration:
cfg = rs.conf()
cfg.settings = {}
cfg.settings.getLastErrorDefaults = {w: "majority"}
rs.reconfig(cfg)
Journal flushing:
vim /etc/mongod.conf
journal=true
journalCommitInterval=100
Not supporting joins means unrestrained horizontal scalability
Baidu Cloud
All workloads run on SSDs in RAID 0
Parallel oplog apply
Setting up sharding
1. Create the directories and files

# machine 1
mkdir -p /data/replset0/data/rs0
mkdir -p /data/replset0/log
mkdir -p /data/replset0/config
touch /data/replset0/config/rs0.conf
touch /data/replset0/log/rs0.log
mkdir -p /data/replset1/data/rs1
mkdir -p /data/replset1/log
mkdir -p /data/replset1/config
touch /data/replset1/config/rs1.conf
touch /data/replset1/log/rs1.log
mkdir -p /data/replset2/data/rs2
mkdir -p /data/replset2/log
mkdir -p /data/replset2/config
touch /data/replset2/config/rs2.conf
touch /data/replset2/log/rs2.log
mkdir -p /data/db_config/data/config0
mkdir -p /data/db_config/log/
mkdir -p /data/db_config/config/
touch /data/db_config/log/config0.log
touch /data/db_config/config/cfgserver0.conf

# machine 2
mkdir -p /data/replset0/data/rs0
mkdir -p /data/replset0/log
mkdir -p /data/replset0/config
touch /data/replset0/config/rs0.conf
touch /data/replset0/log/rs0.log
mkdir -p /data/replset1/data/rs1
mkdir -p /data/replset1/log
mkdir -p /data/replset1/config
touch /data/replset1/config/rs1.conf
touch /data/replset1/log/rs1.log
mkdir -p /data/replset2/data/rs2
mkdir -p /data/replset2/log
mkdir -p /data/replset2/config
touch /data/replset2/config/rs2.conf
touch /data/replset2/log/rs2.log
mkdir -p /data/db_config/data/config1
mkdir -p /data/db_config/log/
mkdir -p /data/db_config/config/
touch /data/db_config/log/config1.log
touch /data/db_config/config/cfgserver1.conf

# machine 3
mkdir -p /data/replset0/data/rs0
mkdir -p /data/replset0/log
mkdir -p /data/replset0/config
touch /data/replset0/config/rs0.conf
touch /data/replset0/log/rs0.log
mkdir -p /data/replset1/data/rs1
mkdir -p /data/replset1/log
mkdir -p /data/replset1/config
touch /data/replset1/config/rs1.conf
touch /data/replset1/log/rs1.log
mkdir -p /data/replset2/data/rs2
mkdir -p /data/replset2/log
mkdir -p /data/replset2/config
touch /data/replset2/config/rs2.conf
touch /data/replset2/log/rs2.log
mkdir -p /data/db_config/data/config2
mkdir -p /data/db_config/log/
mkdir -p /data/db_config/config/
touch /data/db_config/log/config2.log
touch /data/db_config/config/cfgserver2.conf

2. Write the replica-set config files

# machine 1
vi /data/replset0/config/rs0.conf
journal=true
port=4000
replSet=rsshard0
dbpath = /data/replset0/data/rs0
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb0.pid
logpath = /data/replset0/log/rs0.log
logappend = true
profile = 1
slowms = 5
fork = true

vi /data/replset1/config/rs1.conf
journal=true
port=4001
replSet=rsshard1
dbpath = /data/replset1/data/rs1
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb1.pid
logpath = /data/replset1/log/rs1.log
logappend = true
profile = 1
slowms = 5
fork = true

vi /data/replset2/config/rs2.conf
journal=true
port=4002
replSet=rsshard2
dbpath = /data/replset2/data/rs2
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb2.pid
logpath = /data/replset2/log/rs2.log
logappend = true
profile = 1
slowms = 5
fork = true

# machine 2
vi /data/replset0/config/rs0.conf
journal=true
port=4000
replSet=rsshard0
dbpath = /data/replset0/data/rs0
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb0.pid
logpath = /data/replset0/log/rs0.log
logappend = true
profile = 1
slowms = 5
fork = true

vi /data/replset1/config/rs1.conf
journal=true
port=4001
replSet=rsshard1
dbpath = /data/replset1/data/rs1
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb1.pid
logpath = /data/replset1/log/rs1.log
logappend = true
profile = 1
slowms = 5
fork = true

vi /data/replset2/config/rs2.conf
journal=true
port=4002
replSet=rsshard2
dbpath = /data/replset2/data/rs2
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb2.pid
logpath = /data/replset2/log/rs2.log
logappend = true
profile = 1
slowms = 5
fork = true

# machine 3
vi /data/replset0/config/rs0.conf
journal=true
port=4000
replSet=rsshard0
dbpath = /data/replset0/data/rs0
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb0.pid
logpath = /data/replset0/log/rs0.log
logappend = true
profile = 1
slowms = 5
fork = true

vi /data/replset1/config/rs1.conf
journal=true
port=4001
replSet=rsshard1
dbpath = /data/replset1/data/rs1
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb1.pid
logpath = /data/replset1/log/rs1.log
logappend = true
profile = 1
slowms = 5
fork = true

vi /data/replset2/config/rs2.conf
journal=true
port=4002
replSet=rsshard2
dbpath = /data/replset2/data/rs2
shardsvr = true
oplogSize = 100
pidfilepath = /usr/local/mongodb/mongodb2.pid
logpath = /data/replset2/log/rs2.log
logappend = true
profile = 1
slowms = 5
fork = true

3. Start the replica-set members

# start mongod on all three machines
/usr/local/mongodb/bin/mongod --config /data/replset0/config/rs0.conf
/usr/local/mongodb/bin/mongod --config /data/replset1/config/rs1.conf
/usr/local/mongodb/bin/mongod --config /data/replset2/config/rs2.conf
/usr/local/mongodb/bin/mongod --config /data/replset0/config/rs0.conf
/usr/local/mongodb/bin/mongod --config /data/replset1/config/rs1.conf
/usr/local/mongodb/bin/mongod --config /data/replset2/config/rs2.conf
/usr/local/mongodb/bin/mongod --config /data/replset0/config/rs0.conf
/usr/local/mongodb/bin/mongod --config /data/replset1/config/rs1.conf
/usr/local/mongodb/bin/mongod --config /data/replset2/config/rs2.conf

4. Initialize the replica sets

# machine 1
mongo --port 4000
use admin
config = { _id:"rsshard0", members:[ {_id:0,host:"192.168.1.155:4000"}, {_id:1,host:"192.168.14.221:4000"}, {_id:2,host:"192.168.14.198:4000"}] }
rs.initiate(config);
rs.conf()

# machine 2
mongo --port 4001
use admin
config = { _id:"rsshard1", members:[ {_id:0,host:"192.168.1.155:4001"}, {_id:1,host:"192.168.14.221:4001"}, {_id:2,host:"192.168.14.198:4001"}] }
rs.initiate(config);
rs.conf()

# machine 3
mongo --port 4002
use admin
config = { _id:"rsshard2", members:[ {_id:0,host:"192.168.1.155:4002"}, {_id:1,host:"192.168.14.221:4002"}, {_id:2,host:"192.168.14.198:4002"}] }
rs.initiate(config);
rs.conf()

# machine 1
cfg = rs.conf()
cfg.members[0].priority = 2
cfg.members[1].priority = 1
cfg.members[2].priority = 1
rs.reconfig(cfg)

# machine 2
cfg = rs.conf()
cfg.members[0].priority = 1
cfg.members[1].priority = 2
cfg.members[2].priority = 1
rs.reconfig(cfg)

# machine 3
cfg = rs.conf()
cfg.members[0].priority = 1
cfg.members[1].priority = 1
cfg.members[2].priority = 2
rs.reconfig(cfg)

5. Configure the config servers

# machine 1
vi /data/db_config/config/cfgserver0.conf
journal=true
pidfilepath = /data/db_config/config/mongodb.pid
dbpath = /data/db_config/data/config0
directoryperdb = true
configsvr = true
port = 5000
logpath = /data/db_config/log/config0.log
logappend = true
fork = true

# machine 2
vi /data/db_config/config/cfgserver1.conf
journal=true
pidfilepath = /data/db_config/config/mongodb.pid
dbpath = /data/db_config/data/config1
directoryperdb = true
configsvr = true
port = 5000
logpath = /data/db_config/log/config1.log
logappend = true
fork = true

# machine 3
vi /data/db_config/config/cfgserver2.conf
journal=true
pidfilepath = /data/db_config/config/mongodb.pid
dbpath = /data/db_config/data/config2
directoryperdb = true
configsvr = true
port = 5000
logpath = /data/db_config/log/config2.log
logappend = true
fork = true

# machine 1
/usr/local/mongodb/bin/mongod --config /data/db_config/config/cfgserver0.conf
# machine 2
/usr/local/mongodb/bin/mongod --config /data/db_config/config/cfgserver1.conf
# machine 3
/usr/local/mongodb/bin/mongod --config /data/db_config/config/cfgserver2.conf

6. Configure the mongos router
# run on all three machines
mkdir -p /data/mongos/log/
touch /data/mongos/log/mongos.log
touch /data/mongos/mongos.conf
vi /data/mongos/mongos.conf
#configdb = 192.168.1.155:5000,192.168.14.221:5000,192.168.14.198:5000
configdb = 192.168.1.155:5000 // in the end only a single config server could be used
port = 6000
chunkSize = 1
logpath = /data/mongos/log/mongos.log
logappend = true
fork = true

mongos --config /data/mongos/mongos.conf

7. Add the shards
mongo 192.168.1.155:6000 // connect to the first machine
# add each shard; arbiter nodes cannot be added to a shard
sh.addShard("rsshard0/192.168.1.155:4000,192.168.14.221:4000,192.168.14.198:4000")
sh.addShard("rsshard1/192.168.1.155:4001,192.168.14.221:4001,192.168.14.198:4001")
sh.addShard("rsshard2/192.168.1.155:4002,192.168.14.221:4002,192.168.14.198:4002")

# check the state
sh.status();
--- Sharding Status ---
  sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("565eac6d8e75f6a7d3e6e65e") }
  shards:
    { "_id" : "rsshard0", "host" : "rsshard0/192.168.1.155:4000,192.168.14.198:4000,192.168.14.221:4000" }
    { "_id" : "rsshard1", "host" : "rsshard1/192.168.1.155:4001,192.168.14.198:4001,192.168.14.221:4001" }
    { "_id" : "rsshard2", "host" : "rsshard2/192.168.1.155:4002,192.168.14.198:4002,192.168.14.221:4002" }
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }

# enable sharding for the database and the collection
mongos> use admin
mongos> db.runCommand({enablesharding:"testdb"})
mongos> db.runCommand( { shardcollection : "testdb.books", key : { id : 1 } } )

# test
use testdb
mongos> for (var i = 1; i <= 20000; i++){db.books.save({id:i,name:"ttbook",sex:"male",age:27,value:"test"})}

# look at the sharding statistics
db.books.stats()
--------------------------------------------------------------------------------------------------------------------------
Problems encountered
http://www.thinksaas.cn/group/topic/344494/
Fix for the mongodb "config servers not in sync" problem
tintindesign, posted 2014-11-10 00:12:58
I have a MongoDB sharded cluster: two mongod shards, three config servers and one mongos, and everything used to work fine. The server running mongos has no public IP, but we needed to publish offline data to production, so I planned to start another mongos on a production server that does have a public IP. It refused to start; the log said "config servers not in sync", and it always claimed that two of the three config servers differed...
After some googling, people suggested wiping the data of the broken config server, dumping the data from a healthy one and restoring it into the broken one, but I had no idea which one was actually broken. There is an unresolved issue about this in MongoDB's JIRA: https://jira.mongodb.org/browse/SERVER-3698, so there is currently no way to tell which config server is out of sync.
So I just tried them one by one: I stopped all three config servers, renamed the data directories on two of them, scp'ed the entire data directory of the remaining one over to those two, brought all config servers back up and then started mongos, and everything was in harmony again~~
Personally I feel the synchronization between MongoDB config servers is still a bit unreliable; perhaps for now a single config server is actually more stable.

I NETWORK [mongosMain] scoped connection to 192.168.1.155:5000,192.168.14.221:5000,192.168.14.198:5000 not being returned to the pool
2015-11-28T16:15:00.254+0800 E - [mongosMain] error upgrading config database to v6 :: caused by :: DistributedClockSkewed clock skew of the cluster 192.168.1.155:5000,192.168.14.221:5000,192.168.14.198:5000 is too far out of bounds to allow distributed locking.
[mongosMain] error upgrading config database to v6 :: caused by :: DistributedClockSkewed clock skew of the cluster 192.168.1.155:5000,192.168.14.221:5000,192.168.14.198:5000 is too far out of bounds to allow distributed locking.
2015-11-28T16:45:43.846+0800 I CONTROL ***** SERVER RESTARTED *****
2015-11-28T16:45:43.851+0800 I CONTROL ** WARNING: You are running this process as the root user, which is not recommended.
2015-11-28T16:45:43.851+0800 I CONTROL
2015-11-28T16:45:43.851+0800 I SHARDING [mongosMain] MongoS version 3.0.7 starting: pid=46938 port=6000 64-bit host=steven (--help for usage)
2015-11-28T16:45:43.851+0800 I CONTROL [mongosMain] db version v3.0.7
2015-11-28T16:45:43.851+0800 I CONTROL [mongosMain] git version: 6ce7cbe8c6b899552dadd907604559806aa2e9bd
2015-11-28T16:45:43.851+0800 I CONTROL [mongosMain] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
2015-11-28T16:45:43.851+0800 I CONTROL [mongosMain] allocator: tcmalloc
2015-11-28T16:45:43.851+0800 I CONTROL [mongosMain] options: { config: "/data/mongos/mongos.conf", net: { port: 6000 }, processManagement: { fork: true }, sharding: { chunkSize: 1, configDB: "192.168.1.155:5000,192.168.14.221:5000,192.168.14.198:5000" }, systemLog: { destination: "file", logAppend: true, path: "/data/mongos/log/mongos.log" } }
2015-11-28T16:45:43.860+0800 W SHARDING [mongosMain] config servers 192.168.1.155:5000 and 192.168.14.221:5000 differ
2015-11-28T16:45:43.861+0800 W SHARDING [mongosMain] config servers 192.168.1.155:5000 and 192.168.14.221:5000 differ
2015-11-28T16:45:43.863+0800 W SHARDING [mongosMain] config servers 192.168.1.155:5000 and 192.168.14.221:5000 differ
2015-11-28T16:45:43.864+0800 W SHARDING [mongosMain] config servers 192.168.1.155:5000 and 192.168.14.221:5000 differ
2015-11-28T16:45:43.864+0800 E SHARDING [mongosMain] could not verify that config servers are in sync :: caused by :: config servers 192.168.1.155:5000 and 192.168.14.221:5000 differ: { chunks: "d41d8cd98f00b204e9800998ecf8427e", shards: "d41d8cd98f00b204e9800998ecf8427e", version: "8c18a7ed8908f1c2ec628d2a0af4bf3c" } vs {}
2015-11-28T16:45:43.864+0800 I - [mongosMain] configServer connection startup check failed
-------------------------------------------------------------------------------------------------------------------------------------
db.books.stats()
{
    "sharded" : true,
    "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
    "userFlags" : 1,
    "capped" : false,
    "ns" : "testdb.books",
    "count" : 20000,
    "numExtents" : 11,
    "size" : 2240000,
    "storageSize" : 5595136,
    "totalIndexSize" : 1267280,
    "indexSizes" : { "_id_" : 678608, "id_1" : 588672 },
    "avgObjSize" : 112,
    "nindexes" : 2,
    "nchunks" : 5,
    "shards" : {
        "rsshard0" : {
            "ns" : "testdb.books",
            "count" : 9443,
            "size" : 1057616,
            "avgObjSize" : 112,
            "numExtents" : 5,
            "storageSize" : 2793472,
            "lastExtentSize" : 2097152,
            "paddingFactor" : 1,
            "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
            "userFlags" : 1,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 596848,
            "indexSizes" : { "_id_" : 318864, "id_1" : 277984 },
            "ok" : 1,
            "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("565975792a5b76c2553522a5") }
        },
        "rsshard1" : {
            "ns" : "testdb.books",
            "count" : 10549,
            "size" : 1181488,
            "avgObjSize" : 112,
            "numExtents" : 5,
            "storageSize" : 2793472,
            "lastExtentSize" : 2097152,
            "paddingFactor" : 1,
            "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
            "userFlags" : 1,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 654080,
            "indexSizes" : { "_id_" : 351568, "id_1" : 302512 },
            "ok" : 1,
            "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("565eac15357442cd3ead5103") }
        },
        "rsshard2" : {
            "ns" : "testdb.books",
            "count" : 8,
            "size" : 896,
            "avgObjSize" : 112,
            "numExtents" : 1,
            "storageSize" : 8192,
            "lastExtentSize" : 8192,
            "paddingFactor" : 1,
            "paddingFactorNote" : "paddingFactor is unused and unmaintained in 3.0. It remains hard coded to 1.0 for compatibility only.",
            "userFlags" : 1,
            "capped" : false,
            "nindexes" : 2,
            "totalIndexSize" : 16352,
            "indexSizes" : { "_id_" : 8176, "id_1" : 8176 },
            "ok" : 1,
            "$gleStats" : { "lastOpTime" : Timestamp(0, 0), "electionId" : ObjectId("565eab094c148b20ecf4b442") }
        }
    },
    "ok" : 1
}

sh.status()
--- Sharding Status ---
  sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("565eac6d8e75f6a7d3e6e65e") }
  shards:
    { "_id" : "rsshard0", "host" : "rsshard0/192.168.1.155:4000,192.168.14.198:4000,192.168.14.221:4000" }
    { "_id" : "rsshard1", "host" : "rsshard1/192.168.1.155:4001,192.168.14.198:4001,192.168.14.221:4001" }
    { "_id" : "rsshard2", "host" : "rsshard2/192.168.1.155:4002,192.168.14.198:4002,192.168.14.221:4002" }
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      No recent migrations
  databases:
    { "_id" : "admin", "partitioned" : false, "primary" : "config" }
    { "_id" : "test", "partitioned" : false, "primary" : "rsshard0" }
    { "_id" : "testdb", "partitioned" : true, "primary" : "rsshard0" }
        testdb.books
            shard key: { "id" : 1 }
            chunks:
                rsshard0  2
                rsshard1  2
                rsshard2  1
            { "id" : { "$minKey" : 1 } } -->> { "id" : 2 } on : rsshard1 Timestamp(2, 0)
            { "id" : 2 } -->> { "id" : 10 } on : rsshard2 Timestamp(3, 0)
            { "id" : 10 } -->> { "id" : 4691 } on : rsshard0 Timestamp(4, 1)
            { "id" : 4691 } -->> { "id" : 9453 } on : rsshard0 Timestamp(3, 3)
            { "id" : 9453 } -->> { "id" : { "$maxKey" : 1 } } on : rsshard1 Timestamp(4, 0)
    { "_id" : "aaaa", "partitioned" : false, "primary" : "rsshard0" }
--------------------------------------------------------------------------------------
Sample config files

rs
journal=true
port=4000
replSet=rsshard0
dbpath = /data/mongodb/data
shardsvr = true // this mongod is a shard member
oplogSize = 100 // oplog size in MB
pidfilepath = /usr/local/mongodb/mongodb.pid // pid file path
logpath = /data/replset0/log/rs0.log // log file path
logappend = true // append to the log instead of overwriting it
profile = 1 // database profiling, 1 = record only slow operations
slowms = 5 // threshold in ms for an operation to count as slow
fork = true // run as a daemon (fork a server process)

mongos
configdb = 192.168.1.155:5000,192.168.14.221:5000,192.168.14.198:5000 // config server ip:port list; only 1 or 3 entries are allowed
port = 6000
chunkSize = 1 // in MB; for production use 100 or remove the line (the default is 64)
logpath = /data/mongos/log/mongos.log
logappend = true
fork = true
分片之后的執行計划
db.books.find({id:1}).explain()
{
    "queryPlanner" : {
        "mongosPlannerVersion" : 1,
        "winningPlan" : {
            "stage" : "SINGLE_SHARD",
            "shards" : [
                {
                    "shardName" : "rsshard1",   // on the second shard
                    "connectionString" : "rsshard1/192.168.1.155:4001,192.168.14.198:4001,192.168.14.221:4001",
                    "serverInfo" : {            // server info of the second shard
                        "host" : "steven2",
                        "port" : 4001,
                        "version" : "3.0.7",
                        "gitVersion" : "6ce7cbe8c6b899552dadd907604559806aa2e9bd"
                    },
                    "plannerVersion" : 1,
                    "namespace" : "testdb.books",
                    "indexFilterSet" : false,
                    "parsedQuery" : {
                        "id" : {                // id equals 1
                            "$eq" : 1
                        }
                    },
                    "winningPlan" : {
                        "stage" : "FETCH",
                        "inputStage" : {
                            "stage" : "SHARDING_FILTER",
                            "inputStage" : {
                                "stage" : "IXSCAN",
                                "keyPattern" : { "id" : 1 },
                                "indexName" : "id_1",
                                "isMultiKey" : false,
                                "direction" : "forward",
                                "indexBounds" : {   // index bounds
                                    "id" : [ "[1.0, 1.0]" ]
                                }
                            }
                        }
                    },
                    "rejectedPlans" : [ ]
                }
            ]
        }
    },
    "ok" : 1
}
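Worth comparing against the same testdb.books collection (a sketch, not output captured from this cluster): a filter that does not contain the shard key cannot be routed to a single shard, so mongos has to ask every shard and merge the results.
db.books.find({name: "ttbook"}).explain()
// expect the top-level winning plan stage to be something like SHARD_MERGE with all three shards listed,
// instead of the SINGLE_SHARD plan shown above
db.books.find({id: 1}).explain("executionStats")
// "executionStats" additionally reports how many index keys and documents each shard actually examined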
[mongosMain] warning: config servers social-11:27021 and social-11:27023 differ
config servers are not in sync【mongo】
http://m.blog.csdn.net/blog/u011321811/38372937
http://xiao9.iteye.com/blog/1395593
今天兩台開發機突然掛掉了,只剩下一台,機器重新恢復后,在恢復mongos的過程中,config server報錯,
具體日志見:
ERROR: could not verify that config servers are in sync :: caused by :: config servers xx.xx.xx.xx:20000 and yy.yy.yy.yy:20000 differ: { chunks: "f0d00cf4266edb17c63538d24e51b545", collections: "331a71ef5fd89be1d4e02d0ad6ed1e55", databases: "8653e07cb59685b0e89b1fd094a30133", shards: "0a1b3f23160cd5dc731fd837cfb6d081", version: "9ec885c985db1d9fb06e6e7d61814668" } vs { chunks: "99771bf8ac9d42dfbb7332e7fa08d377", collections: "331a71ef5fd89be1d4e02d0ad6ed1e55", databases: "8653e07cb59685b0e89b1fd094a30133", shards: "0a1b3f23160cd5dc731fd837cfb6d081", version: "9ec885c985db1d9fb06e6e7d61814668" }
2014-08-04T17:03:40.232+0800 [mongosMain] configServer connection startup check failed
直接google,發現這種情況的原因在於兩個機器的config server記錄的信息不一致導致。修復的方法,在mongo官方的jira中已經列出(https://jira.mongodb.org/browse/SERVER-10737)。
這里做個記錄,並且簡單說明下恢復的方法:
連接到每個分片的configserver,在我機器上是20000端口,運行db.runCommand('dbhash')
在每台機器上都運行上述命令,比較理想的情況,會找到兩個md5一樣的機器。
然后將與其他兩台不一致的mongo進程都殺死,將另一台機器上的dbpath下的數據都拷到出問題的那台機器上。
重啟日志中報錯的兩台機器的config server
試着啟動mongos,看是否還存在上述問題。
而在我的環境中,由於兩台機器先后掛掉,最終比較發現,shard中的3台機器,配置均不一樣。所以我決定采用一直存活的mongoconfig的配置,將另外兩台機器的進程殺死,數據刪除,拷貝數據,重啟。由於我的是線下環境,處理比較隨意,生產環境請一定選擇正確的數據恢復方法。
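A minimal mongo shell sketch of the dbhash comparison described above, assuming the three config servers from the earlier setup (adjust the host:port list to your own environment):
var cfgHosts = ["192.168.1.155:5000", "192.168.14.221:5000", "192.168.14.198:5000"];
cfgHosts.forEach(function (h) {
    // direct connection to one config server, then hash its config database
    var res = new Mongo(h).getDB("config").runCommand("dbhash");
    print(h + "  md5=" + res.md5);   // config servers that are in sync report the same md5
});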
http://www.mongoing.com/anspress/question/2321/%E8%AF%B7%E9%97%AE%E5%A4%8D%E5%88%B6%E9%9B%86%E5%A6%82%E4%BD%95%E6%89%8B%E5%8A%A8%E5%88%87%E6%8D%A2
你可以用rs.stepDown()來降下主節點,但是不能保證新的主節點一定是在某一台從節點機器。另外一個方式就是 通過提高那台機器的優先級的方式來導致重新選舉並選為主節點。如下:
conf=rs.conf();
conf.members[2].priority=100; // 假設你希望第3個節點成為主。默認priority是1
rs.reconfig(conf);
注意:在維護窗口期間做,不要在高峰做
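For completeness, the rs.stepDown() route mentioned above looks like this (a sketch; the argument is how many seconds the old primary stays ineligible for re-election, giving another member the chance to win):
// run on the current primary, during a maintenance window
rs.stepDown(120)   // step down and refuse to stand for election again for 120 seconds
rs.status()        // check which member the election promoted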
復制集里面沒有arbiter也可以故障轉移
http://www.mongoing.com/anspress/question/2310/復制集里面沒有arbiter也可以故障轉移
lyhabc 在 1周 之前 提問了問題
搭建了一個復制集,三個機器,問一下為何沒有arbiter也可以故障轉移
TJ answered about 1小時 ago
arbiter這個名字有點誤導人,以為它是權威的仲裁者。其實他只是一個有投票能力的空節點。
普通節點: 數據+投票
arbiter: 投票
所以,當你使用3個普通節點的時候,你實際上已經有了3個投票節點。arbiter作為專門的投票節點,只是在你數據節點湊不夠3個或奇數的時候才用得着。
跟sqlserver的鏡像原理一樣,見證機器只是空節點,我明白了
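If there were only two data-bearing members, that is exactly where an arbiter would be added so elections still have three votes; a hedged sketch (the host:port is hypothetical):
// run on the primary of a two-member replica set
rs.addArb("192.168.1.200:4003")   // hypothetical address: adds a voting member that stores no data
rs.status()                        // the new member shows up with stateStr "ARBITER"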
公司內網192.168.1.8上面的mongodb測試庫
用table view
地址:http://files.cnblogs.com/files/MYSQLZOUQI/mongodbportaldb.7z
2015-10-30T05:59:12.386+0800 I JOURNAL [initandlisten] journal dir=/data/mongodb/data/journal
2015-10-30T05:59:12.386+0800 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
2015-10-30T05:59:12.518+0800 I JOURNAL [durability] Durability thread started
2015-10-30T05:59:12.518+0800 I JOURNAL [journal writer] Journal writer thread started
2015-10-30T05:59:12.521+0800 I CONTROL [initandlisten] MongoDB starting : pid=4479 port=27017 dbpath=/data/mongodb/data/ 64-bit host=steven
2015-10-30T05:59:12.521+0800 I CONTROL [initandlisten]
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten]
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten]
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten]
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] db version v3.0.7
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] git version: 6ce7cbe8c6b899552dadd907604559806aa2e9bd
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] allocator: tcmalloc
2015-10-30T05:59:12.522+0800 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, processManagement: { fork: true, pidFilePath: "/usr/local/mongodb/mongo.pid" }, replication: { oplogSizeMB: 2048 }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/data/", directoryPerDB: true }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/logs/mongo.log" } }
2015-10-30T05:59:12.536+0800 I INDEX [initandlisten] allocating new ns file /data/mongodb/data/local/local.ns, filling with zeroes...
2015-10-30T05:59:12.858+0800 I STORAGE [FileAllocator] allocating new datafile /data/mongodb/data/local/local.0, filling with zeroes... //填0初始化 數據文件
2015-10-30T05:59:12.858+0800 I STORAGE [FileAllocator] creating directory /data/mongodb/data/local/_tmp
2015-10-30T05:59:12.866+0800 I STORAGE [FileAllocator] done allocating datafile /data/mongodb/data/local/local.0, size: 64MB, took 0.001 secs
2015-10-30T05:59:12.876+0800 I NETWORK [initandlisten] waiting for connections on port 27017
2015-10-30T05:59:14.325+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:40766 #1 (1 connection now open)
2015-10-30T05:59:14.328+0800 I NETWORK [conn1] end connection 192.168.1.106:40766 (0 connections now open)
2015-10-30T05:59:24.339+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:40769 #2 (1 connection now open) //接受192.168.1.106的連接
2015-10-30T06:00:20.348+0800 I CONTROL [signalProcessingThread] got signal 15 (Terminated), will terminate after current cmd ends
2015-10-30T06:00:20.348+0800 I CONTROL [signalProcessingThread] now exiting
2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] shutdown: going to close listening sockets...
2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] closing listening socket: 6
2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] closing listening socket: 7
2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] removing socket file: /tmp/mongodb-27017.sock //socket方式通信
2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] shutdown: going to flush diaglog...
2015-10-30T06:00:20.348+0800 I NETWORK [signalProcessingThread] shutdown: going to close sockets...
2015-10-30T06:00:20.348+0800 I STORAGE [signalProcessingThread] shutdown: waiting for fs preallocator...
2015-10-30T06:00:20.348+0800 I STORAGE [signalProcessingThread] shutdown: final commit...
2015-10-30T06:00:20.349+0800 I JOURNAL [signalProcessingThread] journalCleanup...
2015-10-30T06:00:20.349+0800 I JOURNAL [signalProcessingThread] removeJournalFiles
2015-10-30T06:00:20.349+0800 I NETWORK [conn2] end connection 192.168.1.106:40769 (0 connections now open)
2015-10-30T06:00:20.356+0800 I JOURNAL [signalProcessingThread] Terminating durability thread ...
2015-10-30T06:00:20.453+0800 I JOURNAL [journal writer] Journal writer thread stopped
2015-10-30T06:00:20.454+0800 I JOURNAL [durability] Durability thread stopped
2015-10-30T06:00:20.455+0800 I STORAGE [signalProcessingThread] shutdown: closing all files...
2015-10-30T06:00:20.457+0800 I STORAGE [signalProcessingThread] closeAllFiles() finished
2015-10-30T06:00:20.457+0800 I STORAGE [signalProcessingThread] shutdown: removing fs lock...
2015-10-30T06:00:20.457+0800 I CONTROL [signalProcessingThread] dbexit: rc: 0
2015-10-30T06:01:20.259+0800 I CONTROL ***** SERVER RESTARTED *****
2015-10-30T06:01:20.290+0800 I JOURNAL [initandlisten] journal dir=/data/mongodb/data/journal
2015-10-30T06:01:20.291+0800 I JOURNAL [initandlisten] recover : no journal files present, no recovery needed
2015-10-30T06:01:20.439+0800 I JOURNAL [initandlisten] preallocateIsFaster=true 2.36
2015-10-30T06:01:20.544+0800 I JOURNAL [durability] Durability thread started
2015-10-30T06:01:20.546+0800 I JOURNAL [journal writer] Journal writer thread started
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] MongoDB starting : pid=4557 port=27017 dbpath=/data/mongodb/data/ 64-bit host=steven
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten]
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten]
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten]
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten]
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] db version v3.0.7
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] git version: 6ce7cbe8c6b899552dadd907604559806aa2e9bd
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] allocator: tcmalloc
2015-10-30T06:01:20.547+0800 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, processManagement: { fork: true, pidFilePath: "/usr/local/mongodb/mongo.pid" }, replication: { oplogSizeMB: 2048 }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/data/", directoryPerDB: true }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/logs/mongo.log" } }
2015-10-30T06:01:20.582+0800 I NETWORK [initandlisten] waiting for connections on port 27017
2015-10-30T06:01:28.390+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:40798 #1 (1 connection now open)
2015-10-30T06:01:28.398+0800 I NETWORK [conn1] end connection 192.168.1.106:40798 (0 connections now open)
2015-10-30T06:01:38.394+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:40800 #2 (1 connection now open)
2015-10-30T07:01:39.383+0800 I NETWORK [conn2] end connection 192.168.1.106:40800 (0 connections now open)
2015-10-30T07:01:39.384+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:42327 #3 (1 connection now open)
2015-10-30T07:32:40.910+0800 I NETWORK [conn3] end connection 192.168.1.106:42327 (0 connections now open)
2015-10-30T07:32:40.910+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:43130 #4 (2 connections now open)
2015-10-30T08:32:43.957+0800 I NETWORK [conn4] end connection 192.168.1.106:43130 (0 connections now open)
2015-10-30T08:32:43.957+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:46481 #5 (2 connections now open)
2015-10-31T04:27:00.240+0800 I CONTROL ***** SERVER RESTARTED ***** //服務器非法關機,需要recover 鳳勝踢了機器電源
2015-10-31T04:27:00.703+0800 W - [initandlisten] Detected unclean shutdown - /data/mongodb/data/mongod.lock is not empty. //檢測到不是clean shutdown
2015-10-31T04:27:00.812+0800 I JOURNAL [initandlisten] journal dir=/data/mongodb/data/journal
2015-10-31T04:27:00.812+0800 I JOURNAL [initandlisten] recover begin //mongodb開始還原 記錄lsn
2015-10-31T04:27:01.048+0800 I JOURNAL [initandlisten] recover lsn: 6254831
2015-10-31T04:27:01.048+0800 I JOURNAL [initandlisten] recover /data/mongodb/data/journal/j._0
2015-10-31T04:27:01.089+0800 I JOURNAL [initandlisten] recover skipping application of section seq:0 < lsn:6254831
2015-10-31T04:27:01.631+0800 I JOURNAL [initandlisten] recover cleaning up
2015-10-31T04:27:01.632+0800 I JOURNAL [initandlisten] removeJournalFiles
2015-10-31T04:27:01.680+0800 I JOURNAL [initandlisten] recover done
2015-10-31T04:27:03.006+0800 I JOURNAL [initandlisten] preallocateIsFaster=true 25.68
2015-10-31T04:27:04.076+0800 I JOURNAL [initandlisten] preallocateIsFaster=true 19.9
2015-10-31T04:27:06.896+0800 I JOURNAL [initandlisten] preallocateIsFaster=true 35.5
2015-10-31T04:27:06.896+0800 I JOURNAL [initandlisten] preallocateIsFaster check took 5.215 secs
2015-10-31T04:27:06.896+0800 I JOURNAL [initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.0
2015-10-31T04:27:09.005+0800 I - [initandlisten] File Preallocator Progress: 325058560/1073741824 30%
2015-10-31T04:27:12.236+0800 I - [initandlisten] File Preallocator Progress: 440401920/1073741824 41%
2015-10-31T04:27:15.006+0800 I - [initandlisten] File Preallocator Progress: 713031680/1073741824 66%
2015-10-31T04:27:18.146+0800 I - [initandlisten] File Preallocator Progress: 817889280/1073741824 76%
2015-10-31T04:27:21.130+0800 I - [initandlisten] File Preallocator Progress: 912261120/1073741824 84%
2015-10-31T04:27:24.477+0800 I - [initandlisten] File Preallocator Progress: 1017118720/1073741824 94%
2015-10-31T04:28:08.132+0800 I JOURNAL [initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.1
2015-10-31T04:28:11.904+0800 I - [initandlisten] File Preallocator Progress: 629145600/1073741824 58%
2015-10-31T04:28:14.260+0800 I - [initandlisten] File Preallocator Progress: 692060160/1073741824 64%
2015-10-31T04:28:17.335+0800 I - [initandlisten] File Preallocator Progress: 796917760/1073741824 74%
2015-10-31T04:28:20.440+0800 I - [initandlisten] File Preallocator Progress: 859832320/1073741824 80%
2015-10-31T04:28:23.274+0800 I - [initandlisten] File Preallocator Progress: 922746880/1073741824 85%
2015-10-31T04:28:26.638+0800 I - [initandlisten] File Preallocator Progress: 1017118720/1073741824 94%
2015-10-31T04:29:01.643+0800 I JOURNAL [initandlisten] preallocating a journal file /data/mongodb/data/journal/prealloc.2
2015-10-31T04:29:04.032+0800 I - [initandlisten] File Preallocator Progress: 450887680/1073741824 41%
2015-10-31T04:29:09.015+0800 I - [initandlisten] File Preallocator Progress: 566231040/1073741824 52%
2015-10-31T04:29:12.181+0800 I - [initandlisten] File Preallocator Progress: 828375040/1073741824 77%
2015-10-31T04:29:15.125+0800 I - [initandlisten] File Preallocator Progress: 964689920/1073741824 89%
2015-10-31T04:29:34.755+0800 I JOURNAL [durability] Durability thread started
2015-10-31T04:29:34.755+0800 I JOURNAL [journal writer] Journal writer thread started
2015-10-31T04:29:35.029+0800 I CONTROL [initandlisten] MongoDB starting : pid=1672 port=27017 dbpath=/data/mongodb/data/ 64-bit host=steven
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten]
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten]
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] ** We suggest setting it to 'never'
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten]
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] ** WARNING: soft rlimits too low. rlimits set to 1024 processes, 64000 files. Number of processes should be at least 32000 : 0.5 times number of files.
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten]
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] db version v3.0.7
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] git version: 6ce7cbe8c6b899552dadd907604559806aa2e9bd
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] build info: Linux ip-10-101-218-12 2.6.32-220.el6.x86_64 #1 SMP Wed Nov 9 08:03:13 EST 2011 x86_64 BOOST_LIB_VERSION=1_49
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] allocator: tcmalloc
2015-10-31T04:29:35.031+0800 I CONTROL [initandlisten] options: { config: "/etc/mongod.conf", net: { port: 27017 }, processManagement: { fork: true, pidFilePath: "/usr/local/mongodb/mongo.pid" }, replication: { oplogSizeMB: 2048 }, sharding: { clusterRole: "shardsvr" }, storage: { dbPath: "/data/mongodb/data/", directoryPerDB: true }, systemLog: { destination: "file", logAppend: true, path: "/data/mongodb/logs/mongo.log" } }
2015-10-31T04:29:36.869+0800 I NETWORK [initandlisten] waiting for connections on port 27017
2015-10-31T04:39:39.671+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:3134 #1 (1 connection now open)
2015-10-31T04:39:40.042+0800 I COMMAND [conn1] command admin.$cmd command: isMaster { isMaster: true } keyUpdates:0 writeConflicts:0 numYields:0 reslen:178 locks:{} 229ms
2015-10-31T04:39:40.379+0800 I NETWORK [conn1] end connection 192.168.1.106:3134 (0 connections now open)
2015-10-31T04:40:10.117+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:3137 #2 (1 connection now open)
2015-10-31T04:40:13.357+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:3138 #3 (2 connections now open)
2015-10-31T04:40:13.805+0800 I COMMAND [conn3] command local.$cmd command: usersInfo { usersInfo: 1 } keyUpdates:0 writeConflicts:0 numYields:0 reslen:49 locks:{ Global: { acquireCount: { r: 2 } }, MMAPV1Journal: { acquireCount: { r: 1 } }, Database: { acquireCount: { r: 1 } }, Collection: { acquireCount: { R: 1 } } } 304ms
2015-10-31T04:49:30.223+0800 I NETWORK [conn2] end connection 192.168.1.106:3137 (1 connection now open)
2015-10-31T04:49:30.223+0800 I NETWORK [conn3] end connection 192.168.1.106:3138 (0 connections now open)
2015-10-31T04:56:27.271+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:4335 #4 (1 connection now open)
2015-10-31T04:56:29.449+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:4336 #5 (2 connections now open)
2015-10-31T04:58:17.514+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:4356 #6 (3 connections now open)
2015-10-31T05:02:55.219+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:4902 #7 (4 connections now open)
2015-10-31T05:03:57.954+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:4907 #8 (5 connections now open)
2015-10-31T05:10:25.905+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:5064 #9 (6 connections now open)
2015-10-31T05:16:00.026+0800 I NETWORK [conn7] end connection 192.168.1.106:4902 (5 connections now open)
2015-10-31T05:16:00.101+0800 I NETWORK [conn8] end connection 192.168.1.106:4907 (4 connections now open)
2015-10-31T05:16:00.163+0800 I NETWORK [conn9] end connection 192.168.1.106:5064 (3 connections now open)
2015-10-31T05:26:28.837+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:5654 #10 (4 connections now open)
2015-10-31T05:26:28.837+0800 I NETWORK [conn4] end connection 192.168.1.106:4335 (2 connections now open)
2015-10-31T05:26:30.969+0800 I NETWORK [conn5] end connection 192.168.1.106:4336 (2 connections now open)
2015-10-31T05:26:30.973+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:5655 #11 (3 connections now open)
2015-10-31T05:56:30.336+0800 I NETWORK [conn10] end connection 192.168.1.106:5654 (2 connections now open)
2015-10-31T05:56:30.337+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:6153 #12 (3 connections now open)
2015-10-31T05:56:32.457+0800 I NETWORK [conn11] end connection 192.168.1.106:5655 (2 connections now open)
2015-10-31T05:56:32.458+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:6154 #13 (4 connections now open)
2015-10-31T06:26:31.837+0800 I NETWORK [conn12] end connection 192.168.1.106:6153 (2 connections now open)
2015-10-31T06:26:31.838+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:6514 #14 (3 connections now open)
2015-10-31T06:26:33.961+0800 I NETWORK [conn13] end connection 192.168.1.106:6154 (2 connections now open)
2015-10-31T06:26:33.962+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:6515 #15 (4 connections now open)
2015-10-31T06:27:09.518+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:6563 #16 (4 connections now open)
2015-10-31T06:29:57.407+0800 I INDEX [conn16] allocating new ns file /data/mongodb/data/testlyh/testlyh.ns, filling with zeroes...
2015-10-31T06:29:57.846+0800 I STORAGE [FileAllocator] allocating new datafile /data/mongodb/data/testlyh/testlyh.0, filling with zeroes...
2015-10-31T06:29:57.847+0800 I STORAGE [FileAllocator] creating directory /data/mongodb/data/testlyh/_tmp
2015-10-31T06:29:57.871+0800 I STORAGE [FileAllocator] done allocating datafile /data/mongodb/data/testlyh/testlyh.0, size: 64MB, took 0.003 secs
2015-10-31T06:29:57.890+0800 I COMMAND [conn16] command testlyh.$cmd command: create { create: "temporary" } keyUpdates:0 writeConflicts:0 numYields:0 reslen:37 locks:{ Global: { acquireCount: { r: 1, w: 1 } }, MMAPV1Journal: { acquireCount: { w: 6 } }, Database: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 483ms
2015-10-31T06:29:57.894+0800 I COMMAND [conn16] CMD: drop testlyh.temporary
2015-10-31T06:45:06.955+0800 I NETWORK [conn16] end connection 192.168.1.106:6563 (3 connections now open)
2015-10-31T06:56:33.323+0800 I NETWORK [conn14] end connection 192.168.1.106:6514 (2 connections now open)
2015-10-31T06:56:33.324+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:7692 #17 (3 connections now open)
2015-10-31T06:56:35.461+0800 I NETWORK [conn15] end connection 192.168.1.106:6515 (2 connections now open)
2015-10-31T06:56:35.462+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:7693 #18 (4 connections now open)
2015-10-31T07:13:30.230+0800 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51696 #19 (4 connections now open)
2015-10-31T07:21:06.715+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:8237 #20 (5 connections now open)
2015-10-31T07:21:32.193+0800 I INDEX [conn6] build index on: local.people properties: { v: 1, unique: true, key: { name: 1.0 }, name: "name_1", ns: "local.people" } //創建索引
2015-10-31T07:21:32.193+0800 I INDEX [conn6] building index using bulk method //bulk insert方式建立索引
2015-10-31T07:21:32.194+0800 I INDEX [conn6] build index done. scanned 36 total records. 0 secs
2015-10-31T07:26:34.826+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:8328 #21 (6 connections now open)
2015-10-31T07:26:34.827+0800 I NETWORK [conn17] end connection 192.168.1.106:7692 (4 connections now open)
2015-10-31T07:26:36.962+0800 I NETWORK [conn18] end connection 192.168.1.106:7693 (4 connections now open)
2015-10-31T07:26:36.963+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:8329 #22 (6 connections now open)
2015-10-31T07:51:08.214+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9202 #23 (6 connections now open)
2015-10-31T07:51:08.214+0800 I NETWORK [conn20] end connection 192.168.1.106:8237 (4 connections now open)
2015-10-31T07:56:36.327+0800 I NETWORK [conn21] end connection 192.168.1.106:8328 (4 connections now open)
2015-10-31T07:56:36.328+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9310 #24 (6 connections now open)
2015-10-31T07:56:38.450+0800 I NETWORK [conn22] end connection 192.168.1.106:8329 (4 connections now open)
2015-10-31T07:56:38.452+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9313 #25 (5 connections now open)
2015-10-31T08:03:56.823+0800 I NETWORK [conn25] end connection 192.168.1.106:9313 (4 connections now open)
2015-10-31T08:03:58.309+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9470 #26 (5 connections now open)
2015-10-31T08:03:58.309+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9471 #27 (6 connections now open)
2015-10-31T08:03:58.313+0800 I NETWORK [conn26] end connection 192.168.1.106:9470 (5 connections now open)
2015-10-31T08:03:58.314+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9469 #28 (6 connections now open)
2015-10-31T08:03:58.315+0800 I NETWORK [conn27] end connection 192.168.1.106:9471 (5 connections now open)
2015-10-31T08:03:58.317+0800 I NETWORK [conn28] end connection 192.168.1.106:9469 (4 connections now open)
2015-10-31T08:04:04.852+0800 I NETWORK [conn19] end connection 127.0.0.1:51696 (3 connections now open)
2015-10-31T08:04:05.944+0800 I NETWORK [conn23] end connection 192.168.1.106:9202 (2 connections now open)
2015-10-31T08:04:06.215+0800 I NETWORK [conn24] end connection 192.168.1.106:9310 (1 connection now open)
2015-10-31T08:04:09.233+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9531 #29 (2 connections now open)
2015-10-31T08:04:09.233+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9530 #30 (3 connections now open)
2015-10-31T08:04:09.233+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:9532 #31 (4 connections now open)
2015-10-31T08:34:18.767+0800 I NETWORK [conn29] end connection 192.168.1.106:9531 (3 connections now open)
2015-10-31T08:34:18.767+0800 I NETWORK [conn30] end connection 192.168.1.106:9530 (3 connections now open)
2015-10-31T08:34:18.769+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10157 #32 (3 connections now open)
2015-10-31T08:34:18.769+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10158 #33 (4 connections now open)
2015-10-31T08:34:18.771+0800 I NETWORK [conn31] end connection 192.168.1.106:9532 (3 connections now open)
2015-10-31T08:34:18.774+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10159 #34 (4 connections now open)
2015-10-31T08:36:23.662+0800 I NETWORK [conn33] end connection 192.168.1.106:10158 (3 connections now open)
2015-10-31T08:36:23.933+0800 I NETWORK [conn6] end connection 192.168.1.106:4356 (2 connections now open)
2015-10-31T08:36:24.840+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10238 #35 (3 connections now open)
2015-10-31T08:36:24.840+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10239 #36 (4 connections now open)
2015-10-31T08:36:24.844+0800 I NETWORK [conn36] end connection 192.168.1.106:10239 (3 connections now open)
2015-10-31T08:36:24.845+0800 I NETWORK [conn35] end connection 192.168.1.106:10238 (2 connections now open)
2015-10-31T08:36:28.000+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10279 #37 (3 connections now open)
2015-10-31T08:36:28.004+0800 I NETWORK [conn37] end connection 192.168.1.106:10279 (2 connections now open)
2015-10-31T08:36:32.751+0800 I NETWORK [conn32] end connection 192.168.1.106:10157 (1 connection now open)
2015-10-31T08:36:32.756+0800 I NETWORK [conn34] end connection 192.168.1.106:10159 (0 connections now open)
2015-10-31T08:36:35.835+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10339 #38 (1 connection now open)
2015-10-31T08:36:35.837+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10341 #39 (2 connections now open)
2015-10-31T08:36:35.837+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:10340 #40 (3 connections now open)
2015-10-31T09:06:45.368+0800 I NETWORK [conn39] end connection 192.168.1.106:10341 (2 connections now open)
2015-10-31T09:06:45.370+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:12600 #41 (3 connections now open)
2015-10-31T09:06:45.371+0800 I NETWORK [conn40] end connection 192.168.1.106:10340 (2 connections now open)
2015-10-31T09:06:45.371+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:12601 #42 (4 connections now open)
2015-10-31T09:06:45.380+0800 I NETWORK [conn38] end connection 192.168.1.106:10339 (2 connections now open)
2015-10-31T09:06:45.381+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:12602 #43 (4 connections now open)
2015-10-31T09:23:54.705+0800 I NETWORK [initandlisten] connection accepted from 127.0.0.1:51697 #44 (4 connections now open)
2015-10-31T09:25:07.727+0800 I INDEX [conn44] allocating new ns file /data/mongodb/data/test/test.ns, filling with zeroes...
2015-10-31T09:25:08.375+0800 I STORAGE [FileAllocator] allocating new datafile /data/mongodb/data/test/test.0, filling with zeroes...
2015-10-31T09:25:08.375+0800 I STORAGE [FileAllocator] creating directory /data/mongodb/data/test/_tmp
2015-10-31T09:25:08.378+0800 I STORAGE [FileAllocator] done allocating datafile /data/mongodb/data/test/test.0, size: 64MB, took 0.001 secs
2015-10-31T09:25:08.386+0800 I WRITE [conn44] insert test.users query: { _id: ObjectId('56341873c4393e7396b20592'), id: 1.0 } ninserted:1 keyUpdates:0 writeConflicts:0 numYields:0 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, MMAPV1Journal: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 659ms
2015-10-31T09:25:08.386+0800 I COMMAND [conn44] command test.$cmd command: insert { insert: "users", documents: [ { _id: ObjectId('56341873c4393e7396b20592'), id: 1.0 } ], ordered: true } keyUpdates:0 writeConflicts:0 numYields:0 reslen:40 locks:{ Global: { acquireCount: { r: 2, w: 2 } }, MMAPV1Journal: { acquireCount: { w: 8 } }, Database: { acquireCount: { w: 1, W: 1 } }, Collection: { acquireCount: { W: 1 } }, Metadata: { acquireCount: { W: 4 } } } 660ms
2015-10-31T09:26:09.405+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:13220 #45 (5 connections now open)
2015-10-31T09:36:46.873+0800 I NETWORK [conn41] end connection 192.168.1.106:12600 (4 connections now open)
2015-10-31T09:36:46.874+0800 I NETWORK [conn42] end connection 192.168.1.106:12601 (3 connections now open)
2015-10-31T09:36:46.875+0800 I NETWORK [conn43] end connection 192.168.1.106:12602 (2 connections now open)
2015-10-31T09:36:46.875+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:13498 #46 (3 connections now open)
2015-10-31T09:36:46.876+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:13499 #47 (4 connections now open)
2015-10-31T09:36:46.876+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:13500 #48 (5 connections now open)
2015-10-31T09:43:52.490+0800 I INDEX [conn45] build index on: local.people properties: { v: 1, key: { country: 1.0 }, name: "country_1", ns: "local.people" }
2015-10-31T09:43:52.490+0800 I INDEX [conn45] building index using bulk method
2015-10-31T09:43:52.491+0800 I INDEX [conn45] build index done. scanned 36 total records. 0 secs
2015-10-31T09:51:32.977+0800 I INDEX [conn45] build index on: local.people properties: { v: 1, key: { country: 1.0, name: 1.0 }, name: "country_1_name_1", ns: "local.people" } //建立復合索引
2015-10-31T09:51:32.977+0800 I INDEX [conn45] building index using bulk method
2015-10-31T09:51:32.977+0800 I INDEX [conn45] build index done. scanned 36 total records. 0 secs
2015-10-31T09:59:49.802+0800 I NETWORK [conn44] end connection 127.0.0.1:51697 (4 connections now open)
2015-10-31T10:06:48.357+0800 I NETWORK [conn47] end connection 192.168.1.106:13499 (3 connections now open)
2015-10-31T10:06:48.358+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:14438 #49 (5 connections now open)
2015-10-31T10:06:48.358+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:14439 #50 (5 connections now open)
2015-10-31T10:06:48.358+0800 I NETWORK [conn48] end connection 192.168.1.106:13500 (4 connections now open)
2015-10-31T10:06:48.358+0800 I NETWORK [conn46] end connection 192.168.1.106:13498 (4 connections now open)
2015-10-31T10:06:48.359+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:14440 #51 (5 connections now open)
2015-10-31T10:12:15.409+0800 I INDEX [conn45] build index on: local.users properties: { v: 1, key: { Attribute: 1.0 }, name: "Attribute_1", ns: "local.users" }
2015-10-31T10:12:15.409+0800 I INDEX [conn45] building index using bulk method
2015-10-31T10:12:15.409+0800 I INDEX [conn45] build index done. scanned 35 total records. 0 secs
2015-10-31T10:28:27.422+0800 I COMMAND [conn45] CMD: dropIndexes local.people //刪除索引
2015-11-25T15:25:23.248+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:23227 #76 (4 connections now open)
2015-11-25T15:25:23.247+0800 I NETWORK [conn73] end connection 192.168.1.106:21648 (2 connections now open)
2015-11-25T15:25:36.226+0800 I NETWORK [conn75] end connection 192.168.1.106:21659 (2 connections now open)
2015-11-25T15:25:36.227+0800 I NETWORK [conn74] end connection 192.168.1.106:21658 (1 connection now open)
2015-11-25T15:25:36.227+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:23236 #77 (2 connections now open)
2015-11-25T15:25:36.227+0800 I NETWORK [initandlisten] connection accepted from 192.168.1.106:23237 #78 (3 connections now open)
開啟認證
注意:
1、先創建用戶,之后關閉服務,然后修改mongod.conf再開啟認證#auth = true,否則不創建用戶就開啟認證無法進入數據庫!!
2、無論當前在哪個庫下創建用戶,所有用戶信息都存儲在admin庫下的system.users表里,即使是userAdminAnyDatabase,在當前庫下只能看到當前庫的用戶不能看到其他庫的用戶
//添加用戶 擁有root角色
use admin
db.createUser({user:"admin",pwd:"123456",roles:[{role:"root",db:"admin"}]})
查詢已添加的用戶:
> db.system.users.find()
修改mongod.conf的#auth = true,重啟mongodb
mongo --host 192.168.1.50 --port 27017 admin -u admin -p123456
或
mongo --host 192.168.1.50 --port 27017 admin
> db.auth("admin","123456")
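To see point 2 above in action, here is a hedged sketch: create a user inside an ordinary business database and then look it up from admin (the database name testdb and the account details are only for illustration):
use testdb
db.createUser({user:"appuser", pwd:"appuser123", roles:[{role:"readWrite", db:"testdb"}]})
// the user authenticates against testdb ...
use admin
db.system.users.find({user:"appuser"})
// ... but the account itself is stored in admin.system.users, with "_id" : "testdb.appuser"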
mongo shell客戶端
mongo --help
db address can be:
foo foo database on local machine
192.168.0.5/foo foo database on 192.168.0.5 machine
192.168.0.5:9999/foo foo database on 192.168.0.5 machine on port 9999
options:
--port arg 端口
--host arg IP
--eval arg 運行javascript腳本
-u [ --username ] arg 用戶名
-p [ --password ] arg 密碼
-h [ --help ] 顯示這個幫助信息
--version 版本號
--verbose increase verbosity
--ipv6 開啟IPv6支持(默認關閉)
--authenticationDatabase 開啟auth方式之后,如果登錄的庫跟用戶所在的庫相同,比如當前要連接admin庫,登錄用戶創建在admin庫下,就可以不要加--authenticationDatabase
否則必須要加--authenticationDatabase ,authenticationDatabase指定了校驗用戶賬戶名和密碼的數據庫,也就是說在哪個數據庫創建的登陸用戶,就寫哪個數據庫
沒有創建任何用戶
mongo --host 192.168.1.50 --port 27017 admin
mongo 192.168.1.50:27017/admin
已經創建用戶
登錄庫跟用戶庫相同
mongo --host 192.168.1.50 --port 27017 admin -u admin -p123456
mongo 192.168.1.50:27017/admin -u admin -p123456
登錄庫跟用戶庫不同
mongo --host 192.168.1.50 --port 27017 abcdb -u admin -p123456 --authenticationDatabase admin
mongo 192.168.1.50:27017/abcdb -u admin -p123456 --authenticationDatabase admin
執行js腳本或語句
mongo 192.168.1.50:27017/abcdb -u admin -p123456 --eval 'db.ruboo.deleteMany({CreateTime:{$lt:ISODate("2017-11-01T00:00:00.000Z")}})' --authenticationDatabase admin
mongod.conf
# edit the mongodb config file
vim /etc/mongod.conf
port=27017
dbpath=/data/mongodb/mongodb27017/data
logpath=/data/mongodb/mongodb27017/logs/mongo.log
pidfilepath=/data/mongodb/mongodb27017/logs/mongo.pid
#profile = 1
bind_ip = 192.168.1.52
slowms = 1000
fork=true
logappend=true
oplogSize=4096
directoryperdb=true
storageEngine=wiredTiger
wiredTigerCacheSizeGB=3
syncdelay=30
wiredTigerCollectionBlockCompressor=snappy
journal=true
journalCommitInterval=100
#destination=file
#format=JSON
#path=/data/mongodb/mongodb27017/logs/audit.json
#filter: '{ 'users.user' : 'tim' }'
replSet=aiwanrs
#auth = true
#shardsvr=true
#keyFile=/data/mongodb/mongodb27017/data/mongodb-keyfile
vim /etc/mongod.conf
port=27017
dbpath=/data/mongodb/mongodb27017/data
logpath=/data/mongodb/mongodb27017/logs/mongo.log
pidfilepath=/data/mongodb/mongodb27017/logs/mongo.pid
#profile = 1 # enable profiling to analyse slow queries
bind_ip = 192.168.1.50 # bind address; from mongodb 3.6 on the default is 127.0.0.1
slowms = 1000 # threshold for a query to count as slow; the default is 100 ms
fork=true # run in the background; mongod forks a child process after starting
logappend=true # log write mode: true means append; the default overwrites the existing log file
oplogSize=2048 # once mongod has created the oplog, changing oplogSize no longer affects its size
directoryperdb=true
storageEngine=wiredTiger
wiredTigerCacheSizeGB=4 # buffer pool size
syncdelay=30 # how often data is flushed to the data files via fsync; the default is 60 seconds
wiredTigerCollectionBlockCompressor=snappy # data compression policy
journal=true
journalCommitInterval=100 # journal commit interval, default 100 ms; accepts values between 2 and 300 ms
destination=file # enable the audit log
format=JSON # audit log format
path=/data/mongodb/mongodb$port/logs/audit.json # audit log location
#filter: '{ "users.user" : "tim" }' # audit log filter
#replSet=ck1 # replica set name; all members of the same replica set must use the same name, and the option is required to run a replica set
#auth = true # require authentication to access the database
#shardsvr=true # whether this mongod is part of a sharded cluster
percona 版本去除了test庫,但是登錄mongodb的時候不加庫名還是會登錄test庫
mongodb3.4
cat mongo.log |grep test
2018-03-06T11:46:52.800+0800 I ACCESS [conn3] SCRAM-SHA-1 authentication failed for lyhabc on test from client 192.168.1.50:58268 ; UserNotFound: Could not find user lyhabc@test
文檔名
文檔中的key/value是有序的,沒有相同的兩個文檔。
文檔中的value的數據類型沒有限制,甚至可以是文檔。
文檔的key一般應該是字符串。
A document key cannot contain the null character, cannot contain '.', and cannot start with '$' (underscores are allowed — the built-in _id key itself begins with one).
文檔的key不能重復,默認_id不能更改。
In MongoDB, both keys and values are type-sensitive and case-sensitive.
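A tiny illustration of that last point (the collection name is made up):
db.demo.insert({name: "a", Name: "b"})   // "name" and "Name" are two different keys in the same document
db.demo.find({name: "a"}).count()        // 1
db.demo.find({NAME: "a"}).count()        // 0 -- key names are matched case-sensitively
db.demo.find({name: "A"}).count()        // 0 -- values are compared case-sensitively too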
集合名
和key一樣不能有空字符串
不能以"system."開頭,不能含有"$","system."開頭是系統內置文檔
一個集合的完全限定名:數據庫名.集合(子集合)名稱,例如cms.blog.posts
數據庫名
不能是空字符串
不得含有空格、點、斜杠與反斜杠以及空字符串。
應該全部小寫
最多64字節
讀寫分離
驅動程序自動支持讀寫分離
secondaryPreferred
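A hedged sketch of what this looks like from the client side; in the mongo shell the equivalent of the driver setting is:
db.getMongo().setReadPref("secondaryPreferred")   // per-connection read preference
db.books.find({id: 1})                            // may now be served by a secondary if one is available
// in a driver connection string the same thing is usually written as:
// mongodb://192.168.1.155:4000,192.168.14.221:4000,192.168.14.198:4000/testdb?replicaSet=rsshard0&readPreference=secondaryPreferred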
mongodb的Wiredtiger引擎存儲結構
網易樂得DBA dba加11月刊
Wiredtiger的Cache采用Btree的方式組織,每個Btree節點為一個page,root page是btree的根節點,internal page是btree的中間索引節點,leaf page是真正存儲數據的葉子節點;
btree的數據以page為單位按需從磁盤加載或寫入磁盤。
可以通過在配置文件中指定storage.wiredTiger.engineConfig.cacheSizeGB參數設定引擎使用的內存量。此內存用於緩存數據(索引、namespace,未提交的write,query緩沖等)。
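To see what the engine is actually doing with that memory, one option (a sketch; the field names below are the ones WiredTiger reports through serverStatus on 3.x instances):
var c = db.serverStatus().wiredTiger.cache        // only present when the instance runs WiredTiger
print("configured cache size:", c["maximum bytes configured"])
print("bytes currently cached:", c["bytes currently in the cache"])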
http://www.mongoing.com/archives/5476
Mongodb的數據組織
在了解寫操作的事務性之前,需要先了解mongo層的每一個table,是如何與wiredtiger層的table(btree)對應的。mongo層一個最簡單的table包含一個 ObjectId(_id) 索引。_id類似於Mysql中主鍵的概念
rs1:PRIMARY> db.abc.getIndexes() [ { "v" : 1, "key" : { "_id" : 1 }, "name" : "_id_", "ns" : "test.abc" } ]
但是mongo中並不會將_id索引與行內容存放在一起(即沒有聚簇索引的概念,只有非聚集索引怎麽讀取行里面其他字段的值???)。取而代之的,mongodb將索引與數據分開存放,通過RecordId進行間接引用。 舉例一張包含兩個索引(_id 和 name)的表,在wt層將有三張表與其對應。通過name索引找到行記錄的過程為:先通過name->Record的索引找到RecordId,再通過RecordId->RowData的索引找到記錄內容。
此外,一個Mongodb實例還包含一張記錄對每一行的寫操作的表local.oplog.rs, 該表主要用於復制(primary-secondary replication)。每一次(對實例中任何一張表的任何一行的)更新操作,都會產生唯一的一條oplog,記錄在local.oplog.rs表里。
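A quick way to poke at that oplog from the shell (a sketch):
use local
db.oplog.rs.find().sort({$natural: -1}).limit(1).pretty()   // newest oplog entry: op (i/u/d...), ns, o (the change itself)
db.printReplicationInfo()                                    // configured oplog size and the time window it currently covers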
percona版mongodb3.4審計功能
cat /etc/mongod.conf
port=27017
auditDestination=file
#auditFilter={ <field1>: <expression1>, ... }
auditFormat=JSON
auditPath=/data/mongodb/mongodb27017/logs/auditLog.json
#auditAuthorizationSuccess= true
審計日志格式
{
atype: <String>,
ts : { "$date": <timestamp> },
local: { ip: <String>, port: <int> },
remote: { ip: <String>, port: <int> },
users : [ { user: <String>, db: <String> }, ... ],
roles: [ { role: <String>, db: <String> }, ... ],
param: <document>,
result: <int>
}
atype: Event type
ts: Date and UTC time of the event
local: Local IP address and port number of the instance
remote: Remote IP address and port number of the incoming connection associated with the event
users: Users associated with the event
roles: Roles granted to the user
param: Details of the event associated with the specific type
result: Exit code (0 for success)
{
"atype": "authenticate",
"ts": {
"$date": "2018-03-29T09:54:59.269+0800"
},
"local": {
"ip": "192.168.1.51",
"port": 27017
},
"remote": {
"ip": "192.168.1.51",
"port": 35434
},
"users": [],
"roles": [],
"param": {
"user": "admin",
"db": "admin",
"mechanism": "SCRAM-SHA-1"
},
"result": 18
}
密碼驗證出錯
物理備份
http://www.ywnds.com/?p=12350
1)mongodump/mongoexport備份
這是mongodb官方提供的工具,屬於邏輯備份,適合於數據量不大的情況(比如<50GB),在數據量比較大的情況下顯得力不從心。
2)冷備份
所謂冷備份,就是停止mongodb服務,拷貝數據文件,屬於物理備份。優點是備份速度快,缺點是:需要停止服務,對應用服務影響比較大;
3)熱備份
percona mongodb 3.2及以上版本支持,屬於物理備份,注意:此處的物理備份不包括mongodb配置文件,keyFile等,需要用戶自行備份。
熱備份的備份文件可以用於恢復數據,也可以用於做從庫。
如果是做從庫,可以在mongodb副本集主庫里執行rs.add(“<ip1>:<port1>”)。ip1是恢復節點的IP,port1是恢復節點的端口
Percona Server for MongoDB 3.2 supports online hot backup of the WiredTiger engine. Restoring is simple: copy the data files from the backup directory straight into your dbpath and then start mongod.
建議:在mongodb副本集的secondary節點上執行createBackup命令,並且,backupDir目錄可以為空或者不存在,mognodb會自行創建,如果該目錄已經存在備份文件,此次備份可能失敗報錯。
備份失敗返回錯誤
> db.runCommand({createBackup: 1, backupDir: ""})
{ "ok" : 0, "errmsg" : "Destination path must be absolute" }
# the backup directory does not have to exist in advance, but check its permissions: mongod must be able to write to it
use admin
db.runCommand({createBackup: 1, backupDir: "/data/backup/mongodb/bak20180330"})   # backupDir must be an absolute path (the directory name here is only an example)
# if needed, compress the resulting directory afterwards from the OS shell, e.g. tar -czf /data/backup/mongodb/$(date +%Y%m%d).tar.gz -C /data/backup/mongodb bak20180330
Restore
(1) Stop mongod
systemctl stop mongod.service
(2) Empty the data and logs directories
rm -rf ./data/*
rm -rf ./logs/*
(3) Restore: extract the backup archive and copy it into the dbpath directory
tar -xzvf xx -C xx
(4) Fix the ownership of the data directory
chown -R mongodb:mongodb /data/mongodb/mongodb27017/
(5) Start mongod and verify
systemctl start mongod.service
TTL 索引
可以指定過期時間,expireAfterSeconds單位是秒
在創建索引時,需要指定過期時間,過期后集合里的這個文檔就會自動刪除。
One thing to note: the indexed field must be of date type, otherwise the documents will never be expired.
db.集合名.createIndex({"a":1},{expireAfterSeconds:3600})
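A concrete hedged example (collection and field names are made up): documents expire one hour after the value in their createdAt field. The background TTL monitor only runs about once a minute, so removal is not instantaneous.
db.sessions.createIndex({createdAt: 1}, {expireAfterSeconds: 3600})
db.sessions.insert({user: "lyhabc", createdAt: new Date()})   // removed roughly one hour later
// a document whose createdAt is not a Date (e.g. a string) will never be expired by the TTL monitor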
WiredTiger數據組織方式介紹
https://mp.weixin.qq.com/s/R2E61xU5IVeEO2ZSGHsFQw
為了能夠管理所有的集合、索引,MongoDB將集合的Catalog信息(包括對應到WiredTiger中的表名、集合創建選項、集合索引信息等)組織存放在一個_mdb_catalog的WiredTiger表中(對應到一個_mdb_catalog.wt的物理文件)。
因此,這個_mdb_catalog表可以認為是MongoDB的『元數據表』,普通的集合都是『數據表』。MongoDB在啟動時需要先從WiredTiger中加載這個元數據表的信息,然后才能加載出其他的數據表的信息。
同樣,在WiredTiger層,每張表也有一些元數據需要維護,這包括表創建時的相關配置,checkpoint信息等。這也是使用『元數據表』和『數據表』的管理組織方式。在WiredTiger中,所有『數據表』的元數據都會存放在一個WiredTiger.wt的表中,這個表可以認為是WiredTiger的『元數據表』。而這個WiredTiger.wt表本身的元數據,則是存放在一個WiredTiger.turtle的文本文件中。在WiredTiger啟動時,會先從WiredTiger.turtle文件中加載出WiredTiger.wt表的數據,然后就能加載出其他的數據表了
再回到_mdb_catalog表,雖然對MongoDB來說,它是一張『元數據表』,但是在WiredTiger看來,它只是一張普通的數據表,因此啟動時候,需要先等WiredTiger加載完WiredTiger.wt表后,從這個表中找到它的元數據。根據_mdb_catalog表的元數據可以對這個表做對應的初始化,並遍歷出MongodB的所有數據表(集合)的Catalog信息元數據,對它們做進一步的初始化
在上述這個過程中,對WiredTiger中的表做初始化,涉及到幾個步驟,包括:
1)檢查表的存儲格式版本是否和當前數據庫版本格式兼容;
2)確定該表是否需要開啟journal,這是在該表創建時的配置中指定的。
這兩個步驟都需要從WiredTiger.wt表中讀取該表的元數據進行判斷。
此外,結合目前的已知信息,我們可以看到,對MongoDB層可見的所有數據表,在_mdb_catalog表中維護了MongoDB需要的元數據,同樣在WiredTiger層中,會有一份對應的WiredTiger需要的元數據維護在WiredTiger.wt表中。因此,事實上這里有兩份數據表的列表,並且在某些情況下可能會存在不一致,比如,異常宕機的場景。因此MongoDB在啟動過程中,會對這兩份數據進行一致性檢查,如果是異常宕機啟動過程,會以WiredTiger.wt表中的數據為准,對_mdb_catalog表中的記錄進行修正。
這個過程會需要遍歷WiredTiger.wt表得到所有數據表的列表。
綜上,可以看到,在MongoDB啟動過程中,有多處涉及到需要從WiredTiger.wt表中讀取數據表的元數據。對這種需求,WiredTiger專門提供了一類特殊的『metadata』類型的cursor。
WiredTiger.turtle (text file) -> WiredTiger.wt (WiredTiger's own metadata table) -> _mdb_catalog.wt (the physical file behind MongoDB's catalog table)
So there are two lists of data tables:
the first: the WiredTiger.wt table, which is treated as the source of truth after an unclean shutdown;
the second: the _mdb_catalog.wt table.
mongo4.0之后(包括),admin庫下面的系統表,放在WiredTiger.wt表,隱藏了系統表
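One way to peek at this collection-to-WiredTiger-table mapping from the shell, on an instance running the WiredTiger engine (a sketch; the collection name and the exact ident string are just examples):
db.foo.stats().wiredTiger.uri
// e.g. "statistics:table:testdb/collection-7--1553274422751006549"
// the part after "table:" is the ident, i.e. the .wt file under dbpath that backs this collection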