Compacting Fragmented Physical Space in MongoDB
2021/4/27 19:26:56
This article describes how to compact fragmented physical disk space in MongoDB; it should be a useful reference for anyone dealing with this problem.
Anyone who has worked with MySQL knows that after a bulk DELETE on a table, the disk space is not released right away: InnoDB only marks the blocks that held the deleted rows as free and reuses them for later writes.
MongoDB behaves in a similar way. After bulk-removing documents, the disk space is not returned to the operating system; we can run the compact command to defragment the collection and reclaim it.
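For reference, the command used throughout this article has the following general form (a minimal sketch; replace <collection> with the target collection name). On the 4.2-era versions shown here, force: true is required for compact to be accepted on a replica set primary, because the command blocks other activity on the database:
> db.runCommand({ compact: "<collection>", force: true })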
Precautions
- The instance's storage engine must be WiredTiger (a quick check is shown after this list).
- The operation locks the database that owns the collection, and reads and writes to that database are blocked; be sure to run it during an off-peak period.
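A quick way to confirm the storage engine for the first point is to query serverStatus(); it should report wiredTiger:
> db.serverStatus().storageEngine.name
wiredTiger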
Standalone or replica set
View basic collection information
> db.usertable.stats().wiredTiger
{
    "metadata" : { "formatVersion" : 1 },
    "creationString" : "access_pattern_hint=none,allocation_size=4KB,app_metadata=(formatVersion=1),assert=(commit_timestamp=none,durable_timestamp=none,read_timestamp=none),block_allocation=best,block_compressor=snappy,cache_resident=false,checksum=on,colgroups=,collator=,columns=,dictionary=0,encryption=(keyid=,name=),exclusive=false,extractor=,format=btree,huffman_key=,huffman_value=,ignore_in_memory_cache_size=false,immutable=false,internal_item_max=0,internal_key_max=0,internal_key_truncate=true,internal_page_max=4KB,key_format=q,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=64MB,log=(enabled=true),lsm=(auto_throttle=true,bloom=true,bloom_bit_count=16,bloom_config=,bloom_hash_count=8,bloom_oldest=false,chunk_count_limit=0,chunk_max=5GB,chunk_size=10MB,merge_custom=(prefix=,start_generation=0,suffix=),merge_max=15,merge_min=0),memory_page_image_max=0,memory_page_max=10m,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=false,prefix_compression_min=4,source=,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=90,type=file,value_format=u",
    "type" : "file",
    "uri" : "statistics:table:ycsb1/collection-2--8436275121761636149",
    "LSM" : { "bloom filter false positives" : 0, "bloom filter hits" : 0, "bloom filter misses" : 0, "bloom filter pages evicted from cache" : 0, "bloom filter pages read into cache" : 0, "bloom filters in the LSM tree" : 0, "chunks in the LSM tree" : 0, "highest merge generation in the LSM tree" : 0, "queries that could have benefited from a Bloom filter that did not exist" : 0, "sleep for LSM checkpoint throttle" : 0, "sleep for LSM merge throttle" : 0, "total size of bloom filters" : 0 },
    "block-manager" : { "allocations requiring file extension" : 49752, "blocks allocated" : 49774, "blocks freed" : 49761, "checkpoint size" : 0, "file allocation unit size" : 4096, "file bytes available for reuse" : 1420492800, "file magic number" : 120897, "file major version number" : 1, "file size in bytes" : 1420505088, "minor version number" : 0 },
    "btree" : { "btree checkpoint generation" : 27, "btree clean tree checkpoint expiration time" : NumberLong("9223372036854775807"), "column-store fixed-size leaf pages" : 0, "column-store internal pages" : 0, "column-store variable-size RLE encoded values" : 0, "column-store variable-size deleted values" : 0, "column-store variable-size leaf pages" : 0, "fixed-record size" : 0, "maximum internal page key size" : 368, "maximum internal page size" : 4096, "maximum leaf page key size" : 2867, "maximum leaf page size" : 32768, "maximum leaf page value size" : 67108864, "maximum tree depth" : 4, "number of key/value pairs" : 0, "overflow pages" : 0, "pages rewritten by compaction" : 0, "row-store empty values" : 0, "row-store internal pages" : 0, "row-store leaf pages" : 0 },
    "cache" : { "bytes currently in the cache" : 24555798, "bytes dirty in the cache cumulative" : 1563881912, "bytes read into cache" : 425821233, "bytes written from cache" : 1417712072, "checkpoint blocked page eviction" : 53, "data source pages selected for eviction unable to be evicted" : 81, "eviction walk passes of a file" : 2727, "eviction walk target pages histogram - 0-9" : 164, "eviction walk target pages histogram - 10-31" : 336, "eviction walk target pages histogram - 128 and higher" : 0, "eviction walk target pages histogram - 32-63" : 466, "eviction walk target pages histogram - 64-128" : 1761, "eviction walks abandoned" : 53, "eviction walks gave up because they restarted their walk twice" : 5, "eviction walks gave up because they saw too many pages and found no candidates" : 124, "eviction walks gave up because they saw too many pages and found too few candidates" : 60, "eviction walks reached end of tree" : 175, "eviction walks started from root of tree" : 243, "eviction walks started from saved location in tree" : 2484, "hazard pointer blocked page eviction" : 9, "in-memory page passed criteria to be split" : 356, "in-memory page splits" : 172, "internal pages evicted" : 404, "internal pages split during eviction" : 4, "leaf pages split during eviction" : 177, "modified pages evicted" : 48451, "overflow pages read into cache" : 0, "page split during eviction deepened the tree" : 1, "page written requiring cache overflow records" : 0, "pages read into cache" : 14676, "pages read into cache after truncate" : 1, "pages read into cache after truncate in prepare state" : 0, "pages read into cache requiring cache overflow entries" : 0, "pages requested from the cache" : 7689354, "pages seen by eviction walk" : 1194026, "pages written from cache" : 49765, "pages written requiring in-memory restoration" : 2, "tracked dirty bytes in the cache" : 0, "unmodified pages evicted" : 14256 },
    "cache_walk" : { "Average difference between current eviction generation when the page was last considered" : 0, "Average on-disk page image size seen" : 0, "Average time in cache for pages that have been visited by the eviction server" : 0, "Average time in cache for pages that have not been visited by the eviction server" : 0, "Clean pages currently in cache" : 0, "Current eviction generation" : 0, "Dirty pages currently in cache" : 0, "Entries in the root page" : 0, "Internal pages currently in cache" : 0, "Leaf pages currently in cache" : 0, "Maximum difference between current eviction generation when the page was last considered" : 0, "Maximum page size seen" : 0, "Minimum on-disk page image size seen" : 0, "Number of pages never visited by eviction server" : 0, "On-disk page image sizes smaller than a single allocation unit" : 0, "Pages created in memory and never written" : 0, "Pages currently queued for eviction" : 0, "Pages that could not be queued for eviction" : 0, "Refs skipped during cache traversal" : 0, "Size of the root page" : 0, "Total number of pages currently in cache" : 0 },
    "compression" : { "compressed page maximum internal page size prior to compression" : 4096, "compressed page maximum leaf page size prior to compression " : 32768, "compressed pages read" : 88, "compressed pages written" : 105, "page written failed to compress" : 49176, "page written was too small to compress" : 484 },
    "cursor" : { "bulk loaded cursor insert calls" : 0, "cache cursors reuse count" : 981, "close calls that result in cache" : 0, "create calls" : 5, "insert calls" : 982334, "insert key and value bytes" : 1399137825, "modify" : 0, "modify key and value bytes affected" : 0, "modify value bytes modified" : 0, "next calls" : 990110, "open cursor count" : 0, "operation restarted" : 0, "prev calls" : 2, "remove calls" : 982334, "remove key bytes removed" : 3846971, "reserve calls" : 0, "reset calls" : 2971484, "search calls" : 1964668, "search near calls" : 990109, "truncate calls" : 0, "update calls" : 0, "update key and value bytes" : 0, "update value size change" : 0 },
    "reconciliation" : { "dictionary matches" : 0, "fast-path pages deleted" : 0, "internal page key bytes discarded using suffix compression" : 99217, "internal page multi-block writes" : 9, "internal-page overflow keys" : 0, "leaf page key bytes discarded using prefix compression" : 0, "leaf page multi-block writes" : 179, "leaf-page overflow keys" : 0, "maximum blocks required for a page" : 1, "overflow values written" : 0, "page checksum matches" : 413, "page reconciliation calls" : 50044, "page reconciliation calls for eviction" : 47098, "pages deleted" : 49020 },
    "session" : { "object compaction" : 0 },
    "transaction" : { "update conflicts" : 0 }
}
Delete the documents
> use ycsb1
switched to db ycsb1
>
> db.usertable.remove({})
WriteResult({ "nRemoved" : 982334 })
> db.usertable.count()
0
We can see that the collection is now empty.
Check disk usage
Locate the data directory:
[root@mongodb data]# du -sm ycsb1/
1405    ycsb1/
Defragment
> db.runCommand({compact:"usertable",force:true})
{ "ok" : 1 }
Check disk usage again
[root@mongodb data]# du -sm ycsb1/
1       ycsb1/
We can see that the disk space has been released.
Next, log in to each secondary node and perform the same operation, for example:
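A minimal sketch of the secondary-side step, assuming a secondary mongod listening on port 27018 (the host, port, and the replica set name in the prompt are illustrative): connect to the secondary directly and run the same command. Note that on these pre-4.4 versions the secondary is reported as RECOVERING while compact runs, so it temporarily stops serving reads.
$ mongo --port 27018
rs0:SECONDARY> use ycsb1
switched to db ycsb1
rs0:SECONDARY> db.runCommand({compact:"usertable",force:true})
{ "ok" : 1 }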
Sharded cluster
View sharding information
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("60545017224c766911a9c440")
  }
  shards:
    { "_id" : "hdshard1", "host" : "hdshard1/172.16.254.136:40001,172.16.254.137:40001,172.16.254.138:40001", "state" : 1 }
    { "_id" : "hdshard2", "host" : "hdshard2/172.16.254.136:40002,172.16.254.137:40002,172.16.254.138:40002", "state" : 1 }
    { "_id" : "hdshard3", "host" : "hdshard3/172.16.254.136:40003,172.16.254.137:40003,172.16.254.138:40003", "state" : 1 }
  active mongoses:
    "4.2.12" : 3
  autosplit:
    Currently enabled: yes
  balancer:
    Currently enabled: yes
    Currently running: no
    Failed balancer rounds in last 5 attempts: 0
    Migration Results for the last 24 hours:
      52 : Success
  databases:
    { "_id" : "config", "primary" : "config", "partitioned" : true }
      config.system.sessions
        shard key: { "_id" : 1 }
        unique: false
        balancing: true
        chunks:
          hdshard1  342
          hdshard2  341
          hdshard3  341
        too many chunks to print, use verbose if you want to force print
    { "_id" : "db1", "primary" : "hdshard3", "partitioned" : true, "version" : { "uuid" : UUID("71bb472c-7896-4a31-a77c-e3aaf723be3c"), "lastMod" : 1 } }
    { "_id" : "db2", "primary" : "hdshard2", "partitioned" : false, "version" : { "uuid" : UUID("add90941-a8b1-4c40-94e9-9ccc38d73096"), "lastMod" : 2 } }
    { "_id" : "db3", "primary" : "hdshard3", "partitioned" : false, "version" : { "uuid" : UUID("f0278f73-d999-453f-8739-eac30a8bcf9b"), "lastMod" : 1 } }
    { "_id" : "recommend", "primary" : "hdshard1", "partitioned" : true, "version" : { "uuid" : UUID("cb833b8e-cc4f-4c52-83c3-719aa383bac4"), "lastMod" : 1 } }
      recommend.rcmd_1_min_tag_mei_rong
        shard key: { "_id" : "hashed" }
        unique: false
        balancing: true
        chunks:
          hdshard1  2
          hdshard2  3
          hdshard3  3
        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6701866976688134138") } on : hdshard3 Timestamp(9, 0)
        { "_id" : NumberLong("-6701866976688134138") } -->> { "_id" : NumberLong("-4163240026901542572") } on : hdshard3 Timestamp(3, 0)
        { "_id" : NumberLong("-4163240026901542572") } -->> { "_id" : NumberLong("-1616330844721205691") } on : hdshard2 Timestamp(7, 1)
        { "_id" : NumberLong("-1616330844721205691") } -->> { "_id" : NumberLong("909129560750995399") } on : hdshard3 Timestamp(5, 0)
        { "_id" : NumberLong("909129560750995399") } -->> { "_id" : NumberLong("3449289120186727718") } on : hdshard2 Timestamp(6, 0)
        { "_id" : NumberLong("3449289120186727718") } -->> { "_id" : NumberLong("5980358241733552715") } on : hdshard2 Timestamp(10, 0)
        { "_id" : NumberLong("5980358241733552715") } -->> { "_id" : NumberLong("8520801504243263436") } on : hdshard1 Timestamp(8, 1)
        { "_id" : NumberLong("8520801504243263436") } -->> { "_id" : { "$maxKey" : 1 } } on : hdshard1 Timestamp(1, 7)
      recommend.rcmd_1_tag_li_liao
        shard key: { "_id" : 1 }
        unique: false
        balancing: true
        chunks:
          hdshard1  36
          hdshard2  35
          hdshard3  36
        too many chunks to print, use verbose if you want to force print
    { "_id" : "ycsb", "primary" : "hdshard2", "partitioned" : true, "version" : { "uuid" : UUID("df4f702f-bb9f-477c-a327-c4b4f28ccf8f"), "lastMod" : 1 } }
      ycsb.usertable
        shard key: { "_id" : "hashed" }
        unique: false
        balancing: true
        chunks:
          hdshard1  11
          hdshard2  11
          hdshard3  11
        too many chunks to print, use verbose if you want to force print
    { "_id" : "ycsb1", "primary" : "hdshard2", "partitioned" : true, "version" : { "uuid" : UUID("c7e227d8-0739-41c7-b47e-9d36065454d3"), "lastMod" : 1 } }
      ycsb1.usertable
        shard key: { "_id" : "hashed" }
        unique: false
        balancing: true
        chunks:
          hdshard1  8
          hdshard2  8
          hdshard3  9
        too many chunks to print, use verbose if you want to force print
We can see that the primary shard of the ycsb1 database is hdshard2, and that ycsb1.usertable has chunks on all three shards.
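If the full sh.status() output is too noisy, the primary shard of a database can also be read straight from the config database; for example:
mongos> use config
switched to db config
mongos> db.databases.find({ _id: "ycsb1" }, { primary: 1 })
{ "_id" : "ycsb1", "primary" : "hdshard2" }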
Delete the documents
mongos> use ycsb1
switched to db ycsb1
mongos> show collections
usertable
mongos> db.usertable.remove({})
WriteResult({ "nRemoved" : 982334 })
mongos> db.usertable.count()
0
Check disk usage
Log in to the server hosting the shard and go to its data directory:
[mongodb@mongo7 shard2]$ du -sm ycsb1
1448    ycsb1
Defragment
hdshard2:PRIMARY> db.runCommand({compact:"usertable",force:true})
{
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(0, 0),
        "electionId" : ObjectId("7fffffff0000000000000030")
    },
    "lastCommittedOpTime" : Timestamp(1619505175, 4),
    "$configServerState" : {
        "opTime" : {
            "ts" : Timestamp(1619505181, 1),
            "t" : NumberLong(22)
        }
    },
    "$clusterTime" : {
        "clusterTime" : Timestamp(1619505181, 1),
        "signature" : {
            "hash" : BinData(0,"zcROSPOVYMxzJouTvGAZ4S0Ddh4="),
            "keyId" : NumberLong("6941260985399246879")
        }
    },
    "operationTime" : Timestamp(1619505175, 4)
}
Check disk usage again
[mongodb@mongo7 shard2]$ du -sm ycsb1
1       ycsb1
We can see that the disk space has been released.
Next, log in to the secondary nodes and repeat the operation above. Since compact only affects the mongod it is run against, the other shards holding chunks of this collection (hdshard1 and hdshard3 here) need the same treatment on their members as well; a sketch follows.
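A minimal sketch for hdshard1, using the member addresses reported by sh.status() above (which member is currently primary has to be checked with rs.status(); the prompt and the trimmed response shown here are illustrative):
$ mongo 172.16.254.136:40001
hdshard1:PRIMARY> use ycsb1
switched to db ycsb1
hdshard1:PRIMARY> db.runCommand({compact:"usertable",force:true})
{ "ok" : 1 }
The same command is then repeated on the remaining members of hdshard1 and on the members of hdshard3. Note that compact cannot be issued through mongos; it has to be sent to each shard's mongod directly.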
Notes
- If new data is being written at a high rate, compact may not be necessary; the freed space will be reused quickly.
- If a collection is removed with db.collection.drop(), no defragmentation is needed (see the short example after this list).
- In a replica set, the same operation must be performed on the primary and on every secondary; compact is not replicated to secondaries.
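A minimal illustration of the drop() case (the return value is what the mongo shell prints; the collection has to be recreated afterwards if it is still needed):
> db.usertable.drop()
true
After the drop, du on the data directory shows the space returned to the operating system immediately, with no compact required.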
That concludes this article on compacting fragmented physical space in MongoDB; hopefully it is a useful reference.