[Repost] About Ceph PGs


This series of articles takes a close look at Ceph and its integration with OpenStack:

(1) Installation and deployment

(2) Ceph RBD interfaces and tools

(3) Ceph's physical and logical structures

(4) Ceph's fundamental data structures

(5) How the Ceph and OpenStack integration is implemented

(6) A summary of the caching mechanisms of QEMU-KVM and Ceph RBD

(7) Basic Ceph operations and common troubleshooting methods

(8) About Ceph PGs

 

https://docs.google.com/presentation/d/1_0eIWNYWvJON4GrJwktQ1Rumgqesc8QQbVkhutXgcNs/edit#slide=id.g7365a7bef_0_17 

 

Placement Group (PG)

 

How PGs Are Used

 

Should OSD #2 Fail

 

Data Durability Issue

 

Object Distribution Issue

 

Choosing the number of PGs
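
A rule of thumb from the Ceph documentation: target roughly 100 PGs per OSD after replication, then round up to the nearest power of two:

    Total PGs = (number of OSDs × 100) / pool size (replica count),
                rounded up to the next power of two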

 

Example
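
A hypothetical illustration of the formula above (the OSD count here is an assumption, not taken from the original slides): with 9 OSDs and a replica-3 pool,

    (9 × 100) / 3 = 300  →  next power of two  →  pg_num = 512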

 

How to Adjust PG and PGP Based on the Current State?
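
On an existing pool, pg_num can be raised and pgp_num then brought up to match, so that CRUSH actually redistributes data into the new PGs. A minimal sketch, using a hypothetical pool named rbd:

    $ ceph osd pool set rbd pg_num 512     # split into more PGs
    $ ceph osd pool set rbd pgp_num 512    # let CRUSH place the new PGs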

 

Monitoring OSDs

 

Ceph Is NOT HEALTH_OK
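
When the cluster is not HEALTH_OK, the usual first step is:

    $ ceph health
    $ ceph health detail    # names the specific PGs and OSDs behind a WARN/ERR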

 

OSD Status Check
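
Two standard commands for this check:

    $ ceph osd stat    # totals: how many OSDs exist, how many are up and in
    $ ceph osd tree    # per-OSD up/down status within the CRUSH hierarchy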

 

PG Sets

 

When an OSD in the Acting Set Is Down

 

Up Set
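
Both sets can be printed for any PG; the PG ID 1.6c below is just a placeholder:

    $ ceph pg map 1.6c
    # prints the osdmap epoch plus the PG's up set and acting set;
    # the two normally match unless the PG has been remapped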

 

Check PG Status
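
A full per-PG listing is available with:

    $ ceph pg dump
    # one row per PG: state, up/acting sets, and statistics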

 

Point

 

Peering

 

Peering: Establishing Agreement on the PG Status
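
The outcome of peering for a single PG can be inspected with pg query (the PG ID is a placeholder):

    $ ceph pg 1.6c query
    # JSON output with the PG's state, up/acting sets, and peering history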

 

Monitoring PG States

 

Check PG Stat
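
For a one-line, cluster-wide summary of PG states:

    $ ceph pg stat
    # e.g. counts per state such as active+clean, plus data and usage totals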

 

Listing Pools
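
Either command lists the pools:

    $ ceph osd lspools
    $ rados lspools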

 

PG IDs
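
A PG ID is the pool's number (not its name), a period, then the PG's ID as a hexadecimal number; for example, 1.6c is PG 0x6c in pool 1.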

 

The Output Format of the Placement Group

 

Creating PG

 

Create a Pool
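
On releases where the PG counts are passed explicitly at creation time, a pool can be created like this (the name and counts are placeholders):

    $ ceph osd pool create testpool 128 128    # pool name, pg_num, pgp_num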

 

A Peering Process for a Pool with Replica 3

 

Active

 

Clean

 

Degraded

 

PG with {active + degraded}

 

Recovering

 

Backfilling (1/2)

 

Backfilling (2/2)

 

Remapped

 

Stale

 

Identifying Troubled PGs (1/2)

 

Identifying Troubled PGs (2/2)
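
PGs stuck in a problem state can be listed directly:

    $ ceph pg dump_stuck stale
    $ ceph pg dump_stuck inactive
    $ ceph pg dump_stuck unclean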

 

Finding An Object Location
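
Because placement is a pure calculation (CRUSH), Ceph can compute an object's location without reading any data. The pool and object names below are placeholders:

    $ rados put test-object-1 testfile.txt --pool=data    # store an object
    $ ceph osd map data test-object-1
    # prints the PG the object hashes to and the OSDs (up/acting) serving it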

 

 

 

