103) Ceph: Steps to Replace a Failed OSD / Add a New OSD


1- Replacing a Failed OSD

1.1- Symptoms

$ ceph health detail
......
OSD_SCRUB_ERRORS 31 scrub errors
PG_DAMAGED Possible data damage: 5 pgs inconsistent
    pg 41.33 is active+clean+inconsistent, acting [35,33]
    pg 41.42 is active+clean+inconsistent, acting [29,35]
    pg 51.24 is active+clean+inconsistent, acting [35,43]
    pg 51.77 is active+clean+inconsistent, acting [28,35]
    pg 51.7b is active+clean+inconsistent, acting [35,46]

1.2- Temporary Workaround

Running ceph pg repair resolves the immediate errors; data that cannot be read because of bad sectors is copied to another location. This does not fix the root cause, however: as long as the disk is damaged, similar errors will keep being reported.

$ ceph pg repair 41.33
$ ceph pg repair 41.42
$ ceph pg repair 51.24
$ ceph pg repair 51.77
$ ceph pg repair 51.7b
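If many PGs report inconsistencies, a small loop can repair them all at once. This is only a sketch that parses the "pg ... inconsistent" lines printed by ceph health detail above:

# Repair every PG currently reported as inconsistent
ceph health detail | awk '$1 == "pg" && /inconsistent/ {print $2}' | while read pg; do ceph pg repair "$pg"; done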

1.3- Collecting Disk Error Information

  • Locate the disk:
apt install -y smartmontools       # on Debian/Ubuntu
# yum install -y smartmontools     # on RHEL/CentOS
smartctl -a /dev/sdc
smartctl 6.5 2016-01-24 r4214 [x86_64-linux-4.4.0-121-generic] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
=== START OF INFORMATION SECTION ===
Device Model:     TOSHIBA MG04ACA600E
Serial Number:    57J6KA41F6CD
LU WWN Device Id: 5 000039 7cb9822be
Firmware Version: FS1K
User Capacity:    6,001,175,126,016 bytes [6.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    7200 rpm
Form Factor:      3.5 inches
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ATA8-ACS (minor revision not indicated)
SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 6.0 Gb/s)
Local Time is:    Tue Aug  7 14:46:45 2018 CST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled
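Beyond the information section above, the drive's SMART attributes and internal error log usually reveal reallocated or pending sectors. A quick check (attribute names vary slightly by vendor; /dev/sdc is the same disk as above):

# Key SMART attributes that indicate a failing disk
smartctl -A /dev/sdc | egrep -i 'Reallocated_Sector|Current_Pending|Offline_Uncorrectable'
# Drive-internal error log
smartctl -l error /dev/sdc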

1.4- Disable Ceph Cluster Data Migration

When an OSD's disk fails, the OSD's status changes to down. After the interval configured by mon osd down out interval has elapsed, Ceph marks the OSD out and begins data migration and recovery. To reduce the performance impact of recovery and scrub operations, these can be temporarily disabled and re-enabled once the disk has been replaced and the OSD has recovered.
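For example, the flags can be set with a loop mirroring the unset loop in step 1.10:

for i in noout nobackfill norecover noscrub nodeep-scrub;do ceph osd set $i;done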

1.5- On the node with the failed OSD, unmount the OSD mount directory

umount /var/lib/ceph/osd/ceph-5
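If the unmount fails because the OSD daemon is still running, stop it first and retry (the unit name follows the ceph-osd@<id> pattern used in section 3; osd.5 is assumed here to match the examples below):

systemctl stop ceph-osd@5.service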

1.6- Remove the OSD from the CRUSH map

ceph osd crush remove osd.5

1.7- Delete the failed OSD's authentication key

ceph auth del osd.5

1.8- Delete the failed OSD

ceph osd rm 5
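At this point the failed OSD should no longer appear in the cluster map, which can be confirmed with:

ceph osd tree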

1.9- After installing the new disk, note its device name and create the OSD

ceph-deploy osd create --data /dev/sdd node3
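If creation fails because the replacement disk still carries an old partition table or leftover data, wipe it first with the same zap command used in section 2 (node3 and /dev/sdd are taken from the create command above), then re-run the create:

ceph-deploy disk zap node3 /dev/sdd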

1.10- Once the new OSD is in the CRUSH map, clear the cluster flags set earlier

for i in noout nobackfill norecover noscrub nodeep-scrub;do ceph osd unset $i;done

1.11- After a period of data migration, the cluster returns to the active+clean state
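Recovery progress can be monitored while waiting, for example:

# watch overall cluster and PG recovery status
ceph -s
ceph health detail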

2- Adding a New OSD

  1. Pick an OSD node and install the new disk.
  2. Zap (wipe) the new disk on that node:
ceph-deploy disk zap [node_name] /dev/sdb
  3. Prepare the Object Storage Daemon:
ceph-deploy osd prepare [node_name]:/var/lib/ceph/osd1
  4. Activate the Object Storage Daemon (see the note on newer ceph-deploy releases after this list):
ceph-deploy osd activate [node_name]:/var/lib/ceph/osd1
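Note: the prepare/activate pair above follows the older ceph-deploy 1.5.x workflow. With newer ceph-deploy releases (2.x, as already used in step 1.9), both steps are replaced by a single create call, roughly:

ceph-deploy osd create --data /dev/sdb [node_name]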

3- Deleting an OSD

  1. Mark the OSD out of the cluster:
ceph osd out osd.4
  2. On the corresponding node, stop and disable the ceph-osd service:
systemctl stop ceph-osd@4.service
systemctl disable ceph-osd@4.service
  3. Remove the corresponding OSD entry from the CRUSH map so it no longer receives data:
ceph osd crush remove osd.4
  4. Delete the OSD's authentication key:
ceph auth del osd.4
  5. Delete osd.4 (see the note after this list for a single-command alternative):
ceph osd rm osd.4
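Note: on Luminous and later releases, steps 3-5 can also be collapsed into a single purge command; this is an optional shortcut rather than part of the procedure above:

ceph osd purge osd.4 --yes-i-really-mean-it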

