RHEL Advanced Disk Management (VDO, Stratis)


RHEL Advanced Disk Management: VDO

Introduction to VDO

  • VDO (Virtual Data Optimizer) optimizes storage space through data deduplication and compression.
  • The VDO layer is placed on top of an existing block storage device, such as a RAID device or a local disk.
  • LVM or a filesystem is placed on top of the VDO layer; VDO can also be placed on top of an LVM layer (see the sketch below).
  • The VDO tools must be installed manually; once installed, the vdo command can be used to create, add, remove, activate, and stop VDO volumes.
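
A minimal sketch of that stacking (the names here are hypothetical: an LVM volume group rhel with enough free space, a logical volume datalv, and a VDO volume vdo-on-lvm):

[root@localhost ~]# lvcreate -L 30G -n datalv rhel      //LVM layer underneath
[root@localhost ~]# vdo create --name=vdo-on-lvm --device=/dev/rhel/datalv --vdoLogicalSize=60G
[root@localhost ~]# mkfs.xfs /dev/mapper/vdo-on-lvm      //filesystem layer on top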

Installing VDO

[root@localhost ~]# yum install -y vdo

Common vdo subcommands

  1. vdo syntax: vdo command
  2. Common subcommands (an example of the grow commands follows this list):
create              Creates a VDO volume and its associated index and makes it available.

remove              Removes one or more stopped VDO volumes and their associated indexes.

modify              Modifies configuration parameters of one or all VDO volumes. Changes take effect the next time the VDO device is started; already-running devices are not affected.

list                Displays a list of started VDO volumes. If --all is specified, it displays both started and non-started volumes.

start               Starts one or more stopped, activated VDO volumes and associated services.

status              Reports VDO system and volume status in YAML format. This command does not require root privileges, but the information will be incomplete without them.

stop                Stops one or more running VDO volumes and associated services.

activate            Activates one or more VDO volumes. Activated volumes can be started with the start command.

deactivate          Deactivates one or more VDO volumes. Deactivated volumes cannot be started with the start command. Deactivating a currently running volume does not stop it.

growLogical         Grows the logical size of a VDO volume. The volume must exist and must be running.

growPhysical        Grows the physical size of a VDO volume. The volume must exist and must be running.
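
For example, growing a volume (a sketch, assuming a running VDO volume named vdoname whose backing device has been enlarged):

[root@localhost ~]# vdo growLogical --name=vdoname --vdoLogicalSize=16G      //raise the logical size to 16G
[root@localhost ~]# vdo growPhysical --name=vdoname      //absorb the new physical space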

VDO usage example

Goal: using an unpartitioned disk, create a VDO volume named vdoname, mount it at /vdodir, and have it mount automatically at boot.

  1. Check the existing unpartitioned disk
[root@localhost ~]# lsblk /dev/sda 
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda    8:0    0  80G  0 disk 

Create the VDO volume on a disk partition. (You can also take an entire unpartitioned disk, partition it, and create the VDO volume on a partition, as is done below; with this method, wipe any existing signatures from the disk first.)

  2. Create a disk partition
[root@localhost ~]# fdisk /dev/sda 

Welcome to fdisk (util-linux 2.32.1).
Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Command (m for help): n
Partition type
   p   primary (0 primary, 0 extended, 4 free)
   e   extended (container for logical partitions)
Select (default p): p
Partition number (1-4, default 1): 1
First sector (2048-167772159, default 2048): 
Last sector, +sectors or +size{K,M,G,T,P} (2048-167772159, default 167772159): +20G

Created a new partition 1 of type 'Linux' and of size 20 GiB.

Command (m for help): w
The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
  3. View the new partition
[root@localhost ~]# lsblk 
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda             8:0    0   80G  0 disk 
└─sda1          8:1    0   20G  0 part         //this is the partition just created
sr0            11:0    1  7.3G  0 rom  /mnt
nvme0n1       259:0    0   80G  0 disk 
├─nvme0n1p1   259:1    0    1G  0 part /boot
└─nvme0n1p2   259:2    0   79G  0 part 
  ├─rhel-root 253:0    0   50G  0 lvm  /
  ├─rhel-swap 253:1    0    2G  0 lvm  [SWAP]
  └─rhel-home 253:2    0   27G  0 lvm  /home
  4. Create the VDO volume on /dev/sda1 (note: the new partition does not need to be formatted first)
[root@localhost ~]# vdo create --name=vdoname --device=/dev/sda1 --vdoLogicalSize=8G
Creating VDO vdoname
Starting VDO vdoname
Starting compression on VDO vdoname
VDO instance 2 volume is ready at /dev/mapper/vdoname
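
Because VDO deduplicates and compresses data, the logical size may safely be declared larger than the physical size of the backing device (thin provisioning). A hypothetical example, assuming a separate 20 GiB partition /dev/sdb1:

[root@localhost ~]# vdo create --name=vdothin --device=/dev/sdb1 --vdoLogicalSize=100G
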
  5. View the created VDO volume
[root@localhost ~]# vdo list
vdoname

Or check with lsblk:

[root@localhost ~]# lsblk /dev/sda
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda           8:0    0  80G  0 disk 
└─sda1        8:1    0  20G  0 part 
  └─vdoname 253:3    0   8G  0 vdo  
  6. Format the VDO volume as xfs
[root@localhost ~]# mkfs.xfs /dev/mapper/vdoname 
meta-data=/dev/mapper/vdoname    isize=512    agcount=4, agsize=524288 blks
         =                       sectsz=4096  attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=2097152, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=4096  sunit=1 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
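
Formatting can be slow on large VDO volumes because mkfs issues a discard request for every block; the RHEL 8 VDO documentation suggests passing -K to skip the discards, roughly:

[root@localhost ~]# mkfs.xfs -K /dev/mapper/vdoname      //-K skips discarding blocks at mkfs time
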
  7. View the attributes of the VDO volume (the few GB already shown as used on an empty volume are VDO's deduplication index and metadata)
[root@localhost ~]# vdostats --si
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdoname      21.5G      4.3G     17.2G  20%           99%

Or check with blkid:

[root@localhost ~]# blkid /dev/mapper/vdoname 
/dev/mapper/vdoname: UUID="e7bf09bb-1203-4eef-8837-cd802ef11ded" TYPE="xfs"
  8. Mount the VDO volume at /vdodir
[root@localhost ~]# mkdir /vdodir
[root@localhost ~]# mount /dev/mapper/vdoname /vdodir/
[root@localhost ~]# df -h
Filesystem             Size  Used Avail Use% Mounted on
devtmpfs               886M     0  886M   0% /dev
tmpfs                  903M     0  903M   0% /dev/shm
tmpfs                  903M  8.6M  894M   1% /run
tmpfs                  903M     0  903M   0% /sys/fs/cgroup
/dev/mapper/rhel-root   50G  2.2G   48G   5% /
/dev/mapper/rhel-home   27G  225M   27G   1% /home
/dev/nvme0n1p1        1014M  171M  844M  17% /boot
tmpfs                  181M     0  181M   0% /run/user/0
/dev/sr0               7.4G  7.4G     0 100% /mnt
/dev/mapper/vdoname    8.0G   90M  8.0G   2% /vdodir         //mounted successfully
  9. View the VDO volume information after mounting
[root@localhost ~]# vdostats --si
Device                    Size      Used Available Use% Space saving%
/dev/mapper/vdoname      21.5G      4.3G     17.2G  20%           98%
[root@localhost ~]# lsblk /dev/mapper/vdoname 
NAME    MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
vdoname 253:3    0   8G  0 vdo  /vdodir
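
Note that deleting files from the xfs filesystem does not immediately return space to VDO; the freed blocks have to be discarded first. Assuming the volume is mounted at /vdodir as above, a manual trim does this:

[root@localhost ~]# fstrim -v /vdodir      //discard unused blocks so VDO can reclaim them
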
  10. Mount the VDO volume automatically at boot
[root@localhost ~]# echo "/dev/mapper/vdoname    /vdodir      xfs     defaults  0 0" >> /etc/fstab 
[root@localhost ~]# cat /etc/fstab 
#
# /etc/fstab
# Created by anaconda on Wed Aug 26 03:25:38 2020
#
# Accessible filesystems, by reference, are maintained under '/dev/disk/'.
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info.
#
# After editing this file, run 'systemctl daemon-reload' to update systemd
# units generated from this file.
#
/dev/mapper/rhel-root   /                       xfs     defaults        0 0
UUID=234365dc-2262-452e-9cbb-a6acfde04385 /boot                   xfs     defaults        0 0
/dev/mapper/rhel-home   /home                   xfs     defaults        0 0
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
/dev/mapper/vdoname    /vdodir      xfs     defaults  0 0
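
Caution: with plain defaults the mount can race with VDO start-up at boot. The RHEL 8 documentation recommends adding systemd ordering options to the entry, along the lines of:

/dev/mapper/vdoname    /vdodir      xfs     defaults,x-systemd.requires=vdo.service,x-systemd.device-timeout=0  0 0
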
  11. Remove a VDO volume
[root@localhost ~]# umount /vdodir/        //if it is mounted, unmount it first
[root@localhost ~]# vdo remove --name=vdoname
Removing VDO vdoname
Stopping VDO vdoname
[root@localhost ~]# vdo list

Or check with lsblk:

[root@localhost ~]# lsblk /dev/sda
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  80G  0 disk 
└─sda1   8:1    0  20G  0 part 

RHEL Advanced Disk Management: Stratis

Introduction to Stratis

  • A local storage management tool introduced in RHEL 8.0.
  • Stratis makes it easy to use advanced storage features such as thin provisioning, snapshots, and pool-based management and monitoring.
  • Stratis is based on the xfs filesystem; a filesystem created through Stratis does not need to be formatted. For example, a filesystem named file created in a pool is already of type xfs.
  • Daemon: stratisd

Installing the stratisd service

[root@localhost ~]# yum install -y stratisd stratis-cli
[root@localhost ~]# systemctl enable --now stratisd
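
A quick check that the daemon is up before continuing (a sketch; the daemon version subcommand is provided by stratis-cli):

[root@localhost ~]# systemctl is-active stratisd
[root@localhost ~]# stratis daemon version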

Overall workflow for using Stratis

  1. Select an intact block device (a disk or a partition)
  2. Create a pool
  3. Create a filesystem in the pool

Example: creating a pool with Stratis

  1. Prepare an intact disk partition
[root@localhost ~]# lsblk /dev/sda
NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda      8:0    0  80G  0 disk 
└─sda1   8:1    0  20G  0 part 
  2. Before creating a pool, check whether the block device carries a signature; any signature must be wiped before the device can be used
[root@localhost ~]# fdisk -l /dev/sda
Disk /dev/sda: 80 GiB, 85899345920 bytes, 167772160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x4b000bc8

The Disklabel entry is the block device's signature, which needs to be wiped:

[root@localhost ~]# wipefs -a /dev/sda
/dev/sda: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
/dev/sda: calling ioctl to re-read partition table: Success

After wiping, check the device's signature information again. (Note that wiping /dev/sda also removed its partition table; the partitions sda1 through sda4 used in the following steps were recreated afterwards, which is not shown.)

[root@localhost ~]# fdisk -l /dev/sda
Disk /dev/sda: 80 GiB, 85899345920 bytes, 167772160 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
  3. Create a pool (the block device generally must be at least 1 GiB in size)
[root@localhost ~]# stratis pool create pool-one /dev/sda1       //pool-one is the pool name; /dev/sda1 is the block device to use
[root@localhost ~]# stratis pool list
Name      Total Physical Size  Total Physical Used
pool-one               20 GiB               52 MiB
  4. Add a block device to an existing pool
[root@localhost ~]# lsblk /dev/sda2
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda2   8:2    0  20G  0 part 
[root@localhost ~]# stratis pool add-data pool-one /dev/sda2
[root@localhost ~]# stratis pool list
Name      Total Physical Size  Total Physical Used
pool-one               40 GiB               72 MiB       //the capacity has grown
  5. Create a pool from two block devices at once
[root@localhost ~]# lsblk /dev/sda3 /dev/sda4
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda3   8:3    0  10G  0 part 
sda4   8:4    0  20G  0 part 
[root@localhost ~]# stratis pool create pool-two /dev/sda3 /dev/sda4
[root@localhost ~]# stratis pool list
Name      Total Physical Size  Total Physical Used
pool-one               40 GiB               72 MiB
pool-two               30 GiB               56 MiB
  6. View the block devices used by the pool-one and pool-two pools
[root@localhost ~]# stratis blockdev list pool-one
Pool Name  Device Node  Physical Size  State  Tier
pool-one   /dev/sda1           20 GiB  InUse  Data
pool-one   /dev/sda2           20 GiB  InUse  Data
[root@localhost ~]# stratis blockdev list pool-two
Pool Name  Device Node  Physical Size  State  Tier
pool-two   /dev/sda3           10 GiB  InUse  Data
pool-two   /dev/sda4           20 GiB  InUse  Data
  7. View the device-mapper layout of the block devices in pool-one and pool-two (the stratis-1-private-* entries are internal volumes managed by Stratis and should not be used directly)
[root@localhost ~]# lsblk /dev/sda
NAME                                                                      MAJ:MIN RM SIZE RO TYPE    MOUNTPOINT
sda                                                                         8:0    0  80G  0 disk    
├─sda1                                                                      8:1    0  20G  0 part    
│ └─stratis-1-private-2a5d0ca4266540b889057f37816c7423-physical-originsub 253:3    0  40G  0 stratis 
│   ├─stratis-1-private-2a5d0ca4266540b889057f37816c7423-flex-thinmeta    253:4    0  32M  0 stratis 
│   │ └─stratis-1-private-2a5d0ca4266540b889057f37816c7423-thinpool-pool  253:7    0  40G  0 stratis 
│   ├─stratis-1-private-2a5d0ca4266540b889057f37816c7423-flex-thindata    253:5    0  40G  0 stratis 
│   │ └─stratis-1-private-2a5d0ca4266540b889057f37816c7423-thinpool-pool  253:7    0  40G  0 stratis 
│   └─stratis-1-private-2a5d0ca4266540b889057f37816c7423-flex-mdv         253:6    0  16M  0 stratis 
├─sda2                                                                      8:2    0  20G  0 part    
│ └─stratis-1-private-2a5d0ca4266540b889057f37816c7423-physical-originsub 253:3    0  40G  0 stratis 
│   ├─stratis-1-private-2a5d0ca4266540b889057f37816c7423-flex-thinmeta    253:4    0  32M  0 stratis 
│   │ └─stratis-1-private-2a5d0ca4266540b889057f37816c7423-thinpool-pool  253:7    0  40G  0 stratis 
│   ├─stratis-1-private-2a5d0ca4266540b889057f37816c7423-flex-thindata    253:5    0  40G  0 stratis 
│   │ └─stratis-1-private-2a5d0ca4266540b889057f37816c7423-thinpool-pool  253:7    0  40G  0 stratis 
│   └─stratis-1-private-2a5d0ca4266540b889057f37816c7423-flex-mdv         253:6    0  16M  0 stratis 
├─sda3                                                                      8:3    0  10G  0 part    
│ └─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-physical-originsub 253:8    0  30G  0 stratis 
│   ├─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-flex-thinmeta    253:9    0  16M  0 stratis 
│   │ └─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-thinpool-pool  253:12   0  30G  0 stratis 
│   ├─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-flex-thindata    253:10   0  30G  0 stratis 
│   │ └─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-thinpool-pool  253:12   0  30G  0 stratis 
│   └─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-flex-mdv         253:11   0  16M  0 stratis 
└─sda4                                                                      8:4    0  20G  0 part    
  └─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-physical-originsub 253:8    0  30G  0 stratis 
    ├─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-flex-thinmeta    253:9    0  16M  0 stratis 
    │ └─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-thinpool-pool  253:12   0  30G  0 stratis 
    ├─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-flex-thindata    253:10   0  30G  0 stratis 
    │ └─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-thinpool-pool  253:12   0  30G  0 stratis 
    └─stratis-1-private-31db6de0edb64a3f8721a33f55922b69-flex-mdv         253:11   0  16M  0 stratis 

Example: creating a filesystem with Stratis

  1. Create a filesystem in the pool-one pool (only one filesystem can be created per command)
[root@localhost ~]# stratis filesystem create pool-one file-one      //pool-one is the pool name; file-one is the filesystem name
[root@localhost ~]# stratis filesystem list          //list the existing filesystems
Pool Name  Name      Used     Created            Device                      UUID                            
pool-one   file-one  546 MiB  Sep 20 2020 20:33  /stratis/pool-one/file-one  deeb42ce571542cab33afcbfece6dd0a
  2. View the filesystems in a specific pool
[root@localhost ~]# stratis filesystem list pool-one
Pool Name  Name      Used     Created            Device                      UUID                            
pool-one   file-one  546 MiB  Sep 20 2020 20:33  /stratis/pool-one/file-one  deeb42ce571542cab33afcbfece6dd0a
  3. Mount the filesystem at the mount point /fsdir (note in the df output below that the filesystem reports a 1 TiB size: Stratis filesystems are thinly provisioned, so the size shown is virtual)
[root@localhost ~]# mkdir /fsdir
[root@localhost ~]# mount /stratis/pool-one/file-one /fsdir/
[root@localhost ~]# df -h
Filesystem                                                                                       Size  Used Avail Use% Mounted on
devtmpfs                                                                                         886M     0  886M   0% /dev
tmpfs                                                                                            903M     0  903M   0% /dev/shm
tmpfs                                                                                            903M  8.7M  894M   1% /run
tmpfs                                                                                            903M     0  903M   0% /sys/fs/cgroup
/dev/mapper/rhel-root                                                                             50G  1.7G   49G   4% /
/dev/nvme0n1p1                                                                                  1014M  173M  842M  17% /boot
/dev/mapper/rhel-home                                                                             27G  225M   27G   1% /home
tmpfs                                                                                            181M     0  181M   0% /run/user/0
/dev/sr0                                                                                         7.4G  7.4G     0 100% /mnt
/dev/mapper/stratis-1-2a5d0ca4266540b889057f37816c7423-thin-fs-deeb42ce571542cab33afcbfece6dd0a  1.0T  7.2G 1017G   1% /fsdir
  4. Add the entry to /etc/fstab. Using the UUID is recommended: if the name is used instead, the configuration file must be updated whenever the name changes.
[root@localhost ~]# blkid /stratis/pool-one/file-one 
/stratis/pool-one/file-one: UUID="deeb42ce-5715-42ca-b33a-fcbfece6dd0a" TYPE="xfs"
[root@localhost ~]# echo "UUID=deeb42ce-5715-42ca-b33a-fcbfece6dd0a     /fsdir   xfs   defaults    0 0" >> /etc/fstab  
[root@localhost ~]# tail -3 /etc/fstab 
/dev/mapper/rhel-home   /home                   xfs     defaults        0 0
/dev/mapper/rhel-swap   swap                    swap    defaults        0 0
UUID=deeb42ce-5715-42ca-b33a-fcbfece6dd0a     /fsdir   xfs   defaults    0 0
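
As with VDO, the RHEL 8 documentation recommends adding x-systemd.requires=stratisd.service to the entry so the mount waits for the Stratis daemon at boot, e.g.:

UUID=deeb42ce-5715-42ca-b33a-fcbfece6dd0a     /fsdir   xfs   defaults,x-systemd.requires=stratisd.service    0 0

Finally, the snapshot feature mentioned in the introduction can be tried on this filesystem; a minimal sketch (the snapshot name snap-one is hypothetical):

[root@localhost ~]# stratis filesystem snapshot pool-one file-one snap-one
[root@localhost ~]# stratis filesystem list pool-one      //the snapshot appears as an independent, mountable filesystem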

