Preface:
The first note only covered installing PVE and adding Cockpit and Docker; this one covers configuring storage.
My server currently has a single 120G SSD, which the system went onto last time,
plus two 320G HDDs and two 500G HDDs.
Oracle's documentation on ZFS:
https://docs.oracle.com/cd/E26926_01/html/E25826/preface-1.html#scrolltoc
I: Adding a ZFS storage pool (disk array)
1. Initialize the disks
We won't always get to work with brand-new drives, and used drives are not picked up directly by ZFS or PVE — they're afraid you'll accidentally wipe a disk that still holds data.
As you can see, I'm attaching several used disks here.
By default PVE won't offer these disks as new ones; they need to be re-initialized first.
dd if=/dev/zero of=/dev/sd[X] bs=1M count=200
X is the disk you want to initialize.
In my case:
root@pve01:/dev# dd if=/dev/zero of=/dev/sdb bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.53484 s, 137 MB/s
root@pve01:/dev# dd if=/dev/zero of=/dev/sdc bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.6981 s, 123 MB/s
root@pve01:/dev# dd if=/dev/zero of=/dev/sdd bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 2.16789 s, 96.7 MB/s
root@pve01:/dev# dd if=/dev/zero of=/dev/sde bs=1M count=200
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 2.1021 s, 99.8 MB/s
root@pve01:/dev#
PVE can now see the disks.
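One caveat about this dd trick: it only zeroes the first 200 MiB, while ZFS also keeps labels near the end of the device, so a stubborn disk may still show up with its old pool. A minimal sketch of clearing that explicitly, assuming the disk is /dev/sdb:

# clear a leftover ZFS label (labelclear is listed in the zpool help below)
zpool labelclear -f /dev/sdb
# or wipe all known filesystem/RAID signatures with util-linux
wipefs -a /dev/sdb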
2. Create the ZFS pool
Anyway, I have no idea why the PVE web UI wouldn't create ZFS for me.
So we'll create it from the command line.
First, look at the help:

root@pve01:/dev# zpool --help
usage: zpool command args ...
where 'command' is one of the following:

        version
        create [-fnd] [-o property=value] ...
            [-O file-system-property=value] ...
            [-m mountpoint] [-R root] <pool> <vdev> ...
        destroy [-f] <pool>
        add [-fgLnP] [-o property=value] <pool> <vdev> ...
        remove [-nps] <pool> <device> ...
        labelclear [-f] <vdev>
        checkpoint [--discard] <pool> ...
        list [-gHLpPv] [-o property[,...]] [-T d|u] [pool] ... [interval [count]]
        iostat [[[-c [script1,script2,...][-lq]]|[-rw]] [-T d | u] [-ghHLpPvy]
            [[pool ...]|[pool vdev ...]|[vdev ...]] [[-n] interval [count]]
        status [-c [script1,script2,...]] [-igLpPstvxD] [-T d|u] [pool] ... [interval [count]]
        online [-e] <pool> <device> ...
        offline [-f] [-t] <pool> <device> ...
        clear [-nF] <pool> [device]
        reopen [-n] <pool>
        attach [-f] [-o property=value] <pool> <device> <new-device>
        detach <pool> <device>
        replace [-f] [-o property=value] <pool> <device> [new-device]
        split [-gLnPl] [-R altroot] [-o mntopts]
            [-o property=value] <pool> <newpool> [<device> ...]
        initialize [-c | -s] <pool> [<device> ...]
        resilver <pool> ...
        scrub [-s | -p] <pool> ...
        trim [-d] [-r <rate>] [-c | -s] <pool> [<device> ...]
        import [-d dir] [-D]
        import [-o mntopts] [-o property=value] ...
            [-d dir | -c cachefile] [-D] [-l] [-f] [-m] [-N] [-R root] [-F [-n]] -a
        import [-o mntopts] [-o property=value] ...
            [-d dir | -c cachefile] [-D] [-l] [-f] [-m] [-N] [-R root] [-F [-n]]
            [--rewind-to-checkpoint] <pool | id> [newpool]
        export [-af] <pool> ...
        upgrade
        upgrade -v
        upgrade [-V version] <-a | pool ...>
        reguid <pool>
        history [-il] [<pool>] ...
        events [-vHf [pool] | -c]
        get [-Hp] [-o "all" | field[,...]] <"all" | property[,...]> <pool> ...
        set <property=value> <pool>
        sync [pool] ...
We only care about create:

create [-fnd] [-o property=value] ... [-O file-system-property=value] ... [-m mountpoint] [-R root] <pool> <vdev> ...
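Beyond the bare form used below, -o sets pool properties and -m sets the mountpoint at creation time. A hedged sketch (the pool name tank and the mountpoint are made up for illustration; ashift=12 matches the 4096-byte physical sectors fdisk reports for sdb in the next step):

zpool create -o ashift=12 -m /mnt/tank tank mirror sdb sdc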
3. Inspect the disks
fdisk -l

root@pve01:/dev# fdisk -l
Disk /dev/sda: 118 GiB, 126701535232 bytes, 247463936 sectors
Disk model: Lenovo SSD ST510
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: E893CBE0-FA66-448B-A718-33EB51C6DD96

Device       Start       End   Sectors   Size Type
/dev/sda1       34      2047      2014  1007K BIOS boot
/dev/sda2     2048   1050623   1048576   512M EFI System
/dev/sda3  1050624 247463902 246413279 117.5G Linux LVM

Disk /dev/sdb: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Disk model: ST500DM002-1BD14
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes

Disk /dev/sdc: 465.8 GiB, 500107862016 bytes, 976773168 sectors
Disk model: WDC WD5000AAKX-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sdd: 298.1 GiB, 320072933376 bytes, 625142448 sectors
Disk model: WDC WD3200AAJS-0
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/sde: 294.9 GiB, 316616827392 bytes, 618392241 sectors
Disk model: WDC WD3200AAJS-2
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/pve-swap: 8 GiB, 8589934592 bytes, 16777216 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

Disk /dev/mapper/pve-root: 29.3 GiB, 31406948352 bytes, 61341696 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
root@pve01:/dev#
From this we can see:
sda 120G SSD
sdb 500G HDD
sdc 500G HDD
sdd 320G HDD
sde 320G HDD
4. Create a RAID 0 (stripe)
As an example, with the two 320G disks:
root@pve01:/dev# zpool create -f Storage sdd sde
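A quick sanity check that the stripe came out at roughly the two disks' combined size (pool name as above):

zpool list Storage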
5. Create a RAID 1 (mirror)
As an example, with the two 500G disks:
root@pve01:/dev# zpool create -f Virtual mirror sdb sdc
6. Verify the result

root@pve01:/dev# zpool status
  pool: Storage
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        Storage     ONLINE       0     0     0
          sdd       ONLINE       0     0     0
          sde       ONLINE       0     0     0

errors: No known data errors

  pool: Virtual
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        Virtual     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors
root@pve01:/dev#

root@pve01:/dev# df -lh
Filesystem            Size  Used Avail Use% Mounted on
udev                  7.7G  4.0K  7.7G   1% /dev
tmpfs                 1.6G  9.1M  1.6G   1% /run
/dev/mapper/pve-root   30G  2.2G   28G   8% /
tmpfs                 7.8G   43M  7.7G   1% /dev/shm
tmpfs                 5.0M     0  5.0M   0% /run/lock
tmpfs                 7.8G     0  7.8G   0% /sys/fs/cgroup
/dev/fuse              30M   16K   30M   1% /etc/pve
tmpfs                 1.6G     0  1.6G   0% /run/user/0
Storage               574G  128K  574G   1% /Storage
Virtual               450G  128K  450G   1% /Virtual
ZFS automatically creates a directory named after each pool and mounts the pool there under the root directory. Very convenient.
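If you'd rather mount a pool somewhere else, the mountpoint is just a ZFS property; a minimal sketch, assuming you wanted Storage under /mnt instead:

zfs set mountpoint=/mnt/Storage Storage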
II: Importing a ZFS storage pool
Will I tell you how many times I've reinstalled PVE these past few days? Nope.
But I will tell you this: with PVE 6.2 installed on XFS and booted via UEFI, unplugging a drive's power or data cable while the machine was off and plugging it back in left the system unable to boot.
So now I'm running PVE with legacy BIOS boot.
1. List the ZFS pools available for import
zpool import

root@pve01:/# zpool import
   pool: Docker
     id: 2962035019617846026
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        Docker      ONLINE
          mirror-0  ONLINE
            sdd     ONLINE
            sde     ONLINE

   pool: Virtual
     id: 16203971317369074769
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        Virtual     ONLINE
          mirror-0  ONLINE
            sdb     ONLINE
            sdc     ONLINE
2. Import under the original name
zpool import -f OLDNAME
zpool import -f Docker
3. Import under a new name
zpool import -f OLDNAME NEWNAME

root@pve01:/# zpool import -f Docker
cannot import 'Docker': a pool with that name already exists
use the form 'zpool import <pool | id> <newpool>' to give it a new name
root@pve01:/# cannot import 'Docker': a pool with that name already exists
-bash: cannot: command not found
root@pve01:/# use the form 'zpool import <pool | id> <newpool>' to give it a new name
-bash: use: command not found
root@pve01:/# zpool import -f Virtual Storage
root@pve01:/#
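When a name collides like this, you can also import by the numeric id that zpool import listed above; a sketch using the Docker pool's id (Docker2 is a made-up new name):

zpool import -f 2962035019617846026 Docker2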
4. Test
All the original files are still there.
5. Migrating ZFS
This is really the import half of a ZFS migration: an entire ZFS pool can be moved intact to another host by re-cabling the drives, or even by physically carrying the disks over.
6. Export a ZFS storage pool
zpool export POOLNAME

root@pve01:/# zpool status
  pool: Docker
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        Docker      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors

  pool: Storage
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        Storage     ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0

errors: No known data errors
root@pve01:/# zpool export Storage
root@pve01:/# zpool status
  pool: Docker
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        Docker      ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors
root@pve01:/#
7. To import again, start over from step 1.
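Putting steps 5 through 7 together, a whole migration would look roughly like this (the -d /dev/disk/by-id option, listed in the zpool help above, makes the imported pool use stable device ids instead of sdX names):

# on the old host
zpool export Docker
# move the disks over, then on the new host
zpool import -d /dev/disk/by-id -f Docker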
III: Configuring Docker's storage path and registry mirror
1. Create or edit the /etc/docker/daemon.json file
{ "registry-mirrors": ["https://******.mirror.aliyuncs.com"], "graph": "/Storage/docker" }
I set Docker's main storage path to /Storage/docker; after all, the Storage pool was created for Docker.
For the registry mirror I'm using Alibaba Cloud's; just go register one yourself: https://cr.console.aliyun.com
2. Restart Docker
systemctl restart docker
3. Verify the Docker configuration
docker info
You can check with the docker info command,
or just look inside /Storage/docker — if it's full of files, it's working.
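For instance, docker info reports the active storage directory, so something like this should now point at the pool (assuming the path above):

docker info | grep -i 'root dir'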
4. Set up other directories
I also create build, to hold the yaml files docker-compose uses;
images, to hold image files transferred offline;
and data, to hold the containers' persistent storage.
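As a one-line sketch, using the directory names above under the pool's mount point:

mkdir -p /Storage/{build,images,data}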
IV: Configuring PVE storage paths
1. Overview
PVE storage serves several content types:
Disk image: the virtual disk files of virtual machines
ISO image: ISO files used when installing a guest operating system
Container template: downloaded container templates — these are for LXC containers, not Docker
VZDump backup file: unknown to me at the time — these are the backups produced by PVE's built-in vzdump tool
Container: LXC container stuff again
Snippets: even more of a mystery to me — apparently these hold things like cloud-init configuration files
2. Add directories
ISO: dedicated to holding the various installation images.
virtual_disk: holds each virtual machine's disk images.
container: holds containers.
These all live in a single ZFS pool; splitting them into separate directories just makes management and migration easier (a CLI sketch follows below).
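A sketch of registering these directories from the CLI with pvesm (the web UI under Datacenter → Storage works just as well; the storage names, the /Virtual mount point, and the content-type mappings here are my assumptions):

pvesm add dir ISO --path /Virtual/ISO --content iso
pvesm add dir virtual_disk --path /Virtual/virtual_disk --content images
pvesm add dir container --path /Virtual/container --content rootdir,vztmpl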
And once again the write-up got left unfinished here.
Creating striped mirror VDEVs (RAID 10)
The syntax is: sudo zpool create NAME mirror VDEV1 VDEV2 mirror VDEV3 VDEV4
Or:
sudo zpool create NAME mirror VDEV1 VDEV2
sudo zpool add NAME mirror VDEV3 VDEV4
A VDEV can be a raw disk, a file/image, or a partition.
These past couple of days I tried mdadm for Linux software RAID; it was far too slow compared with ZFS,
so back to ZFS it is.
My command was zpool create storage mirror sdb sdc mirror sdd sde, which ended up giving me a 780G ZFS pool.
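To confirm the RAID 10 layout and capacity afterwards (pool name as above):

zpool status storage
zpool list storage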