PVE virtualization - Ceph distributed storage setup - PVE VM images and config files - VNC inaccessible from other cluster hosts


1. Join node 15 to the cluster
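Joining is done with pvecm on the node being added; a minimal sketch, assuming node 15 is 192.168.1.15 and an existing member is 192.168.1.11:

pvecm add 192.168.1.11   # run on the new node, pointing at any existing cluster member
pvecm status             # verify membership afterwards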

2. Install the Ceph components

2.1 Gateways and monitors
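A minimal CLI sketch of this step (subcommand spellings vary by PVE version; older releases use pveceph createmon, newer ones pveceph mon create, and the network below is an assumption):

pveceph install                         # install the Ceph packages on this node
pveceph init --network 192.168.1.0/24   # write the initial ceph.conf (network is an assumption)
pveceph createmon                       # create a monitor on this node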

2.2 OSD creation requires a GPT partition table

https://www.cnblogs.com/EasonJim/p/9583268.html

parted /dev/sdb   # open the target disk in parted
mklabel           # create a new partition table
gpt               # table type: GPT
y                 # confirm overwriting the existing label

Then reinitialize the drive in the web UI.
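The labeling can also be done non-interactively, and the OSD can be created from the CLI instead of the web UI; a sketch, assuming /dev/sdb is the blank disk:

parted -s /dev/sdb mklabel gpt   # non-interactive GPT label
pveceph createosd /dev/sdb       # create the OSD (newer releases: pveceph osd create /dev/sdb)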

 

3. Manually create a filesystem
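A sketch of doing this from the CLI instead (pool names and PG counts are assumptions; an MDS must be running first, e.g. via pveceph mds create on newer releases):

ceph osd pool create cephfs_data 64              # data pool
ceph osd pool create cephfs_metadata 64          # metadata pool
ceph fs new cephfs cephfs_metadata cephfs_data   # filesystem name, metadata pool, data pool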

4. The first one was created manually; the latter two were created automatically

5. Mounting the block storage
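The RBD pool can also be attached from the CLI with pvesm; a sketch, assuming the pool and the new storage ID are both named deyi:

pvesm add rbd deyi --pool deyi --content images,rootdir   # register the pool for VM and CT disks
pvesm status                                              # confirm the storage is active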

 

6. Mounting the filesystem; pay attention to the Content options
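Likewise for CephFS; the --content list is what the web UI's Content option controls. A sketch with an assumed storage ID:

pvesm add cephfs cephfs --content iso,vztmpl,backup   # CephFS suits file content (ISOs, templates, backups), not VM disks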

 

 

 

https://blog.51cto.com/yuweibing/2306831
Deleting a PV
https://blog.csdn.net/qq_39626154/article/details/90477803
Mounting a directory

https://blog.51cto.com/kerry/2287648
Ceph

https://www.jianshu.com/p/9a38408654b7
Ceph (part 2)




Error:

Degraded data redundancy: 8154/38763 objects degraded (21.036%), 89 pgs degraded, 89 pgs undersized

"Degraded" means that after a failure, such as an OSD going down, Ceph marks every PG on that OSD as Degraded.
A degraded cluster can still read and write data normally; a degraded PG is a minor ailment, not a serious problem.
"Undersized" means the number of live PG replicas (here 2) is below the pool's replica count of 3, so the PG is flagged to show it has too few surviving replicas; this is also not a serious problem.

/bin/ceph osd pool set test_pool min_size 1   # example
set pool 1 min_size to 1

ceph osd pool set deyi min_size 1   # set the minimum replica count for the deyi pool
set pool 1 min_size to 1
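After lowering min_size, the new value and the recovery progress can be verified:

ceph osd pool get deyi min_size   # confirm the setting took effect
ceph -s                           # the degraded/undersized counts should fall as recovery proceeds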

 


Troubleshooting reference (the degraded/undersized notes above are from Lucien_168 on imooc):

https://www.imooc.com/article/43575

Official documentation:

http://docs.ceph.org.cn/rbd/rbd/

CPU-type benchmark scores (Haswell outperforms the plain QEMU CPU type):

CPU type          Score
Haswell           1162
QEMU              726
Haswell, no TSX   1165.7

yum install -y wget
yum install -y openssh-server
systemctl restart sshd
systemctl enable sshd

Config locked (migrate)

Unlocking after a failed PVE migration:

qm unlock 120    # unlock a QEMU VM (120 is the VMID)
pct unlock 120   # unlock an LXC container
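Before clearing a lock it can help to see which operation holds it; a sketch (120 is the example VMID):

qm config 120 | grep '^lock:'   # prints e.g. "lock: migrate" while the lock is held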

 

Removing the cluster configuration

First, make a backup of the cluster:

cp -a /etc/pve /root/pve_backup

Stop the cluster service:

/etc/init.d/pve-cluster stop

Unmount /etc/pve if it is mounted:

umount /etc/pve

Stop the corosync service (cman on old PVE 3.x releases):

/etc/init.d/cman stop

Remove cluster configuration:

rm /etc/cluster/cluster.conf
rm -rf /var/lib/pve-cluster/*

Start the cluster service again:

/etc/init.d/pve-cluster start

Now you can create a new cluster:

pvecm create newcluster 

Restore the cluster and virtual machine configuration from the backup:
cp /root/pve_backup/*.cfg /etc/pve/
cp /root/pve_backup/qemu-server/*.conf /etc/pve/qemu-server/
cp /root/pve_backup/openvz/* /etc/pve/openvz/
UPDATE: This procedure also works for changing the hostname of a node in a cluster, or for moving a node between two clusters. After a node has been removed from the cluster it still appears in the Proxmox node tree; to remove it from the tree, delete the node directory from another node in the cluster:
  rm -rf /etc/pve/nodes/HOSTNAME
https://blog.csdn.net/xiangrublog/article/details/42006465

Corosync Cluster Engine Authentication key generator.
Gathering 2048 bits for key from /dev/urandom.
Writing corosync key to /etc/corosync/authkey.
Writing corosync config to /etc/pve/corosync.conf
Restart corosync and cluster filesystem
TASK OK

systemctl stop pve-cluster
systemctl stop corosync
pmxcfs -l
rm /etc/pve/corosync.conf
rm /etc/corosync/*
killall pmxcfs
systemctl start pve-cluster
Sometimes the cluster web UI cannot be opened; it usually works again after restarting the service:

systemctl restart pve-cluster

pvecm delnode oldnode
pvecm expected 1
rm /var/lib/corosync/*

If a migration fails, the cluster service and the sshd service also need restarting.
The usual cause is a stale fingerprint in the /etc/ssh/ssh_known_hosts file.
Passwordless SSH authorization:

ssh-copy-id root@192.168.1.15
systemctl restart pve-cluster
systemctl restart sshd

The /etc/ssh/ssh_known_hosts entries for this cluster look like this:
pve11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/vdAfztZLQ8BwndORjsvMB0jrBx1wMcMCGUsdJm/zef3qznxGhN2nVo4aOge/JR22xWRDfue34k+rGq0EPyCBSQXeCuAUQXcLJOt9xh8NNd/Hto0QuSkSvicCxTVMSxs/7idm4dKL+V3eELnoL+k9mKKYa+qWY3oda5AezToI3Tu8FcGf/gOOyEVvHUyb16u7ZFP14Y9KVDNY4SP80Fxp/eRICOL3DCsjARLyTb5HfHy6FDwyX0U60US0gYtsNS1lcg6IHY8X9OjvAsMuvVo2Y6YjmHzySXWdJINjzuaNPc9FplA+HQ5pMkB1eg3slbaUPLDb3JFyKGUJi2WcHQ/Z
192.168.1.11 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC/vdAfztZLQ8BwndORjsvMB0jrBx1wMcMCGUsdJm/zef3qznxGhN2nVo4aOge/JR22xWRDfue34k+rGq0EPyCBSQXeCuAUQXcLJOt9xh8NNd/Hto0QuSkSvicCxTVMSxs/7idm4dKL+V3eELnoL+k9mKKYa+qWY3oda5AezToI3Tu8FcGf/gOOyEVvHUyb16u7ZFP14Y9KVDNY4SP80Fxp/eRICOL3DCsjARLyTb5HfHy6FDwyX0U60US0gYtsNS1lcg6IHY8X9OjvAsMuvVo2Y6YjmHzySXWdJINjzuaNPc9FplA+HQ5pMkB1eg3slbaUPLDb3JFyKGUJi2WcHQ/Z
pve13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1w0Zob1ZZyzDjdPH4c5cm0rjhILVcQ1/KcA8JSXRLL2w5GrFbxEB8hvk+MTHug7CJcj7GsS/EY0I3YKA3wRdWVyG2LTKzCprILK/cdfVbSj7zGMLAP/iXLD0iKsNEZIIkto9acLgRBWNCb4P7Lz3vAdvYx04SZQschY7kxs4X8JTSboIfcV4xA8ACdy6JH46MXhicBTssdiU2GD/SSXis+uosaBcaoXElgrAnuuMcZaPp02fsrMgnOSeJ0mivZz4Biu2jDDWIAweWyupJimh3hUa8922hyhCF3s12h0ScZcfg9kcGw/twRp1h8JVTGrQHlJeSXwFIVSk0t6xOdkOd
192.168.1.13 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1w0Zob1ZZyzDjdPH4c5cm0rjhILVcQ1/KcA8JSXRLL2w5GrFbxEB8hvk+MTHug7CJcj7GsS/EY0I3YKA3wRdWVyG2LTKzCprILK/cdfVbSj7zGMLAP/iXLD0iKsNEZIIkto9acLgRBWNCb4P7Lz3vAdvYx04SZQschY7kxs4X8JTSboIfcV4xA8ACdy6JH46MXhicBTssdiU2GD/SSXis+uosaBcaoXElgrAnuuMcZaPp02fsrMgnOSeJ0mivZz4Biu2jDDWIAweWyupJimh3hUa8922hyhCF3s12h0ScZcfg9kcGw/twRp1h8JVTGrQHlJeSXwFIVSk0t6xOdkOd
pve15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5netAIihYgPT3tEk0oVQfzuNMHx3N12u59J9D8AHHMFlpxaQCxs98izSwGpVNcrSzy0hfJ1q4NJ3Ni8n1Er6Wiikr4heFcChPW2s14skg3fRnEj06msoRnZLBDP+2QTuG3gKX1mINhSotqa7v7KXLYLwLRzvvH2XZcUKT6YV32gLpUT7XruXlEdvjqGxkDiWhAUrJPRlhQXMy50L3R0tVC2ZhfHBc+kBwkC4han3d7Qtq7utwN9tloJg+nzuN/+HmZMli2oZjpwZEdbWx5Pd1Te9ImQShMivbUkbUnS69q4VA+cQlfnwgHTAUgMpQhe0/OTPrWnQRzsfI0wA/ES5h
192.168.1.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC5netAIihYgPT3tEk0oVQfzuNMHx3N12u59J9D8AHHMFlpxaQCxs98izSwGpVNcrSzy0hfJ1q4NJ3Ni8n1Er6Wiikr4heFcChPW2s14skg3fRnEj06msoRnZLBDP+2QTuG3gKX1mINhSotqa7v7KXLYLwLRzvvH2XZcUKT6YV32gLpUT7XruXlEdvjqGxkDiWhAUrJPRlhQXMy50L3R0tVC2ZhfHBc+kBwkC4han3d7Qtq7utwN9tloJg+nzuN/+HmZMli2oZjpwZEdbWx5Pd1Te9ImQShMivbUkbUnS69q4VA+cQlfnwgHTAUgMpQhe0/OTPrWnQRzsfI0wA/ES5h
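To refresh all of the entries above in one pass, a loop like this works (host list taken from this cluster):

for h in pve11 192.168.1.11 pve13 192.168.1.13 pve15 192.168.1.15; do
    ssh-keygen -f /etc/ssh/ssh_known_hosts -R "$h"   # drop the stale fingerprint for each name/IP
done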

 

 

 

Shared-storage migration over NFS:
qcow2 migrates quickly and works well.
The raw format migrates slowly and can cause problems.
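A raw disk can be converted to qcow2 before migrating; a sketch with hypothetical file names:

qemu-img convert -f raw -O qcow2 vm-105-disk-0.raw vm-105-disk-0.qcow2   # raw -> qcow2
qemu-img info vm-105-disk-0.qcow2                                        # verify the result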


# View cluster resource status
pvesh get /cluster/resources

# Get a VM's current status
pvesh get /nodes/<node-id>/qemu/<vm-id>/status/current

# Stop a VM
pvesh create /nodes/<node-id>/qemu/<vm-id>/status/stop

Examples:
pvesh get /nodes/pve11/qemu/150/status/current
pvesh create /nodes/pve11/qemu/150/status/stop
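pvesh can emit JSON, which makes it scriptable; a sketch that stops every running VM on pve11 (assumes jq is installed; on older releases pvesh prints JSON by default and --output-format can be dropped):

for vmid in $(pvesh get /nodes/pve11/qemu --output-format json | jq -r '.[] | select(.status=="running") | .vmid'); do
    pvesh create /nodes/pve11/qemu/$vmid/status/stop   # stop each running VM
done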

 

Handling VMs that cannot be deleted after their shared storage was removed

Deleting the VM's config file is enough:

rm -rf /etc/pve/nodes/pve15/lxc/105.conf   # delete a CT (container)

Regular (QEMU) VMs live in this directory:

/etc/pve/nodes/pve15/qemu-server/

Help reference path:

https://192.168.1.xx:8006/pve-docs/chapter-pvecm.html#_remove_a_cluster_node

vi /etc/pve/nodes/pve13/qemu-server/105.conf

A faulty hardware device entry can be removed manually by editing the config.

 

VM config files and image files

 

Edit the disk path in the config to match the post-migration location and the migration should then succeed:

 

 

 

ls /mnt/pve/bgdata/images
ls /etc/pve/nodes/pve11/qemu-server/
cat /etc/pve/nodes/pve11/qemu-server/118.conf

root@pve11:~# ls /mnt/pve/
bgdata    cephfs    deyi  dydir  nfs
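For orientation, a disk line in a VM config names the storage ID and the image path relative to it; a hypothetical before/after for moving 118's disk from bgdata to nfs:

scsi0: bgdata:118/vm-118-disk-0.qcow2,size=32G   # before the move
scsi0: nfs:118/vm-118-disk-0.qcow2,size=32G      # after copying the image under /mnt/pve/nfs/images/118/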

Another way to migrate via NFS:

http://blog.sina.com.cn/s/blog_14b674edd0102xwc0.html

Removing a node from the cluster

1. View the cluster nodes

root@pve31:~# pvecm nodes

Membership information
----------------------
    Nodeid      Votes Name
         1          1 pve33
         2          1 pve32
         3          1 pve31 (local)
2. Power off or shut down the pve33 node being removed, then delete it:

root@pve31:~# pvecm delnode pve33


Killing node 1
3. Check the cluster status: pve33 has been removed from the cluster

root@pve31:~# pvecm status


Quorum information
------------------
Date:             Sat Oct 12 09:13:27 2019
Quorum provider:  corosync_votequorum
Nodes:            2
Node ID:          0x00000003
Ring ID:          2/32
Quorate:          Yes

Votequorum information
----------------------
Expected votes:   2
Highest expected: 2
Total votes:      2
Quorum:           2  
Flags:            Quorate

Membership information
----------------------
    Nodeid      Votes Name
0x00000002          1 192.168.130.32
0x00000003          1 192.168.130.31 (local)

 

VNC inaccessible from other cluster hosts

The cause is failed passwordless SSH authentication.

ssh-keygen -f "/etc/ssh/ssh_known_hosts" -R "192.168.130.31"   # remove the stale host key, then re-accept it
ssh 192.168.130.31
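Current PVE releases also ship a built-in repair for the cluster-wide SSH key and certificate setup:

pvecm updatecerts   # regenerate certificates and fix the cluster SSH key files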

 

ESXi cluster management documentation

https://pubs.vmware.com/vsphere-51/index.jsp?topic=%2Fcom.vmware.vsphere.vcenterhost.doc%2FGUID-F14212C4-94D1-4DE0-B4B1-B9B6214AF055.html

Nested virtualization

https://blog.51cto.com/kusorz/1925172?cid=718307

 

