Cluster: luci + ricci


Environment setup:

1.selinux         Enforcing            vim /etc/sysconfig/selinux
2.date            time sync            ntpdate
3.iptables        flush the firewall   iptables -F
4.NetworkManager  stopped

                                                              Cluster Management
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

 Managed nodes:    192.168.2.149
                   192.168.2.243
 Management host:  192.168.2.1

 Web UI address: server1.example.com:8084


Check status          clustat
Stop a service        clusvcadm -s www
Start a service       clusvcadm -e www
Relocate a service    clusvcadm -r www -m server243.example.com ( moves the service to another node )

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~



HA ( high availability, active/standby: clients see a single host, but both hosts stay alive )


                luci
               /    \
              /      \
  (primary) ricci-HA-ricci (standby)

Resources: VIP ( IP address )   web ( the application )   filesystem


Create New Cluster
The cluster name must be unique and shorter than 15 characters.



(I) Adding nodes ( a node is a managed host, i.e. a host placed under cluster management )

   * 1. ricci, managed-host setup ( required on both hosts, 149 and 243 )
         (1)selinux   vim /etc/sysconfig/selinux
                      reboot
                    ( Change the SELinux state in the config file; switching between enforcing and permissive needs no reboot, but switching to or from disabled does )
         (2)date    ( time sync )     ntpdate 192.168.2.251
                    ( set manually )  date -s 11:12:38        ( either method is fine )
         (3)firewall   iptables -F
         (4)yum repo   rm -fr /etc/yum.repos.d/rhel-source.repo
                       lftp i
                       get dvd.repo
                       yum clean all
                      ( With a repo such as baseurl=http://192.168.2.251/pub/rhel6.5, only the Server directory of the image is reachable; some directories are not )
                      ( The unreachable directories are HighAvailability, LoadBalancer, ResilientStorage and ScalableFileSystem. The ricci and luci packages we need are not under the Server directory, so these directories must be added; dvd.repo already contains them, and its full content is at the end of this document )
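The document defers dvd.repo's full content to "the end of this document", which is not included here; the fragment below is a hypothetical reconstruction based only on the baseurl pattern and the four directory names quoted above (the repo IDs and the gpgcheck setting are assumptions):

```ini
# Hypothetical dvd.repo reconstruction; verify paths against the actual mirror.
[base]
name=rhel6.5 Server
baseurl=http://192.168.2.251/pub/rhel6.5
gpgcheck=0

[HighAvailability]
name=rhel6.5 HighAvailability
baseurl=http://192.168.2.251/pub/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=rhel6.5 LoadBalancer
baseurl=http://192.168.2.251/pub/rhel6.5/LoadBalancer
gpgcheck=0

[ResilientStorage]
name=rhel6.5 ResilientStorage
baseurl=http://192.168.2.251/pub/rhel6.5/ResilientStorage
gpgcheck=0

[ScalableFileSystem]
name=rhel6.5 ScalableFileSystem
baseurl=http://192.168.2.251/pub/rhel6.5/ScalableFileSystem
gpgcheck=0
```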
         (5)install ricci   yum install -y ricci
               password     passwd ricci ( set a password for the ricci user; it is required when creating the cluster )
               start        /etc/init.d/ricci start
               autostart    chkconfig ricci on
        (6)check the port   netstat -antlp  ( 11111 )
                      tcp        0      0 :::11111       :::*              LISTEN      1330/ricci
        (7)name resolution  vim /etc/hosts
                      192.168.2.243   server243.example.com
                      192.168.2.149   server149.example.com
                      192.168.2.1     server1.example.com
 
  * 2. Management host ( 192.168.2.1 )
         (1)selinux   vim /etc/sysconfig/selinux
                      reboot
         (2)date    ( time sync )
         (3)firewall   iptables -F
         (4)yum repo   rm -fr /etc/yum.repos.d/rhel-source.repo
                       lftp i
                       get dvd.repo
                       yum clean all
         (5)install luci  yum install -y luci
                      rpm -q luci
                      luci-0.26.0-48.el6.x86_64
         (6)name resolution  vim /etc/hosts
                      192.168.2.243   server243.example.com
                      192.168.2.149   server149.example.com
                      192.168.2.1     server1.example.com
         (7)start the service  /etc/init.d/luci start
                     ( the startup message prints a URL; right-click it and choose "Open Link" )
                     ( login: root, password: the host's root password )
             add the nodes in the web UI
                     ( Create --> Name ( any name )  --> Use the Same Password... )
                     ( add the nodes: Create -- server149.example.com ( hostname ) -- password ( the ricci password ) --  server149.example.com -- 11111 )
                                        ( -- server243.example.com ( hostname ) -- password ( the ricci password ) --  server243.example.com -- 11111 )
                     ( Download ..., Reboot ..., Enable ... )
                     ( Create Cluster takes roughly 5 minutes; progress can be watched with ps ax )
         
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Host ricci1 is the primary and manages the resources. If it fails, for example through a disk fault, ricci2 takes over its resources; ricci1, however, does not let go of them, and once it recovers it resumes managing them. Now ricci1 and ricci2 are managing the same resource at the same time. Concurrent reads are harmless, but concurrent writes corrupt the data; this condition is called split-brain, and a fence device solves it. The fence device is a third party: when ricci1 and ricci2 both hold the resource, the fence device powers ricci1 off and reboots it. When ricci1 comes back up and finds its resources taken over by ricci2, ricci1 becomes the standby.


(II) Adding the fence device

  * 1. Obtain fence_xvm.key ( the key must be generated on the physical machine, the management host 192.168.2.1 )
                   ( Once fencing is configured, if fence_virtd is not enabled at boot, it must be started before the managed hosts (the virtual machines) the next time the cluster is used )
    (1)install fence   yum search fence
                    yum install -y fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64 fence-virtd.x86_64
                    rpm -qa | grep fence
                    fence-virtd-libvirt-0.2.3-15.el6.x86_64
                    fence-virtd-multicast-0.2.3-15.el6.x86_64
                    fence-virtd-0.2.3-15.el6.x86_64
    (2)generate fence_xvm.key
                    fence_virtd -c  ( answer the prompts below; press Enter to accept the default everywhere else )
                    Interface [none]: br0  ( the bridge on the physical machine )
                    Backend module [checkpoint]: libvirt
                    Replace /etc/fence_virt.conf with the above [y/N]? y
    (3)start the service   /etc/init.d/fence_virtd start
                    Starting fence_virtd:                                      [  OK  ]
    (4)create the key directory  mkdir /etc/cluster
    (5)generate the key   cd /etc/cluster/
                     dd if=/dev/urandom of=fence_xvm.key bs=128 count=1
                     output:
                     1+0 records in
                     1+0 records out
                     128 bytes (128 B) copied, 0.000301578 s, 424 kB/s
         inspect the key   ll /etc/cluster/fence_xvm.key
                     -rw-r--r--. 1 root root 128 Jul 24 13:20 /etc/cluster/fence_xvm.key
     (6)copy the key to the managed hosts
                     scp /etc/cluster/fence_xvm.key 192.168.2.149:/etc/cluster/
                     scp /etc/cluster/fence_xvm.key 192.168.2.243:/etc/cluster/
    (7)restart the service   /etc/init.d/fence_virtd restart
    (8)check the port   netstat -anulp | grep 1229
                     udp        0      0 0.0.0.0:1229        0.0.0.0:*             12686/fence_virtd
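The 128-byte key generated with dd above can be sanity-checked before it is copied to the nodes. A minimal sketch; it uses a temporary path so it can run anywhere, whereas on the real host the file is /etc/cluster/fence_xvm.key:

```shell
# Generate a key the same way as above, then verify its size is exactly 128 bytes.
KEY=$(mktemp)                                  # stand-in for /etc/cluster/fence_xvm.key
dd if=/dev/urandom of="$KEY" bs=128 count=1 2>/dev/null
SIZE=$(stat -c %s "$KEY")                      # GNU stat, as on RHEL
echo "key size: $SIZE bytes"
[ "$SIZE" -eq 128 ] && echo "key OK"
rm -f "$KEY"
```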


  * 2. Fence configuration in the web UI
    (1)find an unused IP
                     ping 192.168.2.233
                     PING 192.168.2.233 (192.168.2.233) 56(84) bytes of data.
                     From 192.168.2.1 icmp_seq=2 Destination Host Unreachable
    (2)install httpd on the managed hosts
                     yum install -y httpd
                     vim /var/www/html/index.html ( give the two managed hosts different content, purely for testing; in a real environment the content would be identical )
    (3)add the Fence Device in the web UI

        * Fence Devices: Add --> Fence virt ( Multicast Mode ) --> Name ( vmfence ) --> Submit

        * Nodes: server149.example.com --> Add Fence Method to Node --> server149-vmfence --> Submit
                                      --> Add Fence Device --> vmfence ( xvm Virtual Machine Fencing ) --> Domain
                                      ( either the virtual machine's name or its UUID, shown in the VM's info panel )
                                      ( vm2 , abb71331-39a0-16cc-6fe2-11f5ebfb9689 ) --> Submit
                server243.example.com --> Add Fence Method to Node --> server243-vmfence --> Submit
                                      --> Add Fence Device --> vmfence ( xvm Virtual Machine Fencing ) --> Domain
                                      ( vm3 , 5a306666-7fef-164d-8072-09279e429725 ) --> Submit
    
        * Failover Domains  -->  Add  -->  Name ( webfile ( any name )) -->
                              ( check )    Prioritized    Order the nodes to which services failover.  ( failover priority between nodes )
                              ( check )    Restricted     Service can run only on nodes specified.     ( the service is restricted to the listed nodes )
                              ( uncheck )  No Failback    Do not send service back to 1st priority node when it becomes available again.
                                                          ( left unchecked, so the service fails back to the highest-priority node once it recovers )
                                                                        Member                 Priority
                                     server149.example.com           ( check )        1
                                     server243.example.com           ( check )        2
                              --> Create

                      resulting page: Name        Prioritized     Restricted
                                      webfile          *               *
 
        *  Resources   -->   Add   -->   IP Address ( select )
                                         IP Address                  ( the unused IP found above )      192.168.2.233
                                         Netmask Bits (optional)     ( netmask length )                 24
                                         Monitor Link                ( monitor the link )               ( check )
                                         Disable Updates to Static Routes                               ( check )
                                         Number of Seconds to Sleep After Removing an IP Address        10
                            -->  Submit

                       -->   Add   -->   Script
                                         Name                         httpd
                                         Full Path to Script File     /etc/init.d/httpd
                            -->  Submit

                             resulting page: Name/IP              Type           In Use
                                             192.168.2.233/24     IP Address     Yes
                                             httpd                Script         Yes

       * Service Groups  -->  Add  -->  Service Name                          www
                                        Automatically Start This Service      ( check )
                                        Run Exclusive                         ( check )  ( run exclusively on one node )
                                        Failover Domain                       ( webfile )
                                        Recovery Policy                       ( relocate )
                            -->  Submit
                        Add Resource --> 192.168.2.233/24
                        Add Resource --> httpd

                            resulting page: Name    Status                               Autostart     Failover Domain
                                            www     Running on server149.example.com     ( check )     webfile

                     ......
    (4)final test ( performed against the managed hosts )
                     browse to: 192.168.2.233
                             server149.example.com
                     clustat ( run on a managed host )
                     output:
                     Cluster Status for wjx @ Thu Jul 24 14:50:23 2014
                     Member Status: Quorate
                     Member Name                                                     ID   Status
                     ------ ----                                                     ---- ------
                     server149.example.com                                               1 Online, rgmanager
                     server243.example.com                                               2 Online, Local, rgmanager
                     Service Name                   Owner (Last)                         State         
                     ------- ----                   ----- ------                         -----         
                     service:www                    server149.example.com                started

                    /etc/init.d/httpd stop  ( stop httpd on 192.168.2.149 )
                    browse to: 192.168.2.233
                             server243.example.com
                    clustat ( run on a managed host )
                     output:
                     Member Name                             ID   Status
                     ------ ----                             ---- ------
                     server149.example.com                       1 Online, Local, rgmanager
                     server243.example.com                       2 Online, rgmanager
                     Service Name                   Owner (Last)                   State         
                     ------- ----                   ----- ------                   -----         
                     service:www                    server243.example.com          started

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
(III) Data storage
  First storage method ( mkfs.ext4 ) ( only one of 149 and 243 may use a given mount point at a time, though several different mount points are supported )
 * 1. Management host ( luci ):
       (1)install the scsi target  yum install -y scsi-target-utils.x86_64
       (2)create the LV    lvcreate -L 2G -n iscsi vol0
                     lvs
                     iscsi vol0 -wi-a---   2.00g
       (3)add the initiators  vi /etc/tgt/targets.conf
                      <target iqn.2008-09.com.example:server.target1>
                          backing-store /dev/vol0/iscsi
                          initiator-address 192.168.2.149
                          initiator-address 192.168.2.243
                      </target>
       (4)start the service  /etc/init.d/tgtd start
       (5)check the target   tgt-admin -s
                    /etc/init.d/tgtd restart

 * 2. Managed hosts ( the ricci hosts )
       (1)install iscsi      yum install -y iscsi*
       (2)discover the target  iscsiadm -m discovery -t st -p 192.168.2.1
       (3)log in             iscsiadm -m node -l
                     fdisk -l ( the sda device now appears; it was absent before the login )
       (4)partition          fdisk -cu /dev/sda ( n p 1 <Enter> <Enter> t 8e p w )
           check             cat /proc/partitions   ( the other node cannot see the new partition until partprobe is run there )
                      8        1    2096128 sda1
       (5)check clvmd        /etc/init.d/clvmd status ( running )
       (6)enable clustering  lvmconf --enable-cluster
       (7)lvm config         vi /etc/lvm/lvm.conf
                       locking_type = 3  ( usually 3 by default after the previous step, meaning built-in clustered locking )
                     /etc/init.d/clvmd restart
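The locking_type setting in step (7) can be checked mechanically. A small sketch; the lvm.conf fragment is inlined here so the check runs anywhere, while on the nodes you would grep /etc/lvm/lvm.conf itself:

```shell
# Write a sample lvm.conf fragment, then confirm clustered locking (locking_type = 3).
conf=$(mktemp)
cat > "$conf" <<'EOF'
global {
    locking_type = 3
}
EOF
if grep -Eq 'locking_type[[:space:]]*=[[:space:]]*3' "$conf"; then
    result="clustered locking enabled"
else
    result="clustered locking NOT enabled"
fi
echo "$result"
rm -f "$conf"
```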
       (8)create the LVM stack
               pv: pvcreate /dev/sda1
                    pvs
                    output: /dev/sda1  clustervg lvm2 a--  2.00g 764.00m
               vg: vgcreate  clustervg /dev/sda1
                    vgdisplay clustervg
                    output: Clustered             yes
                    vgs
                    output: clustervg   1   1   0 wz--nc 2.00g 764.00m
               lv: lvcreate -L 1G -n clusterlv clustervg
                    lvs
                    output: clusterlv clustervg -wi-------   1.25g
       (9)format    mkfs.ext4 /dev/clustervg/clusterlv
           mount    mount /dev/clustervg/clusterlv /var/www/html
       (10)SELinux context ( required because selinux is enforcing )
                   restorecon -Rv /var/www/html/
                   ll -dZ /var/www/html/
                   output: drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/
       (11)test    vim /var/www/html/index.html
                   westos
                   umount /var/www/html/ ( on 149 )
                   mount /dev/clustervg/clusterlv /var/www/html/ ( on 243 )
                   start the service: clusvcadm -e www
                   cat /var/www/html/index.html
                   westos
                   umount /var/www/html/

          Web UI: add a Filesystem resource so that the cluster mounts the filesystem itself
                  Resources --> Add --> Filesystem -->
                                              Name                                          webdate
                                              Filesystem Type                               ext4
                                              Mount Point                                   /var/www/html
                                              Device, FS Label, or UUID                     /dev/clustervg/clusterlv
                                              Mount Options
                                              Filesystem ID (optional)
                                              Force Unmount                                 ( check )
                                              Force fsck                                    ( check )
                                              Enable NFS daemon and lockd workaround
                                              Use Quick Status Checks                       ( check )
                                              Reboot Host Node if Unmount Fails             ( check )

    After the web-UI step, df shows the storage device mounted automatically, on one host only:
        df
        /dev/mapper/clustervg-clusterlv   1032088   34056    945604   4% /var/www/html

        clustat
         service:www                    server149.example.com           started  

        browser check: http://192.168.2.233/
        westos


*3. Service relocation ( migrating the service between nodes )
(1)web UI
        Service Groups -- click www ( the service group ) --
        Status: Running on server149.example.com ( start on node ) -- click "start on node" -- select server243.example.com -- click the start icon ( the small triangle )
        check: clustat
              service:www                    server243.example.com           started
        browser check: ( nothing changes on the client side; the client cannot tell the service has moved )
        http://192.168.2.233/
        westos

(2)command line
        clusvcadm -r www -m server149.example.com
            check: clustat
              service:www                    server149.example.com           started


~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Second storage method ( 149 and 243 can operate on the same mount point at the same time; multi-node writes are supported )

First stop the resource:
    clusvcadm -s www
     service:www                    (server149.example.com)        stopped

    df ( the filesystem has been unmounted automatically )



   2.(  mkfs.gfs2  )
*1. Reformat as gfs2
    (1)check status  /etc/init.d/gfs2 status ( not running )
                   man mkfs.gfs2 ( the manual page )
    (2)unmount     ( on both 149 and 243 )
                  umount /var/www/html/
    (3)format      mkfs.gfs2 -p lock_dlm -t wjx:mygfs2 -j 3 /dev/clustervg/clusterlv ( on 149 )
                 ( wjx is the name given when the cluster was created; -j 3 creates three journals )

           This will destroy any data on /dev/clustervg/clusterlv.
           It appears to contain: symbolic link to `../dm-2'

           Are you sure you want to proceed? [y/n] y         ( answer y to format )

            output:
            Device:                    /dev/clustervg/clusterlv
            Blocksize:                 4096
            Device Size                1.00 GB (262144 blocks)
            Filesystem Size:           1.00 GB (262142 blocks)
            Journals:                  3
            Resource Groups:           4
            Locking Protocol:          "lock_dlm"
            Lock Table:                "wjx-c:mygfs2"
            UUID:                      0ced770a-afc4-50d5-7224-9a06cea2415f

    (4)mount       mount /dev/clustervg/clusterlv /var/www/html/ ( on 149 )
    (5)web page test  vim /var/www/html/index.html ( on 149 )
                content: www
    (6)SELinux context ( required while selinux is enforcing; change it on one host while the filesystem is mounted there, and the other host does not need to repeat it )
                   restorecon -Rv /var/www/html/ ( on 149 )
                   ll -dZ /var/www/html/      ( on 149 )
                   output: drwxr-xr-x. root root system_u:object_r:httpd_sys_content_t:s0 /var/www/html/
    (7)test        mount /dev/clustervg/clusterlv /var/www/html/ ( mount on 243: if the files created on 149 are visible, the shared filesystem works )
                   touch /var/www/html/file     ( on 243, to verify writes from the second node )
    (8)unmount     umount  /var/www/html/ ( on both 149 and 243 )
    (9)remove the old filesystem resource in the web UI
            Service Groups -- click www ( the service group; its name is shown in red here ) --  Filesystem ( Remove )
    (10)persistent mount  ( required on both managed hosts, 149 and 243 )
         find the UUID  blkid
                   /dev/mapper/clustervg-clusterlv: LABEL="wjx-a:mygfs2" UUID="1364ecd2-0c36-5e76-a506-253dcc7c8fc0" TYPE="gfs2"
                   vim /etc/fstab
                    UUID=1364ecd2-0c36-5e76-a506-253dcc7c8fc0       /var/www/html   gfs2    _netdev 0 0
                   mount -a ( verify the mount )
           df  ( mounted successfully )
           /dev/mapper/clustervg-clusterlv   1048400  397168    651232  38% /var/www/html   
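The fstab line above can be derived from the blkid output with a little text processing. A sketch with the blkid line inlined so it runs anywhere; on the nodes you would pipe the output of blkid itself:

```shell
# Extract the UUID from a blkid-style line and build the corresponding fstab entry.
line='/dev/mapper/clustervg-clusterlv: LABEL="wjx-a:mygfs2" UUID="1364ecd2-0c36-5e76-a506-253dcc7c8fc0" TYPE="gfs2"'
uuid=$(printf '%s\n' "$line" | sed -n 's/.*UUID="\([^"]*\)".*/\1/p')
fstab_entry="UUID=$uuid /var/www/html gfs2 _netdev 0 0"
echo "$fstab_entry"
# prints: UUID=1364ecd2-0c36-5e76-a506-253dcc7c8fc0 /var/www/html gfs2 _netdev 0 0
```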

