Environment Preparation
VM configuration used in this lab
Two VMware Workstation virtual machines
OS version: CentOS 6.6 x86_64
Memory: 4 GB
Network: both machines use NAT
Disk: one extra 50 GB disk added after the OS install
Extra: VT-x enabled in the VM settings
Background
CloudStack is modeled on Amazon's cloud (AWS).
GlusterFS is modeled on Google's distributed file system.
Hadoop likewise grew out of imitating Google's big-data stack.
CloudStack is written in Java.
OpenStack is written in Python.
CloudStack's architecture resembles SaltStack's: a management server plus agents.
Download the packages
The following RPM packages can be downloaded from the official mirror:
http://cloudstack.apt-get.eu/centos/6/4.8/
Both the master and the agent need RPMs from this repository. Note the path: the 6 is the CentOS 6 series, and 4.8 is the CloudStack version.
The usage package is for billing and metering; it is not needed here.
The cli package is for calling the Amazon AWS-style API; it is not needed here either.
This lab only uses three of the packages: management, common, and agent.
Download the following packages:
cloudstack-agent-4.8.0-1.el6.x86_64.rpm
cloudstack-baremetal-agent-4.8.0-1.el6.x86_64.rpm
cloudstack-cli-4.8.0-1.el6.x86_64.rpm
cloudstack-common-4.8.0-1.el6.x86_64.rpm
cloudstack-management-4.8.0-1.el6.x86_64.rpm
cloudstack-usage-4.8.0-1.el6.x86_64.rpm
The download commands are as follows:
[root@master1 ~]# mkdir /tools
[root@master1 ~]# cd /tools/
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-agent-4.8.0-1.el6.x86_64.rpm
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-baremetal-agent-4.8.0-1.el6.x86_64.rpm
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-cli-4.8.0-1.el6.x86_64.rpm
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-common-4.8.0-1.el6.x86_64.rpm
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-management-4.8.0-1.el6.x86_64.rpm
wget http://cloudstack.apt-get.eu/centos/6/4.8/cloudstack-usage-4.8.0-1.el6.x86_64.rpm
Download the KVM system VM template. Only the master needs this template; it is used to create the system VMs.
A newer build such as systemvm64template-2016-05-18-4.7.1-kvm.qcow2.bz2 also exists, but this walkthrough uses the 4.6.0 template:
http://cloudstack.apt-get.eu/systemvm/4.6/
http://cloudstack.apt-get.eu/systemvm/4.6/systemvm64template-4.6.0-kvm.qcow2.bz2
Getting Started
Disable iptables and SELinux
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0
chkconfig iptables off
/etc/init.d/iptables stop
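To confirm the change took effect on both machines, a quick check (getenforce and the iptables init script are present on a stock CentOS 6 install; getenforce reports Permissive right after setenforce 0 and Disabled after the next reboot):

getenforce
service iptables status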
Give both machines static IP addresses
[root@master1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.151
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
[root@master1 ~]#
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.152
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
[root@agent1 ~]#
Set the hostnames to master1 and agent1 respectively
Configure the hosts file:
cat >>/etc/hosts<<EOF
192.168.145.151 master1
192.168.145.152 agent1
EOF
Configure NTP
yum install ntp -y
chkconfig ntpd on
/etc/init.d/ntpd start
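Optionally, give ntpd a minute or two and then verify it has picked an upstream server (ntpq ships with the ntp package; an asterisk in the first column marks the peer currently selected):

ntpq -p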
Check the FQDN with hostname --fqdn:
[root@master1 ~]# hostname --fqdn
master1
[root@master1 ~]#
[root@agent1 ~]# hostname --fqdn
agent1
[root@agent1 ~]#
Install the EPEL repository on both machines; the default 163.com mirror carries the epel-release package:
yum install epel-release -y
Install NFS on the master
This also pulls in its rpcbind dependency automatically.
NFS acts as the secondary storage: it serves VM ISO files to the agent and is where snapshots are stored.
yum install nfs-utils -y
Configure NFS on the master; the agent host will use it as secondary storage:
[root@master1 ~]# cat /etc/exports
/export/secondary *(rw,async,no_root_squash,no_subtree_check)
[root@master1 ~]#
Create the export directory on the master:
[root@master1 ~]# mkdir /export/secondary -p
[root@master1 ~]#
Do the same on the agent, except the agent creates a primary directory:
[root@agent1 ~]# mkdir /export/primary -p
[root@agent1 ~]#
Format the disks
On the master (we skip partitioning and use the whole disk):
[root@master1 ~]# mkfs.ext4 /dev/sdb
mke2fs 1.41.12 (17-May-2010)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 35 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@master1 ~]#
On the agent:
[root@agent1 ~]# mkfs.ext4 /dev/sdb
mke2fs 1.41.12 (17-May-2010)
/dev/sdb is entire device, not just one partition!
Proceed anyway? (y,n) y
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
3276800 inodes, 13107200 blocks
655360 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=4294967296
400 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208,
        4096000, 7962624, 11239424

Writing inode tables: done
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 37 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@agent1 ~]#
Mount the disks
On the master:
[root@master1 ~]# echo "/dev/sdb /export/secondary ext4 defaults 0 0">>/etc/fstab
[root@master1 ~]# mount -a
[root@master1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              35G  2.3G   31G   7% /
tmpfs                 931M     0  931M   0% /dev/shm
/dev/sda1             380M   33M  328M   9% /boot
/dev/sdb               50G   52M   47G   1% /export/secondary
[root@master1 ~]#
On the agent:
[root@agent1 ~]# echo "/dev/sdb /export/primary ext4 defaults 0 0">>/etc/fstab
[root@agent1 ~]# mount -a
[root@agent1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              35G  2.1G   32G   7% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1             380M   33M  328M   9% /boot
/dev/sdb               50G   52M   47G   1% /export/primary
[root@agent1 ~]#
Configure NFS and iptables
First open the official documentation covering the NFS and iptables setup:
http://docs.cloudstack.apache.org/projects/cloudstack-installation/en/4.9/qig.html
Some shops can simply turn iptables off; others have to keep it on. Here we keep it on and configure NFS accordingly.
On CentOS 6.x, set the following NFS parameters. They already exist in the config file, just commented out; appending them to the end of the file works as well.
Do this on the master:
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020
Append them to the end of /etc/sysconfig/nfs:
[root@master1 ~]# vim /etc/sysconfig/nfs
[root@master1 ~]# tail -10 /etc/sysconfig/nfs
#
# To enable RDMA support on the server by setting this to
# the port the server should listen on
#RDMA_PORT=20049
LOCKD_TCPPORT=32803
LOCKD_UDPPORT=32769
MOUNTD_PORT=892
RQUOTAD_PORT=875
STATD_PORT=662
STATD_OUTGOING_PORT=2020
[root@master1 ~]#
The iptables rules, before and after
On the master:
[root@master1 tools]# cat /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
[root@master1 tools]# vim /etc/sysconfig/iptables
Add the following rules.
Note the extra rule for port 80. It has nothing to do with NFS; it is added now so the firewall does not need editing again later, when port 80 will serve the HTTP image repository:
-A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
After adding them, the file looks like this:
[root@master1 tools]# cat /etc/sysconfig/iptables
# Firewall configuration written by system-config-firewall
# Manual customization of this file is not recommended.
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
-A INPUT -i lo -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 111 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 2049 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 32803 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 32769 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 892 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 875 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -p udp --dport 662 -j ACCEPT
-A INPUT -m state --state NEW -p tcp --dport 80 -j ACCEPT
-A INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
-A INPUT -j REJECT --reject-with icmp-host-prohibited
-A FORWARD -j REJECT --reject-with icmp-host-prohibited
COMMIT
[root@master1 tools]#
Restart iptables on the master, then start the NFS services and enable them at boot:
[root@master1 tools]# service iptables restart
iptables: Applying firewall rules:                         [  OK  ]
[root@master1 tools]# service rpcbind start
[root@master1 tools]# service nfs start
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon:                                       [  OK  ]
Starting RPC idmapd:                                       [  OK  ]
[root@master1 tools]# chkconfig rpcbind on
[root@master1 tools]# chkconfig nfs on
[root@master1 tools]#
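As a sanity check, the pinned ports from /etc/sysconfig/nfs should now be registered with the portmapper. A sketch of the check (rpcinfo comes with rpcbind); expect mountd on 892, status on 662, rquotad on 875 and nlockmgr on 32803/32769:

[root@master1 tools]# rpcinfo -p | egrep '32803|32769|892|875|662'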
Check from the agent
If the showmount command is missing, install the nfs-utils package:
yum install nfs-utils -y
Check whether the master is exporting NFS:
[root@agent1 ~]# showmount -e 192.168.145.151
Export list for 192.168.145.151:
/export/secondary *
[root@agent1 ~]#
Test that the export can be mounted:
[root@agent1 ~]# mount -t nfs 192.168.145.151:/export/secondary /mnt
[root@agent1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              35G  2.3G   31G   7% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1             380M   33M  328M   9% /boot
/dev/sdb               50G   52M   47G   1% /export/primary
192.168.145.151:/export/secondary
                       50G   52M   47G   1% /mnt
[root@agent1 ~]#
The test succeeded, so unmount it; the mount above was only a test:
[root@agent1 ~]# umount /mnt -lf
[root@agent1 ~]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda3              35G  2.3G   31G   7% /
tmpfs                 1.9G     0  1.9G   0% /dev/shm
/dev/sda1             380M   33M  328M   9% /boot
/dev/sdb               50G   52M   47G   1% /export/primary
[root@agent1 ~]#
Install and configure CloudStack
Install the management server packages on the master:
[root@master1 tools]# ls
cloudstack-agent-4.8.0-1.el6.x86_64.rpm
cloudstack-baremetal-agent-4.8.0-1.el6.x86_64.rpm
cloudstack-cli-4.8.0-1.el6.x86_64.rpm
cloudstack-common-4.8.0-1.el6.x86_64.rpm
cloudstack-management-4.8.0-1.el6.x86_64.rpm
cloudstack-usage-4.8.0-1.el6.x86_64.rpm
systemvm64template-4.6.0-kvm.qcow2.bz2
[root@master1 tools]# yum install -y cloudstack-management-4.8.0-1.el6.x86_64.rpm cloudstack-common-4.8.0-1.el6.x86_64.rpm
[root@master1 tools]# rpm -qa | grep cloudstack
cloudstack-common-4.8.0-1.el6.x86_64
cloudstack-management-4.8.0-1.el6.x86_64
[root@master1 tools]#
Install mysql-server on the master
[root@master1 tools]# yum install mysql-server -y
Edit the MySQL configuration file and add the following parameters under the [mysqld] section:
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'
The end result looks like this:
[root@master1 tools]# vim /etc/my.cnf
[root@master1 tools]# cat /etc/my.cnf
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
user=mysql
# Disabling symbolic-links is recommended to prevent assorted security risks
symbolic-links=0
innodb_rollback_on_timeout=1
innodb_lock_wait_timeout=600
max_connections=350
log-bin=mysql-bin
binlog-format = 'ROW'
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
[root@master1 tools]#
Start the MySQL service and enable it at boot:
[root@master1 tools]# service mysqld start
Initializing MySQL database:  Installing MySQL system tables...
OK
Filling help tables...
OK

To start mysqld at boot time you have to copy
support-files/mysql.server to the right place for your system

PLEASE REMEMBER TO SET A PASSWORD FOR THE MySQL root USER !
To do so, start the server, then issue the following commands:

/usr/bin/mysqladmin -u root password 'new-password'
/usr/bin/mysqladmin -u root -h master1 password 'new-password'

Alternatively you can run:
/usr/bin/mysql_secure_installation

which will also give you the option of removing the test
databases and anonymous user created by default.  This is
strongly recommended for production servers.

See the manual for more instructions.

You can start the MySQL daemon with:
cd /usr ; /usr/bin/mysqld_safe &

You can test the MySQL daemon with mysql-test-run.pl
cd /usr/mysql-test ; perl mysql-test-run.pl

Please report any problems with the /usr/bin/mysqlbug script!

                                                           [  OK  ]
Starting mysqld:                                           [  OK  ]
[root@master1 tools]# chkconfig mysqld on
[root@master1 tools]#
[root@master1 tools]# ls /var/lib/mysql/
ibdata1  ib_logfile0  ib_logfile1  mysql  mysql.sock  test
[root@master1 tools]#
Set the MySQL root password. The first command covers localhost; the second grant allows remote logins:
[root@master1 tools]# /usr/bin/mysqladmin -u root password '123456'
[root@master1 tools]# mysql -uroot -p123456 -e "grant all on *.* to root@'%' identified by '123456';"
[root@master1 tools]#
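To make sure the remote grant works, you can try a login from the agent. This is just a sketch and assumes a MySQL client is installed there (yum install mysql -y):

[root@agent1 ~]# mysql -h 192.168.145.151 -uroot -p123456 -e "select version();"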
Initialize the CloudStack database on the master
Under the hood this imports data into MySQL (run it on the master): it executes the scripts that create the databases and tables.
[root@master1 tools]# cloudstack-setup-databases cloud:123456@localhost --deploy-as=root:123456
Mysql user name:cloud                                                           [ OK ]
Mysql user password:******                                                      [ OK ]
Mysql server ip:localhost                                                       [ OK ]
Mysql server port:3306                                                          [ OK ]
Mysql root user name:root                                                       [ OK ]
Mysql root user password:******                                                 [ OK ]
Checking Cloud database files ...                                               [ OK ]
Checking local machine hostname ...                                             [ OK ]
Checking SELinux setup ...                                                      [ OK ]
Detected local IP address as 192.168.145.151, will use as cluster management server node IP[ OK ]
Preparing /etc/cloudstack/management/db.properties                              [ OK ]
Applying /usr/share/cloudstack-management/setup/create-database.sql            [ OK ]
Applying /usr/share/cloudstack-management/setup/create-schema.sql              [ OK ]
Applying /usr/share/cloudstack-management/setup/create-database-premium.sql    [ OK ]
Applying /usr/share/cloudstack-management/setup/create-schema-premium.sql      [ OK ]
Applying /usr/share/cloudstack-management/setup/server-setup.sql               [ OK ]
Applying /usr/share/cloudstack-management/setup/templates.sql                  [ OK ]
Processing encryption ...                                                       [ OK ]
Finalizing setup ...                                                            [ OK ]

CloudStack has successfully initialized database, you can check your database configuration in /etc/cloudstack/management/db.properties

[root@master1 tools]#
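A minimal check that the schema landed: the setup creates the cloud and cloud_usage databases, which should now be visible:

[root@master1 tools]# mysql -uroot -p123456 -e "show databases like 'cloud%';"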
Initialization is complete.
The file below is rewritten automatically by the initialization. It is worth a look, but nothing in it needs to change:
[root@master1 tools]# vim /etc/cloudstack/management/db.properties
[root@master1 tools]#
Start the management server. Typing cl and pressing Tab shows the available commands:
[root@master1 tools]# cl
clean-binary-files                    cloudstack-set-guest-sshkey
clear                                 cloudstack-setup-databases
clock                                 cloudstack-setup-encryption
clockdiff                             cloudstack-setup-management
cloudstack-external-ipallocator.py    cloudstack-sysvmadm
cloudstack-migrate-databases          cloudstack-update-xenserver-licenses
cloudstack-sccs                       cls
cloudstack-set-guest-password
[root@master1 tools]# cloudstack-setup-management
Starting to configure CloudStack Management Server:
Configure Firewall ...        [OK]
Configure CloudStack Management Server ...[OK]
CloudStack Management Server setup is Done!
[root@master1 tools]#
The master's firewall is configured for you: the setup restarts it and injects its own ports, so 9090, 8250 and 8080 now appear in the rules:
[root@master1 tools]# head -10 /etc/sysconfig/iptables
# Generated by iptables-save v1.4.7 on Sat Feb 11 20:07:43 2017
*filter
:INPUT ACCEPT [0:0]
:FORWARD ACCEPT [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -p tcp -m tcp --dport 9090 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8250 -j ACCEPT
-A INPUT -p tcp -m tcp --dport 8080 -j ACCEPT
-A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
-A INPUT -p icmp -j ACCEPT
[root@master1 tools]#
Below is its log. The management server runs on Tomcat underneath, so the logging is the familiar Tomcat style.
The master is best given 16 GB of RAM; with enough memory for the JVM, the service starts quickly.
[root@master1 tools]# tail -f /var/log/cloudstack/management/catalina.out
INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-2f0e1bd5) (logid:4db872cc) Begin cleanup expired async-jobs
INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-2f0e1bd5) (logid:4db872cc) End cleanup expired async-jobs
INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-fc8583d4) (logid:8724f870) Begin cleanup expired async-jobs
INFO  [o.a.c.f.j.i.AsyncJobManagerImpl] (AsyncJobMgr-Heartbeat-1:ctx-fc8583d4) (logid:8724f870) End cleanup expired async-jobs
Open the following address in a browser:
http://192.168.145.151:8080/client/
If the management page loads, the master-side installation is done.
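You can also verify from the shell that the UI is being served. A plain curl check; the service can take a few minutes to come up after cloudstack-setup-management, so retry until it returns HTTP 200:

[root@master1 tools]# curl -sI http://192.168.145.151:8080/client/ | head -1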
Import the system VM template
CloudStack relies on a set of system virtual machines for functions such as VM console access, the various network services, and managing the resources held in secondary storage.
Next we import the system VM template and deploy it to the secondary storage we created earlier. The management server ships with a script that installs these system VM templates correctly.
First locate the template file:
[root@master1 tools]# ls /tools/systemvm64template-4.6.0-kvm.qcow2.bz2
/tools/systemvm64template-4.6.0-kvm.qcow2.bz2
Run the following command on the master:
/usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
  -m /export/secondary \
  -f /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 \
  -h kvm -F
This step installs the system VM template into secondary storage. The run looks like this:
[root@master1 tools]# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
> -m /export/secondary \
> -f /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 \
> -h kvm -F
Uncompressing to /usr/share/cloudstack-common/scripts/storage/secondary/0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2.tmp (type bz2)...could take a long time
Moving to /export/secondary/template/tmpl/1/3///0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2...could take a while
Successfully installed system VM template /tools/systemvm64template-4.6.0-kvm.qcow2.bz2 to /export/secondary/template/tmpl/1/3/
[root@master1 tools]#
After a successful import the template lands here: one template file plus one template config file:
[root@master1 tools]# cd /export/secondary/
[root@master1 secondary]# ls
lost+found  template
[root@master1 secondary]# cd template/tmpl/1/3/
[root@master1 3]# ls
0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2  template.properties
[root@master1 3]# pwd
/export/secondary/template/tmpl/1/3
[root@master1 3]#
This is the template's config; it needs no changes:
[root@master1 3]# cat template.properties
filename=0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2
description=SystemVM Template
checksum=
hvm=false
size=322954240
qcow2=true
id=3
public=true
qcow2.filename=0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b.qcow2
uniquename=routing-3
qcow2.virtualsize=322954240
virtualsize=322954240
qcow2.size=322954240
[root@master1 3]#
Install the CloudStack packages on the agent
On the agent:
[root@agent1 tools]# yum install cloudstack-common-4.8.0-1.el6.x86_64.rpm cloudstack-agent-4.8.0-1.el6.x86_64.rpm -y
This pulls in qemu-kvm, libvirt and glusterfs as dependencies; all of them are installed automatically.
glusterfs is already supported as a KVM storage backend out of the box.
[root@agent1 ~]# rpm -qa | egrep "cloudstack|gluster|kvm|libvirt"
glusterfs-client-xlators-3.7.5-19.el6.x86_64
cloudstack-common-4.8.0-1.el6.x86_64
cloudstack-agent-4.8.0-1.el6.x86_64
glusterfs-libs-3.7.5-19.el6.x86_64
glusterfs-3.7.5-19.el6.x86_64
glusterfs-api-3.7.5-19.el6.x86_64
libvirt-python-0.10.2-60.el6.x86_64
libvirt-0.10.2-60.el6.x86_64
libvirt-client-0.10.2-60.el6.x86_64
qemu-kvm-0.12.1.2-2.491.el6_8.3.x86_64
[root@agent1 ~]#
Virtualization setup on the agent
Configure KVM
Two KVM components need configuring: libvirt and qemu.
Configure qemu
The qemu side is simple; there is really only one thing to change. Edit /etc/libvirt/qemu.conf and uncomment vnc_listen = "0.0.0.0". The docs also mention setting security_driver = "none", but since security_driver ships commented out (#security_driver = "selinux"), we leave it alone and only uncomment the vnc_listen line.
(Some people say these parameters are rewritten automatically when the host is added to CloudStack. I have not verified that, so I edit them by hand.)
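A one-liner for the vnc_listen change, assuming the line ships commented out exactly as in the stock qemu.conf:

[root@agent1 ~]# sed -i 's/^#\s*vnc_listen.*/vnc_listen = "0.0.0.0"/' /etc/libvirt/qemu.conf
[root@agent1 ~]# grep ^vnc_listen /etc/libvirt/qemu.conf
vnc_listen = "0.0.0.0"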
Configure libvirt
CloudStack manages virtual machines through libvirt.
To support live migration, libvirt must listen for unencrypted TCP connections; we also turn off its attempts to advertise over multicast DNS. Both settings live in /etc/libvirt/libvirtd.conf.
Set the following parameters. The docs ask you to uncomment them; simply appending them to the end of the file works too (on the agent):
listen_tls = 0
listen_tcp = 1
tcp_port = "16059"
auth_tcp = "none"
mdns_adv = 0
The command:
cat>>/etc/libvirt/libvirtd.conf<<EOF
listen_tls = 0
listen_tcp = 1
tcp_port = "16059"
auth_tcp = "none"
mdns_adv = 0
EOF
The run looks like this:
[root@agent1 tools]# cat>>/etc/libvirt/libvirtd.conf<<EOF
> listen_tls = 0
> listen_tcp = 1
> tcp_port = "16059"
> auth_tcp = "none"
> mdns_adv = 0
> EOF
[root@agent1 tools]# tail -5 /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
tcp_port = "16059"
auth_tcp = "none"
mdns_adv = 0
[root@agent1 tools]#
One more change: /etc/sysconfig/libvirtd ships with the listen flag commented out:
#LIBVIRTD_ARGS="--listen"
The docs say to simply uncomment that line; here we use the short form instead. Note that it is the letter l (as in listen), not the digit 1:
LIBVIRTD_ARGS="-l"
Verify:
[root@agent1 tools]# grep LIBVIRTD_ARGS /etc/sysconfig/libvirtd
# in LIBVIRTD_ARGS instead.
LIBVIRTD_ARGS="-l"
[root@agent1 tools]#
Restart the libvirt service and check that the KVM kernel modules are loaded:
[root@agent1 tools]# /etc/init.d/libvirtd restart
Stopping libvirtd daemon:                                  [  OK  ]
Starting libvirtd daemon:                                  [  OK  ]
[root@agent1 tools]# lsmod | grep kvm
kvm_intel              55496  0
kvm                   337772  1 kvm_intel
[root@agent1 tools]#
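If the libvirtd.conf and LIBVIRTD_ARGS changes took effect, libvirtd should now also be listening on the TCP port we configured. A quick way to confirm (netstat is part of net-tools, installed by default on CentOS 6); the output should show libvirtd bound to 0.0.0.0:16059:

[root@agent1 tools]# netstat -lntp | grep 16059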
The KVM side is now done.
Working in the UI
The default credentials are admin/password.
The UI language can be switched to Simplified Chinese.
Open the following address in a browser:
http://192.168.145.151:8080/client/
CloudStack provides a web-based UI for both administrators and end users. The views you get depend on the credentials used to log in. The UI works in most popular browsers, including IE7, IE8, IE9, Firefox and Chrome.
The login address has the form:
http://management-server-ip:8080/client/
Choose the basic networking model.

After the zone is started, the system creates two virtual machines:
the console proxy VM proxies VNC access to your instances' consoles;
the secondary storage VM serves the instance images; templates and ISOs are fetched through it.
After starting the zone, log in to the agent and you can see the two system VMs:
[root@agent1 cloudstack]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     s-1-VM                         running
 2     v-2-VM                         running
[root@agent1 cloudstack]#
The hypervisor has gained a number of vnet interfaces:
[root@agent1 cloudstack]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:feab:d5a9/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff
8: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.145.152/24 brd 192.168.145.255 scope global cloudbr0
    inet6 fe80::20c:29ff:feab:d5a9/64 scope link
       valid_lft forever preferred_lft forever
10: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fe:00:a9:fe:00:64 brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.1/16 scope global cloud0
    inet6 fe80::f810:caff:fe2d:6be3/64 scope link
       valid_lft forever preferred_lft forever
11: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:a9:fe:00:64 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:a9ff:fefe:64/64 scope link
       valid_lft forever preferred_lft forever
12: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:db:22:00:00:07 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcdb:22ff:fe00:7/64 scope link
       valid_lft forever preferred_lft forever
13: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:20:e4:00:00:13 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc20:e4ff:fe00:13/64 scope link
       valid_lft forever preferred_lft forever
14: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:a9:fe:00:6e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:a9ff:fefe:6e/64 scope link
       valid_lft forever preferred_lft forever
15: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:92:68:00:00:08 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc92:68ff:fe00:8/64 scope link
       valid_lft forever preferred_lft forever
16: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:84:42:00:00:0f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc84:42ff:fe00:f/64 scope link
       valid_lft forever preferred_lft forever
[root@agent1 cloudstack]#
Notice also that eth0's address has been moved onto cloudbr0:
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-
ifcfg-cloudbr0  ifcfg-eth0      ifcfg-lo
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-cloudbr0
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.152
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
NM_CONTROLLED=no
[root@agent1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
TYPE=Ethernet
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.145.152
NETMASK=255.255.255.0
GATEWAY=192.168.145.2
DNS1=10.0.1.11
NM_CONTROLLED=no
BRIDGE=cloudbr0
[root@agent1 ~]# ifconfig eth0
eth0      Link encap:Ethernet  HWaddr 00:0C:29:AB:D5:A9
          inet6 addr: fe80::20c:29ff:feab:d5a9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:9577 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5449 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1804345 (1.7 MiB)  TX bytes:646033 (630.8 KiB)
[root@agent1 ~]# ifconfig cloudbr0
cloudbr0  Link encap:Ethernet  HWaddr 00:0C:29:AB:D5:A9
          inet addr:192.168.145.152  Bcast:192.168.145.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feab:d5a9/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2652 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1343 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:277249 (270.7 KiB)  TX bytes:190930 (186.4 KiB)
[root@agent1 ~]#
System VMs differ from the ordinary guests created on a host; they are VMs that the CloudStack platform itself runs to carry out its own tasks:
1. Secondary Storage VM (SSVM): handles secondary-storage operations, such as uploading and downloading templates and image files and storing snapshots and volumes. The first time a VM is created, the template is copied from secondary storage to primary storage and a snapshot is created automatically. Each zone can have multiple SSVMs; if an SSVM is deleted or stopped, it is automatically recreated and started.
2. Console Proxy VM: renders VM consoles in the web UI.
3. Virtual router: created automatically when the first instance starts.
Restart the management service on the master:
[root@master1 3]# /etc/init.d/cloudstack-management restart
Stopping cloudstack-management:                            [FAILED]
Starting cloudstack-management:                            [  OK  ]
[root@master1 3]#
[root@master1 3]#
[root@master1 3]# /etc/init.d/cloudstack-management restart
Stopping cloudstack-management:                            [  OK  ]
Starting cloudstack-management:                            [  OK  ]
[root@master1 3]#
With the changes above in place, it starts downloading the built-in template automatically.
Templates can be uploaded from a local file or added by URL.
Note that local upload currently has a bug.
In some cases, a zone that is raising capacity alerts may refuse to create new VMs. We can tune parameters to allow overprovisioning:
mem.overprovisioning.factor
The memory overprovisioning factor: usable memory = total memory x factor. Type: integer; default: 1 (no overprovisioning). For example, a factor of 2 makes a 4 GB host appear to have 8 GB available.
The two pages below explain the global settings and are worth reading:
http://www.chinacloudly.com/cloudstack%E5%85%A8%E5%B1%80%E9%85%8D%E7%BD%AE%E5%8F%82%E6%95%B0/
http://blog.csdn.net/u011650565/article/details/41945433
Then find the following section: Alert settings.
Building templates and creating custom VMs
CloudStack templates come in two flavors:
1. qcow2 or raw images built with KVM
2. ISO files uploaded directly as template media
Since nginx is very popular in China, we use nginx to host the image repository:
[root@master1 ~]# yum install nginx -y
We already opened the firewall ports earlier, but you could also just stop iptables.
In a real environment it is best to run nginx on a separate server to keep load off the master.
[root@master1 ~]# /etc/init.d/iptables stop
Start nginx:
[root@master1 ~]# /etc/init.d/nginx start
Starting nginx:                                            [  OK  ]
[root@master1 ~]#
In a browser, open the master's address, i.e. the server where nginx is installed:
http://192.168.145.151/
Edit the configuration so nginx serves a directory index.
Add the following below the access_log line:
access_log /var/log/nginx/access.log main;
autoindex on;              # enable the directory listing
autoindex_exact_size on;   # show exact file sizes
autoindex_localtime on;    # show file times in local time
Double-check it; the comments can be dropped to head off any potential parsing errors:
[root@master1 ~]# sed -n '23,26p' /etc/nginx/nginx.conf
    access_log /var/log/nginx/access.log main;
    autoindex on;
    autoindex_exact_size on;
    autoindex_localtime on;
[root@master1 ~]#
Check the syntax and restart nginx:
[root@master1 ~]# /etc/init.d/nginx configtest
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@master1 ~]# /etc/init.d/nginx restart
Stopping nginx:                                            [  OK  ]
Starting nginx:                                            [  OK  ]
[root@master1 ~]#
Go to /usr/share/nginx/html, delete everything in it, and upload the ISO into that directory.
Here we upload CentOS-6.5-x86_64-minimal.iso:
[root@master1 tools]# cd /usr/share/nginx/html/
[root@master1 html]# ls
404.html  50x.html  index.html  nginx-logo.png  poweredby.png
[root@master1 html]# rm -rf *
[root@master1 html]# mv /tools/CentOS-6.5-x86_64-minimal.iso .
[root@master1 html]# ls
CentOS-6.5-x86_64-minimal.iso
[root@master1 html]#
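Before adding the ISO in the CloudStack UI, it is worth confirming that it is reachable over HTTP. A plain curl check; the Content-Length header should match the ISO size:

[root@master1 html]# curl -sI http://192.168.145.151/CentOS-6.5-x86_64-minimal.iso | head -3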
Refresh the nginx index page again:
http://192.168.145.151/

virtio is now shown right on the template page; Red Hat fully backs KVM these days, and the virtio drivers are already built into the CentOS 6 kernel.
After the install finishes and you click reboot, detach the attached ISO promptly; otherwise the VM boots from the ISO and starts installing all over again.
Inside the guest, set ONBOOT=yes on the NIC.
After the new instance is installed, the virtual router's state changes here as well.
In production, a cluster is usually 8 to 16 hosts, or up to 24; about two racks of servers is a sensible split. Beyond 24 hosts, add another cluster (cluster2).
Run ip a on the agent machine.
cloudbr0 is the bridge we created, and the vnetX devices are virtual NICs: each VM connects to a vnet device, and each vnet device plugs into the bridge.
[root@agent1 cloudstack]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::20c:29ff:feab:d5a9/64 scope link
       valid_lft forever preferred_lft forever
3: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff
    inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
4: virbr0-nic: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN qlen 500
    link/ether 52:54:00:ea:87:7d brd ff:ff:ff:ff:ff:ff
8: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether 00:0c:29:ab:d5:a9 brd ff:ff:ff:ff:ff:ff
    inet 192.168.145.152/24 brd 192.168.145.255 scope global cloudbr0
    inet6 fe80::20c:29ff:feab:d5a9/64 scope link
       valid_lft forever preferred_lft forever
10: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
    link/ether fe:00:a9:fe:00:64 brd ff:ff:ff:ff:ff:ff
    inet 169.254.0.1/16 scope global cloud0
    inet6 fe80::f810:caff:fe2d:6be3/64 scope link
       valid_lft forever preferred_lft forever
11: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:a9:fe:00:64 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:a9ff:fefe:64/64 scope link
       valid_lft forever preferred_lft forever
12: vnet1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:db:22:00:00:07 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcdb:22ff:fe00:7/64 scope link
       valid_lft forever preferred_lft forever
13: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:20:e4:00:00:13 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc20:e4ff:fe00:13/64 scope link
       valid_lft forever preferred_lft forever
14: vnet3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:a9:fe:00:6e brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:a9ff:fefe:6e/64 scope link
       valid_lft forever preferred_lft forever
15: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:92:68:00:00:08 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc92:68ff:fe00:8/64 scope link
       valid_lft forever preferred_lft forever
16: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:84:42:00:00:0f brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc84:42ff:fe00:f/64 scope link
       valid_lft forever preferred_lft forever
17: vnet6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:ba:5c:00:00:12 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcba:5cff:fe00:12/64 scope link
       valid_lft forever preferred_lft forever
18: vnet7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:00:a9:fe:01:4d brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc00:a9ff:fefe:14d/64 scope link
       valid_lft forever preferred_lft forever
19: vnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UNKNOWN qlen 500
    link/ether fe:b1:ec:00:00:10 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fcb1:ecff:fe00:10/64 scope link
       valid_lft forever preferred_lft forever
[root@agent1 cloudstack]#
Network traffic path: VM -> vnet -> cloudbr0 -> eth0
cloudbr0 has many devices bridged onto it:
[root@agent1 cloudstack]# brctl show
bridge name     bridge id               STP enabled     interfaces
cloud0          8000.fe00a9fe0064       no              vnet0
                                                        vnet3
                                                        vnet7
cloudbr0        8000.000c29abd5a9       no              eth0
                                                        vnet1
                                                        vnet2
                                                        vnet4
                                                        vnet5
                                                        vnet6
                                                        vnet8
virbr0          8000.525400ea877d       yes             virbr0-nic
[root@agent1 cloudstack]#
The master cannot reach the new VM over the network yet:
[root@master1 html]# ping 192.168.145.176
PING 192.168.145.176 (192.168.145.176) 56(84) bytes of data.
^C
--- 192.168.145.176 ping statistics ---
23 packets transmitted, 0 received, 100% packet loss, time 22542ms
[root@master1 html]#

After adjusting the security group, it can:
[root@master1 html]# ping 192.168.145.176
PING 192.168.145.176 (192.168.145.176) 56(84) bytes of data.
64 bytes from 192.168.145.176: icmp_seq=1 ttl=64 time=3.60 ms
64 bytes from 192.168.145.176: icmp_seq=2 ttl=64 time=1.88 ms
64 bytes from 192.168.145.176: icmp_seq=3 ttl=64 time=1.46 ms
^C
--- 192.168.145.176 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2087ms
rtt min/avg/max/mdev = 1.463/2.316/3.605/0.927 ms
[root@master1 html]#
For our own private cloud, the security group here is opened completely, similar to the rule we set earlier; add another rule that opens all UDP as well.
Log in to the CentOS 6.5 instance and configure DNS:
[root@localhost ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=06:B1:EC:00:00:10
TYPE=Ethernet
UUID=5f46c5e2-5ac6-4bb9-b21d-fed7f49e7475
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=dhcp
DNS1=10.0.1.11
[root@localhost ~]# /etc/init.d/network restart
[root@localhost ~]# ping www.baidu.com
PING www.a.shifen.com (115.239.211.112) 56(84) bytes of data.
64 bytes from 115.239.211.112: icmp_seq=1 ttl=128 time=4.10 ms
^C
--- www.a.shifen.com ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 815ms
rtt min/avg/max/mdev = 4.107/4.107/4.107/0.000 ms
[root@localhost ~]#
Install openssh on the 6.5 instance:
[root@localhost ~]# yum install -y openssh
To recap, look at what is now in secondary storage:
[root@master1 html]# cd /export/secondary/
[root@master1 secondary]# ls
lost+found  snapshots  template  volumes
[root@master1 secondary]# cd snapshots/
[root@master1 snapshots]# ls
[root@master1 snapshots]# cd ..
[root@master1 secondary]# cd template/
[root@master1 template]# ls
tmpl
[root@master1 template]# cd tmpl/2/201/
[root@master1 201]# ls
201-2-c27db1c6-f780-35c3-9c63-36a5330df298.iso  template.properties
[root@master1 201]# cat template.properties
#
#Sat Feb 11 15:18:59 UTC 2017
filename=201-2-c27db1c6-f780-35c3-9c63-36a5330df298.iso
id=201
public=true
iso.filename=201-2-c27db1c6-f780-35c3-9c63-36a5330df298.iso
uniquename=201-2-c27db1c6-f780-35c3-9c63-36a5330df298
virtualsize=417333248
checksum=0d9dc37b5dd4befa1c440d2174e88a87
iso.size=417333248
iso.virtualsize=417333248
hvm=true
description=centos6.5
iso=true
size=417333248
[root@master1 201]#
And on the agent:
[root@agent1 cloudstack]# cd /export/primary/
[root@agent1 primary]# ls
0cc65968-4ff3-4b4c-b31e-7f1cf5d1959b  cf3dac7a-a071-4def-83aa-555b5611fb02
1685f81b-9ac9-4b21-981a-f1b01006c9ef  f3521c3d-fca3-4527-984d-5ff208e05b5c
99643b7d-aaf4-4c75-b7d6-832c060e9b77  lost+found
Alongside the two system-created VMs there is now the VM we created ourselves, plus a virtual router:
[root@agent1 primary]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     s-1-VM                         running
 2     v-2-VM                         running
 3     r-4-VM                         running
 4     i-2-3-VM                       running
[root@agent1 primary]#