In cybersecurity, discovering and managing the attack surface is an essential task, and finding and analyzing domain names is a key step in attack surface discovery. This article covers four methods of domain discovery, each with Python sample code to help you understand and apply them.
1. Extracting Domain Information from the Main Domain's Certificate (Chain of Trust from Root Domain)
The TLS certificate a server presents embeds domain information, so fetching the main domain's certificate and reading its subject fields is a quick first probe:

import ssl
import OpenSSL

def get_cert_chain(domain):
    # fetch the PEM-encoded certificate the server presents on port 443
    cert = ssl.get_server_certificate((domain, 443))
    x509 = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM, cert)
    # return the (field, value) pairs of the certificate subject, e.g. (b'CN', b'example.com')
    return [value for value in x509.get_subject().get_components()]

print(get_cert_chain('example.com'))
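The subject above usually yields only the common name. The additional hostnames a certificate covers live in its subjectAltName extension; here is a minimal sketch of pulling those out with the same pyOpenSSL objects (the helper name get_san_domains is ours):

import ssl
import OpenSSL

def get_san_domains(domain):
    cert = ssl.get_server_certificate((domain, 443))
    x509 = OpenSSL.crypto.load_certificate(OpenSSL.crypto.FILETYPE_PEM, cert)
    # walk the X.509 extensions looking for subjectAltName
    for i in range(x509.get_extension_count()):
        ext = x509.get_extension(i)
        if ext.get_short_name() == b'subjectAltName':
            # str(ext) renders entries like "DNS:example.com, DNS:www.example.com"
            return [e.split(':', 1)[1] for e in str(ext).split(', ') if e.startswith('DNS:')]
    return []

print(get_san_domains('example.com'))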
2. Certificate Transparency Logs
Certificate Transparency logs publicly record issued certificates, and aggregators such as crt.sh let you search them for every certificate, and therefore every hostname, issued under a domain:

import requests

def query_crt_sh(domain):
    # %25 is a URL-encoded '%' wildcard, matching any subdomain
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    response = requests.get(url)
    try:
        return [result['name_value'] for result in response.json()]
    except ValueError:
        # crt.sh occasionally answers with a non-JSON error page
        return []

print(query_crt_sh('example.com'))
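Each name_value entry can itself hold several newline-separated hostnames, and the same name recurs across certificates, so in practice you would flatten and deduplicate the results. A small helper along these lines, reusing query_crt_sh from above:

def unique_names(results):
    # split multi-name entries and collapse duplicates across certificates
    names = set()
    for value in results:
        names.update(value.split('\n'))
    return sorted(names)

print(unique_names(query_crt_sh('example.com')))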
3. Webmaster Tools
Various webmaster-tools sites aggregate subdomain listings, and requests plus BeautifulSoup is enough to scrape their query results. The URL below is a placeholder to swap for whichever service you use, and the sketch assumes the tool lists subdomains inside <pre> tags:

import requests
from bs4 import BeautifulSoup

def query_webmaster_tools(domain):
    # placeholder address: substitute the query URL of your chosen webmaster tool
    base_url = f"https://your-webmaster-tool.example/subdomains/{domain}"
    page = requests.get(base_url)
    bs_obj = BeautifulSoup(page.text, "html.parser")
    # assumes the tool prints discovered subdomains inside <pre> tags
    return [pre.text for pre in bs_obj.find_all('pre')]

print(query_webmaster_tools('example.com'))
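Scraping HTML like this breaks whenever the page layout changes. A steadier variant of the same idea, assuming you are comfortable with its unauthenticated rate limits, is HackerTarget's plain-text hostsearch API, which returns one hostname,ip pair per line:

import requests

def query_hackertarget(domain):
    url = f"https://api.hackertarget.com/hostsearch/?q={domain}"
    response = requests.get(url, timeout=10)
    # on errors or quota exhaustion the API returns a plain error message with no commas
    if response.status_code != 200 or ',' not in response.text:
        return []
    return [line.split(',')[0] for line in response.text.splitlines()]

print(query_hackertarget('example.com'))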
4. Subdomain Brute-Forcing (Subdomain Enumeration)
Enumerate the subdomain prefixes that are common in real-world environments.
import socket

def enum_subdomains(domain):
    common_subdomains = ['www', 'ftp', 'mail', 'webmail', 'admin']
    for subdomain in common_subdomains:
        full_domain = f"{subdomain}.{domain}"
        try:
            # if the subdomain resolves, it exists
            socket.gethostbyname(full_domain)
            print(f"Discovered subdomain: {full_domain}")
        except socket.gaierror:
            pass

enum_subdomains('example.com')
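A five-entry list only scratches the surface. A realistic run feeds a larger wordlist and resolves candidates concurrently, since DNS lookups are I/O-bound; here is a minimal sketch with the standard library's ThreadPoolExecutor (the wordlist is illustrative):

import socket
from concurrent.futures import ThreadPoolExecutor

def resolve(full_domain):
    # return the name if it resolves, None otherwise
    try:
        socket.gethostbyname(full_domain)
        return full_domain
    except socket.gaierror:
        return None

def enum_subdomains_fast(domain, wordlist):
    candidates = [f"{w}.{domain}" for w in wordlist]
    # resolve many candidates in parallel
    with ThreadPoolExecutor(max_workers=20) as pool:
        return [d for d in pool.map(resolve, candidates) if d]

print(enum_subdomains_fast('example.com', ['www', 'mail', 'dev', 'test', 'api', 'vpn', 'staging']))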
Choosing tools suited to the target and environment, and digging deeper with them, always helps us discover the attack surface more thoroughly. We hope the information above is useful to you.
Final Thoughts
云图极速版 supports more than 20 domain discovery methods, including those above, and invokes them dynamically through intelligent orchestration to maximize domain discovery coverage. Beyond that, it also supports fully automated discovery and monitoring of many other kinds of enterprise asset information, including IPs, ports, services, websites, components, vulnerabilities, and security risks, automating both attack surface discovery and attack surface management.
Additional Methods
Beyond the methods above, we have put together another Python implementation of subdomain collection, which we hope you will also find helpful.
Implementation Code
# import modules
import sys
from threading import Thread
from urllib.parse import urlparse
import requests
from bs4 import BeautifulSoup


# collect subdomains from Bing search results
def bing_search(site, page):
    headers = {
        'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/'
                      '85.0.4183.102 Safari/537.36',
        'Accept-Encoding': 'gzip,deflate',
        'Accept-Language': 'en-US,en;q=0.5',
        # NOTE: the referer value was truncated in the source; any plausible Bing search URL works here
        'referer': 'https://cn.bing.com/search?q=site%3A' + site +
                   '&sc=0-14&sk=&cvid=852BA524E035477EBE906058D68F4D70',
        'cookie': 'SRCHD=AF=WNSGPH; SRCHUID=V=2&GUID=D1F8852A6B034B4CB229A2323F653242&dmnchg=1; _EDGE_V=1; '
                  'MUID=304D7AA1FB94692B1EB575D7FABA68BD; MUIDB=304D7AA1FB94692B1EB575D7FABA68BD; '
                  '_SS=SID=1C2F6FA53C956FED2CBD60D33DBB6EEE&bIm=75:; ipv6=hit=1604307539716&t=4; '
                  '_EDGE_S=F=1&SID=1C2F6FA53C956FED2CBD60D33DBB6EEE&mkt=zh-cn; SRCHUSR=DOB=20200826&T=1604303946000;'
                  ' SRCHHPGUSR=HV=1604303950&WTS=63739900737&CW=1250&CH=155&DPR=1.5&UTC=480&DM=0&BZA=0&BRW=N&BRH=S'
    }
    for i in range(1, int(page) + 1):
        url = "https://cn.bing.com/search?q=site:" + site + "&go=Search&qs=ds&first=" + str((int(i) - 1) * 10 + 1)
        html = requests.get(url, headers=headers)
        soup = BeautifulSoup(html.content, 'html.parser')
        job_bt = soup.findAll('h2')
        for j in job_bt:
            # skip result headers that carry no link
            if j.a is None:
                continue
            link = j.a.get('href')
            domain = str(urlparse(link).scheme + "://" + urlparse(link).netloc)
            if domain not in Subdomain:
                Subdomain.append(domain)


# collect subdomains from Baidu search results
def baidu_search(site, page):
    headers = {
        'User-agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/'
                      '85.0.4183.102 Safari/537.36',
    }
    for i in range(1, int(page) + 1):
        # build the search URL for the current result page
        # NOTE: the start of this line was lost in the source; a standard Baidu "site:" query is assumed
        baidu_url = "https://www.baidu.com/s?wd=site:" + site + "&pn=" + str((int(i) - 1) * 10) + \
                    "&oq=site:" + site + "&ie=utf-8"
        conn = requests.session()
        resp = conn.get(baidu_url, headers=headers)
        soup = BeautifulSoup(resp.text, 'lxml')
        tagh3 = soup.findAll('h3')
        for h3 in tagh3:
            a_tag = h3.find('a')
            if a_tag is None:
                continue
            href = a_tag.get('href')
            # Baidu result links are redirects; follow them to recover the real URL
            resp_site = requests.get(href, headers=headers)
            domain = str(urlparse(resp_site.url).scheme + "://" + urlparse(resp_site.url).netloc)
            # append any new subdomain to the list
            if domain not in Subdomain:
                Subdomain.append(domain)


# read saved results back from the file
def read_file():
    with open(r'c:\users\xxxx\desktop\xxx.txt', mode='r') as f:
        for line in f.readlines():
            print(line.strip())


# write the results to a file
def write_file():
    with open(r'c:\users\xxx\desktop\xxx.txt', mode='w') as f:
        for domain in Subdomain:
            f.write(domain)
            f.write('\n')


if __name__ == '__main__':
    # the user supplies the target domain and the number of pages to query
    if len(sys.argv) == 3:
        domain = sys.argv[1]
        num = sys.argv[2]
    else:
        print("Usage: %s baidu.com 10" % sys.argv[0])
        sys.exit(-1)
    Subdomain = []
    # run the Bing and Baidu lookups in parallel threads
    bingt = Thread(target=bing_search, args=(domain, num,))
    bait = Thread(target=baidu_search, args=(domain, num,))
    bingt.start()
    bait.start()
    bingt.join()
    bait.join()
    # write the merged results to the file
    write_file()
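Assuming the script above is saved as subdomain_collect.py (the filename is ours), it takes the target domain and the number of result pages to walk:

python subdomain_collect.py baidu.com 10

The merged results from both search engines end up in the output file configured in write_file.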
That concludes this article on building a subdomain collection tool in Python. For more on subdomain collection with Python, search IT俱乐部's earlier articles or browse the related articles below, and we hope you will continue to support IT俱乐部!