Mastering Subdomain Enumeration


h0tak88r

“In the Name of Allah, the Beneficent, the Merciful”

Mastering the art of subdomain enumeration is a crucial skill for those seeking to unlock the full potential of web architecture. By systematically discovering and mapping subdomains, researchers, security professionals, and curious enthusiasts can gain valuable insights, unravel digital footprints, and fortify cyber defenses. The process entails employing various techniques and tools to navigate through the labyrinthine structures of the internet, connecting the dots that form the intricate tapestry of online presence.

This is the condensed version of my write-up. The detailed one: https://h0tak88r.github.io/posts/Deep-Subdomains-Enumeration/

When talking about subdomain enumeration, there are two phases: vertical and horizontal domain correlation.

Vertical vs. Horizontal

Horizontal domain correlation

1. Discovering the IP space

# Get the ASN from websites like
https://bgp.he.net/

# find out the IP ranges that reside inside that ASN
apt-get install whois
whois -h whois.radb.net -- '-i origin AS8983' | grep -Eo "([0-9.]+){4}/[0-9]+" | sort -u > ip_ranges.txt
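If the organization announces more than one ASN, the same RADb query can be looped; the second ASN below is just a placeholder:

# Hypothetical ASN list: repeat the RADb lookup for every ASN the target owns
for asn in AS8983 AS12345; do
    whois -h whois.radb.net -- "-i origin $asn" | grep -Eo "([0-9.]+){4}/[0-9]+"
done | sort -u > ip_ranges.txt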

2. PTR records (Reverse DNS)

cat ip_ranges.txt | mapcidr -silent | dnsx -ptr -resp-only -o ptr_records.txt
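Reverse DNS also returns plenty of third-party hostnames, so a quick keyword filter helps; the keyword and output file below are placeholders:

# Keep only PTR names that mention the target organization
grep -i "example" ptr_records.txt | sort -u > horizontal_candidates.txt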

3. Favicon Search

cat urls.txt | python3 favfreak.py -o output
# Shodan query using the resulting favicon hash
http.favicon.hash:-<hash>
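If you want to compute the hash yourself instead of relying on FavFreak's output, the Shodan-style hash is MurmurHash3 over the base64-encoded favicon; a minimal sketch (needs the mmh3 and requests Python packages, URL is a placeholder):

# Compute the favicon hash used in the Shodan query above (pip3 install mmh3 requests)
python3 -c "import mmh3, requests, codecs; r = requests.get('https://example.com/favicon.ico'); print(mmh3.hash(codecs.encode(r.content, 'base64')))"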

4. Finding related domains/acquisitions

Use ChatGPT, Google, Wikipedia, and https://tools.whoisxmlapi.com/reverse-whois-search

Vertical domain correlation

Passive sources:

Subfinder [ subfinder -d test.com -o passive2.txt -all ] (it is important to add API keys to get better results; see the detailed blog if that part isn't clear)
Internet Archive → district → waybackurls
GitHub scraping → github-subdomains
GitLab scraping → gitlab-subdomains
https://chaos.projectdiscovery.io/#/ → a database where you can get all subdomains for public bug bounty programs; it is useless when you work on a private one.

1. DNS Brute Forcing [ using puredns ]

# Prerequisites
git clone https://github.com/blechschmidt/massdns.git
cd massdns
make
sudo make install

#Installing the tool
go install github.com/d3mondev/puredns/v2@latest

# Download Resolvers List
wget https://raw.githubusercontent.com/trickest/resolvers/main/resolvers-trusted.txt

# You can even build your own resolvers list
git clone https://github.com/vortexau/dnsvalidator.git
cd dnsvalidator/
pip3 install -r requirements.txt
pip3 install setuptools==58.2.0
python3 setup.py install
dnsvalidator -tL https://public-dns.info/nameservers.txt -threads 100 -o resolvers.txt

# Download dns wordlist
wget https://wordlists-cdn.assetnote.io/data/manual/best-dns-wordlist.txt

# Brute Forcing
puredns bruteforce best-dns-wordlist.txt example.com -r resolvers.txt -w dns_bf.txt
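The later steps (permutations, scraping) read from a single subdomains.txt, which this shortened write-up never builds explicitly; a minimal way to create it, assuming the passive output landed in passive2.txt as in the subfinder example above:

# Merge passive results and brute-force hits into the working list used later on
cat passive2.txt dns_bf.txt | sort -u > subdomains.txt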

2. Permutations

# Permutation wordlist
wget https://gist.githubusercontent.com/six2dez/ffc2b14d283e8f8eff6ac83e20a3c4b4/raw -O dns_permutations_list.txt
# Run
gotator -sub subdomains.txt -perm dns_permutations_list.txt -depth 1 -numbers 10 -mindup -adv -md | sort -u > perms.txt
# DNS resolve them and check for valid ones.
puredns resolve perms.txt -r resolvers.txt > resolved_perms
# Hint: Collect the subdomains that did not resolve, generate combinations from them, and resolve those too; you may get valid, unique subdomains that are hard to find otherwise (a sketch for building not_valid_subs.txt follows below).
gotator -sub not_valid_subs.txt -perm dns_permutations_list.txt -depth 1 -numbers 10 -mindup -adv -md | sort -u > perms.txt
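A minimal sketch for building the not_valid_subs.txt used above, assuming subdomains.txt is the full collected list; comm keeps the names that never resolved:

# Resolve the collected list, then keep only the names that did not resolve
puredns resolve subdomains.txt -r resolvers.txt -w resolved_subs.txt
comm -23 <(sort -u subdomains.txt) <(sort -u resolved_subs.txt) > not_valid_subs.txt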

3. Google Analytics

git clone https://github.com/Josue87/AnalyticsRelationships.git
cd AnalyticsRelationships/Python
sudo pip3 install -r requirements.txt
python3 analyticsrelationships.py -u https://www.example.com

4. TLS, CSP, CNAME Probing

go install github.com/glebarez/cero@latest
#tls
cero in.search.yahoo.com | sed 's/^*.//' | grep -e "\." | sort -u
#csp
cat subdomains.txt | httpx -csp-probe -status-code -retries 2 -no-color | anew csp_probed.txt | cut -d ' ' -f1 | unfurl -u domains | anew -q csp_subdomains.txt
# cname
dnsx -retry 3 -cname -l subdomains.txt
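CNAME answers frequently point at in-scope hosts you have not collected yet; a small follow-up (a sketch using dnsx's -resp-only output, with example.com as a placeholder apex) keeps just those:

# Keep only CNAME targets that fall under the target's apex domain
dnsx -retry 3 -cname -resp-only -l subdomains.txt | grep "\.example\.com$" | sort -u > cname_subs.txt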

5. Scraping (JS/Source code)

# Web probing subdomains
cat subdomains.txt | httpx -random-agent -retries 2 -no-color -o probed_tmp_scrap.txt

# Now that we have web-probed URLs, we can send them to gospider for crawling
gospider -S probed_tmp_scrap.txt --js -t 50 -d 3 --sitemap --robots -w -r > gospider.txt

#Cleaning the output
sed -i '/^.\{2048\}./d' gospider.txt
cat gospider.txt | grep -Eo 'https?://[^ ]+' | sed 's/]$//' | unfurl -u domains | grep ".example.com$" | sort -u > scrap_subs.txt

# Resolving our target subdomains
puredns resolve scrap_subs.txt -w scrap_subs_resolved.txt -r resolvers.txt
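Whatever resolved from the scraped names can be folded back into the working list so the recursive step below covers them too (a sketch using anew, which only appends lines that are new):

# Append newly resolved scraped subdomains to the master list
cat scrap_subs_resolved.txt | anew -q subdomains.txt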

6. Recursive Enumeration

Run the passive tools again over the parent subdomains that appear most often in the results gathered so far:

#!/bin/bash

go install -v github.com/tomnomnom/anew@latest
subdomain_list="subdomains.txt"

for sub in $( ( cat $subdomain_list | rev | cut -d '.' -f 3,2,1 | rev | sort | uniq -c | sort -nr | grep -v '1 ' | head -n 10 && cat $subdomain_list | rev | cut -d '.' -f 4,3,2,1 | rev | sort | uniq -c | sort -nr | grep -v '1 ' | head -n 10 ) | sed -e 's/^[[:space:]]*//' | cut -d ' ' -f 2); do
subfinder -d $sub -silent -max-time 2 | anew -q passive_recursive.txt
assetfinder --subs-only $sub | anew -q passive_recursive.txt
amass enum -timeout 2 -passive -d $sub | anew -q passive_recursive.txt
findomain --quiet -t $sub | anew -q passive_recursive.txt
done
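The recursive output is still unresolved; a final puredns pass (same resolver list as before, output file name is illustrative) keeps it consistent with the other steps:

# Resolve the recursive passive findings before merging them with everything else
puredns resolve passive_recursive.txt -r resolvers.txt -w passive_recursive_resolved.txt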

Finally, merge everything and web-probe the results:

cd subs/
cat horizontal/ptr_records.txt | sort -u > horizontal.txt
cat Vertical/Active/* | sort -u > active.txt
cat Vertical/Passive/* | sort -u > passive.txt
cat * | sort -u > all_subs.txt
cat all_subs.txt | httpx -random-agent -retries 2 -no-color -o filtered_subs.txt
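Optionally, the same httpx pass can also record status codes, page titles and detected technologies, which makes triaging the live hosts easier; a sketch (output file name is illustrative):

# Optional: capture status code, title and technology fingerprints for quicker triage
cat all_subs.txt | httpx -random-agent -retries 2 -no-color -status-code -title -tech-detect -o subs_triage.txt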