In modern web applications, much of the critical application logic lives in JavaScript files. These files can contain sensitive information such as API secret keys and internal domain URLs.
When attackers get hold of such information, they can talk to the API directly and make changes through it. In one of my target applications, the API secret that was exposed turned out to be very sensitive. Let us check out how it was identified.
Here, I would like to share the enumeration method through which the key was found. Below is the bash script I created to enumerate URLs with the waybackurls tool.
What is the waybackurls tool?
This tool fetches the list of all archived URLs for the target domain from the Internet. Please check the link below to download the tool.
Similarly, the gau tool can also be downloaded from this link.
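For reference, both tools are written in Go, so they can typically be installed with the Go toolchain; the module paths below are the ones published by the tools' authors, but verify them against the linked repositories before installing:

```shell
# Install waybackurls (by tomnomnom) and gau (by lc) via go install.
go install github.com/tomnomnom/waybackurls@latest
go install github.com/lc/gau/v2/cmd/gau@latest
```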
#!/bin/bash
source ~/.bash_profile

if [[ -z $1 ]]; then
    echo "Please provide the root domain file name!"
    exit 1
fi

rm -f allfiles.txt uniq_files.txt wayback_only_html.txt wayback_js_files.txt wayback_httprobe_file.txt wayback_json_files.txt important_http_urls.txt aws_s3_files.txt

echo "Currently waybackurls extract is in progress!!"
for i in $(cat $1)
do
    waybackurls $i >> allfiles.txt
    gau $i >> allfiles.txt
done
echo "Waybackurls extraction is complete!!"

sort -ru allfiles.txt >> uniq_files.txt
echo "Unique file also created. Please check [uniq_files.txt]"

echo "Now, we need to extract only HTML files from the list"
grep -iv -E '\.js|\.png|\.jpg|\.gif|\.ico|\.img|\.css' uniq_files.txt >> wayback_only_html.txt
echo "We have extracted all HTML files. Please check [wayback_only_html.txt]"

echo "Next is to extract JS files from the list"
grep '\.js' uniq_files.txt | sort -u >> wayback_js_files.txt
grep '\.json' uniq_files.txt | sort -u >> wayback_json_files.txt
echo "JS files have been successfully extracted **************[wayback_js_files.txt]**************"
echo "JSON files have been successfully extracted **************[wayback_json_files.txt]**************"

echo "Now extracting important urls from **************[wayback_only_html.txt]**************"
grep --color=always -i -E 'admin|auth|api|jenkins|corp|dev|stag|stg|prod|sandbox|swagger|aws|azure|uat|test|vpn|cms' wayback_only_html.txt >> important_http_urls.txt
echo "Please check file **************[important_http_urls.txt]**************"

grep --color=always -i -E 'aws|s3' uniq_files.txt >> aws_s3_files.txt
echo "Please check file **************[aws_s3_files.txt]**************"

echo "Process is complete"
echo "Now start taking screenshots selectively"
echo "The command:"
echo "-------------"
echo "cat wayback_only_html.txt | aquatone -threads 20"
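As a minimal sketch of what the script's filtering steps do, here is the same grep logic run against a few made-up sample URLs (the file names match the script; the URLs themselves are purely illustrative):

```shell
# Build a tiny sample of "unique" URLs (values invented for illustration).
cat > uniq_files.txt <<'EOF'
https://example.com/admin/login
https://example.com/static/app.js
https://example.com/logo.png
https://example.com/api/config.json
EOF

# Anything that is not a static asset or script is treated as an HTML page;
# note that '\.js' also matches ".json", so JSON URLs are filtered out here too.
grep -iv -E '\.js|\.png|\.jpg|\.gif|\.ico|\.img|\.css' uniq_files.txt > wayback_only_html.txt

# JS (and JSON, for the same reason) URLs are kept aside for secret hunting.
grep '\.js' uniq_files.txt | sort -u > wayback_js_files.txt

cat wayback_only_html.txt   # only the /admin/login URL survives the filter
```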
This script first collects the full URL list from the archived sources. It then splits that list into JS files, JSON files, and HTML pages, while filtering out CSS and image URLs.
grep --color=always -i -E 'admin|auth|api|jenkins|corp|dev|stag|stg|prod|sandbox|swagger|aws|azure|uat|test|vpn|cms' wayback_only_html.txt >> important_http_urls.txt
The grep statement above searches for sensitive string values such as admin, api, and jenkins in the HTML URL list and stores the matches in important_http_urls.txt. I checked this file for any API secret value, but had no luck. Then I applied the same logic to wayback_js_files.txt. Surprisingly, I was able to identify the secret API token of the target application.
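To make that last step concrete, here is a hedged sketch of hunting for secrets in downloaded JS files. The directory name, file contents, and key value below are all invented for illustration; in practice you would fetch the URLs listed in wayback_js_files.txt first:

```shell
# Sample "downloaded" JS file (contents and key are made up).
mkdir -p js_downloads
cat > js_downloads/app.js <<'EOF'
var cfg = { api_secret: "sk_live_EXAMPLEKEY123", host: "api.internal.example.com" };
EOF

# Search every fetched JS file for common secret-bearing identifiers.
grep -r -i -E 'api[_-]?secret|api[_-]?key|access[_-]?token|client[_-]?secret' js_downloads/
```

Any hit from this search is only a candidate; each match still needs manual review to confirm it is a live secret rather than a placeholder or test value.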
I would suggest that people who hunt for bugs check out this process and try to identify secret keys in JS files.
If you like this post, please like and share.