First, let me talk about a small part of my methodology for discovering S3 buckets. [You can check my simple methodology from here]
I collect S3 buckets from two sources using regex: the first is a list of subdomains and the second is a list of JS files.
1. Subdomains: Use subfinder, amass, assetfinder, crt.sh, findomain, and archive.org to collect all subdomains, then pass the output to httpx to filter the results and keep only the live hosts (a command-line sketch of steps 1 and 2 follows this list).
2. JS Files: Use waybackurls and gauplus to collect all archived URLs, filter for JS files with grep -F '.js', and use subjs to collect more files.
3. Regex: I send a request to each JS file and subdomain using the requests library in Python and search the responses for buckets with the re library and this regex (a sketch follows below): r"[\w\-\.]+\.s3\.?(?:[\w\-\.]+)?\.amazonaws\.com|(?<!\.)s3\.?(?:[\w\-\.]+)?\.amazonaws\.com\\?\/[\w\-\.]+"
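Here is a minimal sketch of how steps 1 and 2 could be chained together. The target domain and output file names are placeholders, not from the original writeup, and exact flags may differ between tool versions:

```bash
#!/usr/bin/env bash
# Sketch of steps 1 and 2; "target.com", subs.txt, live-subs.txt and
# js-files.txt are placeholder names.

# Step 1: collect subdomains from several sources, dedupe, then keep
# only the live hosts with httpx.
subfinder -d target.com -silent >> subs.txt
amass enum -passive -d target.com >> subs.txt
assetfinder --subs-only target.com >> subs.txt
findomain -t target.com -q >> subs.txt
curl -s "https://crt.sh/?q=%25.target.com&output=json" | jq -r '.[].name_value' >> subs.txt
sort -u subs.txt -o subs.txt
httpx -l subs.txt -silent > live-subs.txt

# Step 2: pull archived URLs, keep only the JS files with grep -F '.js',
# and let subjs extract script URLs straight from the live hosts.
cat subs.txt | waybackurls | grep -F '.js' >> js-files.txt
cat subs.txt | gauplus | grep -F '.js' >> js-files.txt
cat live-subs.txt | subjs >> js-files.txt
sort -u js-files.txt -o js-files.txt
```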
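And a small Python sketch of step 3, using the same regex with requests and re. It reuses the placeholder file names from the shell sketch above; this is an illustration of the approach, not the author's actual script:

```python
import re
import requests

# The bucket-matching regex from step 3, verbatim.
BUCKET_RE = re.compile(
    r"[\w\-\.]+\.s3\.?(?:[\w\-\.]+)?\.amazonaws\.com|(?<!\.)s3\.?(?:[\w\-\.]+)?\.amazonaws\.com\\?\/[\w\-\.]+"
)

def find_buckets(urls):
    """Fetch each URL and collect every S3 bucket reference in the body."""
    buckets = set()
    for url in urls:
        try:
            body = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue  # skip hosts that refuse the connection or time out
        buckets.update(BUCKET_RE.findall(body))
    return buckets

if __name__ == "__main__":
    targets = []
    # Live subdomains (httpx output includes the scheme) plus JS file URLs.
    for path in ("live-subs.txt", "js-files.txt"):
        with open(path) as f:
            targets += [line.strip() for line in f if line.strip()]
    for bucket in sorted(find_buckets(targets)):
        print(bucket)
```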
After collecting all S3 buckets from the subdomains and the JS files' source code, you need to determine which permissions you have (read/write/delete), and I recommend taking a look at this blog by Abdullah Mohamed.
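As a quick starting point, each permission can be tested unauthenticated with the AWS CLI. This is one common way to check, not necessarily the method from that blog, and bucket-name below is a placeholder:

```bash
# READ: can anyone list the bucket's contents?
aws s3 ls s3://bucket-name --no-sign-request

# WRITE: can anyone upload a file? (use a harmless proof file)
echo "poc" > poc.txt
aws s3 cp poc.txt s3://bucket-name/poc.txt --no-sign-request

# DELETE: can anyone remove objects? (only delete your own proof file)
aws s3 rm s3://bucket-name/poc.txt --no-sign-request
```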
By following the steps above, I found a bucket, let's call it random-bucket.com, and after opening it in the browser, I found that it was accessible for anyone to read its contents.
I collected around 992 important documents across almost 20 directories, all of them PNG, PDF, DOCX, and Excel files.
The content of these files relates to construction and building projects of the UK government, containing agent data such as email addresses, fax and phone numbers, and physical addresses, along with project drawings and maps, photos from the project locations, and other top-secret documents containing detailed information about every project.