Event Driven Bug Bounty on AWS


Hussein Ayoub

A couple of weeks ago, I was setting up a web application for a friend on his private VPS. Since he wanted his site served over SSL, I went to issue certificates with Certbot. The moment I ran the Certbot command to start the certificate generation process, I noticed in my NGINX server logs that some directory brute-forcing activity had occurred.

Certificate Transparency Logs are a key component of the Certificate Transparency system. They are publicly accessible, append-only logs that record information about issued SSL/TLS certificates.
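
As a quick illustration of how public these logs are, here is a minimal sketch that pulls previously logged certificates for a domain from the crt.sh JSON endpoint (crt.sh is just one public CT search frontend and is my own choice here; the pipeline below streams new entries live with certstream instead):

import requests

def ct_domains(domain):
    # crt.sh returns one JSON record per logged certificate matching the query;
    # %25 is a URL-encoded '%' wildcard, so this matches all subdomains
    resp = requests.get(
        "https://crt.sh/?q=%25.{}&output=json".format(domain),
        timeout=30,
    )
    resp.raise_for_status()
    names = set()
    for entry in resp.json():
        # name_value can hold several SAN entries separated by newlines
        names.update(entry["name_value"].split("\n"))
    return sorted(names)

print(ct_domains("example.com"))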

The first thing that came to mind was some sort of Certificate Transparency log monitoring targeting the Let’s Encrypt CA, and it motivated me to build a similar workflow on AWS that gives us leverage in the recon process against the domains of targeted bug bounty programs.

The architecture diagram below shows the setup I followed for this article’s demo; it can also be modified to use private subnets along with a NAT Gateway to provide internet connectivity for our Fargate tasks. It consists of the following components:

- EC2 instance: hosts our Python script, which uses the certstream Python package, with an instance profile that allows publishing messages to SNS.
- Multiple SQS queues subscribed to the same SNS topic, following the fanout pattern: the SNS topic receives the new (sub)domains and sends them to SQS.
- Lambda function: acts as a trigger that consumes items from the SQS queues containing the (sub)domains and runs the Fargate tasks with the ECS boto3 client.

Each SQS queue will basically be used for a different recon tool (for the sake of simplicity, I’ll use only dirsearch in this demo).
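
If you want to script the fanout wiring itself, a minimal boto3 sketch looks like this (the topic and queue names are placeholders of my own):

import json
import boto3

sns = boto3.client('sns')
sqs = boto3.client('sqs')

# Create the topic and one queue per recon tool (only dirsearch here)
topic_arn = sns.create_topic(Name='CertStream')['TopicArn']
queue_url = sqs.create_queue(QueueName='dirsearch-queue')['QueueUrl']
queue_arn = sqs.get_queue_attributes(
    QueueUrl=queue_url, AttributeNames=['QueueArn']
)['Attributes']['QueueArn']

# Allow the topic to deliver messages into the queue
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "sns.amazonaws.com"},
        "Action": "sqs:SendMessage",
        "Resource": queue_arn,
        "Condition": {"ArnEquals": {"aws:SourceArn": topic_arn}},
    }],
}
sqs.set_queue_attributes(
    QueueUrl=queue_url, Attributes={'Policy': json.dumps(policy)}
)

# Subscribe the queue to the topic (repeat per tool to complete the fanout)
sns.subscribe(TopicArn=topic_arn, Protocol='sqs', Endpoint=queue_arn)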

For the Python script, the following code snippet connects to the certstream WebSocket and publishes (sub)domains to the SNS topic. Make sure you install certstream and boto3 using pip.

import certstream
import boto3

sns_client = boto3.client('sns')

def connect_to_certstream():
    certstream.listen_for_events(print_callback, on_error=on_error, url='wss://certstream.calidog.io/')

def print_callback(message, context):
    # Skip the periodic heartbeat frames
    if message['message_type'] == "heartbeat":
        return
    if message['message_type'] == "certificate_update":
        all_domains = message['data']['leaf_cert']['all_domains']

        if len(all_domains) > 0:
            try:
                # Publish every (sub)domain on the new certificate to SNS
                for domain in all_domains:
                    response = sns_client.publish(
                        TopicArn='arn:aws:sns:eu-central-1:accountid:CertStream',
                        Message=domain
                    )
                    print(response)
            except Exception as e:
                print(e)

def on_error(instance, exception):
    # instance is the CertStreamClient instance that raised the error
    print("Exception in CertStreamClient! -> {}".format(exception))
    # Reconnect so the stream keeps running after an error
    connect_to_certstream()

if __name__ == "__main__":
    connect_to_certstream()
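
The script’s instance profile only needs permission to publish to the topic. A minimal sketch of attaching such an inline policy with boto3, assuming a hypothetical role name, could look like this:

import json
import boto3

iam = boto3.client('iam')

# Minimal policy for the EC2 instance role: publish-only, scoped to the topic
publish_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": "sns:Publish",
        "Resource": "arn:aws:sns:eu-central-1:accountid:CertStream",
    }],
}
iam.put_role_policy(
    RoleName='certstream-instance-role',   # hypothetical role name
    PolicyName='certstream-sns-publish',
    PolicyDocument=json.dumps(publish_policy),
)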

For the Lambda trigger function, make sure the proper IAM permissions are attached to your function’s IAM role so it can interact with SQS and ECS. You can use the AWS managed policy AWSLambdaSQSQueueExecutionRole for the SQS part. The code below extracts the domain value from the SNS message wrapped in each SQS record, then triggers the ECS task (the dirsearch Docker container), passing it the domain in the containerOverrides section.

import json
import boto3

client = boto3.client('ecs')

def lambda_handler(event, context):
    try:
        for record in event["Records"]:
            # Each SQS record body is the JSON envelope of an SNS notification
            payload = record["body"]
            message = json.loads(str(payload))
            domain = message["Message"]
            # Launch one Fargate task per (sub)domain
            response = client.run_task(
                cluster='my-cluster',
                count=1,
                launchType='FARGATE',
                networkConfiguration={
                    'awsvpcConfiguration': {
                        'subnets': [
                            'subnet-xxxxx',
                        ],
                        'securityGroups': [
                            'sg-xxxxxx',
                        ],
                        'assignPublicIp': 'ENABLED'
                    }
                },
                overrides={
                    'containerOverrides': [
                        {
                            'name': 'dirsearch',
                            'command': [
                                '-u', domain,  # Not the most secure solution to exist :')
                            ]
                        }
                    ]
                },
                platformVersion='LATEST',
                taskDefinition='arn:aws:ecs:eu-central-1:xxxxxxx:task-definition/dirsearch'
            )
        return "Success"
    except Exception as e:
        raise e
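
For reference, this is roughly the shape of the event the handler receives when SQS delivers an SNS notification (trimmed to the fields used above, and assuming raw message delivery is left off, which is the default):

import json

# record["body"] is a JSON string: the envelope SNS wraps around the
# published domain, whose "Message" field holds the (sub)domain itself
event = {
    "Records": [
        {
            "body": '{"Type": "Notification", "Message": "sub.example.com"}'
        }
    ]
}

print(json.loads(event["Records"][0]["body"])["Message"])  # -> sub.example.com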

Make sure you have already created your ECS cluster and your task definition, along with the corresponding IAM task execution role for your ECS containers. I built the dirsearch Docker image, pushed it to ECR, and granted ECS access to my ECR repo through the task execution role.
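
If you would rather register the task definition from code than from the console, a minimal boto3 sketch could look like the following (the image URI, role ARN, and log group are placeholders):

import boto3

ecs = boto3.client('ecs')

# Minimal Fargate task definition for the dirsearch image in ECR
ecs.register_task_definition(
    family='dirsearch',
    requiresCompatibilities=['FARGATE'],
    networkMode='awsvpc',
    cpu='256',
    memory='512',
    executionRoleArn='arn:aws:iam::accountid:role/ecsTaskExecutionRole',
    containerDefinitions=[{
        'name': 'dirsearch',
        'image': 'accountid.dkr.ecr.eu-central-1.amazonaws.com/dirsearch:latest',
        'essential': True,
        # Ship container output to CloudWatch Logs
        'logConfiguration': {
            'logDriver': 'awslogs',
            'options': {
                'awslogs-group': '/ecs/dirsearch',
                'awslogs-region': 'eu-central-1',
                'awslogs-stream-prefix': 'dirsearch',
            },
        },
    }],
)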

You can test the whole flow by running the certstream Python script on your EC2 instance and observing a batch of Fargate containers starting in your ECS cluster.
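
A quick sanity check, assuming the same hypothetical cluster name as above:

import boto3

ecs = boto3.client('ecs')

# List the Fargate tasks currently running in the demo cluster
tasks = ecs.list_tasks(cluster='my-cluster', desiredStatus='RUNNING')
print(tasks['taskArns'])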

Once the dirsearch jobs are done, you can inspect the logs in CloudWatch, or ship them to S3 (by adjusting the dirsearch output arguments and tweaking its scripts) to query them with Athena :)
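
Once the results are in S3 and exposed as a table, kicking off a query from Python could look like the sketch below (the database, table, and columns are assumptions about how you lay out the results):

import boto3

athena = boto3.client('athena')

# Pull every path that came back 200 across all scanned (sub)domains;
# 'recon', 'dirsearch_results', and the columns are hypothetical names
athena.start_query_execution(
    QueryString="SELECT path, status FROM dirsearch_results WHERE status = 200",
    QueryExecutionContext={'Database': 'recon'},
    ResultConfiguration={'OutputLocation': 's3://my-athena-query-results/'},
)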

Certificate Transparency logs are used by many subdomain enumeration tools to widen the attack surface of your target. The above workflow can be scaled to run multiple recon tools against targets on AWS in real time and further gather assets for your bug bounty data lake :)

Thanks for tuning in!
