Best bug hunting methodology for EZ money


Tom

A LOT OF CREDIT GOES TO “lostsec” ON YOUTUBE FOR THIS. Big thanks to him; you should definitely check his videos out, they’re a big help. You can find his video here: https://www.youtube.com/watch?v=Ifo1vIdfyhg

Here is my preferred recon methodology when it comes to bug bounty hunting.

Required tools for the methodology

gau
uro
curl
naabu
nuclei
subfinder
assetfinder
httpx-toolkit
secretfinder.py

To begin, my preferred method for collecting subdomains is to use OSINT tools to pull them off the internet from sources like the Wayback Machine.

For the rest of this article I will be using “domain.com” as a lab rat.

subfinder -d domain.com >> subdomains.txt
assetfinder --subs-only domain.com >> subdomains.txt

Next I have a curl one-liner against crt.sh, a public search interface for certificate transparency logs. We then use some regex to pull only the domain names from the response.

curl -s "https://crt.sh/?q=domain.com&output=json" | jq -r '.[].name_value' | grep -Po '(\w+\.\w+\.\w+)$' >> subdomains.txt

*crt.sh can be down from time to time, so this may not always work.*
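If you specifically want subdomains rather than exact matches, a common variant of this one-liner (my addition, not from the video) queries the wildcard form, where %25 is the URL-encoded % wildcard:

curl -s "https://crt.sh/?q=%25.domain.com&output=json" | jq -r '.[].name_value' | sort -u >> subdomains.txt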

Now, during this recon phase we might pull some duplicates, so use the following one-liners to filter the duplicates out and sort the list in alphabetical order.

cat subdomains.txt | sort | uniq >> 1.txt
rm subdomains.txt && mv 1.txt subdomains.txt
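As a side note, sort -u does the sorting and de-duplication in one step, and its -o flag can safely write back to the input file, so the temporary file isn’t strictly needed:

sort -u subdomains.txt -o subdomains.txt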

Now that we have all of our subdomains, we just have to do one final thing before we can start our hunting process. As you may or may not know, sometimes subdomains can be “discontinued” and may no longer be up, so we can use my preferred tool to run through subdomains.txt and check which hosts are actually alive.

cat subdomains.txt | httpx-toolkit -ports 80,443,8080,8000,8888 -threads 200 > alive.txt
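A small optional extra (my addition, and assuming your httpx-toolkit build is projectdiscovery’s httpx, as it is on Kali): flags like -status-code and -title annotate each live host, which helps when you triage alive.txt by hand later.

cat subdomains.txt | httpx-toolkit -ports 80,443,8080,8000,8888 -threads 200 -status-code -title -o alive-annotated.txt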

Now, with our “alive.txt”, you *could* go through the hosts individually and comb through them, which I would recommend doing after we finish what I’m about to show you; in the meantime you can still collect some useful information that can get you some very easy bounties.

Chapter 2: nmap, URLs, parameters and JS files

The following syntax uses naabu to take a list of hosts and scan for open ports, just in case we missed a potential admin portal or similar on a higher port. (Note that by default naabu scans the top 100 ports; pass -p - if you really want all 65,535.) Extra information on the target is always a big yes when it comes to bug hunting.

naabu --list alive.txt -c 50 -nmap-cli 'nmap -sV -sC' -o naabuports.txt

The “naabuports.txt” will contain all hosts and ports that are open.
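One caveat worth flagging (my addition, not from the video): alive.txt from httpx contains full URLs like https://sub.domain.com:8443, while naabu expects bare hostnames, so you may want to strip the scheme and port first and feed naabu the cleaned list:

sed -E 's#^https?://##; s#:[0-9]+$##' alive.txt | sort -u > hosts.txt
naabu --list hosts.txt -c 50 -nmap-cli 'nmap -sV -sC' -o naabuports.txt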

Next, we will use a tool called gau, which pulls all indexed URLs for a host from sources like AlienVault OTX and the Wayback Machine. You can check out the GitHub here and install it:

https://github.com/lc/gau

To run gau, all you have to do is provide it with input.

cat alive.txt | gau --o dirs.txt

This might take a while, and depending on how many hosts you have, you could pull upwards of 100,000 URLs from gau, so be careful if you don’t have a lot of processing power / storage.
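If the volume gets out of hand, gau can restrict which sources it queries with its --providers flag; for example, to pull from the Wayback Machine only (an illustrative narrowing, not part of the original workflow):

cat alive.txt | gau --providers wayback --o dirs.txt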

My preferred thing to do next is to sort out the parameters and JS files.

cat dirs.txt | grep "?" > params.txt
cat dirs.txt | grep "\.js$" > jsfiles.txt
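In the same spirit, you can grep dirs.txt for other potentially juicy file types; the extension list here is purely illustrative:

cat dirs.txt | grep -E "\.(json|xml|env|bak|config)$" > interesting.txt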

One final thing to do is to check “params.txt” for duplicates. For example, we might have domain.com/?id=1 plus upwards of a thousand URLs with ?id=2,3,4,5 and so on, so I use a tool called “uro”, developed by s0md3v:

https://github.com/s0md3v/uro

cat params.txt | uro > filteredparams.txt

Chapter 3: Automation and bug hunting

Once we have all of our necessary URLs, subdomains, JS files and parameters, we can start combing through them for bugs.

My first step is actually to manually go through “filteredparams.txt” and look for something that catches my eye. For example, I once found a bug where emails, phone numbers, addresses and full names were being transported in URL parameters, which you can submit as “Cleartext Transmission of Sensitive Data”.

Moving on from manual checking, we can run nuclei, a popular vulnerability scanner, on filteredparams.txt or dirs.txt with the following syntax:

nuclei --list dirs.txt -c 70 -rl 200 -fhr -lfa -o nuclei.txt -es info

The -es info at the end of the command excludes informational findings (for example, DNS info on the targets) that nuclei would otherwise return. You are welcome to include your own nuclei templates, and I recommend doing so.
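If you do add your own templates, nuclei’s -t flag points it at a template file or directory; the path below is just a placeholder for wherever you keep yours:

nuclei --list dirs.txt -t ~/nuclei-templates/custom/ -c 70 -rl 200 -o nuclei-custom.txt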

Last but not least, we run a tool called secretfinder.py on the “jsfiles.txt” file we have; it looks for things like API keys that might be hardcoded.

You first need to download the tool, which you can find here:

https://github.com/m4ll0k/SecretFinder

*DISCLAIMER: I had a bit of an issue when downloading some python3 libraries, but it shouldn’t be much of a problem. If you get any errors, just look at the output and see if you need to install anything you’re missing.*
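Typically, installing the repository’s pinned dependencies is enough; assuming a standard clone of the repo, something like this should cover it:

cd SecretFinder && pip3 install -r requirements.txt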

To run it you have to use a small bash loop, as SecretFinder doesn’t take a list as input, but that’s fine; here is the necessary command:

cat jsfiles.txt | while read url; do python3 SecretFinder.py -i "$url" -o cli >> secret.txt; done

The output in secret.txt might be a bit much, so you can comb through it using grep to only collect anything that has to do with API keys:

cat secret.txt | grep API >> apikeys.txt
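Keys aren’t always labeled in uppercase, so a case-insensitive pattern (a tweak on the grep above, not from the video) casts a wider net:

grep -iE "api[_-]?key" secret.txt >> apikeys.txt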

We can now check the keys from apikeys.txt with a script called keyhacks.sh, which tests whether a given key is valid against the matching service:

https://github.com/gwen001/keyhacks.sh

I will use an EXAMPLE API key for the keyhacks demonstration:

./keyhacks.sh Heroku_API_KEY b2868348-d2812-e2q28-e2002ed6630d

Thanks to lostsec on YouTube once again; his content is a very big help for anyone looking to get into bug hunting. Thanks for reading, and happy hunting.

-- Tom
