Well, well, the article is here — ohh! Sorry, I mean the series of articles — where we will do penetration testing, or bug bounty hunting, as the kids call it nowadays. We'll go through it practically, step by step, and we're not doing it in any vulnerability labs or on intentionally vulnerable websites. We're doing it on a real-life website that also has a vulnerability reporting program.
So, from finding each vulnerability to reporting every one individually, and also making a final report like a professional pentester would, we’ll cover it all.
Here we are, the step-by-step guide to Bug Bounty Hunting. The first step of our methodology is the IDENTIFICATION phase. Not sure what step or phase I’m talking about? You should check out and read my previous article, where I discuss the best methodology for bug bounty hunting. You can find it here: Methodology.
Now that you’re aware of the methodology, let’s start with the identification step. Make sure your attack machine is set up and you have the necessary tools. Let’s go hunting!
So, the question is: where to start and how to start? First, you will need some tools. I want you to install the following tools:
Wappalyzer extension,
Retire.js,
FoxyProxy,
Shodan extension.
Once you’ve installed everything, you’ll need your domain — your target.
Now, you may be wondering: how do you choose a target domain for testing? I’ve written a detailed article on that, and you can check it out here: HERE. But since we’re just starting out, you should focus on choosing a target domain with a Responsible Disclosure Program (RDP). I mean, don’t choose from Bugcrowd or HackerOne just yet. Start with sites that have their own Responsible Disclosure Program.
You can try searching Google Dorks like intitle:"Responsible Disclosure".
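If you want a few more search strings in the same spirit, here's a tiny Python helper. The dork variants beyond the one above are my own suggestions, not a definitive list:

```python
from typing import Optional

# Google dorks for finding sites with a Responsible Disclosure program.
# The variants beyond the first are my own suggestions.
DORKS = [
    'intitle:"Responsible Disclosure"',
    'inurl:"responsible-disclosure"',
    'intext:"responsible disclosure" "reward"',
]

def dorks_for(domain: Optional[str] = None) -> list:
    """Optionally scope each dork to one domain with site:."""
    if domain is None:
        return list(DORKS)
    return [f"site:{domain} {d}" for d in DORKS]
```

Paste any of these straight into Google and start skimming the results.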
For this article series, we’re going to test a real, active government website that also has a Responsible Disclosure program. So, enough talking — let’s gear up and really get started!
In our identification step, we need to find all the services and their versions — the CMS, the APIs, and every technology the web application uses. So, our first task is to identify the server the web application is using. There are many methods to do this, but we’ll start with the basics.
With Nmap
nmap domain -p 443,80 -sVC

So, we found out that the server is nginx. We didn’t get the exact version, but that’s okay — we’re happy to start with this.
Now, many times, you won’t get the server name; instead, you might see something like HTTP, HTTP proxy, or Cloudflare. These are indications that the web server is using a proxy server or a firewall. When you encounter this, I recommend using an IP changer tool because firewalls often block most of your requests. It’s better to use an IP changer tool to bypass this, and there are many tutorials available online.
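You can also eyeball the Server header yourself to spot those proxy/CDN signs. Here's a minimal Python sketch of that check; the indicator list is my own shortlist, not an exhaustive fingerprint database:

```python
# Rough heuristic: does the Server header suggest a proxy/CDN sits in front
# of the origin? The hint list below is my own shortlist, not exhaustive.
PROXY_HINTS = ("cloudflare", "akamai", "sucuri", "varnish", "awselb", "http proxy")

def looks_proxied(server_header: str) -> bool:
    value = server_header.lower()
    return any(hint in value for hint in PROXY_HINTS)
```

If this returns True for your target, the version you're seeing probably belongs to the middlebox, not the real web server.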
Going back to the topic — if you discover that the server name is hidden behind a proxy server or firewall, head over to Censys, input your domain, and go to the History tab. You can then nmap all the listed IPs because there’s a chance that Censys recorded the actual IP of the web server before they implemented a proxy server. This method often works well for bypassing proxy servers to find the actual server name.
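The Censys workflow above can be sketched like this — assuming you've exported the historical IPs from the History tab into a file called ips.txt, one per line (the filename and ports are my assumptions; adjust them to your target):

```python
# Sketch: turn a list of historical IPs (exported from Censys's History tab
# into ips.txt, one per line) into nmap commands to run one by one.
# Filename and port list are assumptions, not fixed requirements.
def nmap_commands(ips, ports="80,443"):
    return [f"nmap -p {ports} -sV {ip}" for ip in ips]

# Usage (uncomment to read your own exported file):
# with open("ips.txt") as f:
#     for cmd in nmap_commands(line.strip() for line in f if line.strip()):
#         print(cmd)
```

Any IP that answers with the same site content but without the proxy's Server header is a strong candidate for the real origin.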
Ohh too much!!
The break was important. Now, we’ve found the server version as well, with the help of Censys.
Next, let’s visit the website. I won’t show the domain or IP of the site I’m testing, for legal and consent reasons.
While exploring the site for 3–4 minutes, I checked my Wappalyzer extension, and what I found was very interesting:
No CMS was identified. I tested manually using tools like CMSeeK and performed a manual source code inspection, but I didn’t find any clues indicating a CMS. So, it’s confirmed that the website is fully built with React, hand-coded by the developer. This made me very happy because websites that are hand-coded are often more prone to misconfigurations and common vulnerabilities.
Woah, woah, woah! Don’t go off topic! So, here’s what we’ve found so far:
Server: nginx, version 1.14
No CMS
React is used for development

Now, what’s next? Let’s go deeper, boy!
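By the way, the manual source-code inspection I mentioned for CMS detection boils down to checks like these. This is a rough sketch with heuristics I picked myself — Wappalyzer and CMSeeK do this far more thoroughly:

```python
import re

# Minimal manual-inspection helper: look for a CMS "generator" meta tag and
# for common React markers in the page source. These markers are heuristics
# I chose, not a complete fingerprint database.
REACT_MARKERS = ("data-reactroot", "react-dom", "__REACT_DEVTOOLS")

def fingerprint(html: str) -> dict:
    generator = re.search(
        r'<meta[^>]+name=["\']generator["\'][^>]+content=["\']([^"\']+)',
        html, re.I,
    )
    return {
        "cms": generator.group(1) if generator else None,
        "react": any(marker in html for marker in REACT_MARKERS),
    }
```

On my target: no generator tag, React markers present — matching what Wappalyzer reported.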
Now let’s do some directory brute forcing and subdomain brute forcing on the main domain.
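Under the hood, directory brute forcing is just requesting candidate paths from a wordlist and watching the status codes. In practice you'd use ffuf or gobuster with a real wordlist like SecLists; the sketch below, with a toy wordlist, just shows the idea:

```python
# Toy sketch of directory brute forcing: build candidate paths from a
# wordlist. In real hunts, use a dedicated tool with a big wordlist, e.g.:
#   ffuf -u https://target/FUZZ -w wordlist.txt
# The wordlist below is illustrative only.
WORDS = ["admin", "administrator", "files", "backup", "uploads"]

def candidate_urls(base, words=WORDS):
    base = base.rstrip("/")
    return [f"{base}/{w}/" for w in words]
```

Each candidate then gets requested, and anything that doesn't come back 404 is worth a closer look.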
Oh boy, oh boy! Look at what we’ve got! I’m practically dancing right now — we found some very interesting directories! But my excitement only lasted a few minutes (about as long as it takes her to block me), because most of them ended up being 301 redirects landing on 403s.
In simple words: Forbidden.
Something you won’t believe — and I still can’t believe myself — is that I didn’t find any subdomains! But no worries, I won’t stress over subdomains. I’ll focus on testing the main domain instead.
Since the website is built using React, the best thing to do now is create a sitemap. I’ll do this using Burp Suite (I’m using the Pro edition) to perform content discovery. This tool will automatically map out the site. It’s important to note that it’s not recommended to run an active crawler on just any site. In my case, I’m only crawling a single directory of the domain.
I know you didn’t fully understand the Burp Suite part I just mentioned, so let me explain. What I did was brute-force some directories, and during that brute force I found a files directory. On that URL, I ran a content discovery scan using Burp Suite, which crawled all the directories and pages within that files directory.
From that, I found out that directory listing is enabled, exposing some sensitive files.
Misconfigurations Found: 1
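If you want to spot this in your own hunts, auto-index pages from nginx and Apache are easy to recognize in the response body. A minimal heuristic check (the markers are my own choice, not a guarantee):

```python
# Heuristic check for an exposed directory listing: nginx and Apache
# auto-index pages typically contain "Index of /" in the title/heading,
# and Python's http.server says "Directory listing for". These markers
# are heuristics, not a guarantee.
def is_directory_listing(html: str) -> bool:
    lowered = html.lower()
    return "index of /" in lowered or "directory listing for" in lowered
```

Run it over the bodies of any 200 responses your brute force turned up, and you'll triage listings in seconds.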
Now that we’ve found out that directory listing is enabled, this falls under the Misconfiguration phase as per our methodology, if you remember. I’ll leave it for now and move forward to identify which APIs the website is using for data transmission.
Finding APIs
To find out which APIs the website uses, the best place to start is the login and registration pages, where data is obviously pushed and pulled through APIs. So, I did just that. By intercepting some requests during the registration and login process, I was able to identify the API being used.
Take a look at this:
I was able to find out that GraphQL was used for data transmission.
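How do you recognize GraphQL in an intercepted request? Typically it's a POST with a JSON body containing a "query" key, often to a path like /graphql (that path is an assumption, not something I'm confirming about this target). Here's a sketch of that spotter, plus the classic introspection probe you could replay against such an endpoint:

```python
import json

# The classic GraphQL introspection probe. If the endpoint answers this,
# introspection is enabled and you can map the whole schema.
INTROSPECTION_PROBE = json.dumps(
    {"query": "{ __schema { queryType { name } } }"}
)

# A request body "smells like" GraphQL when it's JSON with a "query" key.
def smells_like_graphql(body: str) -> bool:
    try:
        payload = json.loads(body)
    except ValueError:
        return False
    return isinstance(payload, dict) and "query" in payload
```

Feed it the raw bodies you capture in Burp's proxy; a form-encoded login POST returns False, a GraphQL call returns True.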
So, what have we found till now?
Server: nginx 1.14
CMS: none (made with React)
API: GraphQL

We also found some files like package.json, and a testing API that executes commands on the server — that’s a hell of a finding! We’ll go deeper into this in phase 3; for now, read on.
We’ve identified many things, and you might think, “No we don’t — we just found GraphQL and nginx.” But no, we’ve uncovered much more than that. The fact that directory listing was enabled, exposing sensitive data, already tells us that the website is highly vulnerable. We also found directories like admins, administrator, etc.
We haven’t gone deeper into that yet, but we will explore more in the Misconfiguration phase. In this phase, we only gathered information about the target, and we already know it’s weak, since a package.json file and testing APIs are exposed, leaving a lot of scope for testing.
Now, the really interesting part of the hunt begins — the Misconfiguration phase. Here, we’ll not only identify vulnerabilities but also start reporting each one we find. We’ll test them one by one. The next article will be published soon, within a few days — good things take time!
In the meantime, try doing all these steps on your own target. I haven’t shown you beginner-level things like how to use Burp Suite or install extensions — you should be able to handle that if you’re on this level.
So, keep learning, keep struggling,
till then,
Keep Hunting!