Hey! I’m Michael, but more commonly known as “codingo”. By night, I’m on YouTube making content on bug bounties for fun, and by day I work as the Global Head of Security Operations and Researcher Enablement for Bugcrowd, the original and one of the largest bug bounty platforms.
Recon has always been a passion and love of mine, and although it’s never been more documented than it is now, with more tools than we’ve ever had, it is, ironically, also the most misunderstood part of the bug hunting journey.
That’s potentially unfair, but the basis of my statement is that I see recon too often discussed as a generic term that suits all asset discovery activity. As an alternative, I’d like to propose that we would be better served breaking recon down into two core ideas:
- Mapping of an asset space, or point of time recon (POT)
- Understanding of an asset space, or recon over time (ROT)

Further to that, taking a leaf from shubs’ recent CrikeyCon 2021 talk, we should also think about the breadth and depth of recon in two distinct elements:

- Narrow recon
- Wide recon

Understanding recon vs. time
Point of time (POT) recon
I believe POT recon to be the most well documented and understood, but also the least valuable way to map out and understand a target space. The reasoning can be summed up simply: a single snapshot doesn’t tell you what the company is doing, at least not deeply. If a program is new, a single point-in-time pass can still turn up findings, but those diminish significantly as a program ages, and relying only on recon of this nature significantly increases your likelihood of duplicate findings.
The reason for this is simple: when you’re mapping out an asset space for the first time, you still have the potential to find bugs, but so does everybody else mapping out that asset space with similar approaches. In such a competitive landscape as bug hunting, that puts you on the same playing field as everybody else. And with a huge range of people specialising in different target types, there’s a good chance that whatever easy wins you’re finding have already been understood and mapped by others.
That’s not to say you won’t find bugs; there are still plenty to find. But this approach lacks the nuance of regularly performing the same recon on a target space and comparing results. In many ways you’re playing a game of luck, hoping that what you’re finding is new and undiscovered, instead of increasing your chances of identifying new items by monitoring over time.
Recon over time (ROT)
Conversely, ROT allows you to understand how a target changes over time. Open scope programs (which I affectionately refer to as yolo-scope programs) are typically run by companies with hundreds, if not thousands, of endpoints and large development teams making regular changes. By continuing to map the available asset space, you can identify where the business you’re targeting is making changes, and what kind of changes it tends to make. You can map technologies, learn them more deeply, and gain an intuitive understanding of the kinds of mistakes the business behind the program tends to make (or has made). You can then apply that knowledge across other assets to find vulnerabilities that others with a more surface-level understanding will miss.
Whilst this idea isn’t a novel one, tooling for this is still rather immature, and most of the ideal techniques for this kind of monitoring can be found in private (not yet public) projects. Personally, I’ve found the most valuable approach to tracking change to be a combination of keeping good notes, comparing hashed page content (for static endpoints and DNS entries), and using the Levenshtein distance algorithm for dynamic page content. Notably, I don’t think this is the best approach possible, and over time I expect others to solve this problem much more elegantly. If you’re reading this months (or years) after publication, I strongly advise looking for tooling that aids in or supports ROT tracking.
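To make that a little more concrete, here’s a minimal sketch of the hash-plus-Levenshtein idea in Python (using the third-party requests library; the 5% threshold and the length normalisation are my own assumptions rather than a prescribed value):

```python
import hashlib
import requests  # third-party: pip install requests

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    previous = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        current = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            current.append(min(previous[j] + 1,          # deletion
                               current[j - 1] + 1,       # insertion
                               previous[j - 1] + cost))  # substitution
        previous = current
    return previous[-1]

def has_changed(url: str, last_hash: str, last_body: str, threshold: float = 0.05) -> bool:
    """Treat a page as changed when its hash differs AND the edit distance is
    more than `threshold` of the page length, so trivially dynamic content
    (timestamps, CSRF tokens) doesn't fire an alert on every run."""
    body = requests.get(url, timeout=10).text
    if hashlib.sha256(body.encode()).hexdigest() == last_hash:
        return False  # byte-for-byte identical: nothing new to look at
    distance = levenshtein(body, last_body)
    return distance / max(len(body), len(last_body), 1) > threshold
```

In practice you’d persist the previous hash and body per endpoint (a small SQLite database or flat files is plenty) and run something like this on a schedule; for very large pages you’d also want a faster Levenshtein implementation than this textbook one.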
That’s not to say that tooling is required to do this kind of monitoring either. As mentioned earlier, understanding of a program’s landscape is key. Keeping detailed notes and monitoring company social media and news channels can be novel ways to find new and changing attack surfaces, and the unique approaches that are specific to them. Overall, taking these approaches is the most likely way to yield positive results when bug hunting, and the most likely to lead to unique first-to-find results (non-duplicates). When an asset space is new, those who are monitoring and understanding a program’s landscape are the ones most likely to hunt against it first, while others are playing a game of luck, hoping to find it among an array of point of time recon data.
Understanding recon from a point of depth
Just as important as understanding the need to perform recon over time is the need to discuss recon in terms of breadth and depth. The most elegant framing I’ve heard comes from shubs, who in his CrikeyCon 2021 talk laid these out as narrow and wide recon.
Narrow recon
Narrow recon considers the pieces that make up an asset. This can include API endpoints, specific features or functions of a website, and items that are specific to those features. You can perform narrow recon through page brute forcing (using a tool like ffuf), or by walking the application and mapping the features, functions and API endpoints available as you use it. Narrow recon doesn’t lend itself to monitoring as directly, but it can be incredibly valuable to monitor over time if you’re specifically trying to understand and hunt an asset that you know a company is actively working on and making changes to.
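To make the “walking the application” half of that concrete, here’s a small sketch using only the Python standard library (the target URL is a placeholder). It pulls links, form actions and script sources out of a single page, which is often enough to start a map of the features and endpoints behind it:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin
from urllib.request import urlopen

class EndpointCollector(HTMLParser):
    """Collect hrefs, form actions and script sources from one HTML page."""
    TAG_ATTRS = {"a": "href", "form": "action", "script": "src"}

    def __init__(self, base_url: str):
        super().__init__()
        self.base_url = base_url
        self.endpoints = set()

    def handle_starttag(self, tag, attrs):
        wanted = self.TAG_ATTRS.get(tag)
        value = dict(attrs).get(wanted) if wanted else None
        if value:
            self.endpoints.add(urljoin(self.base_url, value))

target = "https://app.example.com/"  # placeholder: the feature or app you're walking
collector = EndpointCollector(target)
collector.feed(urlopen(target, timeout=10).read().decode("utf-8", "replace"))
for endpoint in sorted(collector.endpoints):
    print(endpoint)
```

A proxy like Burp will give you much the same map with less effort as you click around; the value of a small script like this is that its output is easy to store and diff between visits.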
Wide recon
Wide recon covers the mapping of a company’s external perimeter, but not the depth of those assets themselves. This can constitute top-level domain mapping and subdomain mapping, but also open ports and services. Additional data points could include CDNs, buckets, or credentials found on services such as GitHub. The cut-off tends to be specific page content, API endpoints, or items specific to a target itself, which you may not want to track in your initial (wide recon) pass of a target. Given their broader nature, these elements serve the purpose of mapping over time better than narrow recon elements do.
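As a very small illustration of the subdomain-mapping piece (the root domain and wordlist below are placeholders, and in practice you’d lean on dedicated tooling rather than hand-rolling this), the following sketch resolves candidate hostnames and keeps the ones that answer:

```python
import socket

domain = "example.com"                                        # placeholder root domain
candidates = ["www", "dev", "staging", "api", "vpn", "mail"]  # tiny sample wordlist

alive = {}
for label in candidates:
    host = f"{label}.{domain}"
    try:
        # getaddrinfo follows CNAMEs and returns every A/AAAA record
        records = socket.getaddrinfo(host, None)
        alive[host] = sorted({record[4][0] for record in records})
    except socket.gaierror:
        continue  # no answer: not part of the visible perimeter (yet)

for host, addresses in alive.items():
    print(host, ", ".join(addresses))
```

Saved as a set on each run, output like this also slots neatly into the ROT comparisons discussed earlier: the interesting entries are the ones that weren’t there last time.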
Where automation should start, and end
Automation is another highly misunderstood rabbit hole of bug bounties. If your automation consists of running popular ethical hacking tools and looking for what everybody else is looking for (such as default subdomain takeovers), your returns are going to be low and you’ll experience a high number of duplicates. Even as recently as two years ago, automation would show value anywhere you pointed it; these days, however, most common vulnerabilities are claimed within minutes, not hours.
That’s not to say automation doesn’t have value—it most certainly does. However, when it comes to automation today, unless you’re already well established and have a depth of knowledge to draw upon, it should be used to track points of change and identify new attack surfaces. Given that speed is important, it’s also important to understand what is touching a target (active), and what isn’t (passive).
Active recon
Active recon consists of running tools where your own IP (or VPS IP) makes requests to the target. This takes many forms, ranging from directory brute forcing and crawling the website to DNS brute forcing. The challenge with this type of recon is that you’ll often be unable to perform it as regularly as you’d like and still catch points of change. It’s more important to throttle and carefully target your requests (a tailored wordlist of 50 items is far superior to one of 100k+ items) and use those to expand upon your hunting. If you want to track more regular change, monitoring of passive sources should be where you put your focus.
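To put a shape on “throttle and carefully target your requests”, here’s a hedged sketch (Python with the requests library; the target, wordlist and delay are all placeholders you’d tune to the program):

```python
import time
import requests  # third-party: pip install requests

target = "https://app.example.com"  # placeholder target
paths = ["admin", "api/v1/users", "debug", ".git/config", "backup.zip"]  # tiny, tailored wordlist
delay = 2.0  # seconds between requests; tune to the program's rules and the target's tolerance

session = requests.Session()
session.headers["User-Agent"] = "bugbounty-research-yourhandle"  # identify yourself where a program asks you to

for path in paths:
    try:
        response = session.get(f"{target}/{path}", timeout=10, allow_redirects=False)
    except requests.RequestException:
        continue
    if response.status_code != 404:
        print(response.status_code, len(response.content), f"{target}/{path}")
    time.sleep(delay)  # the throttle is the whole point of this sketch
```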
Passive recon
Passive recon is where you use data sources that monitor asset spaces for you. SecurityTrails API™ and SurfaceBrowser™ are excellent examples of this. SecurityTrails monitors target spaces for you, allowing you to identify new attack surfaces when they appear in your queries. You have plenty of flexibility in how you perform these queries, with tooling ranging from Subfinder and AMASS to Microsubs, and more.
The key benefit of monitoring targets in this manner is that you’re not risking a program ban or an escalation that could see you receiving penalties from a bounty platform. Ultimately, SecurityTrails is making the requests on your behalf and collating the data, and you can query and monitor it directly.
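If you’d rather query a source like this directly instead of going through one of the tools above, a minimal sketch looks something like the following. It assumes the SecurityTrails v1 subdomains endpoint and its APIKEY header, so check the current API documentation before relying on it:

```python
import os
import requests  # third-party: pip install requests

domain = "example.com"                          # placeholder target
api_key = os.environ["SECURITYTRAILS_API_KEY"]  # keep keys out of source control

# Assumed endpoint and response shape: GET /v1/domain/<domain>/subdomains -> {"subdomains": [...]}
response = requests.get(
    f"https://api.securitytrails.com/v1/domain/{domain}/subdomains",
    headers={"APIKEY": api_key},
    timeout=10,
)
response.raise_for_status()

for label in sorted(response.json().get("subdomains", [])):
    print(f"{label}.{domain}")
```

Run on a schedule and diffed against the previous output, a query like this is what turns passive data into ROT: the entries that are new today are exactly the attack surface you want to look at first.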
Conclusion
There’s a variety of ways to think about and approach recon, and changing the nature of dialog to be more specific is both important and useful. As a bug bounty hunter, identifying what type of recon you want to do, and when, is a great way to look at your own workflow for points of improvement.
In reading this, think about what you’re doing to monitor programs over time. Do you have a process for it? Are you tailoring your wide recon to be more niche, with targeted wordlists, and your narrow recon to be more expansive? Do you have an approach that helps you work out when to be passive, and when to be active?
Asking and challenging yourself with these questions allows you to expand upon your own methodology, and become a better bug bounty hunter.