How to Get into the Bug-Bounty Biz: The Good, Bad and Ugly


Experts from Intel, GitHub and KnowBe4 weigh in on what you need to succeed at security bug-hunting.

Zero-day disclosures, bugs that become publicly known before a fix exists, can have potentially catastrophic results. One of the best ways to combat them is by discovering them before the bad guys do.

Some of the biggest tech brands on the planet have been pummeled by a rash of high-profile zero-day exploits. In the past handful of weeks, Apple announced a patch for its macOS bypass bug and rushed four out-of-band fixes for zero-days under active attack; Chrome’s zero-day was posted on Twitter in mid-April; and of course the Microsoft Exchange zero-day attack is still fresh.

Threatpost invited zero-day experts to dig beyond the headlines, including Katie Trimble-Noble, the former DHS official who runs Intel’s bug-bounty program; Greg Ose, who runs GitHub’s bug-bounty program; and James McQuiggan, a security awareness advocate for KnowBe4.

During the conversation, the panel discussed the lifecycle of a zero-day vulnerability, the inner workings of bug-bounty programs and tips for researchers looking to break into big-time bug hunting. They even threw in a couple of predictions for good measure.

The entire conversation, on the Economics of 0-Day Disclosures: The Good, Bad and Ugly, was recorded and can be viewed in its entirety on-demand for free. What follows is a lightly edited transcript.

Also visit the Threatpost Webinar Section for all on-demand and upcoming events. 

Becky Bracken: Welcome to the March edition of the Threatpost webinar series. I’m Becky Bracken, a journalist with Threatpost and I will be moderating our discussion today.

This month we are taking a deep dive on zero-day vulnerabilities. Attackers are looking for these sorts of vulnerabilities.  So we’ve gathered the best experts we can find to talk about what that actually means.

Let’s go ahead and meet our panel. First, we have James McQuiggan, a security awareness advocate and a college instructor at Valencia College in the Engineering Computer Programming and Technology Division.

We also have Katie Trimble-Noble, who currently runs Intel’s bug-bounty program. But prior to this, she was a section chief of vulnerability management, working with both the Department of Homeland Security and the Cybersecurity and Infrastructure Security Agency (CISA).

Finally we have Greg Ose. Welcome, Greg. He is the director of product security engineering at GitHub, and he also runs the bug-bounty program at GitHub.

So welcome to all of you. Thanks for being here.

Threatpost’s Webinar Series

Before we get started, I want to explain the idea of these webinars. The idea of these discussions is to really dig down into the headlines, and we want these to work for you. There’s a widget that should be in the upper right-hand corner of your screen, where you can submit your questions.

We’ve got an hour with some of the best in the business, so let’s make sure and get your questions answered. Send those early and send those often. My editor, Tom Spring, is on the other end of those moderating and he’s ready to help get your questions answered.

Poll Question

Before we get into all the details, I wanted to ask a quick poll question of the audience, just to see where you guys are.

Poll question: If a vulnerability was discovered in your system, does your organization have something in place to receive reports and respond?

As the answers come in, it looks like a nice bell curve. We’ve got 17 percent of you right now saying, “totally, we are ready to rock.” Another 21 percent are not really sure. But 63 percent of you are in the middle, saying, “we’ve got something, but it could use some improvement.”

So, I think that’s a great place to start. James, let’s start with you, and maybe you can give us a good overview of the lifecycle and the economy out there that’s driving interest in these vulnerabilities.

0-Day Disclosure Lifecycle: What is a Zero Day?

James McQuiggan:  Definitely, and thank you, Becky, for the invite and coming here to talk to everybody today. I’m really excited.

It’s interesting to see from the poll that a lot of people are in that middle area. I think as security professionals we’re always wanting to improve; we never feel it’s perfect. I don’t think we can ever get to perfect, but the fact that we’re always striving to improve, that’s good. So, looking at a zero day: I don’t have any slides, it’s just me talking, so you guys are all stuck looking at this joyous face.

So, as Becky introduced, I teach college as a side hustle; it’s my evening job here in Orlando. And one of the things I always do when I’m teaching students new topics and concepts is baseline everybody. I have a feeling everybody in the audience probably knows what a zero day is, but I’m going to give the definition anyway.

We’re going to talk about some basic concepts just to make sure that everybody’s on the same page. And come at me if you don’t like the definition.

A zero day, essentially, is a software vulnerability with no patch or fix available when it’s publicly disclosed. If somebody discovers it and there’s no patch, you’ve got yourself a zero day on your hands. It’s useful, of course, for attacking organizations and for penetration testing, whether organizations are going out looking for them or discovering them on their own.

I tried to go back and see when the first zero day was discovered. The date I found was March 2003, a Windows DLL vulnerability in Windows 2000, but I didn’t love that answer; I think there’s probably an earlier one out there somewhere.

So when we’re dealing with vulnerabilities, a vulnerability is a flaw in either hardware or software that allows an attacker to gain access. The exploit is the attack that’s used to leverage that vulnerability. So, when we have a zero-day vulnerability, it’s an unknown or unpatched vulnerability.

And a zero-day exploit is the exploit, the program or tool, that’s used to attack that unknown vulnerability. So, now that we’ve done some baselining, let’s look at the lifecycle of a zero day.

In the beginning, hardware or software gets created. It gets developed by the developers, and maybe they’re rushing to get it out to market because sales wants to sell it. But they create the product and it gets put out there.

Once it’s out, whether that’s an operating system, an application, a driver, a browser extension, whatever gets put out there, security researchers, whether they’re good, bad, indifferent, white, gray or black hats, are figuring out: OK, what does it take to pick apart this piece of software? They can use a variety of different tools. They’ll reverse engineer it. They might even drop down to assembly language, breaking it down to the bits and bytes, as low as it can go, to have a look at it.

What makes up that software? What makes hackers tick in general? And I know we’ve probably got some DEF-CON folks out there. Hackers are good people too. And there are people that like to take things apart. I took my TV apart. Didn’t make my parents happy. But I took it apart when I was six years old and put it back together. Wanted to see how it worked.

That’s the definition of a hacker too.

But essentially, they’re going to go through and try to find vulnerabilities, or throw all kinds of different tools at it. They’ll look to see if there’s a way to overload it, buffer overflows, those kinds of techniques, to try to gain access or a foothold, or disrupt its operation, and find that vulnerability. There are different types of zero days out there: remote code execution, privilege escalation, rootkits, denial of service, those kinds of things. Anything they can do to disrupt or shut it down.

You might even have a bug within a web application, such as cross-site scripting. There’s a great resource out there, Exploit Database, which has collected a whole variety of different exploits that you can go through and work with.
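As a purely illustrative sketch (not something the panel walked through), the reflected cross-site scripting class James mentions boils down to user input being echoed into a page without encoding. A minimal, hypothetical Flask app shows both the bug and the fix:

```python
# Minimal, hypothetical Flask app illustrating reflected cross-site scripting (XSS).
# Illustration only; the routes and names are made up for this example.
from flask import Flask, request
from markupsafe import escape

app = Flask(__name__)

@app.route("/search")
def search_vulnerable():
    # BAD: the query string is reflected straight into the HTML response, so
    # /search?q=<script>alert(1)</script> executes script in the visitor's browser.
    q = request.args.get("q", "")
    return f"<h1>Results for {q}</h1>"

@app.route("/search-fixed")
def search_fixed():
    # FIX: escape user-supplied input before embedding it in markup.
    q = request.args.get("q", "")
    return f"<h1>Results for {escape(q)}</h1>"
```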

So, looking at the lifecycle, when a security researcher discovers a vulnerability, they can then alert the vendor, and there are different ways that that can be done.

Usually, it’s through a security@ email address at the company, or they’ll have some sort of process where you can alert them.

Once they alert the vendor, they’re typically going to give them 90 days to go through, patch it and fix the issue. Other security researchers may email and not get anything back. They may not get a response. So they might reach out to a journalist and go, “Hey look, I’ve got this zero day, but nobody’s doing anything about it.” Then the journalist calls. And journalists have an awesome power, because they go, “Hi, we’ve discovered this vulnerability in your product, what do you want to do about it?”

Then, of course, they all scramble, and get it patched.
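One lightweight convention for advertising that reporting channel (not something the panel covers, but widely used) is a security.txt file served at /.well-known/security.txt, per RFC 9116. A hypothetical example:

```
# Hypothetical /.well-known/security.txt advertising a disclosure channel (RFC 9116)
Contact: mailto:security@example.com
Expires: 2026-12-31T23:59:59.000Z
Policy: https://example.com/security/disclosure-policy
Preferred-Languages: en
```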

And I know that Katie is an expert when it comes to CVEs, so I’m not going to dive in too much on those, but essentially a CVE is the ID assigned to that vulnerability. Organizations, especially vendors, can actually go in and reserve blocks of those CVE IDs as well, so that they’re ready to go when they need to alert the public about vulnerabilities.
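For context on what those IDs look like in practice, published CVE records can be pulled programmatically. Here is a hedged Python sketch against NIST’s NVD 2.0 API; the endpoint, parameter and response fields are assumptions to verify against the current NVD documentation:

```python
# Hedged sketch (not from the panel): pull a published CVE record from NIST's
# NVD 2.0 API. Endpoint, parameter and field names are assumptions; verify
# against the current NVD API documentation before relying on them.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def fetch_cve(cve_id: str) -> dict:
    """Return the raw NVD JSON record for a single CVE ID."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    # CVE-2021-26855 is one of the Exchange zero-days from the era this
    # webinar discusses, used here only as a lookup example.
    record = fetch_cve("CVE-2021-26855")
    for item in record.get("vulnerabilities", []):
        cve = item.get("cve", {})
        descriptions = cve.get("descriptions", [{}])
        print(cve.get("id"), "-", descriptions[0].get("value", "")[:120])
```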

Once the vulnerability becomes public, a lot of the time off it goes, slipping over to the dark side. And in some cases those security researchers will find that they can get a lot more money selling it on the Dark Web.

Now, it used to be, years ago, that these were very lucrative and there were a lot of them out there. A lot of hackers have shifted now to going through brokers and other channels, whether it’s a criminal organization or something else, that can basically buy the exploit off them and then go ahead and leverage it.

You’ve also got access-as-a-service models, where they’re leasing it. It’s kind of funny when you think about the business models they’re taking with this: years ago, you’d go buy a car, and now you can lease a car; here, it acts as a service, where you pay to utilize the service.

They still hold it, and you’re able to leverage that exploit to gain access into an organization, attack it, launch ransomware, conduct espionage or theft, all those different types of capabilities. But a lot of those Dark Web markets now are extremely difficult to get into; they’re very limited. Usually it’s like the Mob of years ago, because they don’t want to get caught. If you have something to sell, you’ve got to be vetted.

So a lot of times, people end up utilizing legitimate third-party companies that will pay for those different zero days that are available out there. You’ve got the Zero Day Initiative, HackerOne, a lot of crowdsourcing groups that are up there, and usually with them, what you’re doing is going through a process.

Then they disclose it to the organization, so they give them enough time to fix it. Then the security researcher will get some type of payment for that, depending on the vulnerability, depending on the organization that’s impacted.

Same thing with the cybercriminals: if you’ve got a Facebook or iOS zero day, those are really, really expensive ones to get, and a lot of folks will pay top dollar for those. Upwards of millions of dollars is what you could get if you were able to find one of those lucrative ones and sell it on the Dark Web. If you’re going through some of the other channels, they’ll still pay out really well; it just may not be as much as on the Dark Web. But, again, the Dark Web route is a lot harder and extremely difficult.

Greg Ose: As I look at my role to protect GitHub the company, the first step from my point of view is asking, “how do we make it the easiest way possible for someone externally, outside of the company, to come to us with a security vulnerability?”

BB: That leads us so perfectly into our next area of discussion, vulnerability detection, because I think this is where a lot of people get hung up; there are so many ways to attack this problem. And I know, Katie, you said that over at Intel you have different teams of people looking at this from different phases and from different angles. So maybe talk about how people with the biggest pockets are able to tackle this.

How to Hunt for Security Bugs

Katie Trimble-Noble: The security ecosystem is really about defense in depth, sometimes called the Swiss Cheese Model. There are multiple different ways to crack the egg. And, in fact, you really don’t want to just use one way.

There are bug-bounty programs, but also at Intel, we have quite a few different programs that are available.

There are also vulnerability-disclosure programs, that’s your security reporting that we’ve talked about a little bit before. It’s kind of “see something, say something.” If you see something, please report it, so it can be fixed.

But then there are also agreements with academic institutions. We have a thing called Intel Labs, where we work with academic institutions all over the world, trying to make sure that we’re providing the best product that we can back to our customers. We also have open assessments or pen tests before a product goes out on the market.

A lot of places have Offensive Security Research teams that are internally trying to find these bugs, and then you also have the bug-bounty programs, which is your crowdsourced, open to the public approach.

Bug bounties come in lots of different flavors; I hate to put any into one particular bucket. There are private bounties, there are public bounties, there are challenges, time-boxed challenges. You also have static programs. So there are a lot of different ways these bug-bounty programs exist.

I run the bug-bounty program here at Intel. We look at it as a co-operative relationship with the research community, right? We want to incentivize and reward positive research behavior. When somebody comes to us and says they’ve put all this time and effort into researching this vulnerability, we want to acknowledge that. So we oftentimes pay for the vulnerabilities that are submitted to us, and we have a scope that’s publicly available.

People can see what we are looking for, what we are really interested in, and then focus their research on that and then report it back to us. A lot of companies do this. This is becoming more and more prevalent.

BB: And there are a lot of great ways to respond and get at the vulnerability-identification process. So, Greg, talk us through what the best practices are for setting that up. What is the mechanism that needs to exist?

Best Practices for Vulnerability Disclosure

GO: Yeah, I think it all depends. I think it’s about the scope and the maturity of your underlying program. It was like, OK, where do we want people to report vulnerabilities to us? We want to get those in the door. And we want to be able to not only say, “contact us at our security@ email,” or “submit this to the bug-bounty program.” What you also need to have in place is the other end of that conversation.

And I think a lot of times it’s easy to say, “We’ll send it to our support inbox and our support will triage that out and get it in front of engineers.” But really, what we’ve found to be successful at GitHub is to have people matching the same skill set that’s used to find these vulnerabilities be the ones on the other end of that conversation. So, when we get a report, we can really quickly identify: OK, is this valid? What’s the impact of this?

And using the internal relationships we have from the internal security assessments we do at GitHub, we’re able to say, “Oh, we know the engineering team that worked on this.” It really provides a very clean pipeline; the report doesn’t have to pass through too many hands to go from a researcher who’s interested in reporting this vulnerability to the hands of an engineer who’s actually in a position to fix it.

So I think that’s one of the most important things, whether it’s a bug-bounty program, or if it’s a vulnerability-disclosure program — is to have that pipeline figured out. Figure out what you’re going to do when you get a vulnerability in.

And just having those resources, and setting expectations, is important. Because as a security team it’s easy to sell the idea of a bug-bounty program. We need to fix these vulnerabilities. We need to get them in.

But then you’re stuck with a pile of vulnerabilities and no engineers: oh, now we have all this work that we need to prioritize, because these are high-priority vulnerabilities. It’s about setting that expectation. So, set up that pipeline, and then also make sure you have the resources throughout that pipeline to handle the work.

BB: Anything to add to that? I’m sure you do, Katie, considering this is your jam. Creating a program? How do you initiate that? A lot of times, this is a sales job for security teams, to go in and say, “Hey, guess what? You need to make another investment, in addition to software, into manpower and all that.” So how do you take that seed and start to grow it in a business that’s not as big as Intel?

Setting Up a Bug-Bounty Program

KN: Yeah. So. There’s been a lot of interest recently in bug-bounty programs in the technology-vendor space. But if you’re not a technology vendor, and you’re still an industry partner, if you want to create a bug bounty, you can absolutely do that.

There are a lot of companies that do that. There are several in the finance industry that have bug-bounty programs. They’re not technology vendors. There are some in the aviation sector that have bug-bounty programs. So you don’t have to be a tech vendor to have a bug-bounty program.

It does take a little bit of work to set up. And we always recommend, as Greg was saying, that you have some of those kinks worked out before you turn it on. I always liken this to the idea that you bought a house that was built in 1858 and it has no indoor plumbing, and the county keeps telling you they’re going to turn on water to your house. They’ve dredged up the street, installed the pipe and they’ve told you they’re going to turn it on. And, if you do not have those internal pipes set up in your house, that water is just going to flow into your basement.

And that’s not a good look for anybody.

So having that partnership setup within the business is important. And you need to make sure that you have your communications folks there. You need to make sure that you have product engineers all lined up. We have a product security incident response team at Intel, that’s quite common within the industry. So engineers do that initial triage and they foster the vulnerabilities all the way through, through mitigation, and ultimately, through disclosure.

So, having those set up in advance is super important. It does take a little bit to change the mindset. And I like to start with the seed of, “Look, these issues are here, whether we know about them or not, so wouldn’t it be better to know about it?”

And then, how do we address the situation once we know about it? We want to be as open, honest and transparent as possible. We want to build that trust within the community, and I find that happens when you walk into a conversation and you say, “hey, I’m looking to be better.”

When I worked for the government, and now when I work for the private sector, the customer or the taxpayer demands better from me tomorrow than they got from me today, and I have that obligation to deliver that.

BB: Well, James, what do you think? What do you see? How do we make this work internally, what are some of the best practices you’ve seen out in the wild?

JM: Echoing what Katie was talking about regarding the product side, I spent 18 glorious years working over at Siemens prior to being at KnowBe4. And one of the — I might get shot for saying this — but one of the most interesting experiences during my 18 years there was back in July 2010, when a little thing called Stuxnet was discovered, and I got to work somewhat closely with our product CERT team, because I was responsible for monitoring systems for our gas turbines.

And so, this brought it all to the forefront and a lot of it came down to having the communication, having the transparency, with our product cert team. We had a small group of folks doing product security for Siemens and it pretty well exploded within the organization. Because of that, that one incident, it led to a whole product-security board.

But one of the key things that I took away from a lot of it was understanding the importance of communication, both externally and within the organization. Because you’re going to have people knocking on the door externally, and even internally. And having that communication, having that program in place, to be able to effectively alert the public and then have the mitigations and everything else, goes a long way.

BB: We have a good question that I think we should probably just take now. This person asks, “What skills should we learn to detect vulnerabilities that haven’t been identified, i.e., new malware being encoded?” What do you guys think about that?

Greg, do you have any ideas on how you keep your skills up so that you are up-to-date on the latest?

What Skills Are Needed for the Bug Bounty Game?

GO: Yeah, I was going through and thinking about what skill sets someone would need to get into bug bounty, to start being a bug-bounty researcher and things.

It’s very close to the same skills we look for, internally, as part of our internal product security team, where we have people doing code testing, code review and things like this.

It differs based on what your product might be, on what your attack surface as a company might be. But a lot of the skill sets that we see, both internally and externally, are around web application security: staying on top of the latest web application security trends and new vulnerabilities, knowing the basics there, and digging in and understanding an application, how its authorization works, and how the pieces of a large application tie together.

If you’re looking at how you find these vulnerabilities, it’s about targeting. It is a core, deep understanding of web application security, and vulnerabilities, and how to find them. And I think also, really digging into the specific product or target you are looking at.

Some of the best researchers we have, and the best folks we have internally, are experts on GitHub, the product. They know all the features, how they work, how they interact together and it’s really in those areas where we see a lot of our great vulnerabilities being reported internally and externally. It’s not your basic cross-site scripting or SQL injection. It’s really like, oh, there’s these three different authorization things at play and they all come together to make this vulnerability.

So I think it’s really about knowing the product that you’re trying to secure, or trying to find vulnerabilities in, and then really understanding the technology and the cybersecurity vulnerabilities that typically exist in those products.
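To make the point about authorization logic concrete, here is a small, hypothetical Python sketch (an illustration, not GitHub code) of an insecure direct object reference, the kind of bug that only surfaces once you understand how a product ties objects to users:

```python
# Hypothetical illustration of an insecure direct object reference (IDOR):
# the handler forgets to check that the object belongs to the requesting user.
from dataclasses import dataclass

@dataclass
class Report:
    id: int
    owner_id: int
    body: str

REPORTS = {1: Report(id=1, owner_id=42, body="private data")}

def get_report_vulnerable(report_id: int, current_user_id: int) -> Report:
    # BAD: any authenticated user can read any report by guessing its ID.
    return REPORTS[report_id]

def get_report_fixed(report_id: int, current_user_id: int) -> Report:
    # FIX: authorization must tie the object back to the requesting user.
    report = REPORTS[report_id]
    if report.owner_id != current_user_id:
        raise PermissionError("not your report")
    return report
```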

BB: And Katie, we’ve got a couple of questions here about how to break into bug-bounty programs. What advice would you give? Someone else wants to know how to get on the right path for cybersecurity certification. And another asks: How do I start participating, and where do I need to start? So, help them out and give them some good advice.

KN: The beauty of the cybersecurity ecosystem is that it’s really diverse. It’s very open. And by its nature, it’s inclusive. Like, no one cares who you are or where you’re from. They care about the tech. And so I would say the thing to keep in mind is that you’ll be able to get experience from all kinds of stuff. Right?

Conferences or certifications, ongoing education, and in some cases, traditional education is great. Certifications are a great way to start. There are a lot of really great courses out there that you can take for little or nothing. There’s a lot of training that’s free and available.

If you do some research, I know there’s always the paid sort of boot camps that cost thousands of dollars. But you can study for those certifications on your own and take the test. I always tell people the biggest thing that you should do is stay curious and constantly be looking for opportunities.

So look for conferences or meet-up groups in your area, or, barring that, if that’s not something that’s open to you, there are meet-up groups online. I know the crowdsource platforms like Bugcrowd and HackerOne have a lot of training that is available to people. Those things are often free and available. There are hack-the-box practice games that you can play, and those things will really help develop your skill set.

So, as you go through, you just have to remember to stay curious and stay motivated. You’re going to come up against a wall every once in a while. But keep with it, because that motivation is what’s going to get you through in the end.

And I can’t harp on that enough. Pick up whatever you can from wherever you can but start maybe with looking at different products. HackerOne and BugCrowd, they list all of the companies that they’re working with. And those companies have scopes and different instructions that are available and maybe different challenges that are going on. And just see if any of those things meet your skill set currently and then move forward as you develop.

BB: James, what would you tell one of your students who came up to you with these sorts of questions? What advice would you give them?

JM: A lot of the students that are graduating, getting out into the community and looking for that job, I’ve talked to them, and they’ve put out hundreds of resumes and had interviews, and they’re just not getting the positions. The jobs are going to other people, because a lot of it comes down to networking.

Katie mentioned the different meet-up groups, whether online or in person, and a lot of what’s going on in cybersecurity in general, as well as with these other programs, is getting connected with somebody. Find somebody that can work as a mentor, or somebody that you can talk with; take them out, join a meet-up, grab a cup of coffee on the weekend. I’ve got several people that I mentor, but then there are other people that mentor me. And I’m always asking, what’s going on down river? What are you seeing? And then passing that along.

But a lot of it is going to come down to, yeah, you are going to need to know the information, how to use the programs, and learn how to do that through certifications or other courses or your own schoolwork. But also go out and network. I know for some folks, having to talk to other people is not something they enjoy doing. It is very difficult, but a lot of the different security groups out there are very inviting; they’re always glad to have new people show up.

Now, what do you end up doing in the meantime? Sometimes students are off getting other jobs, or they’re working in a helpdesk position or as a technician, but the journey into cybersecurity is not a sprint. It is a marathon. We’re in this for a while.

BB: All right, now we’re going to get to a couple of other questions, and we’ll circle back to those.

Security researchers, we’ve talked about this too, but let’s really go into it, because it seems to be of interest to our audience. What makes a good researcher? How do you build up a reputation in the researcher community? And Katie talked a lot about corporate investment and opportunities, which I think dovetails nicely into a lot of the questions we’re getting.

If you want to work for Intel, taking Intel-sponsored training is probably a good place to start. So maybe, Katie, you can kick us off with that and help us get the lay of the land.

How to Submit a Proof of Concept to Get Paid

KN: I’m going to diverge a little bit and say, I always tell people to make sure that you understand that the full cycle includes documentation.

So it’s super-important to be able to do the hack, you know? But you need to be able to document it as well. Because if you want to get paid, you need to be able to communicate that back to the company that you’re giving the research to, and the faster they can understand what you mean, the faster they can pay you.

BB: Is there some sort of a template out there they can get started on?

KN: That one is tough, because every company has a different sort of flavor, and every product has a different flavor. Sometimes the best thing you can do is take a screenshot and put it into a PDF. Other times, the best thing you can do is a Python script or something along those lines. I’ve seen proofs of concept come across as everything from a YouTube video to screenshots to code snippets.

BB: Greg, have you seen any?

GO: I know we have a template on our submission form, laying out the details that we feel make a good, fleshed-out submission as far as proofs of concept go.

I’m not a huge fan of videos with no other context because then it’s on the person triaging it to then transcribe the video.  So I guess I would say, I’ve seen it all. We’ve seen fully working scripts, someone standing up a fully working proof of concept.

I think making it easy for someone to understand and reproduce is key. For my team, it’s also about that whole triage pipeline, right? I then need to be able to clearly communicate the details to an engineer; the further along the submission gets, we need to translate it into something meaningful for the engineering team. The more detail it gives us, the quicker we’re going to be able to triage it and really understand the impact.

BB: OK, James, what do you think about reporting best practices? Maybe there’s a checklist of information? I mean, is there anything people can start from and know they’re covering their bases?

JM: I wish there was a checklist, but it sounds like a good thing to write up as a blog post. You’ve got to be able, with the different tools, to break down and reverse engineer the software, then be able to reproduce the issue and deliver that. And your communication skills, your writing skills, those must be there as well. The soft skills that Katie was talking about.
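There is no universal checklist, as the panelists note, but as a rough illustration (all details below are hypothetical, not an official template from any of their programs), a triage-friendly submission typically covers fields along these lines:

```
Title: Stored XSS in project description field
Product / endpoint: example.com, POST /projects/:id/settings
Severity estimate: High (script execution in other users' sessions)
Steps to reproduce:
  1. Log in as any user and create a project.
  2. Set the description to <script>document.title=1</script>.
  3. View the project page as a second user; the script executes.
Impact: Arbitrary JavaScript runs in the browser of anyone viewing the project.
Proof of concept: Request/response capture or a short script, attached.
Suggested fix (optional): HTML-encode the description when rendering it.
```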

BB: What about from the other perspective, and I’m sorry to interrupt, just because I’m interested in knowing: how do you put the word out to researchers to say, hey, yes, we are on the lookout for this? I mean, do you publicly announce a bug bounty?

KN: Yeah, like a most-wanted-poster kind of thing. Intel has different kinds of bug-bounty programs. We have what we consider a static program, which means that most products are in scope. If you want to check it out on Intel’s security site, you can find the list of what’s in scope. But most products are in scope.

But there are other kinds of challenges as well. Sometimes companies will launch a time-boxed challenge on a specific device, and maybe they’ll tell you, “Hey, I’m looking specifically for things like voltage changes or Wi-Fi issues or Bluetooth issues. And if you can break the box and then put it back together, I’ll pay you more.”

Maybe it’s not a traditional zero-day vulnerability; maybe it’s a change in attack vectors. If you can create that, or create a tool, then maybe I’ll give you more for it.

And those things tend to be around challenges. Challenges are sometimes open to everybody, sometimes open only to specific researchers, and they tend to be time-boxed, right? So if we’re going to run a challenge or an award, we advertise it; that’s something we would put out on social media and tell people about.

Because, ultimately, at the end of the day, we want to make our products better, and we do that by working with people. So, we need people to know that we’re interested in those things, and that’s kinda how we do it.

BB: How about you, Greg?

GO: If you look at the overall scope of your program, eventually you get to maturity, like the static program Katie mentioned, right? It’s like, well, we have our scope and it covers our attack surface, and we’re good there.

And something we’re always trying to figure out internally is, what are the highest-risk things we’re doing right now: new product features, new product offerings, all these kinds of new things that we’re launching, right?

We focus there, not just internally for the security work, but externally. And we utilize these one-off, capture-the-flag-style bug-bounty programs. They’re either public, or we have a group of private researchers we’ve worked with in the past who get invited to the VIP program.

And then we can give them even earlier access to our products and features. Let’s say it’s a private beta for a new feature that we’re opening up to get customer feedback; at the same time, I want to get our security researchers’ feedback and attention on that. So, we’ll craft a capture-the-flag around it and some bonuses around that.

BB: How do you get to be a VIP researcher, is it just time of service, quality of bugs found?

GO: At GitHub, we can’t open up a private beta to everyone on the internet, or it wouldn’t be a private beta at that point.

Crowdsourcing Bug Hunting

BB: Excellent. So let’s talk about trends. I know, Katie, you talked a lot about HackerOne and Bugcrowd, and the crowdsourcing aspect of that. So maybe you can walk us through why that is a good solution, and what opportunities there are, both for businesses and for aspiring bug hunters.

KN: Yeah. So, crowdsourced security research is something that we’ve seen for a while; I think, technically, the first one really was in 2004, I believe it was Netscape that launched the first big bug-bounty program. But it’s been sort of a trend that we’re seeing more and more.

In the last five years, I think there has really been a good acceptance of bug-bounty programs. And, like I said, initially it was mostly the big tech vendors who adopted bug-bounty programs, but we have been seeing a lot more of the nontraditional tech, just regular companies, having bug-bounty programs.

So I really love seeing it in the airline industry. Because some of those airlines will give you airline miles, instead of a payment. So I think that’s really awesome, but we’ve seen a big trend in that.

We’ve seen a change in culture as well. There’s always this idea that hackers are bad, and we don’t look at bad or good. I don’t look at motivation. I look at the tech, so I look at the proof of concept. I’m looking at the information that you’ve provided to me. And I don’t get into people’s motivations. That’s not my jam.

So it’s getting past some of those kinds of hurdles and just saying, look, I’m focused on the tech. If you’re co-operating and you’re in my program, then I think that you’re good enough for me. So that’s a big change that we’ve seen recently, the mindset shift toward crowdsourced research, whether it’s bug-bounty programs or just vulnerability disclosure.

And it’s a different mindset, right? It’s a different kind of look. So the product engineer, the person who builds the product, they look at things from their perspective, and that perspective is great. But isn’t it great to also have an extra set of eyes on the product? And that acceptance. That’s been something new that’s changing.

What’s really cool is you’ve seen this big change in governments instituting vulnerability-disclosure processes. So I know the U.S. government has created this binding operational directive that directs all the federal executive agencies to create vulnerability disclosure programs, which is really awesome, because if you find a vulnerability in a U.S. government-owned website, it gives you a vector to report that so that they can fix it. And that’s to me, really, really awesome. So that sort of shift I think it’s really positive.

BB: What about you, Greg, What trends are you seeing?

GO: Yeah, looking at it from my perspective internally, and the value we get out of the bug-bounty program, I look across our security development lifecycle for our products. Where does the bug bounty fit in? We have a lot of internal initiatives around developer awareness.

Even red-team internal testing, all of these different initiatives try to get rid of vulnerabilities before they get introduced, and the bug bounty is on the tail end of that.

Maybe we can get a vulnerability identified in an early access or a beta, or maybe it’s something that’s in our public products, after they’ve shipped.

Third-Party Pen Testing

GO: We would also typically try to do third-party penetration testing and things like that. So I see there’s a lot of overlap between what we used to hire external third-party penetration testing for, and what we’re doing with the bug-bounty program.

The difference is our customers. What you get out of a third-party penetration test is a nice report that says: here are the things we dug into, this is what we looked at and these were the results. The question for the future is, how do we get that same collateral? How can we work with our bug-bounty program to get the same information to share with our customers? This is also an important part of our security development lifecycle.

It’s like a 365-day penetration test done by who knows how many consultants, with a scoped assessment of our public attack surface.

So, for me, it’s bridging the gap. I can see that gap bridged from a risk perspective right there. Both are looking at the same things, both are looking for the same sort of vulnerabilities. But how do we bridge that gap from a communication and compliance perspective, where, hey, we’re running a bug-bounty program, and maybe now vendors are going to start getting asked, “Where’s your bug-bounty program?” or “I want to see your bug-bounty program.”

Zero-Day Trends

BB: James, what do you think? What’s down the river? Mentor me. Tell me, tell me what we’re looking at.

JM: Well, we know that zero days aren’t going anywhere, as you noted at the top of the presentation.

I think, looking down the line, organizations want to make sure they’ve got platforms they can work with to help, with regard to vulnerability disclosure. They can help with those bug-bounty programs and work to make sure that they’ve got that communication, that transparency with their customers.

Whether you’re getting hit with ransomware or whether you’re having a zero day discovered about your flagship product, communication’s going to be the key. And I know it’s been changing over the years, but a lot of times, organizations are worried about the fallout from that.

But when there’s transparency, and you show that an effort is being made to resolve the issue and lessen the impact on the organizations that own or use your product, there’s a lot more forgiveness that goes on.

I mean, I think back to the early days when I was at Siemens, when we first would release some of the vulnerabilities and there was backlash, but after a while a lot more organizations were appreciative of it. They’d say, “OK, good, now we know to take the necessary steps to implement some additional risk mitigations to protect that product.”

KN: There was a really great study, I think it’s the Ponemon Institute, that said that 73 percent of respondents said that they are more likely to purchase from a technology vendor that was proactive about finding and disclosing security vulnerabilities.

So if you’re an organization that is afraid of the backlash, history indicates that it’s really in everyone’s best interests to disclose those vulnerabilities, get them patched as quickly as possible and be transparent with your customers. Because you will bounce back.

I mean, historically, that’s how it works. You build that trust with your customer set, and they’re more likely to purchase from you, there’s hard statistics on it. They know that you take it seriously.

BB: OK, we have a lot of questions, you guys. Let’s see here, let’s get through a few of these.

Question: OK. I’m new to cybersecurity. I’m going to enroll in a six-month course through a university, what certification is better to have?

JM: I’ll give you the answer I give to my students. If you’re going to get a certification, and you’re getting an education, there’s a great debate about this already: what’s more important, experience, education or the certifications? And the favorite answer, unfortunately, is, it depends.

But if you’re looking to get into bug bounty, looking to get into discovering those zero days and vulnerabilities, you’re going to need to have the programming capabilities, and Katie and Greg can chat about it. But, you know, from my understanding, it would be the programming capabilities, understanding how the program works, being able to reverse engineer it. If there are any certifications that go along with that, maybe consider those for your education.

If you’ve got courses at the school you’re attending that have any type of programming or reverse engineering, those are the kinds of things you want to aim for.

The other thing you might do is find people that do this for a living, and an easy place to do that is to reach out on Twitter, or other social media where you can find them. Twitter is really good for that; LinkedIn, of course, covers the business aspect of it. If you can reach out and communicate with them, or even just look at their profiles: what kind of certifications do they have? What kind of education?

I know for me, cybersecurity wasn’t what I was going in for when I went into college, but it’s where I ended up. Nowadays, a lot of the folks that have been more experienced, they may not have the degrees, but it’s certainly worth reaching out to them, and checking, and seeing what they’ve got, or what they recommend.

BB: That is good advice. Greg, we do have one specifically for you. Is that a Gibson (guitar) on the wall?

GO: Yes, I used to play a lot more guitar when I was in high school, and with COVID, I thought I would give it another try, or at least put it up on the wall so it could remind me every day, in my background on video calls, that I haven’t practiced and haven’t picked it up again. But at least it looks nice.

BB: We have a fan; I just want you to know that.

Question: You mentioned some free tools and training online. Is there a tool, bootcamp, software, book, et cetera, that you would say is definitely worth the financial investment personally?

Bug-Hunter Training

KN: There are a lot of different things out there, and you don’t want to really push somebody in one direction or another. It’s really about your interest.

I would say, like James was saying, the coding aspect of it can be really important. Having basic Python, at a minimum, is important.

I will fully admit, my background is in psychology and religion, I have a degree in international relations. I do not have a tech background, and I seem to have done OK in this career field. So, you don’t need the tech backgrounds as much as, understanding the threat environment, thinking creatively, being able to have those good communication skills.

I think that’s going to be the difference in what makes someone really successful as a researcher in this business. At the end of the day, you have to be able to communicate about what you’ve learned and what you found.

And if you’re not able to do that well, then you really need to put in the time for those communication skills and being able to talk to other people. It’s really, really important.

JM: Be friends with an English lit major, or be married to one. Yeah, my background was in theater, so you know where I started from.

BB: Greg, I bet you have some fancy degree in computer programming.

GO: I do have a computer-science degree, that is correct. I mean, I guess my path is maybe a little more traditional, working as a software engineer and then turning that into a focus in security. So, yeah, maybe a little more typical.

BB: OK, so, somebody also asked about Python. Is Python a good programming language? There seems to be unanimous agreement that it is. Do you want to talk about why, or maybe about some other languages that you would also recommend?

GO: Well, I mean, I think you see a lot of Python in the security space, just because you write Python the way you would think to write pseudocode. It flows really easily, and I think it’s very popular in the security space. Again, I don’t think there’s a right answer on which language to learn. It’s probably just the one you’re already learning.

I mean, I like Ruby, but that’s because we do a lot of Ruby for GitHub. Whatever’s in front of you usually is the best thing.

BB: James or Katie, do you have any thoughts on that?

JM: Python is what I tell all the students, for the same reasons Greg mentioned. It’s versatile, it’s widely used in security. You see it every week in a lot of different applications. That, for me, would be a starting point. Then there are some more specialized ones you can get into after that.

But, yeah, I’ve even sat down with a book to start learning Python, and I think I’m still in chapter two. It’s something that’s worth having, especially if you’re going to be doing this as a career; you’re certainly going to need to know multiple languages, but Python is certainly one of the key ones.
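As a tiny, hypothetical example of why Python keeps coming up for this kind of work (an illustration, not anything the panel showed), a few lines are enough to check a site for common HTTP security headers:

```python
# Quick check for a few common HTTP security headers.
# Illustration only; the URL and header list are arbitrary examples.
import requests

HEADERS_TO_CHECK = [
    "Content-Security-Policy",
    "Strict-Transport-Security",
    "X-Content-Type-Options",
]

def check_headers(url: str) -> None:
    resp = requests.get(url, timeout=15)
    for name in HEADERS_TO_CHECK:
        status = "present" if name in resp.headers else "MISSING"
        print(f"{name}: {status}")

if __name__ == "__main__":
    check_headers("https://example.com")
```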

Internet of Things

JM: If anybody asks me what keeps me up at night as far as the security world goes, it’s going to be the internet of things (IoT). It’s the interconnectedness that we’re getting to these days. When you have connected teakettles, it’s a whole new game.

And a lot of that stuff happens to be somebody throwing some Wi-Fi connectivity on there, smacking an internet-connected label on it, and putting it onto the market. And so, I think we see a lot of Python in the IoT world. And so, if you’re looking for rich ground for vulnerability identification, I’d say the IoT world is pretty terrifying.

BB: I would say beyond that the medical field has connected-everything and nobody from my reporting seems to know why they’re connected, or what they’re connected to. There’s a lot of confusion, particularly in that space.

JM: We had a debate at one point about when a human becomes part of the internet of things. If you have an internet-connected pacemaker, are you part of the internet of things? It’s keeping you alive, so at what point do you stop someone from hacking a pacemaker?

BB: Sounds bad to me, I mean, I’m not even a medical person.

Is there anything that we’ve missed, or that you guys think is important to bring out for our audience before we get out of here?

KN: One thing. It’s really easy to get very excited as a researcher, about the research that you’ve just done, but please keep in mind that there are people on the other end of that research. So when you submit a vulnerability to a company, to anybody, understand that there’s a person on the other end, and I would ask that you have empathy and sympathy for them, as well, because I know that sometimes it takes longer than you would expect. But a lot of times, people are dealing with hundreds and hundreds of vulnerabilities, hundreds of submissions at a time. So maybe they are task-saturated.

So take a breath, and realize that we’re all in this together, and we all want to get there, but we have to work together to get there.

BB: That’s excellent advice. And I think we talked, too, about some of the best practices and some of the worst practices. Again, angry and poor communication, but then also people that come and say, “Hey, by the way, I have this vulnerability; pay me if you’d like to know about it.” Talk about how that’s received on your end.

KN: Extortion is a crime. I mean, I hate to say it that way, but extortion is a crime. And if you come with that kind of attitude to a company that is dedicated to doing the right thing and is genuinely trying…

GO: It’s really just not the way to make friends and influence people.

KN: Everyone wants to get this fixed, and the faster we get a fix, the faster I can move on to the next thing. My job is to close tickets, move on to the next thing and get the product to be as safe as possible. I don’t want to hold onto it any longer than anybody else does. I want this disclosed, I want it patched, and I want to move on to the next thing.

And the other thing that’s aligned with that “if you want to know about this, pay me now” is the breadcrumbs situation, where it’s, “I’ll give you just a little bit and then you’ve got to figure it out from there.”

How would you like it if somebody did that to you? People are genuinely trying on all sides; I think all the stakeholders in this relationship are trying to do their best. So keep in mind, there are people involved here.

BB: That seems like the perfect place to stop for today.

I want to thank you all so much for loaning us your expertise here for this hour. It’s been much appreciated, and our audience has really enjoyed it too, based on their comments. So again, any of you who want to reach out, here are our e-mail addresses; please e-mail me. I’m always looking for feedback.

Again, I’m a person, so be empathetic. But do reach out and let us know. And here’s our contact info for our panelists. And if anybody needs anything, you know where to find me.

Please check back with Threatpost regularly for our ongoing coverage, and we will see you next month at our monthly webinar series. Thank you all.

