Preview: Zero Day Software Vulnerabilities: Deal With It
Tod Beardsley is the Director of Research at Rapid7, an IT and computer security firm. He has over 20 years of experience in computer security, having worked for major tech companies such as 3Com and Dell. These days, Beardsley primarily speaks at security and coding conferences, discussing the disclosure of security vulnerabilities. Of course, he still sometimes contributes code to the many projects at Rapid7.
Question: Could you tell me a little about yourself, some of the work you do with Rapid7 and how you got started?
Answer: I’m the Director of Research at Rapid7, which covers a few areas. I’m responsible for coordinating and publishing the security research as it happens across Rapid7, which includes internet-wide scanning, exploit research and development, coordinated disclosure of software vulnerabilities, and various special projects. I’m also the primary spokesperson for Rapid7, which puts me in front of the media to offer analysis and insight about breaking news in security.
Question: What made you want to speak at SXSW?
Answer: I like speaking at SXSW because it gives me an opportunity to get out of the infosec echo chamber and talk directly to the people who are inventing the future — I very much want to make sure that that connected future is as safe and secure, as well as interesting and fun, as it can be. Also, I live in Austin, so speaking at SXSW makes me hate the traffic it causes a little less.
Question: How or why did you come up with the idea of this panel?
Answer: My panel, Zero Day Software Vulnerabilities: Deal With It, is all about what to do when someone like me comes calling with some bad news about your product, as well as what hackers and researchers can do to make sure that they’re doing what they can to make the internet safer. I have some experience in the area of reasonable and coordinated disclosure of vulnerabilities. In my time at Rapid7, we have found security issues in products from video baby monitors, to kids’ toys, to remote-controlled insulin pumps, and companies that produce things like these don’t usually see themselves as traditional software companies. Because every company is accidentally turning into a software company, I want to make sure they all have the processes and infrastructure in place to rapidly deal with software vulnerabilities when they’re discovered and reported.
Question: Are traditional security measures like anti-virus software “dead,” so to speak? To me it seems like a never-ending game of catch-up: once a vulnerability is patched, hackers have moved on to a different one.
Answer: I wouldn’t call technologies like anti-virus and firewalls dead. In the best case, they’re getting baked into the operating system of traditional desktops, laptops, and mobile devices. Microsoft’s Security Essentials, for example, brings Microsoft security smarts to Windows directly, without the need for third-party anti-virus software, and it does a good job at picking off the most widespread malware out there. Google is doing a lot of good in both end-user software, like the Chrome browser, and online services like Gmail, to make people’s security posture much, much stronger than it’s ever been.
However, in the worst case, there are still devices — mostly IoT — that have no security built in; devices ship without basic firewalling, with known-vulnerable components, and with no sensible patch management. As we see millions of these devices come online, events like the Mirai botnet become inevitable.
Question: How does the “Internet of Things” play into the equation of software vulnerabilities? Surely some of these devices must be insecure, say Google Home, Amazon Echo, smart TVs, especially printers and even smart fridges.
Answer: IoT is different precisely because it’s made up of devices so unlike how we traditionally think of “computers.” These things don’t have keyboards or monitors, they tend to be always on and always connected, and they tend to be in the hands of non-expert users. Just like we don’t expect every driver to be a skilled auto mechanic, we can’t expect every user of a connected thermostat to be a computer scientist or information security professional. So, it’s unsurprising that people use these computers (which don’t look or act like computers) in ways that are antithetical to well-established safe computing practices, and end up exposing these devices to risks that simply didn’t exist with the old, analog way of doing things.
Take a Bluetooth-enabled door lock, for example. If someone wanted to break into your house, they’d need a lockpick, a crowbar, or a brick, and they’d have to break in when you’re not around. They could only do this one house at a time, or recruit a bunch of friends. But a Bluetooth lock changes the threat model. That lock talks to a smartphone (over radio), and probably has some cloud-based backend to make sure that you’re you. This is all great, but suddenly, the bad guy doesn’t have to be nearby anymore. A compromise of the smartphone app, the cloud service, or the lock itself from across the street is now possible. The single bad guy can also attack at scale, unlocking every door essentially simultaneously. Assuming decent security on all these components, the first attack is difficult, but the next thousand attacks are nearly as easy as the first.
These are the kinds of risks that we’re introducing with IoT.
As far as we know, it’s impossible to produce software of any complexity that is provably bug-free. Computers are general purpose, by design, which is what makes them so useful. It also means that it’s nearly impossible to predict what a computer will do in the presence of an active, malicious operator.
Companies like Google and Amazon, which began as internet-connected software companies, do have a leg up in writing and auditing code that is reasonably secure, since they’re practiced at it. But companies that used to build purely mechanical goods, like refrigerators and thermostats, are just now figuring out how to blend computing power into their products, and are relearning all the painful lessons about secure code, vulnerability assessment, and patch management along the way.
Question: What are you hoping will be the biggest takeaway for the audience attending your panel?
Answer: If you are designing a device or service that relies on a computer, you will inevitably produce and ship software that has a security vulnerability. If that vulnerability is discovered, either internally through code audits or externally via an independent bug reporter, you will need to be prepared to deal with that bug in a responsible and reasonable way. There are some simple techniques to make allies out of hackers, journalists, and upstream vendors, and I hope my panel will help people plan for the day that someone comes knocking with a fresh 0day vuln in their product.
Question: Do companies need to become more receptive to grey hats? In the past, I’ve heard that companies have turned down paying grey hats for vulnerability information, then a few months later, the exact vulnerability is used to steal information from their servers.
Answer: I’m not sure what “grey hat” means in this context. I don’t know of any case like this off-hand. I’ve had people ask if Rapid7 offers bounties on our software, and currently, we do not. But we do offer credit, acknowledgement, gratitude, and a spiffy T-shirt.
I’ve also had companies ask me if I’m trying to extort them when I’m offering a vulnerability disclosure on their products — after all, Rapid7 is a for-profit enterprise, and I’ve occasionally met some skepticism when I offer to help a company fix their product without compensation or a trade for a services contract. While we do sell services like code and design auditing, once we find a bug in publicly available software, we do what we can to help that company with that issue. If they’d like to hire us based on that interaction, then super. But our disclosure work is done for the common good, to make the internet safer and more secure for everyone.
Question: Are there any ethical dilemmas here you’d like to highlight?
Answer: Aside from extortion, researchers and vendors alike need to always consider the best social good they can achieve through vulnerability research and disclosure, especially when it comes to systems that, if abused, could result in injury, environmental damage, social unrest, or death.
At the same time, sitting on vulnerability information, whether through inaction or a restraining order, can also put people at risk. I always assume I’m the last person to notice a security flaw, even if I’m the first person to report it. For all I know, criminals and unfriendly intelligence organizations are already using the vulnerability for nefarious purposes. There’s also the threat of rediscovery — even if I’m the first to notice and report a vulnerability, there’s no reason to think I’ll be the last. The next person who sees it might try to exploit it for whatever reason, or simply drop it on a full-disclosure message board, or sell it to an unfriendly government or criminal enterprise.
Question: Is there anything else you’d like to add about the future of cybersecurity?
Answer: There is very little distinction between our “real” lives and our “virtual” lives anymore. Jon Ronson, in his book _So You’ve Been Publicly Shamed_, makes a compelling argument that a person’s internet self is, in fact, more real than her “real” self, since online reputation counts for so much more today than it ever has. So, while I’m confident we can get ahead of all the privacy- and security-busting vulnerabilities that are bound to be introduced in the coming months and years, it won’t happen by accident.
I also expect the practice of cybersecurity to become even more democratized as more people get curious about security. We’re going to see normal programmers, designers, gamers, and casual users join in the hunt for 0day. It’s fun, it’s challenging, and given the advent of bug bounties, it comports well with the growing gig economy.
Question: If you were interviewing yourself, what would you ask?
Answer: What’s your academic background, and how much does traditional book-learnin’ figure into a career in information security?