Why bug bounty schemes have not led to secure software

Governments must hold software companies accountable for developing insecure computer code. So says Katie Moussouris, the white hat hacker and security expert who first convinced Microsoft and the Pentagon to offer financial rewards to security researchers who discovered and reported serious security vulnerabilities.

Since then, bug bounty schemes have become widespread and are now the norm for software companies, with some, such as Apple, offering rewards of $2 million or more to those who discover critical security vulnerabilities.

Moussouris compares security vulnerability research to working for Uber, only with lower pay and less job security. The catch is that researchers only get paid if they are the first to discover and report a vulnerability. Those who put in the same effort but come second or third get nothing.

“It's essentially exploitation of the labor market. You're asking them to do speculative work and you're getting something very valuable out of them,” she says.

Some white hat hackers, motivated to help people solve security problems, have managed to make a living by specializing in finding medium-risk vulnerabilities, which may not pay as well as high-risk bugs, but are easier to find.

But most security researchers struggle to make a living as bounty hunters.

“Very few researchers are capable of finding these elite-level vulnerabilities, and very few of those who are think that chasing bugs is worth it. They would rather have a good contract or a permanent position,” she says.

Ethical hacking comes with legal risks

It's not just the lack of a stable income. Security researchers also face legal risks under anti-hacking laws such as the UK's Computer Misuse Act and the draconian US Computer Fraud and Abuse Act.

When Moussouris joined Microsoft in 2007, she persuaded the company to announce that it would not prosecute bounty hunters who discovered vulnerabilities in its online services and reported them responsibly. Other software companies have since followed suit.

The UK government has now recognized the problem and promised to introduce legal protections for cybersecurity researchers who discover and share vulnerabilities to protect them from prosecution.

Another problem is that many software companies insist that security researchers sign a non-disclosure agreement (NDA) before they are paid for the vulnerabilities they report.

This goes against the security disclosure best practices that Moussouris championed through the International Organization for Standardization (ISO).

When software companies pay a bounty to the first person to discover a vulnerability in exchange for signing a non-disclosure agreement, anyone else who finds the same flaw is neither paid nor bound by the agreement, which creates an incentive to disclose it publicly and increases the risk that an attacker will exploit it for criminal purposes.

Worse, some companies use non-disclosure agreements to hide vulnerabilities without taking steps to fix them, says Moussouris, whose company Luta Security manages and advises on bug bounty and vulnerability disclosure programs.

“We often see a big pile of unfixed bugs,” she says. “And some of these programs are well-funded by public companies that have a lot of cybersecurity staff, application security engineers and funding.”

Some companies seem to view bug bounties as a substitute for secure coding and proper investment in software testing.

“We use bug bounties as a stopgap, as a way to potentially control the public disclosure of bugs, and we don't use them as symptoms that might help diagnose a deeper lack of security controls,” she adds.

Ultimately, Moussouris says, governments will have to step in and change laws to hold software companies responsible for bugs in their software, much in the same way that car manufacturers are held responsible for safety flaws in their vehicles.

“All governments have largely refrained from holding software companies accountable and legally liable because they want to stimulate the growth of their industry,” she says. “But at some point that has to change, because cars weren't heavily regulated either, and then seat belts became required by law.”

AI could lead to less secure code

Advances in artificial intelligence (AI) may make white hat hackers completely unnecessary, but perhaps not in a way that will lead to improved software security.

All the major US bug bounty platforms use AI to triage vulnerability reports and to augment their penetration testing capabilities.

XBow, an AI-based penetration testing platform, recently topped a bug bounty leaderboard by using artificial intelligence to focus on relatively easy-to-detect classes of vulnerability, systematically testing likely candidates to identify security bugs.

“Once we create the tools and train AI to make it as good as, and in many cases better than, humans, you've pulled the rug out from under the market. And then where do we get the next bug-hunting expert?” she asks.

The current generation of experts who can detect when AI systems are missing something important is in danger of disappearing.

“Bug bounty platforms are moving to an automated, unmanned version of bug bounty, where AI agents will replace human bug hunters,” she says.

Unfortunately, it is much easier for AI to find bugs in software than it is to use AI to fix them. And companies are not investing as much as they should in using AI to reduce security risks.

“We have to figure out how to change this equation very quickly. It's easier to find and report a bug than for an AI to write and test a patch,” she says.

Bug bounties have failed

Moussouris, a passionate and enthusiastic proponent of bug bounty systems, is the first to admit that bug bounty schemes have in some ways failed.

Some things have improved. Software developers have moved to more advanced programming languages and platforms that make it more difficult to introduce certain classes of vulnerabilities, such as cross-site scripting errors.
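To illustrate the point (a minimal Python sketch, not drawn from the interview; the function names are hypothetical), escape-by-default output handling is what makes a whole bug class such as cross-site scripting hard to introduce:

```python
import html

def render_comment_unsafe(user_input: str) -> str:
    # Vulnerable pattern: user input is interpolated straight into HTML,
    # so a payload such as <script>...</script> executes in the browser.
    return f"<p>{user_input}</p>"

def render_comment_safe(user_input: str) -> str:
    # Escaping turns characters such as < and > into HTML entities, so the
    # same payload is rendered as inert text. Modern template engines do
    # this by default, which is how a platform suppresses the bug class.
    return f"<p>{html.escape(user_input)}</p>"

payload = '<script>alert("xss")</script>'
print(render_comment_unsafe(payload))  # executable markup
print(render_comment_safe(payload))    # harmless escaped text
```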

But, in her opinion, there is too much security theater. Companies still fix bugs that are visible, but refrain from fixing things the public can't see, or use non-disclosure agreements to buy researchers' silence and hide vulnerabilities from the public.

Moussouris believes artificial intelligence will eventually replace human bug researchers, but says the loss of expertise will be detrimental to security.

The world, she argues, is on the verge of another industrial revolution, one that will be bigger and faster than the last. In the 19th century, people left agriculture to work long hours in factories, often in dangerous conditions and for low wages.

As AI takes over more of the tasks currently performed by humans, Moussouris predicts that unemployment will rise, incomes will fall and the economy risks stagnating.

The only answer, she says, is for governments to tax AI companies and use the revenue to provide a universal basic income (UBI). “I think it has to be this way, otherwise there is literally no way for capitalism to survive,” she says. “The good news is that human engineering ingenuity is still intact at this point. I still have faith in our ability to hack our way out of this problem.”

Growing tensions between governments and bounty hunters

Bug hunters' work has also been impacted by moves to require software companies to report vulnerabilities to governments before they fix them.

It started with China, which in 2021 required tech companies to disclose newly discovered vulnerabilities to the Chinese government within 48 hours.

“It was very clear that they were going to evaluate whether they were going to exploit the vulnerabilities offensively,” Moussouris says.

In 2024, the European Union (EU) passed the Cyber Resilience Act (CRA), which introduced similar disclosure obligations, ostensibly to allow European governments to prepare their cyber defenses.

Moussouris is a co-author of the ISO vulnerability disclosure standard, ISO/IEC 29147. One of its principles is to limit knowledge of security bugs to the smallest possible number of people until they are fixed.

The EU argues that its approach will be secure because it requires neither a deep technical explanation of the vulnerabilities nor proof-of-concept code to show how the vulnerabilities can be exploited.

But that misses the point, says Moussouris. Increasing the number of people with access to vulnerability information will increase the likelihood of leaks and increase the risk that criminal hackers or hostile nation states will use it for crimes or espionage.

Risk from hostile countries

Moussouris has no doubt that hostile countries will exploit the weakest links in government bug reporting schemes to explore new security vulnerabilities. If they are already using these vulnerabilities for offensive hacking, they will be able to cover their tracks.

“I expect that there will be a revolution in threat intelligence, because our adversaries absolutely know that this law is going into effect. They are certainly preparing to learn about these things through leaks of the very information that is being reported,” she says.

“And they'll either start attacking that particular software if they haven't already, or they'll start shutting down their activities or covering their tracks if that's what they were using. It's counterproductive,” she adds.

Moussouris is concerned that the US is likely to follow the EU's lead and introduce its own bug reporting system. “I was just holding my breath waiting for the US to follow suit, but I warned them against it.”

The UK's equities process

In the UK, GCHQ regulates the government's use of security vulnerabilities for espionage through what is known as the Equities Process.

It sees security experts weigh the risk that the UK puts its own critical systems in danger by failing to notify software suppliers of an exploitable vulnerability against the exploit's potential value for intelligence gathering.

This process has a veneer of rationality, but it fails because, in practice, government experts have no idea how widespread a vulnerability is across the nation's critical infrastructure. Even large suppliers such as Microsoft have trouble tracking where their own products are used.

“When I worked at Microsoft, it was very clear that while Microsoft is very aware of what is being deployed in the world, there are a lot of things that they don't know about until they use them,” she says.

“The fact that Microsoft, with all its telemetry and its ability to know where its customers are, is struggling means there is absolutely no way to reliably assess how vulnerable we are,” she adds.

Katie Moussouris spoke to Computer Weekly at the SANS Cyber Threat Summit.
