Mohit Kumar rants about bug bounty programs. Done correctly, a bug bounty can be a great incentive for finding problems with software. But there is a repudiation problem inherent in the system. Here’s a stylized example to show the problem.
Suppose we have a company X, which releases a software product. Company X’s management decides to offer a bug bounty for security vulnerabilities, realizing that security is important and very difficult.
Security Researcher A finds a novel security flaw in the product. He submits corroborating information to Company X. Company X determines that yes, it is a bug. In the meantime, Security Researcher B also finds this same flaw. The problem is that B doesn’t necessarily have any idea what A has already done, and so doesn’t realize that what he’s doing is a waste of time.* Eventually, an honest X would award A the bug bounty and alert B that somebody else already received the bounty for this flaw.
But X could have an incentive to repudiate the bounty a certain percentage of the time. The company still gets increased outside security help from people motivated by the monetary reward, but doesn’t actually pay it out. Instead, it tells Security Researcher A that somebody else already reported the flaw (or that internal testing found it first), and A has no proof otherwise. As long as the company doesn’t do this in high-profile cases (e.g., competitions at large conferences) and outside security researchers have no way of proving the recalcitrance, it’s easy to perpetuate.
A standard mechanism for non-repudiation is to publish a hash of a message. That way, you can later prove that the message you reveal is the same one that was originally hashed, meaning that nobody tampered with it in the meantime. An analog here would be a public database of vulnerability hashes, with researcher IDs and dates of entry. When you submit a claim to a company, a hash of your attack goes into the public database, and you could see whether or not you are the first person to submit this particular vulnerability.
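As a rough sketch of how such a registry could work (illustrative only: the in-memory registry, function names, and sample report text are all hypothetical, and a real system would need an append-only, independently operated log), here it is in Python:

```python
import hashlib
import json
from datetime import datetime, timezone

# Toy stand-in for a public, append-only registry of vulnerability hashes.
REGISTRY = []

def commit_report(researcher_id: str, report_text: str) -> dict:
    """Record a hash of the report publicly, without revealing the report itself."""
    digest = hashlib.sha256(report_text.encode("utf-8")).hexdigest()
    entry = {
        "researcher": researcher_id,
        "sha256": digest,
        "submitted_at": datetime.now(timezone.utc).isoformat(),
    }
    REGISTRY.append(entry)
    return entry

def is_first_submission(digest: str) -> bool:
    """Check whether this exact hash has already been registered by anyone."""
    return all(e["sha256"] != digest for e in REGISTRY)

def verify_claim(report_text: str, entry: dict) -> bool:
    """Later, the researcher reveals the text; anyone can check it matches the committed hash."""
    return hashlib.sha256(report_text.encode("utf-8")).hexdigest() == entry["sha256"]

if __name__ == "__main__":
    report = "Stack overflow in parse_config() when input exceeds 4096 bytes."
    entry = commit_report("researcher-A", report)
    print(json.dumps(entry, indent=2))
    print("Matches commitment:", verify_claim(report, entry))
```

Publishing only the digest keeps the vulnerability details secret until the researcher chooses to reveal them, while still fixing who claimed what, and when.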
By itself, this is probably not a feasible mechanism in this particular case, because you would need to hash concepts rather than the submission text itself. In addition to the external bounty site, you would also need an independent arbiter with access to the code and a mandate to arbitrate fairly. Making the hashes public would go a long way in that direction by keeping the arbitration agency honest.
* – There are cases in which B might find something that A did not find, so we don’t know a priori that this is a total waste of time. But it probably is.
There’s an even simpler problem: the bounty also creates an incentive for employees to plant easily fixed bugs, then get in contact with an outside security researcher and split the profits.
I remember something like this happening in at least one of Scott Adams’s books (The Dilbert Principle), although in that case the company paid employees directly for finding bugs.
The typical way around that is internal controls: code reviews, pair programming, internal QA, and suites of automated tests all reduce the likelihood of an internal malfeasance scenario. They don’t eliminate it entirely, but for large companies like Google, they do help some.
I’m not ashamed to say I stole that from Scott Adams.