[liberationtech] Can you confirm these are not best practices for handling disclosure?

Rich Kulawiec rsk at gsp.org
Thu Feb 2 08:24:01 PST 2017


On Mon, Jan 30, 2017 at 05:49:08PM -0500, Zak Rogoff wrote:
> Is anyone who's knowledgeable about disclosure policies able to take a
> look at it and share your thoughts?
> 
> To me, it looks like it's not much of a protection for the researchers,
> because it's totally voluntary and apparently allows companies to ignore
> it if they make such arbitrary judgements as that the security
> researcher didn't give them a "reasonable" amount of time between
> private and public disclosure.

You're correct.  This policy is worthless, as are -- to a good first
approximation -- all the "responsible disclosure" policies I've seen.

Let me explain why I say that.

First, these are all constructed based on the supposition that security
researchers owe the companies in question something.  But we don't.
We're not their employees or business partners: we owe them NOTHING.

Now, we may *choose* to give them something -- like a heads-up about
a bug -- because we think it's a good idea or because we think it's
a nice thing to do or because it's Thursday -- but the fact that
we may exercise that choice does not magically turn it into an obligation.

Second, it is the responsibility of companies to release software
(firmware, whatever) without security issues.  They often fail to do so,
because they use poor development practices (e.g., closed source),
because they rush its development (e.g., nearly everyone), because they
skimp on QA (also nearly everyone), and because -- particularly in the
case of DRM -- they focus far more on denying users use of their own
computing hardware than they do on protecting users' security and privacy.

Let's be clear: a failure to do that is a lapse in *their* responsibility.

We're not required to compensate for all that.  We're not even required
to try.  It's not our product.  It's not our company.  We are not required
to spend our money and our time doing the things that they were unwilling
to spend their money and their time on.  (And after all: if they *did*
spend sufficient money and time, there would be little if anything for
us to find.)

Third, by now we all know that the playbook for companies presented
with a security problem is some combination of:

	A. Deny
	B. Delay
	C. Obfuscate
	D. Threaten researcher
	E. Attempt to censor
	F. Litigate
	G. Relabel as feature
	H. Deny and delay and obfuscate some more
	I. Reluctantly fix it poorly, likely introducing a new problem
	J. Take credit
	K. (later) Release new product with same problem, return to (A)

These "responsible disclosure" policies are an attempt to facilitate
this approach by recasting the researcher's side of it as somehow
"responsible" for THEIR side of it.

This leaves researchers with various options, which I'll boil down
to roughly two approaches:

	1. Try to do it their way.  It's more likely that this will
	result in threats, censorship, litigation, and possibly
	prosecution than in a timely, accurate, complete fix
	and credit where it's due.

	2. Don't try to do it their way.  Blow this off and either
	publish anonymously, or sell the vulnerability on the open market.

Vendors have only themselves to blame for this.  Had they not,
in the aggregate, accrued a long and sordid history (which they're
adding to every day) then perhaps other choices would be viable.

Fourth, "responsible disclosure" policies are based on the happy
delusion that one researcher has found has ONLY been found (and
found recently) by that researcher and not by half a dozen others
(and some time ago).

This would be convenient, and perhaps some of the time it's true
(although there is no way to prove it, only the opposite), but it's
not a solid operating assumption.  Software comes under scrutiny
because it's new, or it's perceived as important, or because it's
used in critical roles, or because it has a history of problems,
or because the vendor has a history of problems, or because the
vendor is a jerk, or because it resembles other software with
problems, or because it's deployed at a target, or for myriad
other reasons that would take up pages.  But the point is that
if it is of interest to one person, there are plenty of possible
reasons that it's of interest to other people.

Thus when researcher A dutifully does the responsible disclosure tango,
there is absolutely no way to know that researcher B quietly (and
profitably) sold the exact same thing to parties unknown three months
ago, or that researcher C, who happens to work for a nation-state, has
been happily exploiting it for two years, or that researcher D will
come across it tomorrow.

The myth of responsible disclosure is that these things never
happen and can't happen, and thus responsible disclosure actually
protects the public.  And maybe sometimes it does.  But it's
not a good bet, and as the number of researchers and the sophistication
of their tools and the resources available to them all increase,
it's a worse bet every day.

	"What one man can invent another can discover."
		--- Sherlock Holmes

Fifth, and this is specific to DRM: DRM is an attempt to remove
some of the user's control over their own hardware and software.
It attempts to constrain what a user can do with their property.
This is no different from many forms of malware, so it's not
surprising that it's fraught with security and privacy issues
by its very nature.

In other words, an attempt to institute DRM creates a set of problems
that didn't previously exist.  They're *choosing* to do that: nobody's
making them do it.  And, having chosen to create this set of problems,
they're now trying to impose conditions on the researchers who wish
to investigate them. 

That's a non-starter: if they are so concerned about those problems,
and the consequences of those problems, then all they need do is
refrain from creating them.

---rsk



