[liberationtech] Cryptography super-group creates unbreakable encryption
Rich Kulawiec
rsk at gsp.org
Tue Mar 5 15:23:55 PST 2013
On Fri, Feb 15, 2013 at 01:35:53PM -0800, Adam Fisk wrote:
> At the risk of getting swept up in this by consciously saying something
> unpopular, I want to put my shoulder against the wheel of the "open source
> process produces more secure software" machine. [snip]
I've been thinking about your (excellent) comments for several weeks now.
And I'm going to argue that open source doesn't necessarily produce more
secure software, but it's a prerequisite for any credible attempt. And
that in this particular case, there's just no substitute for it.
But before I get started, let me point out that I'm very much *not*
arguing that the converse is true, that "open source == chewy
goodness" automatically. We've all seen open source code that was junk.
Lots of it. We've all probably written some, too; I know I have.
So here goes:
Consider this hypothetical: you have the imaginary disease Bieberitis,
which progressively imposes the characteristics of Justin Bieber on you,
then kills you. So not only do you die, you die badly. Clearly: it's
an awful fate.
There are only two drugs available to treat this disease.
Drug A has a history that looks something like this: the basic
biochemistry has been known for 18 years. It's been studied at multiple
universities and research institutions. There are numerous published
papers on it. Early animal trials were conducted 15 years ago, and those
results were published as well, leading to another round of animal trials
with a slightly different formulation and more publication. Following
review by independent agencies 12 years ago, limited human trials were
held, with still more publication. A lengthy review and debate ensued,
the drug was discussed and debated at numerous conferences and meetings,
other (new) researchers weighed in with their papers, and a second
round of human trials took place 9 years ago. Following that, review
by multiple government agencies commenced. Additional work continued
in parallel on refinement of dosage and delivery. Eventually, following
another blizzard of paperwork and publication, the drug was approved --
and is now available to you. Studies are still ongoing, of course,
and it's expected that half a dozen more papers will be published in
refereed journals this year.
So: drug A has a long history. Lots of clueful eyeballs have investigated
it personally, and many more clueful eyeballs have read the published body
of work, thought about it, argued about it, reviewed it, critiqued it,
supported it, rebutted it, and otherwise been involved in the process.
Moreover: nearly all those clueful eyeballs are INDEPENDENT clueful
eyeballs, who have, in many cases, substantial motivation to disprove
claims made -- since one of the best ways to make one's academic
reputation is to perform ingenious, ground-breaking work which
demonstrates that something everyone agrees on is completely wrong.
Now, about drug B: drug B has no publications associated with it.
It's never been independently reviewed. It has none of the lengthy
history of A. What's it got? It's got a shiny color brochure written by
the marketing department that tells you how great it is, because it was
developed by some of the top people ever. Really. Top people. As in:
Major Eaton: We have top men working on it now.
Indiana Jones: Who?
Major Eaton: Top...men.
That's it. That's all you get. Promises. Assurances. Hand-waving.
Top...men.
Now: which drug are you going to take?
Of course the obvious answer is A, since B is more commonly known as
"snake oil". It's garbage. No thinking, responsible person would
ever choose B, because -- absent the history and the research and
the publication and everything else -- it might be the instant cure
for Bieberitis, or it might be sugar pills, or it might be poison.
There's no way to know.
All serious fields of intellectual endeavor use the same model I outlined
above for drug A's development, which I'll lump under the rubric
"peer review". Architecture and law, physics and economics, medicine and
civil engineering, everybody uses this. And they use it because, despite
its flaws, it works really, really well. It's an essential component of
the scientific method. It's how we make forward progress, however slowly.
Fields of study that don't use this are crap. Astrology, creationism,
alchemy, homeopathy, phrenology, and yes, closed-source software: all crap.
There is no way we should accept what any closed-source vendor claims
about their code. There is no reason to, no matter who they are, no
matter how much we trust them, no matter how pure their motives are.
Heck, we often can't even trust OUR OWN CODE to do what we think we want
it to do, even when we're staring right at it -- so why in the world
should we make the fantastic leap of faith to trust someone else's when
we can't even see it?
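(To make that concrete, here's a tiny hypothetical sketch -- mine, not
anything from this thread -- in Python. Both versions of this token check
look correct and pass the same tests; the first one leaks the secret
through timing anyway, because == on byte strings returns as soon as it
hits a mismatching byte:)

    import hmac

    def check_token_naive(supplied, expected):
        # Functionally "correct" -- it passes every unit test you'd
        # write -- but an attacker who can measure response times can
        # recover the expected token one byte at a time.
        return supplied == expected

    def check_token_safer(supplied, expected):
        # Constant-time comparison (hmac.compare_digest, Python 3.3+);
        # read side by side, nothing obvious marks the first version
        # as the dangerous one.
        return hmac.compare_digest(supplied, expected)

And that's a one-line function with a well-known fix; now scale the
problem up to a few hundred thousand lines that nobody outside the
vendor is allowed to read.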
Closed-source software is the equivalent of drug B. We're expected
to take the authors' word that it (a) does everything they say it does
and (b) does nothing else. We're expected to do this despite decades
of history proving, many times per day, that this is not only wrong,
but completely, wildly, amazingly wrong. (For a small drink out of
the firehose of evidence substantiating that statement, read bugtraq,
or full-disclosure, or the -developers list for any substantial project,
or the bug queue for something hosted on SourceForge, or check the patch
lists for any piece of software, or look at your own code.)
We, for a value of "we" meaning "all programmers on this planet",
pretty much suck at writing software. Even the best of us, and I'm
sure not one of them, struggle to write programs of any size/complexity
that meet their functional specifications and don't have major security
or privacy issues. The only slim chance we have of maybe, MAYBE, on
a good day, with the wind blowing in the right direction, actually
getting somewhere vaguely close to what we're aiming at, is peer review.
It's not a great chance: but it's the best we've got.
Maybe in 50 years that'll change. Maybe by then we'll be able to write
large-scale/complex programs with verifiable code that matches verifiable
specifications. But we're not there yet, so yeah, I'm gonna stick with:
source or GTFO.
But wait! There's more!
This isn't just any old piece of software: this isn't a word processor
or a database: this is crypto that is intended to keep people *alive*.
And while I won't even pretend to be a cryptographer, one thing I've
learned is that developing solid cryptographic algorithms is hard.
Really hard. People with significant expertise in the field spend
mountains of time working on them...only to find that 8 months after
publication, somebody on the other side of the world has already managed
to mount a credible attack. Then there's a tiny crack...and soon someone
else widens the crack...and then, in a flurry of published papers and
conference presentations, the whole thing gets demolished.
Or at least compromised to such an extent that everyone concurs it won't
survive much longer, that what's on the table strongly indicates that
better attacks will come along and finish the job.
The only way, really, that we can have any confidence in any cryptographic
algorithm is to see it published...and then wait. We wait to see what
happens when people get a look at it and start thinking about ways to
tear it apart using either theoretical or practical attacks, or more
likely, both.
How long do we wait? That depends. There's no fixed schedule.
But every year that an algorithm withstands scrutiny slightly increases
our confidence that this is not an accident -- that it's not escaping
attack because nobody's trying, but because it truly is robust in the
face of clueful and determined experts.
So in the case of cryptography software, it's not just source or GTFO:
it's publish the algorithm or GTFO.
---rsk