[liberationtech] DecryptoCat

Jonathan Wilkes jancsika at yahoo.com
Mon Jul 8 21:34:02 PDT 2013


On 07/08/2013 07:07 AM, Nadim Kobeissi wrote:
> On 2013-07-08, at 3:34 AM, Tom Ritter <tom at ritter.vg> wrote:
>
>> On 7 July 2013 17:20, Maxim Kammerer <mk at dee.su> wrote:
>>> This thread started off with discussion of peer review, so I have
>>> shown that even expensive, well-qualified peer review (and I am sure
>>> that Veracode people are qualified) didn't help in this case.
>> As one of the people on this list who does paid security audits, I
>> both want to, and feel obligated to, weigh in on the topic.  I don't
>> work for Veracode, I have done audits for projects in the LibTech
>> space, I have not done one for Cryptocat.  I would like to, and given
>> enough free time might get around to it,
> Just a quick note on one part of your awesome email:
> If you don't have enough free time, let me help you make some. I am able to fund further auditing. Round up a team and get in touch!

Are you still offering a bounty for finding security bugs?  Because that 
seems to be the system under which a critical bug was revealed, and 
subsequently fixed.

-Jonathan

>
> I sincerely appreciate the perspective in the rest of your email.
>
> NK
>
>> but like _everyone_ in this
>> space, doing something publicly and putting your name on it means
>> holding yourself to a certain standard, and that standard requires
>> time - a lot of time (on the order of 60-100 hours).
>>
>>
>> A good security audit will give you two things.
>>
>> Firstly it will give you bugs.  These bugs are usually constrained to
>> issues that are security vulnerabilities (but not always, depending on
>> the issue/the auditor/the time available).  We find these bugs through
>> meticulous testing and reading source code (see the 60-100 hours),
>> through checklists to make sure we don't omit anything (see
>> https://github.com/iSECPartners/LibTech-Auditing-Cheatsheet for a
>> Creative Commons version of mine), and through experience and hunches.
>> Usually an audit is time-boxed, so we don't literally read every line
>> - the 60-100 hours would balloon considerably if that were the case.
>>
>> We read a lot of code, we see and find a lot of bugs.  The best
>> auditors are always reading about new bug classes and exploitation
>> vectors so we can recognise these bugs when they are in new projects.
>> The Cryptocat bug, using a random string (or decimal digits) instead
>> of bytes - I've seen before, as most people in my company have.
>> (Usually it's in the form of using a hexadecimal string instead of
>> converting the hexadecimal into bytes.)
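>>
>> To make the entropy loss concrete, here's a rough sketch of the
>> difference (TypeScript with Node's crypto module; the variable names
>> are mine):
>>
>> import { randomBytes } from "crypto";
>>
>> // Right: 16 random bytes = 128 bits of entropy.
>> const key: Buffer = randomBytes(16);
>>
>> // Wrong: 16 random hex *characters* - each one carries only 4 bits,
>> // so this "128-bit-looking" key has 64 bits of entropy.
>> const hexKey: string = randomBytes(8).toString("hex");
>>
>> // Worse: 16 random decimal digits - log2(10) = ~3.32 bits each, so
>> // roughly 53 bits in total.  (The % 10 also adds modulo bias; it's
>> // here only to make the entropy count visible.)
>> const digitKey: string = Array.from(randomBytes(16))
>>   .map((b) => String(b % 10))
>>   .join("");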
>>
>> I know Cryptocat has been audited by humans in this fashion before,
>> I've read their report, and it found good bugs.  Of course, no auditor
>> is perfect - we never find every single bug that's in an application
>> and we can never say something is 'safe' - which is why we give you
>> the second thing:
>>
>> We give recommendations for making your codebase better.  In older,
>> more mature codebases this usually takes the form of recommendations
>> like "Everywhere you do file handling is wrong, and you need to
>> centralize it, fix it in the centralized version, and make sure
>> everyone uses that going forward."  Those are the straightforward
>> ones.  Sometimes they're more defensive, like "You really like to use
>> the php system() function for doing stuff like removing files from the
>> filesystem and chmodding.  You do really meticulous character
>> escaping, so we couldn't get command execution - but nonetheless, you
>> should really use the unlink() and chmod() functions, so you can be
>> sure a bug never makes its way in."
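>>
>> The same point in Node terms, as a hypothetical sketch (the path is
>> made up):
>>
>> import { exec } from "child_process";
>> import { chmod, unlink } from "fs/promises";
>>
>> const path = "/tmp/upload.tmp"; // imagine this came from a request
>>
>> // Risky: one escaping bug away from command injection.
>> exec(`rm -- '${path}'`);
>>
>> // Safe by construction: no shell ever sees the path.
>> await chmod(path, 0o600);
>> await unlink(path);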
>>
>> Now those are pretty obvious examples.  In a project where the
>> developers are making every effort they can to lock things down, where
>> they're making every effort to do things 'right' - if we still can't
>> provide examples, we're not doing a very good job.
>>
>> There are a lot of defense in depth measures that can be applied to
>> web applications.  Request preprocessors can look for global object IDs
>> and assert that the current session can access each one (in *addition*
>> to the page-level checks on object access).  Database query logging
>> can assert that all queries that go into particular tables use a
>> certain column in the WHERE clause.  I can go on and on.  A good
>> source of these is an ex-coworker's talk "Effective approaches to web
>> application security"
>> http://www.slideshare.net/zanelackey/effective-approaches-to-web-application-security
>> (honestly, Etsy is driving the industry forward with their techniques
>> for proactive web app security.)
>>
>> Defense in depth lends itself very easily to 'classically' exploitable
>> conditions around things like Cross Site Scripting, Direct Object
>> Reference, Command Injection, Code Execution.  Weak RNG and
>> fundamental protocol flaws are much harder to talk about in terms of
>> defense in depth.
>>
>> So, to not avoid the hard problem, let's take this particular bug.  What
>> I would say is MOAR ABSTRACTION.  Now, I'm not actually a fan of
>> abstraction, I hate seeing a ton of one-line functions, but in this
>> case, to prevent a bug like this from happening again, I would
>> structure the code like this (taking into account I'm really bad at
>> naming things):
>>
>> //rng.ext
>> ///This class is a CSPRNG that outputs a stream of random bytes
>> class RandomNumberGenerator {
>>
>> }
>>
>> //rng-types.ext
>> ///This class outputs a random stream of the specified type
>> static class RandomSequences {
>>   ///Return a random upper,lowercase alphabetic string
>>   static string RandomAlphaString() ...
>>
>>   ///Return a random sequence of base 10 digits
>>   static string RandomDigits() ...
>>
>>   ///Return a random sequence of bytes
>>   static byte[] RandomBytes() ...
>> }
>>
>> //rng-objects.ext
>> ///This class produces a specific random application-level object
>> static class RandomObjects {
>>   ///Return a random RSA public & private key
>>   static RSAKey RandomRSAKey() ...
>>
>>   ///Return a random symmetric key
>>   static SymmetricKey RandomSymmetricKey() ...
>>
>>   ///Return a random username
>>   static string RandomUsername() ...
>>
>>   //...
>> }
>>
>> The RandomNumberGenerator class is ONLY, EVER, used by
>> RandomSequences, which produces the specific sequences that are
>> needed.
>>
>> And the most critical part is that RandomSequences is ONLY, EVER, used
>> by RandomObjects. RandomObjects produces all of the application-layer
>> randomness needed by the application.  Cryptographic keys, default
>> passwords, usernames, whatever it is that you need, each gets its own
>> function, and 99 times out of 100, that function will be a single-line
>> call into RandomSequences.
>>
>> Each of these classes is pretty modular, and is unit tested up the
>> wazoo.  Verbose commenting is crucial, and whatever IDE you're using
>> should grab those comments and display them to you via Intellisense as
>> you're using those functions.  (Visual Studio and Eclipse will do this
>> with the /// commenting practice.)
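>>
>> Fleshed out a little, that layering might look like this in
>> TypeScript (the bodies and the rejection-sampling detail are mine;
>> only the structure comes from the sketch above):
>>
>> import { randomBytes } from "crypto";
>>
>> /// Layer 1: the only code that ever touches the CSPRNG directly.
>> class RandomNumberGenerator {
>>   static bytes(n: number): Buffer {
>>     return randomBytes(n);
>>   }
>> }
>>
>> /// Layer 2: typed sequences; only this class calls
>> /// RandomNumberGenerator.
>> class RandomSequences {
>>   private static readonly ALPHA =
>>     "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz";
>>
>>   static randomBytes(n: number): Buffer {
>>     return RandomNumberGenerator.bytes(n);
>>   }
>>
>>   static randomAlphaString(length: number): string {
>>     let out = "";
>>     while (out.length < length) {
>>       const b = RandomNumberGenerator.bytes(1)[0];
>>       // Rejection sampling: 208 is the largest multiple of 52 that
>>       // fits in a byte, so b % 52 stays uniform (no modulo bias).
>>       if (b < 208) out += RandomSequences.ALPHA[b % 52];
>>     }
>>     return out;
>>   }
>> }
>>
>> /// Layer 3: application-level objects; only this class calls
>> /// RandomSequences.
>> class RandomObjects {
>>   /// A symmetric key is raw *bytes*, never a digit or hex string.
>>   static randomSymmetricKey(): Buffer {
>>     return RandomSequences.randomBytes(32); // 256-bit key
>>   }
>>
>>   static randomUsername(): string {
>>     return RandomSequences.randomAlphaString(12);
>>   }
>> }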
>>
>> RandomNumberGenerator only ever produces random bytes, and is sent
>> through your favorite strong random number generator tester.
>> RandomSequences makes sure 1) that the only characters that appear in
>> the output are the whitelisted ones and 2) that all the whitelisted
>> characters appear with the correct distribution.  RandomObjects is a
>> simple, easily skimmable class to see what the type of each random
>> object will be.  "Random Username is generated out of AlphaString,
>> okay.  Random Password is generated out of AlphaString? Why not
>> AlphaDigitsSymbols?"  You apply similar unit tests to RandomObjects as
>> RandomSequences but you can also give a second set of unit tests that
>> make sure the objects fit the application's expectations (e.g. around
>> the length of the username).
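>>
>> A sketch of those two RandomSequences checks against the classes
>> above, using Node's built-in assert (a real suite would use a test
>> framework and a proper statistical test such as chi-squared):
>>
>> import { strict as assert } from "assert";
>>
>> const sample = RandomSequences.randomAlphaString(100000);
>>
>> // 1) Only whitelisted characters appear.
>> assert.match(sample, /^[A-Za-z]+$/);
>>
>> // 2) Every whitelisted character appears, at roughly the expected
>> // frequency (a crude 20% tolerance for illustration).
>> const counts = new Map<string, number>();
>> for (const ch of sample) counts.set(ch, (counts.get(ch) ?? 0) + 1);
>> const expected = sample.length / 52;
>> for (const [ch, n] of counts) {
>>   assert.ok(Math.abs(n - expected) < expected * 0.2,
>>             `character ${ch} is badly skewed`);
>> }
>> assert.equal(counts.size, 52);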
>>
>>
>>
>> It's very difficult to write code that is defensively architected.
>> You usually wind up with code that many consider 'not pretty' because
>> it relies on a lot of dependency injection.  DI lets you mock objects
>> and write more unit tests.  Unit tests are lovely for making sure you
>> codify your assumptions (e.g. RSA exponents will always be 65537) in a
>> way that causes _shit to break_ if they are violated.  Write lots of
>> unit tests.  Extract out the most security-critical code into separate
>> files, and push changes in those files to people in a
>> persistent-but-not-overwhelming manner, maybe via a
>> cryptocat-security-diffs twitter feed.[0]  Every bug you fix (or at
>> least, every non-UI bug) you write a regression test for. Put
>> preconditions on functions enforced by your compiler and runtime that
>> all strings are passed around in UTF-8 (or 7-bit ASCII until you're
>> ready to try to tackle Unicode).  Put assertions from your unit tests
>> (e.g. the RSA exponent) into the actual class itself.  Fail early and
>> fail often (in the sense of blowing your program up on unexpected
>> input).  Never ever ever have an empty 'catch' block or a switch
>> statement without a default: raise.  And none of that even includes
>> memory unsafe languages =P
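>>
>> A tiny hypothetical sketch of two of those rules - the assumption
>> lives in the class itself, and no switch goes without a raising
>> default:
>>
>> /// The RSA-exponent assumption, enforced at runtime rather than
>> /// only in the unit tests.
>> class RSAPublicKey {
>>   constructor(readonly modulus: bigint, readonly exponent: bigint) {
>>     if (exponent !== 65537n) {
>>       // Fail early: blow the program up on unexpected input.
>>       throw new Error(`unexpected RSA exponent: ${exponent}`);
>>     }
>>   }
>> }
>>
>> type KeyKind = "rsa" | "symmetric";
>>
>> function keyLengthBits(kind: KeyKind): number {
>>   switch (kind) {
>>     case "rsa": return 2048;
>>     case "symmetric": return 256;
>>     default:
>>       // Unreachable today, but if a new KeyKind sneaks in this
>>       // causes _shit to break_ instead of failing silently.
>>       throw new Error(`unhandled key kind: ${kind}`);
>>   }
>> }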
>>
>>
>> In conclusion:
>>
>> No project is bug-free; developers from the best to the worst make
>> critical mistakes.  Offensively architecting a project with a threat model,
>> protocol spec, and peer review of design is critical.  Defensively
>> architecting a project for easy review, deployment, testability, and
>> trying to prevent exploitable conditions at multiple layers of the
>> stack is also critical.
>>
>> If you think this bug could never happen to you or your favorite pet
>> project; if you think there's nothing you can learn from this incident
>> - you haven't thought hard enough about ways it could have been
>> prevented, and thus how you can prevent bugs in your own codebase.
>>
>> -tom
>>
>>
>> [0] https://codereview.crypto.is/ is/was a mostly dormant attempt at
>> feeding security experts sets of changes in their areas of
>> expertise.