Indistinguishability Obfuscation, or how I learned to stop worrying and love DRM

There’s a lot of hype going around about this:

Lots of excitable talk of “perfect security” and other stuff. One of the possible applications is supposedly quantum-resistant public-key crypto. But if you read into it, it’s actually a way of making code resistant to decompilation. So instead of creating a secret number that’s processed by a well-known algorithm, you create a secret algorithm that you can ask other people to run without fear of it being reverse engineered. The “public-key” crypto is thus really shared-secret crypto with the secret sealed inside an obfuscated algorithm.
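The idea can be sketched as a closure. This is a hypothetical illustration only: the key, the toy stream cipher, and the function names are all invented, and a plain Python closure merely stands in for what real indistinguishability obfuscation would do to the distributed code.

```python
import hashlib

def make_public_encryptor(secret_key: bytes):
    # The secret key is sealed inside a function that anyone may call.
    # Under real iO the returned program would be obfuscated so the key
    # cannot be read out of its code; here it is just a closure.
    def encrypt(message: bytes) -> bytes:
        # Toy stream cipher: XOR against a SHA-256 counter keystream.
        stream = b""
        counter = 0
        while len(stream) < len(message):
            block = hashlib.sha256(secret_key + counter.to_bytes(8, "big")).digest()
            stream += block
            counter += 1
        return bytes(m ^ s for m, s in zip(message, stream))
    return encrypt

# The "public key" is the function itself; the secret never leaves the closure.
public_encrypt = make_public_encryptor(b"hypothetical secret")
ciphertext = public_encrypt(b"attack at dawn")
assert public_encrypt(ciphertext) == b"attack at dawn"  # XOR stream: same call inverts
```

Anyone holding `public_encrypt` can produce ciphertexts for the key-holder, which is exactly the shared-secret-dressed-as-public-key arrangement described above.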

In other words, it’s bulletproof DRM. DeCSS is even referenced (obliquely) in one of the articles as a use case.

Of course, this makes it impossible in principle to test code for malicious behaviour. An author could insert a latent trojan that would never be discovered, and obfuscation removes one of the most important properties of security software – auditability of the algorithm. For example, someone could write a rot13 algorithm, call it “encryption”, and the only way to disprove the claim would be to run a statistical analysis on the ciphertext.
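The rot13 case really is detectable from the ciphertext alone. A minimal sketch, using the index of coincidence as the statistical test (the sample sentence and threshold are illustrative choices, not from the original):

```python
import codecs
from collections import Counter

def index_of_coincidence(text: str) -> float:
    # Probability that two randomly chosen letters match: roughly 0.065
    # for English prose, roughly 0.038 for uniformly random letters.
    letters = [c for c in text.lower() if c.isalpha()]
    n = len(letters)
    counts = Counter(letters)
    return sum(f * (f - 1) for f in counts.values()) / (n * (n - 1))

plaintext = "it was the best of times it was the worst of times " * 20
fake_ciphertext = codecs.encode(plaintext, "rot13")

# rot13 merely permutes the alphabet, so letter frequencies (and hence
# the index of coincidence) survive intact, exposing the "encryption".
assert index_of_coincidence(fake_ciphertext) == index_of_coincidence(plaintext)
assert index_of_coincidence(fake_ciphertext) > 0.06  # far above the random baseline
```

A genuinely encrypted stream would score near the random baseline; the point stands that this is a far weaker guarantee than simply reading the algorithm.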

So the question becomes – why would anyone allow iO programs to run on their systems? Virus scanners would be useless in principle. Performance, even in the most optimistic case, would be dreadful. And it doesn’t do anything for the end user that can’t be achieved by traditional crypto (barring the development of a quantum factoriser, and even that is not yet certain). No, the only people who gain are the ones who want to prevent the next DeCSS.

Web of Trust vs Certificate Authorities – a hybrid approach

The only thing in engineering worse than a single point of failure is multiple single points of failure. A SPOF is a component that the entire system depends upon, and which can thus bring down the entire system single-handedly. MSPOFs are a collection of components, any one of which can bring down the entire system single-handedly. The X.509 certificate architecture is such a system.

The job of a public key encryption system is to transform the task of securely sharing secret encryption keys into one of reliably verifying publicly available certificates. Any form of reliable verification must necessarily involve out-of-band confirmation of the expected data – if an attacker has compromised our communications channel, then any confirmation could be faked just as easily as the original communication. Out-of-band confirmation requires an attacker to compromise multiple independent communications channels simultaneously – this is the rationale behind sending confirmation codes to mobile phones, for example.

The competing verification models

Public key encryption systems typically rely on one of two out-of-band methods – an Authority or a Web of Trust. An Authority is a person or organisation that is assumed to be both well-known and trustworthy. If a chain of signatures can be established from an Authority to the certificate in question, then the trustworthiness of that certificate is assumed. By contrast, a Web of Trust requires a chain of signatures to be established from each user to the certificate in question – each user acting as his own Authority.
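The chain check itself can be sketched roughly as follows. This is a toy model: HMAC stands in for a real asymmetric signature, and every name and key here is invented for illustration.

```python
import hmac, hashlib

def sign(key: bytes, data: bytes) -> bytes:
    # Stand-in for a real digital signature.
    return hmac.new(key, data, hashlib.sha256).digest()

# A "certificate" is (subject, subject_key, signature-by-issuer).
root_key = b"root authority key"
ca_key = b"intermediate key"
site_key = b"example site key"

ca_cert = (b"Intermediate CA", ca_key, sign(root_key, b"Intermediate CA" + ca_key))
site_cert = (b"example.org", site_key, sign(ca_key, b"example.org" + site_key))

def verify_chain(chain, trusted_key):
    # Walk from the trust anchor down, checking each link's signature
    # before trusting that link's key to vouch for the next one.
    key = trusted_key
    for subject, subject_key, sig in chain:
        if not hmac.compare_digest(sig, sign(key, subject + subject_key)):
            return False
        key = subject_key
    return True

assert verify_chain([ca_cert, site_cert], root_key)
assert not verify_chain([site_cert], root_key)  # no chain back to the root
```

The only structural difference between the two models is where `trusted_key` comes from: an Authority shipped with your software, or your own key at the edge of a Web of Trust.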

The out-of-band confirmation of an Authority relies on the shrinkwrap software model – assuming that your software has been delivered via a trustworthy method, any Authority certificates included with it are by implication verified. A Web of Trust typically relies on personal knowledge – users are assumed to have signed only the certificates of people they know personally or whose identity they have otherwise verified offline. In this case, “out of band” means “face to face”.

In addition, some PKI systems allow for multiple signature chains to be defined – this is typical in Web of Trust models where the intermediate certificates are usually controlled by ordinary users whose reliability may be questionable. Multiple chains mitigate this risk by providing extra confirmation pathways which collectively provide greater assurance than any single path.
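The benefit of extra pathways is easy to quantify under a simplifying assumption (not stated in the original) that chains fail independently: an attacker must subvert all of them at once.

```python
def residual_risk(p: float, chains: int) -> float:
    # Probability that every one of `chains` independent signature chains
    # is compromised, if each fails independently with probability p.
    return p ** chains

# One dubious chain at p = 0.1 leaves a 10% risk; three independent
# chains drive it down to 0.1%.
assert residual_risk(0.1, 1) == 0.1
assert abs(residual_risk(0.1, 3) - 0.001) < 1e-12
```

Real chains share intermediaries and so are not fully independent, but the exponential shape of the improvement is why even questionable signers add value in aggregate.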

In a Web of Trust model, each user is responsible for cultivating and maintaining his own outgoing signature chains. This means continually assessing the reliability of the downstream certificates, which even with the help of software tools requires a nontrivial amount of work. The reward for this effort is that the reliability of a well-maintained Web of Trust can be made arbitrarily high as more independent chains are established. An Authority model also requires a maintenance overhead, but Authorities are typically large organisations with well-paid staff, and the certificate chains are much shorter. Authorities also tend to use automated verification methods (such as emails) which can be used by an attacker to escalate from one form of compromise to another.

An Authority is thus more easily attacked than a well-maintained Web of Trust, but less easily attacked than a badly-maintained one, and more convenient for the end user.

Why X509 is broken, and how to fix it

X.509 has several design weaknesses:

  1. CA distribution is done via shrinkwrap software – but since almost all software is now itself distributed over the internet, this is easily subverted.
  2. Only one certificate chain may be established for any certificate – multiple incoming signatures are not supported, so every chain has a single point of failure.
  3. All CAs have the authority to sign any certificate they choose – therefore if one CA certificate is compromised the attacker can impersonate any site on the internet. Thus every certificate verification has multiple single points of failure.

The first two flaws can be addressed by incorporating features from the Web of Trust model:

  1. Web of Trust for Authorities – browser vendors and CAs should agree to sign each other’s signing certificates using an established WoT, thus guarding against the distribution of rogue certs. Signing certificates not verifiable via the WoT should be ignored by client software, and technical end users can verify the integrity of the entire CA set using established methods.
  2. Multiple signature chains – each public site on the internet should be expected to provide multiple independently signed copies of the same certificate. A site providing only one signature should be treated as insecure by client software.

The third flaw can be addressed by limiting the validity of individual CA certificates. Each signing certificate should contain a field restricting its signatures to a subset of the internet using a b-tree method. A given CA certificate would state that it may only be used to verify leaf certificates whose names, when hashed using SHA-256, match a particular value in their N least-significant bits, where N increases as we travel down the signature chain. Client software should invalidate any signatures made outside the stated b-tree bucket, and deprecate any signing certificate whose bucket is larger than an agreed standard.
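A minimal sketch of the bucket check, assuming a hypothetical restriction field – no such X.509 extension exists today, and the function name and layout are invented:

```python
import hashlib

def may_sign(ca_bucket: int, ca_bits: int, leaf_name: str) -> bool:
    # A CA restricted to `ca_bucket` may only sign leaf certificates whose
    # SHA-256 name hash matches that bucket in its `ca_bits`
    # least-significant bits.
    h = int.from_bytes(hashlib.sha256(leaf_name.encode()).digest(), "big")
    return h & ((1 << ca_bits) - 1) == ca_bucket

# A CA deeper in the chain would carry a larger N, i.e. a narrower bucket.
name = "example.org"
digest = int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")
assert may_sign(digest & 0b1111, 4, name)            # matching 4-bit bucket
assert not may_sign((digest & 0b1111) ^ 1, 4, name)  # wrong bucket: rejected
```

With N = 4, a compromised CA key can impersonate only the one-sixteenth of all names that hash into its bucket, rather than the entire internet.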

(It has been suggested that CAs be geographically restricted, but it is unclear how this could be enforced for non-geographic domains such as .com, and it may conflict with the solution to problem 2.)

With these improvements, the Authority model would organically gain many of the strengths of the Web of Trust model, but without imposing a significant burden upon end users.