There’s a lot of hype going around about this:
https://www.quantamagazine.org/20150902-indistinguishability-obfuscation-cryptographys-black-box/
https://www.quantamagazine.org/20140130-perfecting-the-art-of-sensible-nonsense/
Lots of excitable talk of “perfect security” and other stuff. One of the possible applications is supposedly quantum-resistant public-key crypto. But if you dig into it, it’s actually a way of making code resistant to decompilation. So instead of creating a secret number that’s processed by a well-known algorithm, you create a secret algorithm that you can ask other people to run without fear of it being reverse engineered. The “public-key” crypto is therefore really shared-secret crypto with the secret sealed inside an obfuscated algorithm.
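To make that concrete, here’s a toy sketch in Python. This has nothing to do with an actual IO construction (which is vastly more involved); the function names and the throwaway stream cipher are mine, purely for illustration of the “key baked into the published function” idea:

```python
import hashlib
import hmac
import os

SECRET_KEY = os.urandom(32)  # the shared secret

def make_encryptor(key: bytes):
    """Return an encryption function with the key baked into it."""
    def encrypt(message: bytes) -> bytes:
        # Toy stream cipher: keystream blocks are HMAC(key, nonce || counter),
        # XORed with the message. Fine for illustration, not a vetted design.
        nonce = os.urandom(16)
        stream = b""
        counter = 0
        while len(stream) < len(message):
            stream += hmac.new(key, nonce + counter.to_bytes(8, "big"),
                               hashlib.sha256).digest()
            counter += 1
        return nonce + bytes(m ^ s for m, s in zip(message, stream))
    return encrypt

# Conventional shared-secret crypto: publish the algorithm, keep SECRET_KEY private.
# The IO pitch: hand out `public_encrypt` itself, and rely on the obfuscation to
# keep the key unrecoverable from the code you published.
public_encrypt = make_encryptor(SECRET_KEY)
ciphertext = public_encrypt(b"hello world")
```

Obviously a plain closure like this hides nothing; anyone can read SECRET_KEY straight out of it. The entire (enormous) machinery of IO exists to make the handed-out function genuinely opaque.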
In other words, it’s bulletproof DRM. DeCSS is even referenced (obliquely) in one of the articles as a use case.
Of course, this makes it impossible in principle to test code for malicious behaviour. You could insert a latent trojan and it would never be discovered, and it removes one of the most important features of security software: auditability of the algorithm. For example, someone could write a rot13 routine and call it “encryption”, and the only way to (dis)prove the claim would be to run a statistical analysis on the ciphertext.
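Here’s roughly what that black-box check looks like. The rot13 “cipher” below is just a stand-in for whatever sits inside the obfuscated blob, and the test is a simple chi-squared comparison of letter frequencies against a uniform distribution:

```python
import codecs
import random
import string
from collections import Counter

def rot13_encrypt(plaintext: str) -> str:
    # Stand-in for whatever the obfuscated blob actually does.
    return codecs.encode(plaintext, "rot_13")

def chi_squared_vs_uniform(text: str) -> float:
    """Chi-squared statistic of the letter distribution against a flat one.
    rot13 only relabels letters, so English's skewed frequencies survive it;
    output of a decent cipher should score close to uniform."""
    letters = [c for c in text.lower() if c.isalpha()]
    counts = Counter(letters)
    expected = len(letters) / 26
    return sum((counts.get(chr(ord("a") + i), 0) - expected) ** 2 / expected
               for i in range(26))

sample = "the quick brown fox jumps over the lazy dog " * 50
noise = "".join(random.choice(string.ascii_lowercase) for _ in range(2000))

print(chi_squared_vs_uniform(rot13_encrypt(sample)))  # large: clearly not uniform
print(chi_squared_vs_uniform(noise))                  # small: roughly uniform
```

Real ciphertext should look statistically flat; rot13 just relabels the letters, so English’s lopsided frequencies show straight through. That’s the point, though: you’d be reduced to this kind of external probing, because reading the algorithm is off the table.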
So the question becomes: why would anyone allow IO programs to run on their systems? Virus scanners would be useless in principle. Performance, even in the most optimistic case, would be dreadful. And it doesn’t do anything for the end user that can’t be achieved by traditional crypto (barring the development of a quantum factoriser, and even that is not yet certain). No, the only people who gain are the ones who want to prevent the next DeCSS.