Stanford Javascript Crypto Library (stanford.edu)
147 points by austengary on Aug 29, 2013 | 86 comments


Obligatory quote from http://www.matasano.com/articles/javascript-cryptography/

WHAT ABOUT THINGS LIKE SJCL, THE STANFORD CRYPTO LIBRARY? SJCL is great work, but you can't use it securely in a browser for all the reasons we've given in this document.

SJCL is also practically the only example of a trustworthy crypto library written in Javascript, and it's extremely young.

The authors of SJCL themselves say, "Unfortunately, this is not as great as in desktop applications because it is not feasible to completely protect against code injection, malicious servers and side-channel attacks." That last example is a killer: what they're really saying is, "we don't know enough about Javascript runtimes to know whether we can securely host cryptography on them". Again, that's painful-but-tolerable in a server-side application, where you can always call out to native code as a workaround. It's death to a browser.


You may be interested in the Defensive JS (http://www.defensivejs.com/) project, which seeks to securely isolate JavaScript code from malicious JavaScript injected on the page. It also provides a verified crypto library implementation.

Combine this with HTTPS and I think doing the crypto on the client is certainly feasible (and has been implemented by Mega (well, they messed up a few things, but that was an issue in their use of crypto, not the crypto implementation) and 0bin).

EDIT: Also, the site you are quoting is explicitly complaining about people using JavaScript security but NOT over HTTPS, and they claim that there is no advantage to using JavaScript crypto when you are using TLS anyway. This is wrong, because using both means that the web service we are using (for example) never knows our plaintext password, so they can't attack us under the assumption of password reuse, like in http://xkcd.com/792/


>This is wrong, because using both means that the web service we are using (for example) never knows our plaintext password, so they can't attack us under the assumption of password reuse, like in http://xkcd.com/792/

TLS doesn't protect you against a malicious site that is collecting passwords. Even if you were to examine the javascript code to verify that it isn't sending the plaintext[1], they could send a different chunk of code any time you access the site in the future. Either because the site is malicious -- as in the above example -- or because it has been compromised (whether that be by skiddies or three letter agencies with legal papers).

The only 'benefit' javascript crypto gives you is that it makes it easier for people to develop apps where the user's data is encrypted before it is sent to the server (such that the server can never decrypt it). However, doing this in a javascript web app totally negates that, since the server can just send a compromised chunk of js any time it feels like. So the additional security to the user is basically zero.

If you want to seriously create a service like this don't use javascript inside the browser. Do what Tarsnap does: provide an open source native client that does not automatically update.

[1] and let's not pretend that modern javascript is at all readable, in the age of minification and asm.js


Assuming that the web service is doing proper client-side authentication most of the time (which, with enough effort, can be verified), and then gets coerced into sending compromised JS at some point in the future - we still have better security than the alternative of no client-side security at all. Sure, the increased chance of a malicious update is a real threat, but it is better than that attack not needing to be performed at all.

A further incremental improvement could be made using a Mega-style root of trust (ironically, an article explaining how they messed up the implementation explains it best: http://fail0verflow.com/blog/2013/megafail.html). Only the initial loader page (with embedded hashes and MACs) needs to be checked for changes on each page load. If you want to be immune from a later malicious update, you just have to manually save the loader page and open it instead of re-requesting it from Mega. Yes, during an update they can stop serving the old scripts and deny access, but you will at least always know when an update has occurred.


You must always trust something. In the case of a SpiderOak-like service, you must trust one of their client implementations to use it. Whether it is their compiled binary (which isn't even open source[0] "yet") or a javascript client in the browser delivered securely. Even in the case of tarsnap, with open source implementations that don't self-update, you must trust your own code review or somebody else's -- not a trivial task.

[0]. https://spideroak.com/faq/questions/35/why_isnt_spideroak_op...


The point anon1385 is trying to make about JavaScript clients is that they can be changed at any time by the web service provider, whereas open source clients can be 'verified' once by the open source community and then can't be easily changed by the service provider (unless they have an auto-updater, which Tarsnap does not).


Ah, very good point. And the provider can send uniquely compromised versions to individuals to reduce their chance of detection, as well.


Hushmail.


It's almost like a punchline these days, innit?


Converting a weak encryption scenario into a Hushmail scenario seems, in a perverse way, an indication of progress.


Yes, one must always trust something[1], but the impression I get is that people want to use JS crypto specifically so that users don't need to trust the server with their data. But crypto in a JS webapp can't provide that assurance.

I struggle to see what the point of it is[2]. You are already (hopefully) sending all the user data securely over the wire. In a current app, the user needs to trust the server with their plain-text data. With a JS crypto app, the user needs to trust the server to provide the crypto code that encrypts their plain text. Either way the user has to trust the server, and once the server is compromised their data is trivially retrievable. At best it would prevent your data being seen until you log in. If all you are interested in is data storage that is secure because nobody ever retrieves the data, then I can offer you a great price on such a service.

[1] http://cm.bell-labs.com/who/ken/trust.html

[2] other than attempts to provide plausible deniability for legal reasons


I've got an odd use-case where JS crypto _might_ be a sensible choice…

A piece of hardware with an embedded Linux machine and wifi capabilities – intended as a piece of consumer electronics(1) – where the main user interface is an HTML5 webapp intended to run on a phone (/tablet/computer). The hardware will either boot up and establish a wifi access point that you connect your phone to, or you'll be able to configure it to connect to your home (/office/shop/venue) wifi network, where it'll typically live behind your NAT gateway on a 10.x.x.x or 192.168.x.x network. In either of those two configurations, it's difficult to ensure TLS-secured communication between the phone and the hardware – where by "difficult" I mean "without big scary 'YOU'RE ABOUT TO CONNECT TO AN UNTRUSTED SERVER' warnings baked into browsers when they hit TLS certs without proper root-CA-trusted signatures", for a bit of inexpensive consumer electronics aimed at your Mom or grandparents. Accepting self-signed certs or installing self-generated trusted keys is a difficult UX problem, and buying real trusted-CA-signed certs for production runs of 10,000 is expensive (and it's impossible to then secure those private keys against anyone with a screwdriver).

I'm currently deciding between relying on WPA2 encryption and forbidding access to things requiring sensitive data (passwords) if the wifi isn't encrypted, or using a modified version of OAuth2 and relying on a cloud service that I can TLS-secure with a CA-signed cert. (OAuth2 assumes you'll TLS-secure all the redirects, but I can't use TLS for the phone/hardware segment as described above. I've got an "I hope not too screwball" workaround involving generating public/private keys on the hardware, sending the public key to the cloud server, doing the OAuth there, and, instead of sending the auth token back over a TLS-secured channel, encrypting it to the public key first and then sending it over regular old HTTP.) I'm now wondering if JS crypto might solve that part of my problem…

(1) in my case, intelligent Christmas tree lights – http://holiday.moorescloud.com for the curious, and http://dev.moorescloud.com for the (open sourced) hardware and software details.


> It also provides a verified crypto library implementation.

Verified by who?


Perhaps the key to crypto in the browser would be to implement it in extensions rather than on webpages? Extensions run in a higher level of security clearance, are sandboxed from the rest of the page, and are much more intolerant of code injection.


Downloading and executing code on the fly, as the web browser does, is a broken design.

Browsers have been patched for a long time now to fit ever more sophisticated models; we got here with duct tape. But they can't be patched any further to reach the next level, because they're broken by design.

How long until people face reality?



Built into the browser would be even better (like SSL).


What if the sites used SJCL by default, but urged the user to install an official browser plugin that detects the presence of SJCL on a page and substitutes the browser's own copy of SJCL, always trusting it over any copy sent over the wire. It's not perfect, but at least it gives users a stepping stone to secure communications.

The biggest problem with getting users to adopt crypto is giving them stepping stones to something better, and giving them subtle explanations of the risks of less involved approaches. I understand the fear of giving the user a false sense of security. We should try to mitigate doing so while still giving a path to adopting best crypto practices over time. Wholesale adoption of crypto by society is only going to happen when people promoting crypto realize that crypto, like everything else competing for the attention of users, has a conversion funnel.


Then instead of backdooring the sjcl, the malware would just steal the plaintext out of the DOM or hook into keystroke handlers.

The problem is JavaScript.


Steve Jobs is turning in his grave... you just suggested the p word


It's useful if you store things in local storage.


Matasano are experts here; I'm not. But here's my argument:

There are parts of their post that I don't like:

"If you don't trust the network to deliver a password, or, worse, don't trust the server not to keep user secrets, you can't trust them to deliver security code."

"How can you do that without SSL? And if you have SSL, why do you need Javascript crypto? Just use the SSL."

I wonder about the implicit threat model. It's too black-and-white for me. In it, you either trust the server or you don't.

But security and privacy, for me, for large numbers of users, is all tradeoffs and games, shades of grey.

Consider: maybe our computers are uploading the private files of everyone who runs Windows to Microsoft. I don't know for sure, but I'd gamble that they aren't. I don't think MS would take the risk of some security researcher catching the anomalous traffic on the network. The threat of the resulting PR storm helps keep MS honest, at that scale. So I'd expect MS to try pretty hard to fight a government leaning on them to do that.

Let's say that, in the future, all data on the world's most popular social network is encrypted client-side. My ID is just a public key, and my browser encrypts all my traffic, using a JS package the social network securely delivers to me. (Over SSL. With great care.)

That'd be a similar scenario to MS. I'd be trusting them when I enter my password into the JS they sent me, that is running in my browser. But, again, I'd figure that if they were injecting JS to steal my password, or subtly poison my RNG, and if they were doing that to everyone, probably some security researcher would call them on it. And so, they'd be incentivised not to.

That's a fundamentally different situation to the current one, where if Facebook gives all our data away, it's pretty hard for someone outside their organisation to find out. They could probably gamble on keeping it secret.

So that's why I think the 'Javascript Considered Harmful' post is too strong, in its "you either trust the server or you don't" attitude.

I also think that we aren't going to get anyone using crypto unless it's delivered seamlessly in the browser. There has been good crypto available for years (e.g. GPG on the desktop plugged into Thunderbird). But I think we've learned that if it's even slightly harder to use, then end users are going to ignore it.

And the way to deliver seamless ease-of-use to a wide audience is on the web. And we badly need easy-to-use privacy these days. So I think it'd be a shame if posts like that discouraged research into JS crypto.

(I can think of other arguments:

e.g. where you want to trust a server now (e.g. to encrypt and back up a document) but decide not to trust it later (decide to abandon the document because, in the years since you uploaded it, you think the server/company is compromised - or you ask to be e-mailed the ciphertext)

or regulatory differences: I don't think it's the same thing legally to demand or steal a copy of someone's server-side data, and any server-side keys, as it is to demand that the javascript be changed to snoop future passwords as users enter them into their browser application?)

Again, I'm no expert in this area; corrections welcome.


This is where you lose me:

"...I'd figure that if they were injecting JS to steal my password, or subtly poison my RNG, and if they were doing that to everyone..."

To everyone — maybe that would get found out, maybe it wouldn't. But it doesn't have to be done to everyone. That backdoor in the JS could be served to only you. Maybe only certain times of day. How are you going to know?


If it's targeted just at you, it's a different class of attack. Of course, no technology can protect against all classes of attack.

I'm just saying that there are situations where JS crypto might be useful, such as increasing the difficulty of conducting mass surveillance in secret.

Secret mass surveillance being, of course, a big topic recently.

What people need is not security that defends perfectly against every threat in theory, but no one uses in practice.


We are all open to that. Google, MS and Apple send us software updates that have root privileges on our computers. They hold the keys to our digital houses. We allow them to enter and fix our plumbing and electrical wiring almost every week, and rarely can see or understand what they "fixed". We are with our pants down.


And it's pants all the way down! Can we even trust our chips? But we stagger on.


Anyone interested in signed JavaScript?

Initiatives like this are great. However, I'm most interested in signed JavaScript. I'm surprised that there isn't more of a discussion going on about this, since JavaScript crypto is near worthless if it's served from an untrusted server.

For example, let's say that you have an application that uses client-side crypto in JavaScript. Then let's assume that the server (that serves up the client-side app) is hacked and the client-side application is modified to send your private keys back to the hacked server. There is currently no way you'd know, as the consumer of that client-side application. If signed JavaScript existed, then browsers could alert you that the JavaScript you're running has been modified and doesn't match its signature, and refuse to execute it.


It occurred to me the other day that it should be pretty simple to write a script that gets loaded first on the page and removes all subsequent scripts, then loads them itself, checking the MD5/SHA1 of the script against a known good value, stored in that script's attributes.

    <script src="scriptloader.js"></script>
    <script src="jquery.min.js" data-md5="a1b2..."></script>
Then you could decide to not load scripts that do not match the correct hash. It could even ping the server to alert it to broken scripts.


How would this protect you against a malicious server?


If the server serving the HTML is pwnd, then it doesn't, but it doesn't really matter then, either.

This protects against any external scripts being unexpectedly modified, e.g. someone MITM your jquery source.


I guess that's true, but in practice XSS is a much bigger threat to your application than someone MITM'ing an HTTPS connection to Google's (or whomever's) CDN.


If you don't allow external scripts to be modified, why host them externally at all? Why not just wget them and host them locally alongside the checksum document and skip all this silliness?

Oh, also, those scripts can themselves load in other scripts you haven't checksummed.

This is madness you're suggesting.


>> Why not just wget them and host them locally

Because you might be using a CDN for performance and bandwidth benefits.

If you're relying on a third-party piece of code that you allow to change at any time, it is very difficult to do any release testing to give you a known set of conditions your application should work under.


In what scenario do you want external scripts to be modified? Why not take advantage of their ability to serve the scripts while also verifying that they are the same scripts you expected to have? You can also verify that those scripts do not load any other scripts in the version you have. Then, if it's changed later to load more scripts, you'll know about it.

How is checking the validity of the scripts that run on your site madness??


It's... kinda madness. Just to be clear that we're talking about the same thing, here's the proposed process as I understand it:

1) load your loader script, which has the URLs and fingerprints of the scripts you want to run, plus the necessary dependency information (jquery-ui must load after jquery, for example). In the best case, this file is served by the same server that's hosting the HTML, so that at least you're not adding more attack vectors.

2) from the loader script, initiate ajax requests for each of the remote files you need

3) as you get each one back, validate that its signature matches the expected one, raise an exception if it does not (ideally also displaying something to the user), and evaluate it if its signature matches and all of its dependencies have loaded.

So, why is this madness?

1) Most of the time the reason that you're letting a third party host these files is speed. They've got a CDN, and hopefully the file will already be cached by your user. Grabbing resources with javascript that you could load directly in the html will slow down your page's loading time, as the browser's html parser isn't able to look ahead and fetch resources that are likely to be needed before the renderer has asked. (HTML has a defined rendering order that can be kinda strict sometimes; this is the same reason why you don't put your <script> elements in the <head>.)

2) Another reason for using a CDN for your JS libraries is convenience, which this process also wipes out.

3) The whole thing won't work at all unless the third party server sends back cooperative CORS headers, as you can't do an ajax request to a third party site without their cooperation.

Finally though – and this is the big one – it's more convenient for the developers, strictly safer, and faster for the end user if you just compile all of the JS and serve it from the same domain that's serving your HTML. As stated above, if that server is compromised, you're toast anyway (barring a browser extension or similar). If you really want more security, look into SSL (and actually look into it; there are definitely much better and much worse ways of doing it).


Most of what you're describing as "madness" is already done in head.js and require. They have no particular speed penalties and handle dependencies better than just putting script tags in the right order. The one difference is that a system like this would check the hashes to verify the code.

The one possible catch, as you mention, would be getting access to these scripts before they are loaded without having cross-origin problems.

There are a number of problems with serving from your own domain. It is, in fact, much less convenient for developers, as it adds an extra step to the build process and requires the system to properly handle caching so that old resources are not still served after a build. It is also slower to serve from the same domain as there are connection limits. Lastly, it gives up all advantages of a CDN.

My proposal is an attempt to continue taking advantage of CDNs and third-party resources, but without giving them the keys to your site. Did you ever consider that Google has access to all of your users' cookies, if they wanted to add a small modification to jQuery or Analytics? Considering recent revelations about government involvement, is it really out of the question that they would take that information?


head.js and require do have significant speed penalties unless you're just using them for tracking dependencies and for developing locally. It may be the right tradeoff of effort vs performance for some projects to leave this enabled even in production, but there's nothing to gain in denying the huge performance boost you're leaving on the table by not compiling your js.

I'll try to extract the core of my argument. The hashing proposal is madness for two reasons: 1) it's slower and less secure than just serving all of your js in one file; 2) it will not actually work without the cooperation of a CDN.

1) The proposal requires you to have one trusted server that you're serving javascript resources out of (because you need to load the script loader and fingerprints from there). If you want fast and secure, you've already paid the cost of a round trip to server #1, and the risk of trusting server #1. The sane thing to do from a performance and security standpoint is to load all of the javascript that you can in that request. Otherwise you're going to be blocking on that request returning, then the renderer reaching that script's location in the html, then that script being executed before it fires off the requests.

2) I'll phrase this as a challenge. Try to load jquery from a CDN with an ajax request. Remember, the key is to get the source of the script into memory without executing it, so that you can hash and validate it first. Feel free to try it right now in your developer console, I'll even give you a code snippet to start from:

  var url = '//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js';
  var request = new XMLHttpRequest();
  request.open('GET', url);
  request.send();


There needs to be both signed javascript and signed native code plugins for javascript (similar to JNI). Both of these require significant leadership because of the fragmented nature of existing implementations. The upside is the ability to deliver native crypto libraries and other plugins that remove the potential for having their internals introspected or MITMed.

The protections against malicious code have to be there, so the risk would be the same order of magnitude as installing any other browser plugin.


If you don't trust the server, you're screwed anyway.


Consider CDNs and static content domains. It's generally easier to break or MITM a CDN or static content server without people noticing. This way, you could trust that the code you received was what you were expecting.


Maybe, maybe not. But if you rely only on the client-side features of the app and code signing is employed, you could safely use the app without fear of your private data being stolen (assuming the original code doesn't send private data to the server).


Well, let me explain my thinking.

Suppose we sign https://server.com/sjcl.js. That resource is now tamper-proof in the event that server.com is compromised. But what about the rest of the application HTML/js? For example, an attacker could just say:

    <script src="/sjcl.js"></script>
    <script>sjcl = ...</script>
And overwrite the library on the client side.

Ok, suppose the browser support for content-signing prevents this somehow. The attacker can still modify the application-level code to use the library unsafely. I'm not totally familiar with the API sjcl exposes, but consider something like the e=1 RSA bug that was found recently.

Even if sjcl's API is so great that it's impossible to misuse, an attacker could simply serve application code that doesn't use it at all. Or he could just log keystrokes and phone home the pre-encryption content.

What if we sign everything, including application HTML/js? Now we have another problem. We've basically resigned ourselves to serving only static content. We can't render any user input, because this would require a new signature. Probably not a useful web app.


A single page web app might be statically delivered and useful... Further, I think it just means that the page that does the encrypt/decrypt has to be statically delivered, not the whole app.

EDIT: so..if you had a single page web app, signed or even checksum'd the whole hunk in a known good state, couldn't you then trust the execution of the app? Further if you had an external service (as I propose in a comment below) that validated the signature/checksum couldn't you then trust the whole package?


Yes, however isn't this sort of what HTTPS is supposed to accomplish?


Not as far as I know... I could run a server that serves up HTML/JS and if that JS was modified by an attacker who hacked the server, consumers of my HTML/JS wouldn't know regardless of whether it's served up on HTTPS or not. Please correct me if I'm wrong.


If the server is compromised you're SOL.


That is not true if you're using code signing. If someone compromises the server, they can overwrite the files, but they cannot forge code signing without additional access to code signing private keys.

Security is achieved through layers. No single layer can protect you from everything, but we stack defenses in the hope that an attacker will encounter a roadblock they cannot pass.


Exactly. That's why I'm advocating for a signed JavaScript/HTML/webapp standard of some sort.


HTTPS just tells you that you are talking to the 'right' server. It doesn't tell you anything about the validity of the javascript that the server is sending you.


In both cases you're using a private key, which you need to protect, to authenticate you are who you say you are. I suppose the difference is a JS bundle could be signed offline and uploaded to a potentially insecure server/CDN/app store/whatever.


The fact that a private key is used doesn't mean HTTPS and code signing are the same.

HTTPS says, "The communications you send to the server cannot be intercepted and changed between the client and server, and the identity of the server you're connecting to has been verified by one of your trusted certificate authorities."

HTTPS promises that your communications cannot be snooped on, and the person you're talking to is verified by a mutually agreed upon third-party.

Code signing does something entirely different. Code signing says, "The code you're about to run was written by the developer specified in the signing certificate, and has not been modified since it was signed. The identity of the developer has been verified by one of your trusted certificate authorities."

So the practical difference is this:

Say your browser requests https://domain.com/random.js. HTTPS ensures that you're connecting to domain.com, and that your communications won't be changed or observed in transit. However, it does not guarantee that some malicious third party didn't overwrite random.js on the server. With code signing, you can accomplish that: an author can "sign" a code package so that any alteration to the code itself would invalidate the signature.


Signed by whom? Signatures distributed how?


In a Bitcoin-related open-source project I'm currently working on (still in early alpha stages), I'll be using a browser extension that verifies the response using offline signatures (with a way to verify the public key using the Bitcoin network) and compares it against builds from the GitHub repository (using Travis CI).

Here's some explanation from the website FAQ:

        #### Browser extension
        Our browser extension provides improved security by verifying the integrity
        of the files served by the server. The verification is done using two factors:

        - **Cold storage signature verification:**
          In addition to SSL, static files (html/css/javascript)
          are signed using standard Bitcoin message signatures on an offline machine
          (the private key was created on that machine, and has no other copies)
          and appended to the response body as a comment.

        - **Comparing against the code on GitHub repository:**
          The source code from the GitHub repository is built on Travis-CI,
          and the resulting hashes are published publicly on Travis's job page.
          The extension compares the web server response against those hashes.

        If a potential attacker gains control over the web server, he still only has
        access to information the web server already has (which is very little).
        To get sensitive information, he would have to modify the client-side code
        to send back more data to the server.

        For an attacker to successfully mount such an attack against someone with the
        browser extension, he would have to:

        1. Gain access to the web server
        1. Gain access to the personal computer of a developer with commit access
           to the Github repository
        1. Commit his changes to the public GitHub repository, where they
           can be seen by anyone
        1. Gain **physical access** to the offline machine with the private key

        For users without the extension, he would only have to do the first step.
        It is highly recommended to install the extension.

        <a class="btn btn-primary" href="TODO">Install extension</a>

        #### Public key verification
        To prevent an attacker from modifying our published Bitcoin public key,
        it's permanently embedded into the Bitcoin blockchain in a way that is
        [nearly impossible](https://en.bitcoin.it/wiki/Weaknesses#Attacker_has_a_lot_of_computing_power)
        to modify (and becomes exponentially more difficult as time goes by).

        The public key can be verified by taking the following procedure:

        1. Take the SHA256 of the domain name ("****.com")
        2. Create a Bitcoin address using that hash as the private key
        3. Find the **first** transaction with that address as its *output address*
        4. The *input address* of that transaction is our public key

        If it's ever required to change the public key, the announcement
        will be signed with the old public key.


If the maintainer is reading this, I submitted a pull request that significantly speeds up CCM encryption by using arraybuffers. I'd love to see it merged in: https://github.com/bitwiseshiftleft/sjcl/pull/89


The W3C have a draft API for hosted JavaScript cryptography:

http://www.w3.org/TR/WebCryptoAPI/

Netflix, of all people, have implemented a flavour of this at least once for Chrome:

https://github.com/Netflix/NfWebCrypto


With the recent revelations that private internet companies (ISPs) are colluding with the NSA, I very much doubt the security of certificates issued by a "certificate authority". I really like the idea of Secure Remote Password (SRP), which uses a Diffie–Hellman-like key exchange instead of relying on third-party certificates. The main difficulties I see with SRP adoption are:

1) Not all browsers natively support SRP, though this is changing. You therefore need a good JavaScript library for interfacing with the server. This of course leads to:

2) No trustworthy JavaScript crypto library exists, with the possible exception of the incomplete SJCL. The biggest problem current JS libraries have is:

3) Generating random numbers. Because there is no cryptographically secure rand() implementation in JavaScript, the solution I've seen is to use mouse movement or other user input to generate random numbers. The problem with this is that it takes ~30 seconds of random movement from the user to "seed" the generator! One interesting method I've thought about is using http://www.fourmilab.ch/hotbits/ to retrieve random numbers, but this just leads back to depending on a third party for secure communications.

I think an efficient, cryptographically secure pseudorandom number generator is the biggest deficit of JS-based crypto tools.


I'm not sure how to engage with the idea that SRP is a viable replacement for certificate authentication; it only works when the client and server have a pre-shared key.

I very much do not trust certificate authorities, but observe that you don't have to trust certificate authorities to make the security architecture of TLS work. Already, CA compromises have a minimized impact on properties like Google Mail, whose certificates are pinned in Chrome and Firefox. Soon, all properties will get the same privilege, when we adopt schemes like TACK that allow dynamic certificate pinning.

As soon as a critical mass of browsers support dynamic pinning, it will become drastically less profitable to target CAs, because attempts to present forged certificates to Internet users en masse will quickly be detected.


> it only works when the client and server have a pre-shared key

What "pre-shared key" are you referring to in SRP? The only a priori value needed for SRP is the safe prime (N) and generator (g).


The password.


This doesn't necessarily solve your problem, but new versions of Firefox, Chrome, Opera, and IE all provide CSPRNGs. It's experimental, but it's been available in Firefox for several versions. So if you have a modern browser, just open your console window, and try: window.crypto.getRandomValues(typedArray)

On mobile, this random number generator is unfortunately not available in Android's stock browser, although it is available in Chrome for Android, Firefox Mobile, and Safari on iOS.

There's also a full browser-based Cryptography API in the works. You can see the draft here: http://www.w3.org/TR/WebCryptoAPI/

Once all of this is implemented, the chicken-and-egg problem is solved, since the browser will then have native crypto primitives available. At that point, the main argument I see against browser-based crypto in JavaScript is the malleability of the JavaScript runtime. And if you believe that's an intractable problem, then you should probably reconsider the use of any language for crypto that can be monkey-patched, including common server-side languages like Python, Ruby, and to a certain extent PHP.
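The malleability concern is easy to demonstrate: any script running in the same environment can silently replace a global primitive. `Math.random` is used here for brevity, but the same trick works on `window.crypto.getRandomValues` or on any function a crypto library exports.

```javascript
// Any script sharing the JS environment can replace a primitive.
const original = Math.random;

// A malicious script injected into the page could do this:
Math.random = () => 0.5; // "random" is now fully predictable

console.log(Math.random()); // 0.5
console.log(Math.random()); // 0.5

Math.random = original; // restore the real implementation
```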


I didn't know about the window.crypto.getRandomValues(typedArray). Thanks!

> And if you believe that's an intractable problem, then you should probably reconsider the use of any language for crypto that can be monkey-patched, including common server-side languages like Python, Ruby, and to a certain extent PHP.

Someone else in this thread mentioned http://www.defensivejs.com/ and I think it is going in the right direction.


Don't you need to trust a certification authority in order to be sure that the browser running SRP hasn't been tampered with?


This could lead to a rise in services offering client-side encryption. A 'better' homepage might help.


Anyone who needs a "better" home page probably ought not to be using it.


Developers with a budding interest in cryptography should certainly be using this, even if they shouldn't be asking users to trust their work. Your mentality excludes a rather large set of users who may not realize what they've found.

Also, a homepage with a pitch, a visually appealing design, etc would likely generate some buzz from people who aren't interested in using it but are interested in the concept. This could very well lead to discovery by developers who would otherwise have never found it.


Mega does something like this. Encryption is done client-side.

https://en.wikipedia.org/wiki/Mega_(website)#Data_encryption


What do you guys think about a service (think pingdom) that you set up to periodically request a js file from your server and check it against a known good checksum?

EDIT: to spell this out, you would self-host the Stanford library, for example, and have this service verify it against a known-good checksum.


You don't just need to authenticate sjcl.js (or whatever). You need to authenticate every page element that can influence the JS, because JS is malleable. The service you propose won't work.


Well, couldn't you checksum the whole page?

I know that creates a big pain in the ass in terms of modifying the page and in terms of making the page dynamic, but bracketing those two concerns -- why wouldn't that work?


You can checksum the whole page, but any externally loaded JS can monkeypatch any other part. Use analytics? How about a payment widget? All of these can affect every part of the js environment, overwriting anything from jQuery to sjcl. Alternately, they could leave the crypto alone and just hook into keystroke handlers or the DOM and steal your plaintext that way.

Also, some browsers will run JS from urls referenced in img tags as long as they are served with a text/javascript MIME type.

It's far too big an attack surface.


Because if I own your server, I can set it up so that it serves the good file to your pingdom-like service, and corrupted files to everyone else. Or more realistically, I'd do something more targeted, like serving the bad files only to the ip block of my business competitors.


I built Masel with it (as a demo for our local JavaScript meetup): http://replycam.com/m/ is pretty much serverless encrypted message sharing. It's MIT-licensed and unfinished: https://github.com/franzenzenhofer/masel


Off-topic, but 'masel' is the Scots word for 'myself'.


I named it after the Yiddish "mazel tov" (http://en.wikipedia.org/wiki/Mazel_tov, written in German as "Masel tov"). "Masel" stands for "a drop from above", so basically "masel" as a "drop" of privacy. But, well, I coded it during a train ride while drinking some beer (reached the Ballmer Peak with it: http://xkcd.com/323/).


This is a great library; I used it to build a Diffie-Hellman key exchange and symmetric encryption in an ASP.NET MVC4 app. The result is horribly insecure, but it solved a situation where our product is installed over HTTP rather than HTTPS.

If anyone is curious, it works just fine with BouncyCastle and Rfc2898DeriveBytes (for PBKDF2).


Universities should do more open source projects. How about a university that funds open source works exclusively?


I'm sorry but I have to ask, is this sarcasm? I can't really tell.


In case it wasn't sarcasm: check out http://en.wikipedia.org/wiki/Berkeley_Software_Distribution, the core of iOS.


Relatedly, I'm looking for Javascript checksum implementations, especially sum24 or any other good 24bit hashing algorithm, and I couldn't find anything. Does anyone know of canonical hashing implementations for CRC and checksum (not crypto-level, I need short, short hashes) for JS?
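I'm not aware of one canonical JS implementation, but CRC-32 (the zlib/PNG polynomial) is short enough to inline, and a 24-bit checksum can be taken by masking the result. A sketch:

```javascript
// Standard CRC-32 (reflected polynomial 0xEDB88320, as used by zlib
// and PNG). For a 24-bit checksum, mask the result to the low 24 bits.
const CRC_TABLE = (() => {
  const t = new Uint32Array(256);
  for (let n = 0; n < 256; n++) {
    let c = n;
    for (let k = 0; k < 8; k++) {
      c = (c & 1) ? (0xEDB88320 ^ (c >>> 1)) : (c >>> 1);
    }
    t[n] = c >>> 0;
  }
  return t;
})();

function crc32(bytes) {
  let c = 0xFFFFFFFF;
  for (const b of bytes) {
    c = CRC_TABLE[(c ^ b) & 0xFF] ^ (c >>> 8);
  }
  return (c ^ 0xFFFFFFFF) >>> 0;
}

const crc = crc32(Buffer.from('123456789'));
console.log(crc.toString(16));              // cbf43926 (standard check value)
console.log((crc & 0xFFFFFF).toString(16)); // f43926 (low 24 bits)
```

Note CRC is an error-detection code, not a hash: it's fine for catching corruption but trivially forgeable, so don't use it anywhere an attacker chooses the input.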


I was thinking about this too -- but if they compromised your encryption JS couldn't they also compromise your checksum JS?


I was playing with a javascript crypto library and grease monkey. Here's a quick and early facebook demo: https://www.youtube.com/watch?v=3HlQJWXlknE


Great. Now can we see more services implement easy to use client-side encryption before uploading data to their servers?


Great. Now we can pretend we're secure instead of grappling with the problem that we're not. :)

Observe that SJCL's own authors warn about this problem.


Sure! Check out crypton.io. SpiderOak is going to be shifting towards this method. Although we already use client-side cryptography, it's in fat clients written in Python. We're going to reimplement the client using Crypton, which uses SJCL.


Let's all congratulate austengary on successfully trolling HN's star quarterback.


It's JavaScript, not Javascript.



