
What's hard about end-to-end encryption in the browser?


We're Tender - an inbox for your personal finances. We're building a secure, private personal finance tracker that doesn't sacrifice the conveniences of a modern web-based service.

One of the key features we're thinking about is browser-based end-to-end encryption (e2ee) to add an extra layer of security. The idea behind e2ee is that encryption happens on the user's device, using keys that only the user has access to.

In the past, implementing this kind of encryption was complex because there were no readily available web APIs for client-side cryptography. Modern browsers, however, have been shipping crypto primitives (the Web Crypto API) for some years now. Even so, the encryption scheme is only part of the puzzle: the code that performs the encryption has to come from the server to begin with.
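To give a flavor of those primitives, here's a minimal sketch of encrypting data on-device with AES-GCM via the Web Crypto API. This is illustrative, not our actual scheme - in particular, deriving the key from a user secret is omitted:

    // Generate a key on-device, marked non-extractable so even our own
    // code can't read the raw key material back out of the browser.
    async function makeKey(): Promise<CryptoKey> {
      return crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 },
        false, // not extractable
        ["encrypt", "decrypt"],
      );
    }

    async function encryptLocally(plaintext: string, key: CryptoKey) {
      const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per message
      const ciphertext = await crypto.subtle.encrypt(
        { name: "AES-GCM", iv },
        key,
        new TextEncoder().encode(plaintext),
      );
      return { iv, ciphertext }; // only this blob ever leaves the device
    }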

So the model breaks down here - if the server is compromised, it can simply serve malicious encryption code that leaks your keys, and the whole thing is still compromised.


Building trust

From the angle of minimizing our attack surface, we can try to reduce the scope of what needs to be trusted from "the entire server" to a smaller surface that can be secured more easily.

To do this, we can try to do what other "app store" distribution channels have done forever: we can sign the application and distribute public keys so that users can verify the application came from us. That way, even if an attacker gets a hold of our server, they can't send users a compromised copy of the app without tripping some alarms.

Implementing signing on the web

Signing our web app involves two main steps:

  • signing the app (or, with subresource integrity, at least signing the root index.html file of our single-page app)
  • distributing the public key so users can verify the app (sketched below)
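To make the verification half concrete, here's a rough sketch of checking a signed index.html against a pinned key with the Web Crypto API. The detached-signature setup and the ECDSA P-256 choice are my illustrative assumptions, not part of any scheme described below:

    // Public key distributed out of band (extension, bookmarklet, etc.);
    // the JWK contents here are a stand-in.
    const PINNED_PUBLIC_KEY_JWK = { /* ... */ } as JsonWebKey;

    async function verifyApp(html: ArrayBuffer, signature: ArrayBuffer) {
      const publicKey = await crypto.subtle.importKey(
        "jwk",
        PINNED_PUBLIC_KEY_JWK,
        { name: "ECDSA", namedCurve: "P-256" },
        false,
        ["verify"],
      );
      // Resolves true only if the page bytes match the signature.
      return crypto.subtle.verify(
        { name: "ECDSA", hash: "SHA-256" },
        publicKey,
        signature,
        html,
      );
    }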

While there are no out-of-the-box solutions for these steps, there have been exciting developments in the last few years.

Let's explore some approaches.

Verification via Browser Extensions

Mylar

My first encounter with this problem came from the Mylar paper from 2014. The approach roughly boils down to signing the index.html contents with a key that's verified via a Chrome extension.

The user has to download this separate Chrome extension, which also has to be trusted. There's then a cute mechanism to deliver the public keys via a section of the TLS cert, i.e. "if you trust my TLS cert, then you can trust my app."

The clever part here is combining the public key distribution with an existing mechanism - the TLS cert.

Meta Code Verify

More recently, Meta has taken up a similar approach with Code Verify (2/5 stars on the Chrome Web Store), which works in a similar fashion to secure messenger.com and Instagram Web.

In Meta's scheme, the hashes for the app are separately hosted by Cloudflare - the idea being that it'd be hard for an attacker to compromise both Meta and Cloudflare at the same time.
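In spirit, the extension's check reduces to something like this sketch; the manifest origin (manifest.example.com) and the JSON shape are made up for illustration:

    // Hash what the server actually sent us...
    async function matchesManifest(pageSource: string, version: string) {
      const digest = await crypto.subtle.digest(
        "SHA-256",
        new TextEncoder().encode(pageSource),
      );
      const hashHex = [...new Uint8Array(digest)]
        .map((b) => b.toString(16).padStart(2, "0"))
        .join("");

      // ...then compare against a manifest served by a second party, so an
      // attacker has to compromise both origins to slip a bad hash through.
      const manifest = await fetch(
        `https://manifest.example.com/hashes/${version}.json`,
      ).then((r) => r.json());

      return manifest.hashes.includes(hashHex);
    }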

WebSign by Cyph

WebSign had a pretty interesting take on the problem. Instead of relying on a browser extension, WebSign hands verification duties off to a secure "bootloader" of sorts via a service worker, since service workers can intercept requests and do some of the things a Chrome extension can.
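A toy version of that bootloader might look like the following; verifySignature is a hypothetical stand-in for whatever check the real WebSign loader performs:

    // sw.ts - refuse to serve app code that fails verification.
    declare const self: ServiceWorkerGlobalScope;
    declare function verifySignature(body: ArrayBuffer): Promise<boolean>; // hypothetical

    self.addEventListener("fetch", (event) => {
      event.respondWith(
        (async () => {
          const response = await fetch(event.request);
          const body = await response.clone().arrayBuffer();
          if (!(await verifySignature(body))) {
            return new Response("App failed verification", { status: 502 });
          }
          return response;
        })(),
      );
    });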

On first load of the application, WebSign installs this bootloader, which is made permanent by abusing the now-defunct HTTP Public Key Pinning (HPKP) feature. tl;dr: WebSign would deliver the bootloader over TLS and periodically destroy/remake its TLS cert. This effectively prevents the browser from fetching new versions of the loader code, since a new version's TLS cert wouldn't match the cert you previously received, which HPKP has pinned. Kind of neat!

This really hinges on the bootloader being simple and free of bugs. HPKP would later get removed from browsers for this very reason - developers could trivially brick their site by accident, with no way to fix it.

As an aside, this approach reminds me of a security measure Apple takes with iCloud Keychain: in that scheme, server code is signed with a private key that then gets destroyed by a literal blender to prevent upgrade attacks.

Delivery via a big url blob

One final curiosity I found that's worth mentioning is caution.js. The idea is roughly the same as WebSign - the index.html page is signed, blah, blah, blah. The cute part here is the delivery mechanism. The user downloads the "bootloader" equivalent into a bookmarklet - literally a browser bookmark whose "url" is a snippet of JavaScript (javascript:function abc()...) - which can't be tampered with by the server.
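In miniature, the trick looks something like this (a made-up example, not caution.js itself); everything the loader needs, including the pinned key, has to fit inside the bookmark's URL:

    // Saved by the user as a bookmark; the server can't rewrite it.
    const bookmarklet =
      "javascript:(async()=>{" +
      "const html=await (await fetch('/app.html')).text();" +
      "/* verify html against a key embedded right here in the bookmarklet, */" +
      "/* then hand control to the app only if the check passes */" +
      "})();";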

Pretty cool - though good luck teaching users about bookmarklets.

Establishing Trust

In the end, these approaches are all pretty much the same. WebSign and caution.js establish trust on first use (TOFU) by making sure subsequent versions of the app all come from the original source. Mylar and Meta's Code Verify don't do TOFU, but instead distribute keys separately from the app.

Unfortunately, these mechanisms for trusting web apps are still pretty early, and there's no blessed path that's easy to implement.

On the web standards side, there was a Web Packaging proposal a few years back that tried to tackle this problem, but it doesn't look like it's gone anywhere. Notably, the Safari team opposed the addition in 2019, and Chrome removed experimental support last February.