I'm actually curious how anyone who isn't a geek gets on WiFi at many coffee shops etc. Many captive portals don't seem to do anything and Chrome just says "can't connect" or "cert bad". I used to go to test.com; now I go to foo.com. But I'm a geek. What do all the non-geeks do?
Most operating systems have started detecting captive portals and presenting a notification. All of the modern consumer ones (OS X, Windows, Android, iOS) appear to have this detection. (I don't use it, so I'm not sure how well it works, though.)
I used to stay a lot at a hotel that would "helpfully" exempt a good number of domains from the captive portal, including whatever Macs and Androids use to detect connectivity...
There should really be a standard for dealing with this, like a flag on DHCP saying "O HAI, you need to log in here", and providing a REST endpoint that will tell the OS the status of the connection at any given time.
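Something like the following, where every field name is invented for illustration and not part of any standard:

```python
import json

# Hypothetical response from the proposed status endpoint; the fields
# here are made up for this comment, not any real specification.
status = json.loads("""
{
  "captive": true,
  "login-url": "https://portal.example.net/login"
}
""")
if status["captive"]:
    print("OS should prompt the user to open:", status["login-url"])
```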
Bugs will happen, but it's a very simple operation: they just load an HTTP URL that has an expected response (Google's is just a 204 No Content) and check whether it matches.
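For the curious, a minimal sketch of that probe, using Android's well-known check URL; the timeout and error handling are my assumptions, not any OS's actual implementation:

```python
import urllib.request

# Probe URL used by Android's connectivity check; other OSes use
# their own endpoints.
PROBE_URL = "http://connectivitycheck.gstatic.com/generate_204"

def behind_captive_portal(timeout=5):
    try:
        resp = urllib.request.urlopen(PROBE_URL, timeout=timeout)
    except OSError:
        return True  # no route at all, or the portal mangled the request
    # An open network returns 204 with an empty body; a captive portal
    # typically rewrites this into a 302/200 carrying its login page.
    return not (resp.status == 204 and resp.read() == b"")
```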
1. Complain that the internet isn't working.
2. Talk to a friendly geek who explains that going to a non-secure site forces the pop-up.
3. Forget that this workaround applies in every cafe and not just the one where the problem was solved. Goto 1.
This was my immediate first thought too. A lot of subway stations got WiFi in NYC, and for the one directly underneath my office I always had to load Google first to get the login portal to load. Any other website I visited (including this one) just displayed an HSTS error instead.
You can use http://example.com, operated by the IANA, which is accessible through both http and https.
Since an auto-upgrade to https would break a considerable number of examples, and there's no compelling business need on their part to promote https-only, it's much more likely to stay available through http than just about any other website run by commercial interests.
Same thing there. Oddly enough, `dig @8.8.8.8 example.com` seems to work. No mention of it in /etc/hosts. Guess my ISP must be up to some shenanigans. :/
Probably not; google.com will still reply on HTTP, it's just that if you have a browser that visits the HTTPS page, it'll record a rule to never visit the HTTP version. I doubt the code doing captive portal detection supports HSTS at all, so most likely it'll still work just fine.
But that would be exactly the use case here: browse Google manually, and HSTS kicks in from then on. Later, I get a captive-portal wifi, the browser probes http://www.google.com/generate_204, and whoops, no request sent and no way for the wifi to redirect me to their login portal.
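To make the failure concrete, a toy model of the rewrite a browser applies once a host is in its HSTS set (the set contents are assumed here, not read from a real browser):

```python
from urllib.parse import urlparse, urlunparse

# Once a host is in the HSTS set, the http:// probe URL is rewritten
# to https:// before any packet leaves, so the portal never sees a
# plaintext request it could redirect.
hsts_hosts = {"www.google.com"}  # populated by earlier manual browsing

def effective_url(url):
    parts = urlparse(url)
    if parts.scheme == "http" and parts.hostname in hsts_hosts:
        return urlunparse(parts._replace(scheme="https"))
    return url

print(effective_url("http://www.google.com/generate_204"))
# -> https://www.google.com/generate_204 (TLS; the portal cannot intercept it)
```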
Both happen in Chrome, hence the same browser context.
Yeah. The WiFi Alliance's track record is so bad that it's a real pity they run one of the most important protocols we have today. They are decades late on most required innovations, and when they do try to design something, it's so severely broken that it requires many iterations to reach a decent state. Miracast is another example of being late and failing to produce something decent.
That's always how it worked. nosslsearch only responds if you access it as "www.google.com", not as "nosslsearch.google.com".
(As of writing, it still works in a fresh browser. However, once the browser has HSTS info it will always attempt an HTTPS connection, I suppose.)
I believe nossl was intended as a concession to schools that believe in censorship. It makes it relatively easy to configure things so Google traffic doesn't go over SSL, so that your existing censoring MITM boxes continue to function as before.
If they continue to support this use case, it may be hard to do without introducing bugs - one exposure to a 'real' service which spits out an HSTS header (or the preload list), and the machine loses the ability to conduct Google searches.
I think they'll either have to use some nasty workarounds, or they'll need to use a different domain - which isn't necessarily something you want to do when you are trying to provide simple rules which allow users to identify phishing.
More likely they'll simply force sites that want to continue to MITM to load their own CA roots.
Although I don't think this is their motivation, it also has the neat side-effect of making Google's Chromebook & device management services more useful.
> If they continue to support this use case, it may be hard to do without introducing bugs - one exposure to a 'real' service which spits out an HSTS header (or the preload list), and the machine loses the ability to conduct Google searches.
Those wishing to spy on their users with nossl could just disable HSTS in the browsers they provide.
> Turn on SafeSearch VIP
> To force SafeSearch for your network, you’ll need to update your DNS configuration. Set the DNS entry for www.google.com (and any other Google ccTLD country subdomains your users may use) to be a CNAME for forcesafesearch.google.com.
> We will serve SafeSearch Search and Image Search results for requests that we receive on this VIP.
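A quick stdlib-only sanity check that the CNAME took effect on a given network (my own sketch, not Google tooling):

```python
import socket

# After the CNAME is in place, www.google.com should resolve to the
# same SafeSearch VIP addresses as forcesafesearch.google.com does.
safe = {ai[4][0] for ai in socket.getaddrinfo("forcesafesearch.google.com", 443)}
www = {ai[4][0] for ai in socket.getaddrinfo("www.google.com", 443)}
print("SafeSearch VIP in effect:", bool(safe & www))
```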
Do you know how much funding is at stake nationwide? (It's E-Rate, as I recall.) How credible is it that an organization or wealthy individual or crowdfunding effort could offer a wholesale alternative? In San Francisco, SFPL chose to give up this funding several years ago in order to decline to install censorware.
Libraries do tend to be bastions of extreme left-wing views on surveillance and censorship, so perhaps on the library side.
School boards, no way. I can't think of an easier way to torpedo a school board career (or indeed a "pillar of the community" parent's status in the neighborhood) than by mentioning that they think children should have easier access to pornography at school.
Call it "Internet Deregulation" and say that you will be able to lower taxes by X percent if you don't have to spend money on expensive internet filter solutions.
Much easier to just load a root CA certificate - this use case is explicitly supported and does not require maintaining a browser fork & compile infrastructure.
But "just" doing either of these things turns out not to be simple for many organizations. It's bad enough needing to update your system image/deployment scripts (if you have any!). You also need to figure out what to do about all the devices you don't own. BYOD is a thing.
Just curious: this looks like an already-supported feature of nginx, only there it's done through redirection. Is this a redirect too, or a protocol change? How will it be reflected in the address bar?
HSTS will one day be remembered as the HTTPS version of SMS second-factor auth: a bad hack with good intentions. Sure, it can have some positive effect in the short term, but there are so many ways to subvert it that as its popularity grows, so will the attacks.
I'm interested in what these attacks are and what you think would be better. Especially with HSTS preloading (supported in Chrome, Firefox, IE, and Edge) I don't see any attacks really.
Potential security considerations identified by the authors of HSTS are manifold [1]. The most obvious, (14.6) 'Bootstrap MITM Vulnerability', states that since a non-preloaded site's initial contact will be through a non-secure channel, a man-in-the-middle can tamper with the response and never set an HSTS policy, or set an intentionally misconfigured one. Preload is designed to mitigate this, but preloading every single website is... difficult.
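For context, the policy being attacked is just a response header. A minimal stdlib sketch of the server side (values are illustrative; in real deployments this sits behind TLS, since RFC 6797 tells browsers to ignore the header when it arrives over plain HTTP):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # One year, covering subdomains; the values are illustrative.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"policy delivered\n")

# The bootstrap problem in 14.6: on the very first plaintext contact,
# a MITM can strip this header before the browser ever records it.
server = HTTPServer(("localhost", 8080), HSTSHandler)
# server.serve_forever()  # behind a TLS terminator in real use
```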
That's not entirely clear from the spec. Per section 11.3 [1], if HSTS policy is enabled with a self-signed cert, then "secure connections to that site will fail, per the HSTS design".
In my reading, it doesn't say that a compliant user-agent must not set HSTS policies for self-signed certs; rather, it says a compliant user-agent will show an un-remediable error upon an attempt to visit the website.
4. Send RSTs on all HTTPS traffic. Eventually the user will get fed up with the website not working and try plain HTTP on a different browser/client which doesn't have the HSTS policy cached.
4.1. After the new client connects, force some kind of error or warning on the connection, but still allow it to continue, and use SSL stripping for any initial plaintext connections (see the sketch after this list). HSTS will not be set if any warnings or errors are found on the connection. (RFC section 14.3)
5. Phishing: send non-secure URLs to your target with instructions that, if the connection fails, they should try the same non-secure link in a different browser. (It may be easier to just register a fake-but-similar-sounding domain, give it a valid cert, and send links to that.)
5.1. Phishing with a non-secure root domain cert (RFC section 14.8)
6. Develop a method to determine when a client's policy will expire - or just wait - and attack at the moment of the next request, which will [potentially] be in plain HTTP.
7. Invalid configuration, such as forgetting to include the 'includeSubDomains' flag.
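For anyone unfamiliar with the stripping step in 4.1 and 5 above, here is a deliberately minimal sketch of its core idea; real tools like sslstrip also track rewritten URLs and proxy the traffic, which is omitted here:

```python
import re

# Core of SSL stripping: the MITM relays the page over plain HTTP and
# rewrites secure links, so the victim's browser never upgrades to TLS.
def strip_tls_links(html: str) -> str:
    return re.sub(r"https://", "http://", html)

page = '<a href="https://bank.example/login">Log in</a>'
print(strip_tls_links(page))
# -> <a href="http://bank.example/login">Log in</a>
```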
Pre-loaded HSTS lists are affected by some of the above attacks, and public-key pin lists are eventually affected because at some point you need to rotate a key.
These are just the ten attacks identified so far, and they all stem from the design itself, which allows MITM on first connection or at policy expiration. If your whole security feature is designed to allow MITM even once, it is guaranteed to happen at least once. This is why it's a well-intentioned bad hack.
The solution was - and still is - to design a protocol which will never allow insecure connections. This could be accomplished by forking SPDY or HTTP/2.0 and naming the new protocol "SECURE", passing out URLs like 'secure://google.com/'. This way it would be totally obvious to both the user and the browser that the connection should only ever be made with at least a valid signed cert, and HSTS would never be needed (though public-key pinning would still be needed). Mandatory DNS-based security would improve this further.
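As a hedged sketch of what that contract could look like in client code (the scheme name is the commenter's hypothetical, not a registered one):

```python
import socket
import ssl
from urllib.parse import urlparse

# A client for the hypothetical "secure:" scheme simply has no code
# path that yields a plaintext connection, so no HSTS-style upgrade
# bookkeeping is ever needed.
def connect(url):
    parts = urlparse(url)
    if parts.scheme not in ("secure", "https"):
        raise ValueError("plaintext schemes are refused outright")
    ctx = ssl.create_default_context()  # validated certs only, no opt-out
    sock = socket.create_connection((parts.hostname, parts.port or 443))
    return ctx.wrap_socket(sock, server_hostname=parts.hostname)

# connect("http://google.com/")   -> ValueError, by design
# connect("secure://google.com/") -> TLS with certificate validation
```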
This almost happened with HTTP/2, but nobody proposed the new protocol name. Terrible missed opportunity.
None of these are an argument against HSTS. They are ways you can attack DESPITE it, IF you put in the extra effort or have specific favorable circumstances.
The only way your proposed solution (secure-only protocol) avoids your identified design flaw (non-secure connections are sometimes possible) is if everyone immediately switches it and retires HTTP. That's not going to happen.
Considering a number of your attacks rely on the user not looking at the site security level (or in some cases, even at the domain name spelling), I find it curious that you think another visual indicator will make a difference.
The argument against it is that it gives a false sense of security.
And no, you don't have to retire HTTP. And no, we don't have to switch to it immediately. You can simply add it as a new protocol and begin educating users.
No user that I personally know has ever been trained to identify markers on a browser, or the difference between 'http' and 'https'. It's all too complicated... What, is that a padlock? A check mark? Green? Blue? Yellow? What does "ache tee tee pee ess" mean anyway?
But what is very simple and intuitive is the word "secure". Just make sure the first word is "secure". It even rhymes!
Why "secure"? Why not "anquan"? Google tells me that is the Chinese word for secure and since there are about 3 times as many Chinese speakers in the world than English, shouldn't we be considering them first? Or maybe we should consider Spanish first?
A 'secure:' scheme sounds good, but I don't see how it would solve (1), (2), (5), (5.1), and (8); the scheme can't protect you if the attackers can switch it out or send the user to a domain they control.