
What would happen if someone actually managed to move google.com to a non-Google registrar account under their control? Would someone step in and just seize it back? Can you imagine the magnitude of client devices hitting the wrong server for Gmail, Android updates, even Chrome, for a few minutes?


I can imagine that such an attack would be dealt with by a mix of manual intervention and technical measures, something in between the google.com search page outage that happens once in a blue moon and the false routes for YouTube's IPs that have been propagated several times during the past few years.

Big companies that rely on Internet presence are quite pro-active, and there are teams of people whose job is to prevent something like this from happening in the first place.

DNS is not a secure protocol, and you can redirect connections intended for google.com from the same local network easily, yet the world still keeps turning.


>there are teams of people whose job is to prevent something like this from happening in the first place.

Reading that along with the rest of this thread reminds me just how bad it is to have so much of the internet rely on large sites like this. The amount of trust and dependency that rests on Google is very dangerous. The amount of damage to the world that could result from a failure of their service is beyond imagination.


On the other hand, it lets them do things like certificate pinning for themselves in their own browser, no? So, good and bad.


Chrome could ship pinned certs for whatever sites they want to cooperate with.

Vertical integration just means that Chrome cooperates more with Google's webadmins than Twitter's.


How exactly would that work? Modify an instance of BIND to check whether the client is requesting to resolve 'google.com', and if so, respond with the rouge IP? First we must make sure the client machine is set up to use our name servers, the ones we have control over.


You can just set up the zones in, e.g., your local network nameserver to say it's authoritative for google.com, then send the traffic wherever you want. Many companies do this at large scale on their internal networks for the purpose of having easy-to-use names (whose backing nodes can be swapped out without changing anything else), reusing, mostly for backward-compatibility or legacy reasons, the same domains / zones that may resolve externally to different RRsets. This is known as split-horizon DNS: https://en.m.wikipedia.org/wiki/Split-horizon_DNS
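As a sketch of the idea, BIND's `view` statement is the usual way to serve different answers internally and externally (zone names and file paths here are placeholders, not anyone's real config):

```conf
// Internal clients get the private version of the zone;
// everyone else gets the public one.
view "internal" {
    match-clients { 10.0.0.0/8; 192.168.0.0/16; };
    zone "example.com" {
        type master;
        file "db.example.com.internal";   // resolves to internal RRsets
    };
};
view "external" {
    match-clients { any; };
    zone "example.com" {
        type master;
        file "db.example.com.public";     // what the outside world sees
    };
};
```

The same zone name resolves to completely different RRsets depending on which view matched the querying client.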


Never use a rouge IP. They're red for a reason, man.


I'm glad someone else picked up on that! :)


You don't even need to set up the client: if you have control over any of the intermediate routers, you can snag/reroute port 53 TCP/UDP traffic any way you like. I set up my home router to do this, so that all open DNS traffic goes where I tell it to.
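On a Linux router this is a couple of NAT rules; a sketch (interface name and resolver address are made up):

```shell
# Rewrite any outbound DNS query (UDP and TCP port 53) from the LAN to
# go to 192.168.1.1, regardless of which server the client asked for.
iptables -t nat -A PREROUTING -i lan0 -p udp --dport 53 \
    -j DNAT --to-destination 192.168.1.1:53
iptables -t nat -A PREROUTING -i lan0 -p tcp --dport 53 \
    -j DNAT --to-destination 192.168.1.1:53
```

Clients pointed at 8.8.8.8 or anywhere else still end up talking to your resolver without knowing it.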

It's also advisable to do so for unauthenticated users on shared/public wifi so that you can provide an agreement page/site. Also, so that unauthenticated users can't use DNS as a tunnel method, which is pretty damned cool, but insecure.


> Also, so that unauthenticated users can't use DNS as a tunnel method, which is pretty damned cool, but insecure.

You can put TLS into a DNS tunnel too, it's just even slower.


I've done TCP-over-SSH-over-DNS many times (using iodine and sshuttle) and it was actually surprisingly usable! I could get over 200 Kbps downstream. Iodine uses NULL requests (if allowed by the recursive DNS server), which can fit 1 KB+ per request/reply.
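Roughly, the setup looks like this (the subdomain and tunnel addresses are placeholders; you need NS delegation for the subdomain pointing at the machine running iodined):

```shell
# Server side: iodined answers the tunneled DNS queries for the
# delegated subdomain and takes 10.53.0.1 on the tunnel interface.
iodined -f 10.53.0.1 t.example.com

# Client side: bring up the tunnel (client gets 10.53.0.2 on dns0),
# then route all TCP over SSH through it with sshuttle.
iodine -f t.example.com
sshuttle -r user@10.53.0.1 0/0
```

All traffic then rides inside DNS request/response pairs to the recursive resolver, which dutifully forwards them to your "authoritative" server.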


I've set up my laptop to use my home internal server (an old laptop) for DNS. My quality of development environment has increased because I can give any in-development internal app a hostname under my internal DNS prefix. Very useful for setting up nginx for multiple applications.


You can do it locally with dnsmasq or using xip.io, without an extra server.
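With dnsmasq this is a one-liner (the domain suffix here is a made-up example):

```conf
# Resolve anything under *.dev.local to localhost, so each nginx
# vhost gets its own hostname with no per-app DNS edits.
address=/dev.local/127.0.0.1
```

Then `app1.dev.local`, `app2.dev.local`, etc. all hit 127.0.0.1 and nginx picks the right server block by Host header.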


You can do it by listening in promiscuous mode and injecting packets into the network pretending to be the DNS server.

You can also set up a rogue DHCP server that hands out a different DNS address.

There are likewise many other methods.
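The rogue-DHCP variant can be sketched with dnsmasq (every address here is hypothetical):

```conf
# Answer DHCP on the LAN and hand out leases whose DNS-server option
# points at a resolver under your control.
interface=eth0
dhcp-range=192.168.1.100,192.168.1.200,12h
dhcp-option=option:dns-server,192.168.1.66
```

Clients that accept your lease before the legitimate server's will send all their DNS queries to 192.168.1.66 without any change on the client itself.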


google.com is under a registry lock; nobody can touch it without going through a security song and dance involving the registry (Verisign) and the registrar (MarkMonitor), so it's unlikely to happen.

It looks like Google's domain-selling tool thought he had bought the domain, so he was authorized for it in all the rest of the Google tools, which is scary, but probably not earth-shattering. It kind of depends on what the tools let you do to send people to another site.

If they actually hijacked the domain, they would probably kill Google's DNS servers, but they could do a lot of things, including likely obtaining some domain-validated certificates (though likely not from the CAs Google pins to, and a lot of clients carry Google's certificate pins).


It seems highly likely that the tools he gained access to would actually be completely useless for google.com.


That would be like buying the world's biggest DDoS botnet. Holy cow.



Come on dude. Google is far bigger than Baidu.


That was not the Baidu attack. This was the Pirate Bay DNS attack. In other words, anyone in China with a passing interest in naked ladies.


Google has HSTS, so requests will be prematurely terminated; however, it'll still be a huge DDoS attack.


Well if you control the domain you can easily get an SSL cert (except some clients might pin the CA for google.com).


IIRC, all Chrome users are pinned for *.google.com


However, chrome will still trust certs issued for Google domains that come from non-Google trusted issuers (things in your local trusted keystore)

It sucks because now your employer can MITM you for Gmail/Google Chat/etc.


Certificates are pinned too.


> Can you imagine the magnitude of client devices hitting the wrong server for Gmail, Android updates, even Chrome, for a few minutes?

This somehow reminds me about Gamil [0]

[0] - https://en.wikipedia.org/wiki/Gamil_Design#Gmail


This happened before with the German TLD, google.de

http://www.spiegel.de/netzwelt/web/domain-gekapert-google-un...


DNS takes a few hours to fully propagate, last time I checked.


The propagation "speed" is the effect of clients honoring the records' TTLs. Clients and intermediate servers are responsible for pulling updates to whatever records they believe are stale; the DNS itself just sits there serving queries.

Clients and caches sometimes disregard the TTL or use their own, so sometimes changes to a record "haven't propagated" to some clients, but what's really going on is something that's supposed to keep its info fresh decided not to.

Though it's possible for clients to get out of date, the story of a built-in propagation speed you can't do anything about is based on misconceptions. The record owner has a lot of say in how and when their records get refreshed.


That depends on the expiry time ("Time To Live" / TTL) set for the particular record. The minimum TTL is 1 s, and the maximum is 2^31 − 1 seconds (TTLs are limited to positive values of a signed 32-bit number), or slightly over 68 years[1].

Resolver libraries and daemons keep cached results in volatile memory, so in practical terms, if a high TTL is set, the spoofed result will continue to be used until the given machine is rebooted. For some middle boxes, this can be years.

[1] RFC 1035 section 2.3.4 https://www.ietf.org/rfc/rfc1035.txt
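The arithmetic on that limit is quick to check; RFC 1035 §2.3.4 restricts TTLs to positive values of a signed 32-bit number:

```python
# Maximum DNS TTL: positive values of a signed 32-bit integer
# (RFC 1035, section 2.3.4).
max_ttl = 2**31 - 1                       # 2147483647 seconds
years = max_ttl / (365.25 * 24 * 3600)    # convert using Julian years
print(f"max TTL = {max_ttl} s, about {years:.1f} years")
```

In practice no cache honors anything close to that; most resolvers cap cached TTLs at a day or a week regardless.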


I think the point is: if the TTL is set low, many ISP resolvers simply ignore it and clamp it to a minimum of at least a few hours. So pointing a Google hostname at a victim might not have that big an impact if done only for a few minutes.


I have seen ever-lower TTLs in the wild, sub-minute even, in the past few years. Even historically, TTLs have in my experience always been respected.

I think what really tends to happen, and what gets folks confused, is that the initial TTL is high (say, 3 days); then the sysadmin wants to make some changes, and because they want to be able to keep changing the IP quickly while they're working on it, they set the TTL low (say, 1 minute). But you cannot retroactively lower the TTL of records that have already been handed out; cached copies will expire at some point during the following 3 days.

Your point still stands, mostly. The probability of the old record with a high TTL to be evicted from a resolver's cache during any given short period of time is low.


Back in the day I remember this was true, but nowadays when I make changes to DNS in USA, the change is nearly instantly reflected over here in the UK, and a matter of minutes for apparent propagation worldwide. It's gotten a lot faster!


I wonder whether Google hard-codes their authoritative nameservers through their consumer recursive DNS




