Gabe Newell: Valve, VAC, and trust (reddit.com)
99 points by cyanbane on Feb 18, 2014 | hide | past | favorite | 24 comments


This is a very "press-releasey" response, and it feels disingenuous.

He doesn't specifically address the weak hash algorithm, nor the privacy concerns of monitoring people's domain visit history. This is basically "Yep, we do that stuff, and we're going to keep doing it, because it's good for you".

Understood; cheating is a big problem for them, their players, and their business model. Also, it's unlikely that Gabe had any direct hand in deciding how to build this cheat detector; those decisions were made further down the org structure.

Finally, they probably aren't currently doing anything really bad with that data, but it's a slippery slope. They _have_ that data now, and they aren't going to stop adding new features. Over time it's likely that some smart developer is going to realize that they can use it in some other way, maybe cross reference it, etc. Gabe isn't asking you just to trust that they're not evil, he's asking you to trust that they will _never_ be evil.

His Q&A at the bottom:

1) Do we send your browsing history to Valve? No. (just the domain names in an easily reversible hash)
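To see why hashing bare domain names is "easily reversible", note that the space of interesting domains is small enough to precompute. A toy sketch in Python, assuming VAC-style MD5 hashes of plain domain strings (the exact hashing scheme and the domain names below are assumptions for illustration):

```python
import hashlib

def md5_hex(domain: str) -> str:
    """Hash a bare domain name the way a naive reporter might."""
    return hashlib.md5(domain.encode()).hexdigest()

# Anyone holding the hashes can build a dictionary from known domains...
known_domains = ["example.com", "cheats.example.net", "store.steampowered.com"]
lookup = {md5_hex(d): d for d in known_domains}

# ...and match a reported hash straight back to the original domain.
reported_hash = md5_hex("example.com")
print(lookup.get(reported_hash))  # -> example.com
```

So the hash provides no real privacy against anyone with a dictionary of candidate domains, which is exactly the "easily reversible" objection.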

2) Do we care what porn sites you visit? Oh, dear god, no. My brain just melted. (We aren't specifically looking for porn, we're not that selective; we collect ANY DOMAIN we find on a cheater's system).

3) Is Valve using its market success to go evil? I don't think so, but you have to make the call if we are trustworthy. We try really hard to earn and keep your trust. (Don't be evil.)

The "You're either with us, or you're a social engineering cheat developer" part rankles a bit too, although there may be some truth to that scenario (I don't know either way, but that's one path that could lead to discovering the DNS interception).


I sympathise with the position Valve are in here, which appears to be a “least bad solution” kind of problem.

Even so, I can’t help feeling that the mere viability of userland software like VAC or DRM/copy protection schemes shows that we remain in the Dark Ages in terms of computer security models. For any normal application, it should not even be possible to read system-level information like what is cached within the networking stack, or to hide funny things in your file systems or mess with your boot sector, or to monitor and/or interfere with communications between other applications and the OS.

However, on a typical laptop/desktop/server OS today, we still usually determine authorisation at a resolution marginally finer than “Are you root?”. On some mobile platforms, there seems to be a move toward a more controlled and application-specific model. However, in practice, apps frequently ask for everything including the kitchen sink anyway, and the more controlled installation/update/removal processes for apps seem to be more a (possibly unintended) side effect of locking users into app store models than an active decision to firewall each application for security reasons.

So, out of genuine curiosity: Is anyone actively researching better ways to model user and application permissions that might find their way into mainstream operating systems any time soon? By now, I really ought to be confident that even if I were silly enough to open a dodgy attachment in my word processor, it couldn’t then scan my e-mail, load a couple of juicy system settings files, and upload it all to a server in Western Hackville without further permission. And if I have a problem with a particular piece of software, I really should be able to summarily destroy every trace of it with a simple and reliable procedure, without having to worry that it’s left anything behind that I didn’t want. We’ve obviously got tools that do parts of this job, such as chroot jails and checkinstall in Linux world, but for the average user this whole area seems absurdly underdeveloped given how many common attack vectors would immediately be closed down entirely or at least much better contained.


I agree this is highly desirable. However, what if it came to pass? A game can just say, "you must run me with super permissions, or you don't get to play me." How many people are going to decide not to install the latest Battlefield or Call of Duty or whatever the kids are playing these days, just because of that? I think they'll happily click through whatever prompts are necessary, and put in whatever scary administrator passwords are required.

An alternative would be to not let any third party have these capabilities at all. But unfortunately that includes you, the user, because the system can't distinguish between things that the user actually wants to do, and things the user doesn't want to do but told the system he really really wants to do because he wants to go shoot some bad guys.

Neither outcome seems great, although the first is better than the second to me.


I certainly agree that there is a social/education angle to this as well, and in many ways that is the hard part. Just look at the usability problems with Microsoft’s early attempts at UAC on Windows, or how many people give permissions for all kinds of things on auto-pilot when using mobile devices.

Still, you can’t start to educate even sensible, cautious users about how to make informed decisions if you don’t first have a robust technical security framework so you know what decisions need to be made. Also, we could tighten restrictions on applications without requiring any extra user understanding or interaction at all in quite a few useful cases.

For example, suppose we distinguished between installing general applications and system tools, with the latter meaning third party software that really does need unusual levels of access to system resources, such as security or disk management utilities. Suppose also that we restricted general applications severely in terms of low-level access to system resources. Now all a web browser or e-mail client has to do to benefit from improved security is initially install as a general application, which is presumably the default behaviour and requires no interaction with or notification of the user. Any malware that subsequently finds its way into that application and tries to access restricted system resources or APIs is assumed to be hostile and gets killed immediately, with no need for user interaction then either.
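The two-tier model described above could be sketched as a simple policy check. Everything here is invented for illustration (the class names, the resource labels, and the kill-on-touch rule are not any real OS API):

```python
# Resources that only "system tools" may touch in this hypothetical model.
RESTRICTED = {"dns_cache", "boot_sector", "raw_sockets"}

class App:
    def __init__(self, name: str, system_tool: bool = False):
        self.name = name
        # system_tool=True would require an explicit, warned opt-in at install.
        self.system_tool = system_tool

    def access(self, resource: str) -> str:
        if resource in RESTRICTED and not self.system_tool:
            # Assumed hostile: terminate immediately, no user prompt needed.
            raise PermissionError(f"{self.name} killed: touched {resource}")
        return f"{self.name} accessed {resource}"

browser = App("browser")                    # default: general application
defrag = App("defrag", system_tool=True)    # explicitly installed as a tool
print(defrag.access("boot_sector"))         # -> defrag accessed boot_sector
```

A compromised browser trying `browser.access("dns_cache")` would simply be killed, with no decision pushed onto the user.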


Or you can do what CyanogenMod does: allow the user to pretend to give some permission to a program. This will obviously devolve into an arms race between better emulation of these permissions and better detection of the emulation (debugger detectors come to mind), but this seems to be a better outcome than you posit.
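The "pretend permission" idea can be shown in a few lines. This is only a sketch of the general technique (the function and policy names are made up, not CyanogenMod's actual implementation): instead of denying a request, the OS hands back plausible fake data, so the app cannot trivially tell it was refused.

```python
ALLOW, FAKE, DENY = "allow", "fake", "deny"

def read_contacts(real_contacts: list, policy: str) -> list:
    if policy == ALLOW:
        return real_contacts
    if policy == FAKE:
        return []              # looks like a legitimately empty contact list
    raise PermissionError      # DENY is detectable, which invites an arms race

contacts = ["alice", "bob"]
print(read_contacts(contacts, FAKE))  # -> []
```

The app sees an empty list and carries on, which is why detection-versus-emulation becomes the battleground rather than a simple grant/deny prompt.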


> it should not even be possible to read system-level information like what is cached within the networking stack

This would kill a lot of innovation, like the changes Google made to the TCP/IP stack, which are invisible to 99.999% of programs.


Please note the words “For any normal application” in my original post. Of course there will always be system software, some of it part of the operating system and other parts coming from other sources. I’m not challenging that, I’m challenging the idea that the default should be for installed applications to have this kind of access. It’s just the principle of least privilege.


Then what happens next? Let the user choose which app should have what access to which resource? That would be confusing.

Your idea is there, but it's not practical.

Let's just use the DNS example here. VAC MD5-hashes all the DNS cache entries on Windows. How do cheaters get around this? Any program can open a UDP socket, so what prevents cheaters from crafting DNS packets directly to an external DNS server? That completely bypasses the `DnsQuery_A` and `gethostbyname()` API calls.

So would VAC need to sniff all UDP traffic to port 53?
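The bypass is genuinely easy. Here is a hedged sketch of hand-crafting a DNS query per RFC 1035 and sending it straight to an external resolver over a plain UDP socket, so nothing ever lands in the OS resolver cache that VAC hashes (the resolver address is illustrative, and the send is commented out):

```python
import struct
import socket  # only needed for the (commented-out) send below

def build_dns_query(domain: str, txid: int = 0x1337) -> bytes:
    """Build a standard RFC 1035 A-record query by hand."""
    # Header: ID, flags (RD=1), QDCOUNT=1, AN/NS/ARCOUNT=0.
    header = struct.pack(">HHHHHH", txid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels, terminated by a zero byte.
    qname = b"".join(bytes([len(p)]) + p.encode() for p in domain.split("."))
    question = qname + b"\x00" + struct.pack(">HH", 1, 1)  # QTYPE=A, QCLASS=IN
    return header + question

query = build_dns_query("example.com")
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# sock.sendto(query, ("8.8.8.8", 53))  # any external resolver would do
```

Since the lookup never goes through the system resolver, the only way to observe it is to inspect the traffic itself, hence the question about sniffing port 53.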


> Then what happens next? Let the user choose which app should have what access to which resource?

Yes, I think a lot more flexibility in the underlying security model could be useful for people like system administrators or “power users”.

For “normal” users, it would obviously be important to keep things simple enough to understand or ideally completely transparent where that’s possible. This is why I’m a fan of ideas like the separation between general applications and system tools that I mentioned in another post. The default behaviour is the safest behaviour, which is normally what you want anyway. You can add a suitable level of scary warnings, signature/integrity checking, or whatever other safeguards might be appropriate if the user really does want to install software that will have greater access.

Sure, some people will go ahead and say yes, obviously im_not_a_trojan_honest.exe received by e-mail from “Here’s COD4 Free!” looks legit and they should just click through everything and start playing. There is a limited amount you can do to help such people. But a lot of people are smarter than that and will take security at least somewhat seriously if it’s not too intrusive and they understand that “If in doubt, click no” is the basic rule. And of course for the geeks, or for people at work whose security settings are actually going to be configured by professional system administrators, different rules apply anyway.


I am honestly not sure how to feel about this. On one hand, cheating is bad; on the other hand, VAC having that much capability is scary, even if it's using it to nuke cheaters from orbit with surgical precision.


How is this any scarier than running Microsoft Security Essentials or other removal tools like Spybot? At some point there has to be trust: trust that a tool controlled by another party will do only what it's supposed to do. If someone doesn't trust anyone, then that person shouldn't even have their computer connected to the Internet, so complaining about it is moot.

I trust Valve. They've given me no reason yet that they aren't trustworthy. They have to go after cheaters because hacks and cheats ruin online games so quickly. So it's damned if you do/don't in many ways.


Any application you install on your computer and grant the same privileges as Steam has these capabilities. Half the applications on your computer are therefore just as scary, but I don't see you complaining about those...


Maybe you should be pragmatic, since you can't have both. If you care more about the cheating, keep Steam. If you care more about scary Valve, remove Steam.


Since this was posted on /r/gaming, I would strongly advise that you do not read the comments.


Clearly you are unfamiliar with the caliber of comments on hacker news for the borderline-tech articles.


Clearly you are unfamiliar with the caliber of comments on hacker news for the tech articles.


[deleted]


Did you read the rest of the post? Or did you just set your phasers to Maximum Snark and start shooting?

Valve only sent details about known cheat servers back to its own servers. If I'm reading it right, it only did that once it had detected that you were using cheats. Then it sent back only enough information to prove that you were using those cheats. This seems like the least invasive possible method - hardly 'spying on its users'.

Well, unless you consider VAC to be spying on customers. In that case, don't play on VAC enabled servers.


[deleted]


But each time you install a proprietary platform like that on your computer, you're subject to being spied upon. And when things like this start coming out, who's to say they aren't doing even worse stuff? It's actually naive to assume they aren't. Whether or not you are OK with that is personal preference. To each his own.

What is your proposal for a free software anti-cheat system?


Why would it send my entire DNS cache? Wouldn't a flag that says "this guy visited awesomecshacks.com" be enough? Am I missing something?


In the post, it says that it sends hashes of DNS cache entries only after cheats have been detected.


Yeah, I read that too. But why the hassle? Why not send just the hash of the cheat site I visited? They probably want all my sites so they can find other, previously unknown cheat sites.

And even if I am a cheater, does that really justify it? "Oh, he cheats, so we can ignore his privacy." Read the cheater's email and listen to his conversations with Steam's next Android app while you're at it.


So if I'm a super cheater, what do I do now?

I distribute my latest cheat over a botnet, access it from all over, and let Valve take the heat for all the random computers they target, with innocent people affected.

Valve will not win this war.


It's a war they still have to fight. If you re-read what Gabe wrote, it's really about driving up the cost of doing business. That alone is a significant deterrent.


VAC does not ban the IPs of innocent people; it bans the Steam account's access to VAC-enabled servers. It's not economical to create millions of Steam accounts for a botnet.

Innocent people won't be affected unless their Steam account is stolen, which is a much bigger issue.



