Firewalls are just some stupid crap industry made up and went with. We've known since the Orange Book days that security had to be done holistically, involving every endpoint and network. Their standard for security was a strong TCB on the endpoint with trusted path (see EROS or Dresden's Nitpicker); a network card with onboard security kernel, firewall, and crypto (see GNTP + GEMSOS); connections between networks through high-assurance guards (see Boeing SNS or BAE's SAGE); and proxies + guard software for risky protocols such as email (see mail guards like SMG or Nexor). All of this, working together, was what it took to enforce a fairly simple security policy (MLS). More flexible attempts happened in the capability model with KeyKOS + KeySAFE, the E programming language, the CapDesk desktop, and so on.
So, the above was the minimum that NSA et al. would consider secure against adversaries on their level. Every security-critical component was carefully spec'd, its implementation mapped against the spec one-to-one, analyzed for covert channels, pen-tested, and even generated on-site. Commercial industry, aiming at maximum profit and minimum time to market, just shipped stuff with security features but no assurance. It broke every rule in the field. It came up with firewalls (a knockoff of guards), AV, and so on to counter minor tactics. Of course that didn't work, as it doesn't solve the central security problem: making sure all states or flows in the system correspond to a security policy.
The best route is to put security in the endpoint, along with E-like tools for distributed applications and hardware acceleration of the difficult parts. Within your trust domain, you just check data types and use that for information flow control (aka security). Outside the trust domain, you do input validation and checks before assigning types. The hardware will be like crash-safe.org or the CHERI processor in that it handles the rest. A security-aware I/O offload engine will help too. Fixing the root problem, along with a unified model (capability-based, distributed), will make most security problems go away. At that point, firewalls will be about keeping out the riff-raff and preventing DoS attacks.
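A minimal sketch of that split, with hypothetical `Untrusted`/`Trusted` wrapper types of my own invention (not from any of the systems named above): data crossing the trust boundary is validated once, and everything inside the domain relies on the type alone.

```python
class Untrusted:
    """Wrapper for data arriving from outside the trust domain."""
    def __init__(self, raw: str):
        self.raw = raw

class Trusted:
    """Only produced by a validator; internal sinks accept this type alone."""
    def __init__(self, value: str):
        self.value = value

def validate_username(u: Untrusted) -> Trusted:
    # Input validation happens exactly once, at the trust boundary.
    if not u.raw.isalnum() or len(u.raw) > 32:
        raise ValueError("rejected at trust boundary")
    return Trusted(u.raw)

def run_query(name: Trusted) -> str:
    # Inside the trust domain we only check the type, not the content.
    # (String-built SQL is for illustration; the alnum check above is
    # what makes it safe in this toy.)
    assert isinstance(name, Trusted)
    return f"SELECT * FROM users WHERE name = '{name.value}'"

query = run_query(validate_username(Untrusted("alice42")))
```

The point is that the type system, not scattered ad-hoc checks, carries the flow-control policy: a `run_query` call with an `Untrusted` value fails immediately.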
If this observation is meaningful, shouldn't it also be the case that firewall deployments aren't meaningful to enterprise security?
Because: that seems intuitively not to be the case.
To wit: on an annual site-wide pentest of any major enterprise network (this is a project every security firm does for a couple of clients a year), the moment the pentester gets "behind the firewall" (i.e., code execution on any application server) is invariably game over.
If firewalls were just some stupid crap the industry made up, shouldn't they make no difference at all? Shouldn't attackers just make a beeline for wherever the high-value information is, rather than scanning the perimeter and looking for some chink to use to get behind the firewall?
My argument would be: whether firewalls are "stupid crap" or not, they certainly do seem to matter right now.
'Game over': I think this is exactly the problem. In all the organizations I've been in, firewalls have been an excuse for negligence. 'We don't need to think about security because we are behind the firewall.'
Right now the compliance world is addicted to firewalls, to the detriment of reasonable appsec. In my fantasy world, I'd like the auditors to be telling companies 'in 5 years, you won't be allowed to firewall your business network, and if you aren't secure without the crutches, then no certification for you.' That would light a fire under management to care about software quality all over the place.
You're probably right that firewalls allow negligence elsewhere.
But if they can't secure their one firewall, what makes you think they can secure their complex network of a plethora of interdependent services running across many subdomains on a whole roomful of machines?
"Simple" is a key step to effective security, and I think the reason we've latched on to firewalls is they are often the simplest, most contained, and most standard way to reduce the attack surface of your network.
I think in many cases you will be right and 'they' won't be able to secure it. This will force them to contract out those applications to someone who can. There are plenty of SaaS providers able to secure a network. Just because my incompetent IT guy can't properly harden a mail server doesn't mean we can't hire Rackspace or Microsoft or someone else who can. Let's incentivize competence, not hide incompetence.
> In my fantasy world, I'd like the auditors to be telling companies 'in 5 years, you won't be allowed to firewall your business network, and if you aren't secure without the crutches, then no certification for you.' That would light a fire under management to care about software quality all over the place.
Your fantasy world also has auditors. What concerns me most is "self-auditing", mostly because it's a joke, partly because a lot of places don't take it seriously.
Re-read the comment and you'll see your answer. The choice of insecure endpoints, insecure protocols, insecure networking standards, and connections to an insecure internetwork full of malice means that trusting security to a low-assurance filter at the internetworking layer is... a joke. Might be why having a firewall didn't reduce the odds of any of the major IP and data breaches I've read about.
You want network security? Use a guard [1] with additional security checks at the endpoints, working with software on the guard for protocols such as email or HTTP. Want to stop script kiddies all day long and get silently breached by the exact people that really worry you? Get a firewall: the cheap knockoff of guards, specifically designed to save the money they would've spent on real security. I hear they even come with OSes that brag about hundreds of CVEs under their belts. ;)
Oh, and you need to do the endpoint security, too. My posts on HN regularly mention prior work immune to many forms of malware by design. The DARPA-, NSF-, and EU-funded teams are cranking out one good hardware and software TCB after another, with strong arguments against leaks, injection, and so on. At this point, unless the IP is withheld, there's no excuse for industry or FOSS not building clean-slate efforts on something like that.
Note: To be clear, I'm not counting guards built on crap such as Linux. Many in the medium- to high-assurance industry are doing the same cost-cutting crap as COTS. Sadly, they tell me the reason is "no demand for high-security systems." I've heard that in the U.S. and U.K. Pre-Snowden, though. Maybe there's hope.
Hi Thomas, I recall you saying at one point that you are not a fan of static code analyzers for improving application security. Could you elaborate? "None of them found Heartbleed" might be one reason, I suppose, but it seems to me they do find a lot of more ordinary XSS, SQL injections, etc. Do you really think it's not worth using them at all?
The best way to use analysis tools is to code in a way that makes it easier for them. Old Orange Book B3/A1 systems had to be coded in a very layered, modular, and internally simple way to facilitate security analysis. Likewise, many of the analysis tools will get lost once you start coding a certain way. Each one also has its strengths and weaknesses.
So, my normal recommendation for people who can't waste time is to use the tools with few to no false positives, combined with a coding style that makes analysis easy. For instance, I coded in a structured programming style with relatively simple control flow, quite functional in each unit's structure (see the Cleanroom methodology), and avoided hard-to-analyze constructs. That made it easy for the tools.
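As an illustration of the style (my own toy example, not code from any evaluated system): the kind of unit analysis tools handle easily has an explicit precondition, flat control flow, one assignment per branch, and no shared state.

```python
def clamp(value: int, low: int, high: int) -> int:
    """Force value into the inclusive range [low, high].

    Written in an analysis-friendly style: stated precondition,
    simple branching, a single result variable, no side effects.
    """
    assert low <= high  # explicit precondition a checker can verify callers meet
    if value < low:
        result = low
    elif value > high:
        result = high
    else:
        result = value
    return result
```

Nothing here is clever, and that's the point: a tool (or a human reviewer) can enumerate every path through the function in seconds.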
The cool thing is that tools such as Astrée and SPARK can prove portions of your code immune to certain defects. Others can do this for concurrency problems. And plain old reviews of design, code, and configuration against a list of common issues help a lot by themselves. That each of these methods has positive results for teams that use them speaks for itself. Together they can be quite powerful.
To be clear I'm not claiming that firewalls are irrelevant in the enterprise campus scenario, especially if they have DPI functions that are effective in discovering outbound control channels. Even huge corporate environments rarely have more than 10Gb/s of transit and those Palo Alto devices I talked about work fine in that scenario.
What I am saying is that hardware firewalls are not an option at scale and that Layer 3/4 protections are being pushed into the host for scale-out operators. Note that "into the host" does not necessarily mean "in the operating system". There has been great work by some operators to push these controls into the Ethernet firmware, although I'm unaware of a standards-based open way of doing such.
I'm enjoying this HN discussion, where people are disagreeing with a response to a misquote of an incorrect summary made by somebody who didn't watch the talk. :)
Stamos seemed to be making a point about the progression of, and resources being spent on, security solutions. Firewalls are deterrents, but there seems to be a general consensus that they are not feasible for the future, and the closest approximation to a "fully secure" system is achieved by focusing on application security. During the video Stamos admits, and an audience member loudly agrees, that "we suck at appsec". Firewalls are stupid crap because we suck at appsec. If we didn't suck at appsec, firewalls wouldn't and shouldn't matter.
We just need to do what was done in the past, present, and ongoing in various circles: architect the hardware and tools in a way that makes security (esp integrity) an easy default rather than a nightmare. Then we work from there esp making that faster. I've seen Linux and FreeBSD run on such systems with little modification so I know it can be done. It's why I spread the word.
I should've added that I'm not saying they're useless: just that they're not a solution to the root problems, or even the best solution to their own problem. I use them if I have nothing else or just want to reduce traffic (DDoS) on a guard. Hell, using an obscure processor architecture while removing anything in the traffic identifying it got me more security than any AV or firewall. I retired that strategy after 5+ years of it working, on eBay hardware, lol...
> My argument would be: whether firewalls are "stupid crap" or not, they certainly do seem to matter right now.
Which would be the hard nut of the matter. Just as security guards at a checkpoint to a military base provide some value, at least in awareness of a threat. But if we take the article at face value, then we have to believe that the role of the firewall will become greatly diminished to something more sentry-like rather than something that aspires to be a portcullis. And while the gateway infrastructure may lose the ability to prevent an attack from occurring, it may still be able to raise the cost of mounting one (the network equivalent of ASLR on the stack) by intentionally re-routing some parts of the network connection to avoid things like bogus source routing, or by confusing CBC ciphers in private protocols.
There is still a lot of software that uses network masks for authentication purposes. When something like that can become a key component inside an enterprise, you can't really talk about enterprise security inside the walls of the firewall. There simply isn't any.
It's because the entire experience of SSL sucks. Want to deal with some random, annoying and recurring issues? Deploy SSL on your app and then try and figure out which arcane cert issue is causing verification to fail.
They are certainly useful. But all these users behind the firewall, frenetically clicking on every link they can find and opening any attachment from any email received, are effectively an army of Trojan horses, which to me is an order of magnitude more of a problem than latency.
If you opened by saying "Firewall MARKETING is just some stupid crap ...", people might hear your message better.
Yes there's huge complacency about security. But the problem is people, not firewalls.
Holistic security is important and a huge opportunity created by this mass hypnosis. There's never been a better time to raise money. Happy to discuss, contact info on my profile.
Fair enough, as that's a huge part of the problem. Yet, if firewalls are to be trusted, they need to meet these basic criteria:
1. Attention paid to firmware security and its ability to load the kernel.
2. Firewall TCB is strong in that it can prevent or contain compromises.
3. Each component is isolated with restricted interactions subject to believable security arguments, static analysis, or formal verification.
4. Every piece of every packet is inspected for foul play.
5. Covert storage and timing channel mitigation is in place.
6. Supports application-layer security for whatever it's being used for.
Can you name a single firewall that meets all these criteria? That's how guards were designed in the past, before firewalls got invented to ignore most of that. So, firewalls (in theory and practice) are technically incapable of doing their job unless the coders were nearly perfect. Then they're marketed as doing much more than they can. So, why people demand firewalls instead of companies getting the cost of guards down is beyond me.
Here's an example of a real firewall that is more like a guard in practice:
A nice architecture combining highly assured firewalls and SNS Server guard (15-20 years without compromise) with COTS enhancements for quite a security argument:
Once you've seen the real thing, especially NSA pentesters achieving nothing against it, it's hard to make excuses for security engineers making the same mistakes for years despite being shown what works. I've sent about every firewall vendor validation reports of what made it and why. They don't care, and that's why firewalls are some stupid crap industry trusts but shouldn't.
I have tried preaching a similar message while working for a C4I unit. I found it extremely hard to get anyone to understand what the actual point was, and even then I got mostly "but we're all COTS now" with a shrug.
The above, in netsec practice, amounts to abandoning the sound principles and going for superficial compliance models. There is no real security architecture in place for most systems, there are no trusted paths for handling information, and the assurance level is at rock bottom. The result is scary when you put it in the context of your adversaries being hostile, active, and very well funded (typically state-sponsored).
Actually, I considered elaborating on the previous with examples from real life, but then I realized that stuff might be classified, so... meh.
Appreciate the corroboration from the inside. I've suspected as much, given that even the "controlled interfaces" are usually EAL4 at best. Did you know Navy people built an EAL7 IPsec VPN? I'm sure you can immediately realize (a) how awesome that is and (b) what value it has for our infrastructure/military. Yet, it got canceled before evaluation because the brass said "no market for it." Virtually nobody in the military or defense was interested in setting up highly secure VPNs.
Haha, I feel you on that. It's very important for people to understand the basic way C.C. works: a security target or protection profile with the security features needed (can't leave anything out!), and an EAL that shows they worked hard (or didn't) to implement them correctly. I'd explain what EAL4 means, but Shapiro did a much better job below [1]. That most of the market has insufficient requirements with EAL4 or lower assurance shows what situation we're in. Hope you at least enjoyed the article, as I haven't been able to do much about the market so far. ;)
EAL criteria are so operationally restrictive that useful work is effectively prevented from happening. No one needs worse security, we need better security.
A number of us have conformed to the higher ones on a budget with small teams. The highest ones are indeed a ton of work to accomplish, yet there have been dozens of projects and several products with such correctness proofs. They figured by the '80s that they needed their certified TCB to be re-usable in many situations to reduce the issue you mentioned. Firewalls, storage, communications, databases, and so on were all done with security dependent on the same component. Modern work like SAFE (crash-safe.org) takes this closer to the limit by being able to enforce many policies with the same mechanism.
So, your claim is understandable but incorrect. Useful work repeatedly got done at the higher EALs. It continues to get done. The real problems are (a) bad choice of mechanism for the TCB and (b) a bad evaluation process. Most of us skipped high-EAL evaluations in favor of private evaluations by people working with us throughout the project. That saves so much time and money while providing even more peer review.
They really need to improve the evaluation process itself so it's not so cumbersome, and update their guidance on the best mechanisms for policy enforcement. Probably sponsor some of them via funding, like they did in the old days. Fortunately, DARPA, NSF, and the EU are doing this for many teams, so we can just leverage what they create.
How does anti-virus play into this as a counter to "minor tactics"? Are you expecting all end-users to personally verify all of their software? No matter how secure the network connection is, end-users need software to use their computers to do work/have fun/etc., unless you have a completely closed system of 100% trusted software. If you're part of an organization like the NSA, that might be doable, but home users don't really have this luxury unless you advocate a walled-garden type system.
Re antivirus: it doesn't work; attackers dodge it constantly. They can also use it to improve their odds of beating it by tuning the malware against it. Need I say more about why it's barely a defense?
Back in 1961, Burroughs designed a mainframe [1] that anticipated all these problems. They tagged their memory with bits to protect pointers and differentiate code vs. data. That's two bits per word of data, with almost no performance overhead if it's all you use. That system was immune to almost every attack modern malware uses for code injection. It was very successful for a while, but the market eventually chose IBM et al.'s systems, which did dumb, fast data crunching with hardly any security. The market as a whole went that way.
So, the problem is that code can be injected, the isolation mechanisms don't work, and the toolsets are insecure by design. Fix these to make security the easy default, with attackers working in a straitjacket. The CHERI [2] team and others are doing exactly that. Investments in such systems will increase their functionality. I've seen architectures that even do it with 2 bits like Burroughs did, albeit with a different model. It's compatible with the Windows architecture. What's lacking isn't technology or knowhow: it's the willingness of industry and FOSS to adopt methods that work instead of mainstream methods that don't. That has always been the problem. Putting backward compatibility and no rewrites ahead of everything else is the other huge contributor to insecurity.
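The tagging idea can be sketched in miniature. Below is a toy model of my own (names like `TaggedMemory` and the `Tag` enum are illustrative, not Burroughs or CHERI terminology): every word carries a tag alongside its value, the fetch path refuses to execute anything not tagged as code, and dereference refuses anything not tagged as a pointer.

```python
from enum import IntEnum

class Tag(IntEnum):
    """Two-bit tag stored alongside every memory word."""
    DATA = 0
    CODE = 1
    PTR = 2

class TaggedMemory:
    def __init__(self, size: int):
        self.words = [(Tag.DATA, 0)] * size  # everything starts as plain data

    def store(self, addr: int, tag: Tag, value: int) -> None:
        self.words[addr] = (tag, value)

    def fetch_instruction(self, addr: int) -> int:
        tag, value = self.words[addr]
        if tag != Tag.CODE:  # injected data can never run, by construction
            raise RuntimeError("trap: attempted to execute a non-code word")
        return value

    def deref(self, addr: int) -> int:
        tag, value = self.words[addr]
        if tag != Tag.PTR:  # forged pointers trap instead of being followed
            raise RuntimeError("trap: attempted to dereference a non-pointer")
        return value

mem = TaggedMemory(16)
mem.store(0, Tag.CODE, 0x90)         # legitimate instruction
mem.store(1, Tag.DATA, 0x41414141)   # attacker-controlled input stays data
mem.fetch_instruction(0)             # fine
# mem.fetch_instruction(1) traps: the tag check blocks code injection
```

The check costs one comparison per access, which is why hardware that does it in parallel with the fetch pays almost nothing for it.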
I saw that! I think they were also calling their cores minions. That's great lol. I forgot to send them a list of all the tagging schemes I know, esp patent immune. Might help them out.
I would argue a publicly auditable software stack would be a strong alternative to the self audited stack. I run a completely open source OS and run all non open software on a machine I don't trust.
If someone can't have that, then surely it would at least be good to have a system that doesn't autorun things, and that stops common attacks like bootloader viruses, email viruses, etc.
I think AV is meant to deal with "minor tactics" like stopping things from autorunning or blocking common kinds of self replicating code and perhaps stopping known bad things.
That blacklist approach most AV takes can never guarantee security, but maybe some of the time it helps.
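A toy illustration of why the blacklist approach can't guarantee anything (the payloads and hash list here are made up, not real signatures): an exact-signature match catches the known sample, while any one-byte mutation sails through.

```python
import hashlib

# Hypothetical signature database: hashes of samples already seen in the wild.
blacklist = {hashlib.sha256(b"known-malware-payload").hexdigest()}

def av_scan(blob: bytes) -> bool:
    """Blacklist-style AV check: flag only exact known-bad signatures."""
    return hashlib.sha256(blob).hexdigest() in blacklist
```

`av_scan(b"known-malware-payload")` is caught, but appending a single byte to the payload changes the hash and evades the scan entirely, which is the "tuning malware against the AV" tactic mentioned above.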
I would argue that almost all FOSS is insecure, and many projects (e.g., OpenSSL) have had easy-to-spot vulnerabilities for years. The important part of closed- or open-source assurance is review. People also often treat open vs. closed as a dichotomy rather than a spectrum. To help, I wrote an essay illustrating the security levels offered at various points in the spectrum of open vs. closed source here:
That's what it takes to be secure against even black hats these days. It can be simplified with a strong TCB, better hardware, and better languages + toolchains. The problem is that only a tiny few projects in FOSS are doing that, and not many more in commercial. Whitelisting, stack canaries, AV, firewalls... this is all just added complexity around the root problem that hackers bypass regularly. It isn't security, except against the incompetent.
Getting the real thing might require throwing away a lot of code or apps. Or virtualizing it on secure architectures with crazy good interface protections. That's why market as a whole won't do it. Good news is there's small players making such things: eg Turaya Desktop, GenodeOS, CheriBSD, Secure64 SourceT. We'll get more over time but it would help if waves of FOSS coders invested in stuff that provably works instead of what holds them back. GenodeOS, L4, and MirageOS communities are only ones I know doing it at endpoint these days.
I'd agree with that argument for the general case. Yet there have been proprietary systems that resisted attacks in their attack model (with source code!) for years, and all were designed with established methods for increasing assurance. There are dozens done that way, especially in the defense and smartcard markets. There are a few OSS projects with either good design or code review (medium assurance) that were done by pros and open-sourced. As far as the actual FOSS development model goes, there are zero high-assurance security offerings done that way. That's despite decades of examples, with details published in journals, on the web, etc., to draw on. So: FOSS has never done high security, NSA pentesters did give up on a few proprietary offerings, and therefore FOSS is inferior to proprietary in high security, because only one of them has achieved it. As a matter of fact, the open-source, commercial MCP OS from Burroughs was immune to pointer manipulation and code injection in 1961 via two bits of tag. FOSS systems haven't equaled its security in five decades.
They need to catch up really quickly, because they could be the best thing for high assurance. The mere facts that there's tons of labor, that they're free, and that they're not motivated by commercial success avoid the main obstacles to high-assurance commercial development: the processes are labor-intensive, difficult to integrate with shoddy legacy stuff, and hard to sell. If FOSS ever groks it, they could run circles around the other projects and products in terms of assurance. The closest thing is the OpenBSD community, but they use low-assurance methods that lead to the many bugs they fix. Their dedication and numbers, combined with clean-slate architecture, coding, and tools, would produce a thing of beauty (and security).
And, yet, the wait for FOSS high assurance continues. If you know anyone wanting to try, Wheeler has a page full of FOSS tools for them to use:
And when they are found they are fixed and the community is always outraged.
When a closed source project has a bug in it, sometimes the knowledge of that bug is kept hidden. Maybe most of the time it is handled responsibly, but without oversight how can an outsider tell?
You mean for those few FOSS projects that both get plenty of code review and fix those bugs? Sure, those are probably better off than the average proprietary product. Much worse than the proprietary niche that's quality-focused, though. Yet the community isn't outraged enough to use low-defect processes to prevent the next set. Further, that both FOSS and proprietary focus on getting features out quickly with little review ensures plenty of bugs in both.
The trick to either is that the commitment to quality/security is real, each commit is reviewed before acceptance, and independent verification is possible. With proprietary, the confirmation can come from a trusted third party, several third parties (mutually suspicious), or source provided to customers (but still paid for).
In the long run, the source being publicly available means the bug will be found.
> Much worse than proprietary niche
I disagree, but even if I didn't, how can the average purchaser of software discern quality software from junk? If they had the source, they could pay an expert.
I agree a commitment to quality, and therefore security, is important. But I feel that, all other things being equal, open source software will always have an advantage over closed source software.
Reliability, determinism, and security vulnerabilities are a good start for the purchaser. For the reviewer, we already know what methods [1] historically improved the assurance of software. The more methods they added, the more bugs they found. That most proprietary and FOSS software uses little rigor is why it's insecure. Only a few proprietary or academic offerings, not community-driven ones, had the rigor for the B3/A1/EAL6/EAL7 process. I give examples here [2] for those who want to see the difference in software lifecycle.
Can you name one FOSS product designed like that? Where every state, both success and failure, is known via design along with covert channels and source-to-object code correspondence? I've never seen it. Although, it has happened for a number of proprietary products whose claims were evaluated by NSA & other professional reviewers for years straight without serious flaws found. So, for high security, the "proprietary niche" that does that has beaten FOSS by far and mainstream FOSS is comparable to mainstream proprietary in quality (i.e. priorities of provider matters most).
FOSS could potentially outdo proprietary in highly assured systems, given that it has free labor. In practice, contributors do whatever they feel like doing, and so far that's not using the best software/systems engineering practices available. So, I don't trust FOSS any more than proprietary, except in one area: less risk of obvious subversion if I verified transport of the source and compiled it myself. Usually plenty of vulnerabilities anyway, though. I would love to see more high-assurance efforts in FOSS.
It sat out there for a long time and was fixed. All parties involved were notified. There was never the opportunity for anything else to happen. This is the nature of open source, no room for deception in the long run.
If the same kind of bug (major impact, wide distribution, and a long exposed history) existed inside Microsoft, Apple, or Oracle code, no reasonable person would think that the company responsible would let that out with details on the impact. The hit to stock prices would be enormous. They would silently issue a patch and hope no one noticed, and likely no one would, because there is no oversight. There is room for deception built in, even if it is not intended as such.
I am aware that companies do patch and do frequently notify, but they rarely let all the information out for public consumption. The larger the issue, the more they downplay it. For how many years did the buffer overflow in the IE6 address bar or the Windows Shatter privilege-escalation attack remain exploitable in Windows?
Shatter was first described on Windows XP before 2002 and was still present when Windows XP reached end of life. The people affected never had any say, and no one outside of Microsoft ever had any opportunity to fix it.
Could you expand on your last paragraph, please - it seems to promise there is a solution to software security already available...
The E language seems a bit outdated, from the intro I can find.
What's an IO offload engine?
What do you mean about a unified model (capability-based / distributed implies the E language again)?
Is this using strong data types to base security capabilities on? And how does hardware fit in here?
I ask out of interest, as your comments here and on Schneier's blog imply a lot of knowledge under the surface, and I'm running to catch up.
Read it; it's two links, plus whatever they link to, for plenty of inspiration. If you want, I'll email you a list of my designs and essays on there. I use his blog to reach as many people as possible. I can't make money on high assurance without selling out to the enemy, so I just posted my stuff online for free anyway. The discussions and peer review over there were grade A, with a few high-assurance guys as regulars. This site is good, too. I'll get you those links if you want.
"Firewalls are just some stupid crap industry made up and went with." -- I can't even begin to unravel how short sighted that comment actually is.
I'm not sure you really understand the state of the firewall industry at this point in time, if I'm allowed to be blunt. While I do think that traditional firewalling (L3/L4) has lost its overall efficacy, there are solutions on the market that address application control, identity, A/V, IPS, spyware, and malware in a single solution (not UTM) and that are stream-based (single pass - again, not UTM).
Firewalls at the enterprise level are FULLY required for business to operate in a relatively secure manner today. Controlling application ingress and egress is not an option - it's a requirement. Greg (Etherealmind) has been very well known to be, well, a bit opportunistic in his early assessments. He mentions NSX in the East/West flows in SDN environments; however, what he fails to mention is that many customers implementing NSX have also been implementing purpose-built firewalls in NSX via the exposed placement of security services tied to the NSX and NetX APIs (http://www.networkworld.com/article/2169448/virtualization/v...).
Working for a security company in this space, let me refute the majority of his numbered components:
1) The majority of the customer verticals I deal with buying 10Gb+ firewalls buy A LOT of them. These are environments doing millions to, literally, billions of dollars of revenue per hour. A completely licensed, supported firewall rated at, say 20Gb can be had for under $300k and maintained annually for less.
2) 6.7 nanoseconds is a myth - unless you're in financials and the HPC space. There are so many conga-line security products today, and ill-conceived network architectures, and a thousand other things, that a 6.7-nanosecond expectation is a unicorn. We typically get to microsecond levels, and customers (even financials) are often fine with those numbers in critical environments.
3) Yes you can. There are a lot of customers using NSX and OpenStack with fully supported, fully modern security solutions in production today. I've been involved in said projects - the best part about those environments is that they're actually easier to deploy because it's software, and more and more platforms have fully exposed APIs and are built for automation and abstraction.
4) BS. Application security? For real? Most of the Global 2000 are NOT software companies. That means software development is not their forte. Which means that most will continue to have SQLi (and other trivial) problems well into the next decade.
5) Let's just say for a minute that the perimeter is collapsed - which I hope, at this point, it is for the majority of organizations who take network security seriously. That doesn't mean overlays can't have security insertion points, or that there can't be microsegmentation - because both already exist today.
6, 7, and 8... They make the least sense of any of the arguments because they are so narrowly pointed and the least relevant across scenarios.
Sure - fixing the endpoint and the software involved is an awesome approach to security. But traditional firewalls never fixed that in the first place; all they controlled was access. However, today's firewalls go well beyond that and provide much more granular application and user control, as well as threat services on top to boot.
But I'm sorry - if firewalls provided no business value, there would not be companies building and selling 10 & 100Gb firewalls for hundreds of thousands to millions of dollars to protect, segment, identify, and inspect - well beyond what this talk is lumping all "firewalls" into.
> Firewalls at the enterprise level are FULLY required for business to operate in a relatively secure manner today.
They're also completely unsustainable, because "firewall traversal" will always be a thing. The result is a tit-for-tat arms race between firewalls and applications, with application protocols being encapsulated deeper and deeper, and firewalls trying to inspect packets deeper and deeper. The overall system complexity skyrockets, and we all know that complexity is antithetical to security.
I predict that within the next few years, we'll see attackers successfully targeting vulnerabilities in firewalls and antivirus software directly. Add BYOD to that and the entire mess will collapse in a decade or two---probably much sooner.
Firewalls are a temporary workaround for poor application security, nothing more. They are pollution---they hurt everyone by turning connectivity into a hard problem. Once we have good appsec (which we already know how to do; we just haven't done it), the cost of firewalls will vastly outweigh their benefits, and they'll quickly disappear.
Appsec does not solve netsec and vice versa. A lot of these comments are being posted by people who may know appsec rather well, but know very little about netsec. Firewall technology has come a long way - again, if you think that it's simply L3/L4 filtering, you're completely off base.
People have been targeting firewalls and A/V for years already - this is nothing new, nor is it about to change as stated. However, these systems are much easier to secure given a generally small footprint and protected management access.
"they hurt everyone by turning connectivity into a hard problem" - again, sure - circa-'90s technology. I'm not sure you're aware of the positive enforcement model some vendors take today, focusing on allowing the applications that should be used and blocking those that shouldn't.
Firewalls are not temporary, they're like a lock and key on your house - they don't solve all security problems, but they're a key component within the system as a whole.
If you'd like to take a friendly wager I'll hold you to your last statement, because they're going to be around at least another two decades.
Enterprises already run very heterogeneous stacks/software and more often than not a large portion of that is proprietary or outside of their direct control in other ways. I don't see why any enterprise would take the risk of not having additional layers of security, layers that they can actually control.
I only see that going away if all software is reliably mechanically auditable for security.
Edit: actually, thinking of it, there are still many firewall features one wouldn't want to reimplement at the app level each time - rate limiting, network access logging, even basic routing; the list goes on. I'm not sure what definition of "firewall" you all are thinking about. To me it's any hardware or software appliance that processes incoming connections.
> there's still many firewall features that one wouldn't want to reimplement app-level each time like rate limiting, network access logging
One of the major things that was learned in the NCP->TCP/IP transition was that it's better to put complex logic in the endpoints, rather than in the network.
> basic routing
Routing isn't what a "firewall" does. Routing is what a "router" does.
> I'm not sure what definition of "firewall" you all are thinking about.
I'm talking about packet filtering that looks at more than the source & destination addresses, stateful packet filtering, "deep packet inspection", etc., especially when they're set up as default-deny.
Application developers shouldn't have to worry that their packets will succeed or fail to be delivered depending on their content.
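To make that definition concrete, here's a toy sketch (illustrative Python, not any vendor's implementation) of the default-deny stateful filtering I mean: outbound traffic creates flow state, and inbound packets pass only if they match an established flow.

```python
# Toy default-deny stateful filter: outbound packets open flow state;
# inbound packets are allowed only if they match an established flow.
established = set()

def outbound(src, sport, dst, dport):
    """Record the expected reply path when a host initiates a flow."""
    established.add((dst, dport, src, sport))

def inbound_allowed(src, sport, dst, dport):
    """Default deny: pass only replies to flows we initiated."""
    return (src, sport, dst, dport) in established

# Host 10.0.0.5 opens an HTTPS connection outward...
outbound("10.0.0.5", 49152, "93.184.216.34", 443)
# ...so the reply is allowed, but an unsolicited inbound packet is not.
assert inbound_allowed("93.184.216.34", 443, "10.0.0.5", 49152)
assert not inbound_allowed("203.0.113.9", 443, "10.0.0.5", 49152)
```

Real devices track far more (TCP state transitions, timeouts, NAT rewrites, DPI verdicts), which is exactly where the content-dependent delivery problem comes from.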
That might be true if the system is monolithic ingress/egress, but that's not true for any chassis based firewall that's rated at, or above, 100Gb today (and there are quite a few).
To be blunt: I am, and it's true that there are huge chunks of the industry I rarely interact with. I might have missed plenty. I particularly appreciate you bringing the NSX security framework to my attention. However, most of what you're mentioning are features that firewalls support, where my post said they needed features + assurance (aka "guards," or firewalls with security inside). Most of the firewalls, if evaluated at all, stay at EAL4 or lower: certified to stop "casual or inadvertent attempts to breach security." They don't even get pen-tested by pros or a source review. Any pro taking time to examine a unit will probably find a 0-day or bypass. Grimes' reviews showed many even had unknown services running, like FTP, without telling users. They're also prone to subversion, as only EAL6/7 reduces that, and the Snowden leaks confirmed it for many companies.
So, my comment and yours actually agree that network defense is necessary. I just added this in my original comment: (a) real endpoint security, (b) app/protocol-layer security, (c) the right features in the firewall, and (d) rigorous assurance and evaluation for each. The combination of these did resist strong attackers in the past and present. The Boeing SNS Server, for example, hasn't been compromised in 15 years despite multiple pen-tests by NSA and private labs. That's high assurance and the minimum of rigor that stops nation states. Commercial firewalls are largely not designed like that. So, they have the features but not assurance of implementation or self-protection. And they're not integrated enough with endpoints for enforcement to be split properly between the two. See below for an example of a stronger configuration:
Back to your peer review of his list, which I appreciate given you're an insider. No. 1 I've seen myself and agree. No. 2, yes, lol. No. 3 I learned from you and will repeat to anyone else not aware of these things. No. 4 is THE DUMBEST THING HE SAID, has never happened, and won't happen without the fundamental changes I preach about here. Enough said. No. 5: if my perimeter collapses, they're seeing (a) encrypted traffic that tells them nothing or (b) plain traffic whose nodes resist their attacks. The perimeter to me is minor DLP, DOS prevention, and IDS mainly. Nos. 6, 7, and 8: alright, that's 3 in his favor.
Your last point is the weakest one: companies regularly spend millions on inferior or non-solutions to problems because they don't know better. How much IT industry spends on something tells us nothing about its security or quality. If you're right, then Windows, Oracle, SAP, and Cisco switches are the highest quality and most secure things out there. (Checks the CVE's and news reports.) Nevermind...
> In the questions at the end, he points out the bug bounties are a PR Problem. When you pay a bug bounty and fix, the researcher needs to shutup instead of going public about the vulnerability. Of course, the researcher needs the publicity to build a business & credibility. So bug bounties are likely to die.
Because security researchers need to build their business, they will find vulnerabilities and disclose them, no matter what. The biggest splashes in the past year were Heartbleed and Shellshock. Correct me if I'm wrong, but neither was driven by bug bounties.
Bug bounties are a PR problem, but they are a smaller PR problem than a zero-day disclosure that results in massive exploits. The point is get the company slightly ahead of the PR curve, not to kill disclosure (which would be impossible).
The "Disclosure Process" doesn't explicitly spell it out, because I think it's just the mental baseline assumption all the authors were operating under, but everything ends up disclosed in the end. It's just a matter of timing.
Perhaps sometimes things are hidden and never disclosed, but it is at least not the general policy.
(Disclaimer: I work for a company that is a bugcrowd customer; I chose HackerOne's policies as my point to avoid any entanglement. I'm not aware of anything we've ever permanently hidden, either.)
Bug bounties are a PR problem if handled badly. If handled well, bug bounties say "look at all of the ways we've made our product more secure". Put another way, do you think Google is less secure or more secure because of their engagement with security researchers?
When I report a vulnerability, I ask (if it's not already known) what their timetable is for patching it. If they want more than 30 days for a simple fix, I disclose immediately.
The turnaround time for most projects I've reported to was less than a week.
Years ago at a large car manufacturer I had an argument with the "Data Security" team about firewall settings. They had some crazy, dumb ideas of what had to be on the firewall, and it was constantly causing us pain. I went to visit the person in charge to argue my case. He was in another building, outside the "Secure Datacenter". He argued with me for about 45 minutes about how nothing leaves that data center and the firewall is our last line of defense. I pulled a 5Gb 8mm tape out of my pocket, dropped it on his desk, and said, "That's a copy of every single customer in our database and our entire parts catalog with all order history. So much for your firewall."
The next day we had more intelligent discussions about firewall settings, and permissions on the mainframe for tape backups...
Additionally, my favorite trick to this day when visiting a company is just plugging a laptop into various random Ethernet ports. I was recently at a place that has a "Guest Wifi" that changes passwords every week, with the password emailed to everyone. Sitting in a random conference room, I plugged in and had 100% access to their corporate network. In today's world of IoT, relying only on a firewall for security is basically gross negligence.
To think a firewall is much protection at all is to stick your head in the sand and pretend everything is ok.
Great stuff through and through. It's why I use end-to-end security that doesn't trust the network wherever possible. Let them screw with my Ethernet ports: the NIDS just tells me there's a problem and where to find it. Or they walk away with a lot of data that might be useful for... Monte Carlo simulations or studies in random numbers? Haha.
I've seen multiple security audits that didn't see fit to mention such ethernet ports as a problem. I don't know why; it's possible management told them it was "out of scope".
If blade1 needs to talk to blade2, running it through a firewall means that the communications needs to flow out of the blade back to the datacenter network (ie. flowing north to the top of the rack switch). That adds latency and requires more network and firewall capacity, as all traffic needs to leave the chassis.
If there is no firewall requirement, traffic flows east/west within the chassis on the blade backplane. Security can be layered with host firewall or similar technology. (ie. IPSec, proprietary solutions like Unisys Stealth)
"If blade1 needs to talk to blade2, running it through a firewall means that the communications needs to flow out of the blade back to the datacenter network (ie. flowing north to the top of the rack switch). That adds latency and requires more network and firewall capacity, as all traffic needs to leave the chassis."
For years (15 ?) I have been putting very simple, very small ipfw rulesets in place on non-firewall systems that allow only the traffic I believe that system should be sending/receiving.
It's a firewall. It's on the host itself. It is a firewall that is securing "east/west traffic". It's a simple model that any host can implement and has very low (typically zero) cost.
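For concreteness, a host ruleset along these lines might look like the following (illustrative ipfw syntax for a hypothetical web server; the interface, subnet, and rule numbers are made-up placeholders, not a hardened production config):

```shell
# Minimal default-deny ipfw ruleset for a host that should only serve
# HTTP/S, accept SSH from a management subnet, and make a few outbound
# lookups. Addresses and ports are illustrative.
ipfw -q flush
ipfw add 100 allow ip from any to any via lo0                     # loopback
ipfw add 200 check-state                                          # existing flows
ipfw add 300 allow tcp from any to me 80,443 in setup keep-state  # serve HTTP/S
ipfw add 400 allow tcp from 10.0.0.0/24 to me 22 in setup keep-state  # mgmt SSH
ipfw add 500 allow tcp from me to any out setup keep-state        # outbound TCP
ipfw add 600 allow udp from me to any 53,123 out keep-state       # DNS, NTP
ipfw add 65000 deny log ip from any to any                        # default deny
```

The point is that the whole policy for a single-purpose host fits on one screen, which is what keeps the cost near zero.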
This is the first, and last, time I will ever use the term "east/west traffic". Christ.
Indeed. But the view of the NetSec team is that your server is not trusted to secure itself.
If every service in your ecosystem implemented ipfw rules (or equivalent), then that's great. But if your box got popped, can I be sure it won't be used as an attack vector for other machines? As the attacker, I will turn off the ipfw ruleset locally and start connecting out to other systems. If there were a firewall sitting between me and those systems, this would hit rules that should never be hit, resulting in the NetSec team getting some alerts.
Now I believe, like most sane people, that if you've popped an appserver, it's already likely to be game over, and this is a moot point.
For most applications, the app server doesn't live in its own little DMZ, and usually does have privileged access to the DB, often shares the same authentication domain as other services which is not properly secured (e.g. your [backup|log|monitoring|deployment] server connects to every machine with a service account, not SSH protected, and now I have the service account for all machines).
You wouldn't be foolish enough to have mixed admin functions (content management?), and user functions on the same app server... right? Right? Oh... wait... almost everyone does that.
I'd guess it more has to do with physical colo/datacenter layouts. Traffic moving between racks is considered east/west. Traffic moving in and out of your routers (and through to the meet-me-room) would take place over fiber pairs up into the ceiling or down through the floor.
It means he doesn't know about those PCI card firewalls specifically designed to enforce security policies on traffic flows within the network (or enclave). They're an uncommon thing in industry because most of industry thinks outward-facing firewall = secure. Others that know better didn't want to spend money on a security device attached to every server they own. So, the companies in the '90s offering highly secure versions got acquired after little sales, the current ones are obscure, and all he got to see was the more limited crap the industry adopted en masse.
A sad, recurring theme in INFOSEC industry. They're figuring out a lot of the old stuff, though, slowly but surely. Especially in cutting edge datacenters doing things like OpenFlow.
You're diagramming your network. Typically network engineers like using a tree layout - core devices at the top, flowing down to aggregation devices, down to access devices, down to the end devices. Hence traffic going north went to the core and out; traffic going south went to the end devices.
Typically in a campus you would see traffic going from the end devices up to the core, and then out of the core, either to datacenters/machine rooms, or out to the internet.
In a datacenter, historically, you had a few servers that talked to each other, connected to the same "access" switch (commonly referred to as the top-of-rack or end-of-row), and then almost all the traffic for those servers also went "north" to the core, with a much smaller amount going south. Almost all the traffic was from clients out in the corporate network, down to their specific set of services.
However, over time, end users represent a smaller and smaller portion of what an application does. More systems integrate with more systems - pulling in data from many other systems, doing analysis, backup, etc. This is the east-west traffic, flowing between things at the same level of the tree diagram. East-west traffic is by far the largest throughput in a modern DC.
When the traffic was mostly north-south, network engineers secured the traffic at the edge of the DC - where the DC joined the core/internet. Now the traffic is between servers that are sitting in the same rack/row/room/DC, securing it in the same way just doesn't work.
Cool-kid words for cloud infrastructure in-groups. Somewhere along the line someone with clout and a laser pointer directed the attention of a roomful of people to an analogy in a Powerpoint presentation, and to curry favor and demonstrate loyalty, dear leader's clones started parroting each neologism.
Note that there is a difference between isolating devices and firewalling in the sense of packet inspection. You're still going to want selective routing and packet forwarding (like port forwarding).
Firewalls will continue to be useful for complex devices that connect directly to the internet (like laptops on public wifi), where all sorts of things you wouldn't want others accessing are exposed by default.
What most consumers and sysadmins think of as "firewalls" and what the presentation is talking about are two different things. Simple packet filters ("don't allow communication on port 123 unless it's from IP a.b.c.d") will always be part of a defense-in-depth strategy, but things like stateful packet inspection tools from big-name firewall vendors do not scale when the number of cycles they have to inspect a packet keeps getting lower, especially when they have fewer cycles to actually do basic I/O to get the packet through to the destination.
High performance networking means networking hardware has to get the packets moved faster, so there's less time to do processing on them.
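The cycle budget is easy to check back-of-envelope, and it's where the 6.7 ns figure mentioned earlier comes from: at 100 Gb/s line rate with minimum-size Ethernet frames, a device gets under 7 ns per packet.

```python
# Per-packet time budget at 100 Gb/s line rate with minimum-size frames.
line_rate_bps = 100e9
# 64-byte frame + 8-byte preamble + 12-byte inter-frame gap = 84 bytes on the wire
wire_bits_per_packet = 84 * 8
packets_per_second = line_rate_bps / wire_bits_per_packet   # ~148.8 Mpps
ns_per_packet = 1e9 / packets_per_second                    # 6.72 ns

# At an assumed 3 GHz core, that's the whole budget for receive, inspect,
# and transmit:
cycles_per_packet = ns_per_packet * 3.0                     # ~20 cycles

print(round(ns_per_packet, 2), round(cycles_per_packet, 2))  # 6.72 20.16
```

Twenty-odd cycles doesn't even cover a cache miss, which is why deep inspection at full line rate requires either specialized silicon or giving up on minimum-size packets.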
> adding a tiny fixed latency is independent of total system bandwidth
Incorrect on both counts.
A) it's not a tiny latency, not compared to the overall system latency in many cases. This is explained in the article. The speed of light isn't getting any faster, whereas communication rates continue to increase. Which means you have more data on the line at once, which brings me to:
B) most data flows are finite - any reliable communication (such as, for instance, anything over TCP, and a good chunk of things over UDP as well) takes a certain number of round trips to come up to speed. As such, the overall bandwidth of a TCP connection (or ghetto TCP via any other means - pretty much any reliable protocol suffers from this) is limited more by a fixed delay the faster the link is.
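The window-limited ceiling makes this concrete: steady-state TCP throughput can't exceed one window per round trip, so a fixed middlebox delay added to the RTT cuts throughput by a factor that grows with link speed. A small worked example (the window and RTT values are illustrative):

```python
def max_throughput_bps(window_bytes, rtt_s):
    """Window-limited TCP ceiling: at most one full window per round trip."""
    return window_bytes * 8 / rtt_s

window = 64 * 1024                              # classic 64 KiB window, no scaling
base = max_throughput_bps(window, 100e-6)       # 100 us intra-DC RTT
with_mbox = max_throughput_bps(window, 200e-6)  # +100 us of middlebox delay

# ~5.24 Gb/s drops to ~2.62 Gb/s: the same 100 us that is invisible on a
# WAN path halves throughput on a fast local path.
print(round(base / 1e9, 2), round(with_mbox / 1e9, 2))
```

On a 100 ms WAN RTT the identical delay changes throughput by well under 1%, which is why the complaint is specific to east/west traffic.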
I think you and snuxoll are right that what's throwing off our discussion is that "firewall" is such an overloaded term, with such diverse functionality, that we're all talking about different things. A taxonomy that industry agrees on might be helpful.
Passwords are unsafe for the same reason that roads are unsafe: human beings. Things work well enough for most people, most of the time. However, during certain situations, most people aren't trained correctly and often do the wrong thing. What's more, there's even an accepted culture of doing the wrong thing.
Extending the analogy, passwords could be much safer, just as certain roads are much safer (better engineering, guard rails, fluorescent markers, accurate speed limits for a given stretch of road, police/EMT accessibility). Enforced requirements for complex passwords, required routine password changes, 2-factor auth, etc. They aren't perfect, but they can exceed 'good enough' requirements.
I'm thinking more and more that the best way to do passwords is to not - you generate a random diceware passphrase (or similar) and give it to the user via a secure channel, run it through the KDF, and throw the original away. Preferably on an entirely separate server from everything else.
It still doesn't prevent users from being stupid w.r.t. writing down passwords, but it at least presents users with reasonably secure logins that are relatively easy to remember.
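A minimal sketch of that flow, using only the Python standard library (the short wordlist and KDF parameters are illustrative placeholders; a real deployment would use the full 7776-word Diceware list and a memory-hard KDF like scrypt or Argon2):

```python
import hashlib
import secrets

# Illustrative short wordlist; real Diceware uses 7776 words.
WORDS = ["correct", "horse", "battery", "staple", "orbit", "lemon",
         "quartz", "raven", "tundra", "cobalt", "ember", "wisp"]

def generate_passphrase(n_words=6):
    """Server generates the passphrase; the user never picks one."""
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

def store(passphrase):
    """Run through a KDF, keep only salt + digest, discard the plaintext."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return salt, digest

def verify(passphrase, salt, digest):
    candidate = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    return secrets.compare_digest(candidate, digest)

pw = generate_passphrase()      # delivered to the user over a secure channel
salt, digest = store(pw)        # only these are persisted
assert verify(pw, salt, digest)
assert not verify(pw + "x", salt, digest)
```

The interesting property is that the server-side database never sees a user-chosen (and therefore guessable) secret, only the digest of a uniformly random one.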
And by plain-text, I mean the server receives information that could then be used to authenticate later.
For instance, if you send the sha of a password, and then store the sha of the sha, you're still sending the password in plaintext, it's just that it's not the password the user entered.
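A quick demonstration of that point (illustrative Python; "hunter2" is a placeholder password): whoever captures the wire token can replay it without ever learning the user's original password, so the token *is* the password.

```python
import hashlib

def client_login_token(password):
    # Client "protects" the password by hashing it before sending.
    return hashlib.sha256(password.encode()).hexdigest()

def server_stored(token):
    # Server stores sha(sha(password)) and compares against it on login.
    return hashlib.sha256(token.encode()).hexdigest()

wire = client_login_token("hunter2")   # what an eavesdropper captures
db = server_stored(wire)               # what the server keeps

# Replay attack: present the captured token directly - no need for the
# original password at all. The token is plaintext-equivalent.
assert server_stored(wire) == db
```

This is why client-side hashing alone buys nothing against a network attacker; you need a challenge-response scheme or TLS so the same credential can't be replayed.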
I also wonder how the move to IPv6 will affect the current paradigm. Internet-facing firewalls were typically also NAT machines to save IPv4 address space, but all of that is gone in IPv6, meaning your global address is now exposed and a hacker can persistently try to compromise your machine if you don't firewall.
On top of that, many modern defenses are based on IP reputation or blacklists. There are several companies that track the reputation of all 4 billion IPv4 addresses, with scores updated every 5 minutes. With 2^128 IPv6 addresses (about 3.4×10^38 - and roughly 1.8×10^19 even counting only /64 prefixes), this will be a lot harder to do.
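The scale gap is worth spelling out, since it's the whole argument: even scoring only /64 prefixes (roughly one per customer LAN) multiplies the tracking problem by the size of the entire IPv4 address space.

```python
# Scale gap between per-address IPv4 reputation and IPv6 tracking.
ipv4_addrs = 2**32          # ~4.3 billion: feasible to score individually
ipv6_addrs = 2**128         # ~3.4e38: hopeless per-address
ipv6_64_prefixes = 2**64    # ~1.8e19 /64s, roughly one per customer LAN

# Tracking just the /64s is as hard as tracking the whole IPv4 internet,
# repeated 4.3 billion times over.
factor = ipv6_64_prefixes // ipv4_addrs
assert factor == 2**32
print(f"{factor:.3e}")      # ~4.295e9
```

And since a single host can hop freely within its /64, per-address reputation scores become close to meaningless anyway.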
I'd like to counter: IoT will probably change this view (though the points raised are still valid).
IoT devices generally have utterly terrible security, and you'll not want them publicly exposed. I can envisage a place for a house-wide firewall of some sort, to stop publicly addressable devices from being knocked offline or exploited by persons unknown.
So there will be a need for a "virtual front door" - something a home router should really do, but fails at utterly in most cases.
I think the article is referring more to enterprise installations for firewalls - I don't think we're worried about 100G internet to domestic endpoints any time soon. Domestic use will still make sense, likely for years to come. In data centers? Not so much.
I was simply pointing out that the audio could be better in case it would be a deal-breaker for others, or in case another source was available. Sometimes pointing out a problem is the first step to finding a solution!
P.S. The content is incredible and the presentation was great, I was just trying to help the audio from detracting from the overall excellent quality.
Awesome Stamos talk (as per usual), but the headline here is a tad clickbaity. Perhaps more accurate to the talk is that they will matter less and less as time goes by. If you don't like that headline - do go watch the talk, there's a lot more subtlety than 8 words convey, and Alex is a fun speaker, with one of the highest signal-to-noise ratios around.
Heck - if you agree with the headline, still go watch the talk. If you care the slightest bit about security, you won't be sorry.
Firewalls are not a 100% solution, nor have they ever been. Defense done correctly is always defense in depth, and hardware firewalls are always likely to be part of that solution.
Alex's point in the video - and one well-made, I think - is that as the landscape evolves, the value-add of hardware firewalls becomes less and less, because assumptions about the environment they are in are changing. Anyone depending only on firewalls (I have called this the "hard candy shell" in the past) was vulnerable before - and as time passes, they are becoming increasingly vulnerable, because the things a firewall can be useful about are becoming less relevant, due to architectural changes and exploits moving up the stack toward the app.
I've said for a long, long time - I don't care how good your perimeter defenses are, you gotta harden the hosts. And in the end, this also is moving up the stacks. Your hypervisor may be secure as all-get-out, but if your app is open to trivial exploits, you're still screwed. You need to do a reasonable amount of security at all levels, including bits like user evangelism (disallowing of insecure passwords, perhaps promotion of MFA) if you want to have an expectation of security founded in reality.
The human element - users and passwords - cannot be overstated, because a chain is only going to be as strong as its weakest link, and if you do all of YOUR shit right - that's gonna be the end-user. Someone who can figure out how to replace passwords with a mechanism that ties access and authentication to a single human being in a non-trivially spoofable and inexpensive manner could become very rich...
1. The number is more than 800.
2. NSX is being deployed primarily as a security tool for micro-segmentation. It is displacing firewalls in the data centre in a substantial way.
3. Change in the data centre is slow. Infrastructure is commonly built on 10-15 year cycles so actual purchases are a lagging indicator.
I'm not sure I agree with the argument that faster line rates create a speed limit for firewalls. It seems like firewall hardware could parallelize internally at layer 3, sharding by source/destination IP or port, so all packets from a single flow go through the same processing core, no? This would add a finite latency, but I don't think it would impact throughput.
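The sharding idea can be sketched in a few lines (illustrative Python; real hardware uses a deterministic hash like Toeplitz/RSS rather than Python's salted `hash`). The one subtlety is that the hash must be symmetric, so both directions of a flow land on the same core and share state:

```python
# Sketch: shard packets across cores by a symmetric hash of the 5-tuple,
# so both directions of a flow hit the same core (needed for stateful
# inspection). N_CORES is an arbitrary example value.
N_CORES = 16

def flow_shard(src_ip, dst_ip, src_port, dst_port, proto):
    # frozenset makes the key direction-independent: (a->b) == (b->a)
    key = (frozenset([(src_ip, src_port), (dst_ip, dst_port)]), proto)
    return hash(key) % N_CORES

fwd = flow_shard("10.0.0.1", "10.0.0.2", 49152, 443, "tcp")
rev = flow_shard("10.0.0.2", "10.0.0.1", 443, 49152, "tcp")
assert fwd == rev   # both directions share per-flow state on one core
```

The catch the replies below point at: sharding balances *across* flows, so aggregate throughput scales, but any single flow (or a flood aimed at one shard) is still limited to one core's capacity.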
1. Why not firewall in the operating system and distribute/scale the load evenly? Centralising the firewall was done when OS provisioning was bad; now we have Puppet/Chef/Ansible, and firewall operation is simple enough.
2. Simple firewalling is effectively worthless when 99% of all traffic is HTTP/S and SSH. To add value you perform flow analysis combined with deep packet inspection to build metadata, then pass it through heuristics/pattern analysis to perform threat detection.
3. Passing through any device creates latency on the order of milliseconds, which is not acceptable in east/west traffic loads. Parallelisation, caching, and flow cut-through will all incur a latency penalty.
Cost; specifically, power costs and scaling curves.
If you watched the video, you'd see Alex pointing out the disparity between the best dumb switch he could buy (30Tbps, 5kW) and the best firewall (120Gbps with some, but not all features turned on, using 2.4kW). Point being, he could run a datacentre with one switch using 5kW, but would need 250 firewall boxes using 600kW. And trends are driving the two apart; hardware firewalls aren't keeping up.
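The arithmetic behind that comparison, using the figures quoted from the talk:

```python
# Switch-vs-firewall disparity from the talk, as plain arithmetic.
switch_bps, switch_watts = 30e12, 5000   # 30 Tb/s dumb switch at 5 kW
fw_bps, fw_watts = 120e9, 2400           # 120 Gb/s firewall at 2.4 kW

boxes = switch_bps / fw_bps              # firewalls needed to match the switch
total_kw = boxes * fw_watts / 1000

print(int(boxes), int(total_kw))         # 250 boxes, 600 kW
```

That's a 120x power gap for the same aggregate throughput, before counting rack space, cabling, and the management burden of 250 devices.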
Probably a lot. I'm not sure exactly what it is though. If it were that easy, we'd have line-rate firewalls from every manufacturer. Considering that the performance rates are much lower, there are indeed challenges.
Tuple based hashing can get complicated and is highly dependent upon the installation. Some would want source IP. Some want destination IP. Some want a mixture of destination IP and source IP and port. How much you can get through each core (in aggregate) definitely impacts throughput.
Think of a volumetric DDoS attack that rolls into a network over a single path and overwhelms a 1/10/40/100G link. You could have a dozen of those links, but your throughput is hosed because that link is effectively saturated. It might only affect 1/12 of your capacity, but you can't use any of the other links. I hate to bring BGP pathing into a firewall discussion, but maybe it makes sense.
Firewalls today are able to filter at line rate for a single flow on an interface. If you want to allow 100G by handling 10 10G flows in parallel, that is completely possible, but not quite the same thing.
Delivering this function is very costly: because of stateful inspection you must implement flow sticking, which requires buffering, which then impacts performance.
Firewalls fall into a dark category for IT -- cover-your-ass implementations done without questioning the problem-solution dynamic. For years, the cloud applications I work with have been slowed or made glitchy due to company firewall interference. I will not miss them when my users' experience improves by leaps.
One of the bullet points says "DNSSEC is dead". But what is the plan then? It sounds odd to rely on a completely insecure, unencrypted service for DNS (plus all the new ways in which a secure DNS service could be used - to distribute public keys, for instance).
It simply means you cannot rely on DNS (and domain names and such) for your security; your security must be achieved via other means (user auth or such). DNSSEC does not help that much, in reality, despite the implication it might, and that's part of why SSL and Signed Certs exist - it's a given that when I connect to www.microsoft.com from starbucks wifi, the IP DNS returns may or may not be microsoft's. With DNSSEC - presumably you may have a higher level of assurance the IP is from Microsoft, but it is not really practical to implement everywhere due to complexity, so you cannot depend on it to solve for this sort of issue.
DNSCurve solves none of the problems DNSSEC solves, and vice versa.
The only realistic alternative to the DNSSEC PKI is the global SSL CA PKI, with authentication higher up in the protocol stack. That does not necessarily mean the status quo, though, as the latter has obvious room for improvement.
You know, I keep hearing this, especially related to IPv6, but the problem to me isn't that the industry is being lazy; it's that all the guard systems you reference are archaic black boxes to most IT people. If you want to start pushing guards to the endpoint of every server and desktop, OK, but show me a product that makes it easy to do, where I don't have to be a unixbeard from a defense agency to know how it works...
I don't disagree, but I hear a lot of terminology thrown around by you with very little substantive practical and technical information. How about a guide to guards, EAL, etc. for the common sysadmin?
Thanks for sharing. Very interesting presentation. As soon as he said the browser is the new OS he lost me, but I understand he's coming from the Internet Industry. I completely agree that we need to design secure application architecture though, and that's why I am excited about languages like Go which facilitate a new client server model that doesn't involve the browser.
The browser took over that throne 10 or 15 years ago, with the rise of web 2.0. We make and download way, way more applications that run in web browsers (aka every web site) than applications that run on Windows, OSX, or any other OS.
I have a deck somewhere illustrating this. My team supported something like 500 installed applications with more than 50 users across a 30,000 user base in 2004 or so. Programmers were churning out PowerBuilder and VB apps, all of which sucked to varying degrees.
Today, I'm not on that team, but the number is something like 50-75. I cannot remember the last time I saw a new bespoke client/server app.
No denying there was lots of breakage back then, and you would have to go back to the late 90's to find the days when Powerbuilder and VB were new. Developers making mobile apps don't assume the browser is the platform. Developers making games don't assume the browser is the platform (even though there are plenty of in-browser games), but it seems we don't question the browser for every other solution. All the recent development in sockets, channels, messaging, etc. has reopened the box of potential solutions for non-browser client server, IMHO.
Strongly agree: network-based firewalls don't make sense given performance needs and their placement at the edge of an increasingly ephemeral network perimeter.
Host and edge / stub firewalls with strong orchestration will be far more pervasive along with lots of network traffic auditing and anomaly detection that happens in near real-time, but out of the line of fire (out of band).
I haven't seen firewalls on the edge in ages. I guess it's more of a Fortune 500 attitude than tech company thing.
"Firewall" devices still have a place inside your network beyond the perimeter. Today they do ACL enforcement as well as DPI, IDP, IDS, tap data, etc. It's not unheard of to run a "firewall" in completely passive, monitor-only mode to generate telemetry data.