Powerful, highly stealthy Linux trojan may have infected victims for years (arstechnica.com)
273 points by SoapSeller on Dec 8, 2014 | hide | past | favorite | 99 comments


I concede that it's not a panacea, but I really do feel like filtering outbound requests is going to be one of the best defences we have against stuff like this going forward.

It protects you against:

- viruses / trojans that try to call out

- ad tracking (and ads in general, if you want)

- intrusive analytics

- suspect consumer devices (TVs that transmit live audio, network cameras that connect to the cloud even though the cloud feature is disabled, content players that try to report your activity)

Edit: formatting


One kind of neat thing I do is filter outbound traffic based on the user. If you're using a newish distro like CentOS 7/RHEL 7 or a newer Ubuntu, you can filter packets with iptables based on the user.

I force the apps in various Docker containers to run as different users (one per major app or major suite of apps), and use iptables to lock those bits down. My WordPress got owned and there was a little Perl script attempting to contact the C&C server, which iptables happily blocked. It isn't a panacea, but security works best in multiple overlapping layers.
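For anyone wanting to try this, a minimal sketch of the owner-match approach (the user name, ports, and log prefix are my assumptions, and installing the rules needs root):

```shell
# Let the wordpress user out only for DNS and HTTPS; log, then drop
# anything else it originates (e.g. a perl script calling home).
iptables -A OUTPUT -m owner --uid-owner wordpress -p udp --dport 53 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner wordpress -p tcp --dport 443 -j ACCEPT
iptables -A OUTPUT -m owner --uid-owner wordpress -j LOG --log-prefix "wp-out-drop: "
iptables -A OUTPUT -m owner --uid-owner wordpress -j DROP
```

Note that the owner match only works in the OUTPUT chain, since the owning socket is only known for locally generated packets.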


I really like this idea. I'm trying to take it one step further in fact.

My filtering is done at the gateway, and I'm hunting for ways of communicating which packets are associated with which users (on Linux and Mac), probably by tagging an unused part of the packet with some kind of ID.


Uh, set the evil bit as it leaves the machine and get the gateway to filter it?


Maybe with VLAN tagging?


I like this idea.

It does assume that a user can't set the VLAN themselves, but my switches support this and I think it would be really cool to have segregated networks for different levels of user trust.


Guys, calm down! This has been applied for ages and it's networking 101. Heck, it's the freaking 'hello world' :-)

And actually a user can tag his packets, but the security is applied at the switch level, where you can just strip it and add yours. Too bad that means it is not per user, but per switch port... although you may leverage 802.1x... ok, this is getting complex :-)

Easier: block all outgoing traffic except for an authenticated proxy that every user/app has to go through in order to reach the internet.
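As a sketch, the proxy half of that is only a few lines of Squid configuration (the auth helper path and realm name are assumptions; the firewall then blocks all egress except from the proxy host):

```
# Require a username/password for every request; deny everything else.
auth_param basic program /usr/lib/squid/basic_ncsa_auth /etc/squid/passwd
auth_param basic realm egress-proxy
acl authed proxy_auth REQUIRED
http_access allow authed
http_access deny all
```

Per-user attribution then falls out of the proxy's access log for free.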


QinQ


The DSCP field is a good candidate for that.
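Something like this could work as a sketch (the UID and DSCP value are arbitrary picks of mine, and both rules need root):

```shell
# On the end host: mark packets originated by UID 1001 with DSCP 10.
iptables -t mangle -A OUTPUT -m owner --uid-owner 1001 -j DSCP --set-dscp 10
# On the gateway: match the mark, e.g. to log (or police) that user's traffic.
iptables -A FORWARD -m dscp --dscp 10 -j LOG --log-prefix "uid-1001: "
```

Caveat: anything with root on the host can rewrite the field, and intermediate gear sometimes re-marks DSCP, so it's a hint rather than a security boundary.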


That's been successful enough in the past that there's a strong selective pressure for malware to look more like legitimate traffic. How much time are you going to spend reviewing each HTTPS request made to an EC2 IP address?

Similarly, if that works, there's zero chance that a large vendor won't use the same endpoint for software updates, advertising and activity tracking, etc. to make filtering impossible.


I don't dispute that malware is incentivised to look like legitimate traffic.

I wanted to respond to your comment about vendors using one endpoint to inhibit filtering. They have as much freedom to do this as I do to deny them any internet access if they do. If the product does not operate as advertised in light of this, it will be promptly returned to the retailer.

Also, good filtering isn't based on an IP address alone but that's splitting hairs. Yes it is time consuming but I argue privacy isn't free, it must be protected and defended, we all have to find the medium we are happy with.


> If the product does not operate as advertised in light of this, it will be promptly returned to the retailer.

I support this in principle but it really needs regulatory reform: it's hard or impossible return opened software or a device when the manufacturer changes their policies a year after you bought it. That latter point is becoming more relevant as we increasingly see computers deeply integrated into expensive devices with long service lifetimes. Just wait until a car manufacturer pushes out one of those combined “security fixes and new terms of service / we collect your personal data” patches and you're faced with living with problems, suing, or clicking Accept and hoping you'll have better options in a few years when you're looking for a new car.

> Also, good filtering isn't based on an IP address alone but that's splitting hairs. Yes it is time consuming but I argue privacy isn't free, it must be protected and defended, we all have to find the medium we are happy with.

That's a worthy sentiment but I think it's a losing game because at its root it's a social problem. It's going to be a tough battle as long as companies have very limited regulation for collecting personal information, the ability to unilaterally change service terms after purchase with no right to compensation, and – particularly critical – no penalty for security failures except in rare cases where an expensive lawsuit succeeds.

(That wouldn't directly affect outright malware but corporate responsibility would increase the incentives to take security more seriously than most companies have in the past)


Definitely. And command and control centers can also be hosted at a hacked/badly managed site using something like spammimic [1] for messaging.

[1] - http://www.spammimic.com/


Yeah, I'd say this would lead to a cottage industry in selling hacked sites for C&C purposes except that market has been thriving for many years.


Outbound filtering is great but easily beaten with a mild level of sophistication. Adding outbound monitoring and analysis is gold. What kind of DNS requests am I making? How many per hour? How many per page load? How many per domain? What do my flows look like? Do I have DNS flows that last an hour? HTTP connections that serve twice the data of a normal page load? How many HTTP connections per domain? Do DNS responses return addresses that aren't listed for my servers?

Security layering says you should force all your traffic out through your necessary ports, then scrutinize your traffic through those limited egresses.


A "normal" web page today makes an unbelievable number of connections to different CDN destinations. As soon as you browse the internet "normally" on the same machine, you'd spend maybe an order of magnitude more effort analyzing the traffic than consuming the content. The solution would be not to browse the internet at all from the computers which aren't in the DMZ, like the military (hopefully) does.


First, you'll do your filtering on a different box. If someone has root, it's really easy to change the firewall on the box. Also trivial to delete your logs before they are scanned. You also don't want to be doing IDS work on your end clients.

>"Normal" web page of today makes unbelievable number of connections to different CDN destinations. As soon as you browse the internet "normally" on the same machine, you'd spend maybe an order of magnitude more to analyze the traffic than to consume the content.

This is not really true. I'm not advocating Deep Packet Inspection here. I'm talking about taking a look at per flow stats and setting a baseline for your network. In this case, you wouldn't be processing each packet in real time, just looking at the metadata. To be honest I don't really want to know what's in each packet that comes out of my networks. I do want to have a baseline and look for anomalies.
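As a toy illustration of the metadata-baseline idea (the log format and the 3x-mean threshold are my own inventions, not a recommendation):

```shell
# Build a toy "queries per host" log, then flag hosts whose count is
# more than 3x the mean across hosts -- crude anomaly detection on
# flow metadata, no packet contents involved.
printf 'host1\nhost1\nhost2\nhost1\nhost2\nhost3\n' > /tmp/queries.log
for i in $(seq 1 50); do echo badhost >> /tmp/queries.log; done
awk '{ if (!($1 in n)) hosts++; n[$1]++; total++ }
     END { mean = total / hosts
           for (h in n) if (n[h] > 3 * mean) print h, n[h] }' /tmp/queries.log
# → badhost 50
```

A real system would do the same over NetFlow/sFlow records and per-flow durations rather than raw counts.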


How would you distinguish between e.g. an HTTPS request to a CloudFlare IP for a legitimate blog comment and one for a throwaway blog being used for C&C?


One HTTPS packet? You won't know. But when you add context you can start to understand what traffic is legit. There's really no silver bullet for security, but with a combination of restrictions and data collection, you can raise the bar on the type of attack that can go undetected.


The problem is that this rapidly becomes unworkable for all but the smallest of networks with a huge amount of available time. False positives make this contest incredibly asymmetrical – ever think about how many hours went into developing IDS rulesets looking for malware signatures which were obsolete once people switched from IRC to HTTP? That repeated for HTTP to HTTPS, detecting “unusual” networks rapidly became ineffective as throwaway accounts or compromised hosts became popular, etc.

That process will repeat for every single bit of context you add. The worst case scenario for this is full-on steganography – public blogs posting scraped content with commands hidden in images or text statistics, bots searching twitter or subscribing to RSS feeds, etc. and waiting for user-triggered network activity before transmitting so it looks like just one more request in the 100+ made by a modern webpage.

For anything other than preventing DDoS attacks this is a slow, futile grind. It's much more effective to focus on preventing an attacker from running code inside your network than trying to clean up the mess after they do so. That's a combination of things like changing the UI to avoid asking the user to make critical security decisions they can't meaningfully answer and, most likely, an app-store like model for most people because even that level of review greatly exceeds what a non-expert can do.


Do try it and tell us how many different servers were contacted in just one web-surfing session where you visit all the sites you visit in one day.

I can give you an example of a single page: I've just opened www.yahoo.com and counted 23 HTTPS connections and 7 HTTP connections to different IP addresses. I've removed the multiple connections to the same IP from this count, which would otherwise be bigger. Two of those don't even have a reverse DNS entry.

Then think about the fact that malware regularly appears in the content of otherwise "approved" ads. You just don't have the metadata to recognize those.


Yeah I'm not sure I understand your point. I understand that web surfing generates a ton of connections and I don't think that you can pick a packet out of the pile and say that one is the bad one. I've made that fairly clear. I don't even think we disagree. So I'm confused about what you're trying to get at?


In my opinion the arguments you give, specifically "I'm not advocating Deep Packet Inspection here" and "one HTTPS packet? You won't know. But when you add context you can start to understand what traffic is legit", aren't based on actual knowledge but on a plain guess. That's why I gave you a specific example of just one page.

Moreover, observing "the packet" independently of its connection of course doesn't make sense. But it's you who talked about single packets; I gave you an example of 30 connections, each with hundreds of packets. Even observing just the connections as connections, you can't know which ones are potentially malicious unless you analyze their content.


Only if you filter outgoing requests from a different box. If the attacker owns your kernel they can bypass the filtering.

A nice OpenBSD box as an outbound filter does make sense though, with a different control mechanism.


What about connecting the suspect device to a switch and filtering all of its outbound requests on that switch? If a request is approved, forward it to the internet.


I think you will find you need a decent packet filter that understands state on that "switch", and most switches won't do that in hardware, so it may as well be an appliance.


A division in the company I work for makes a product that addresses unwanted outbound traffic: http://www.novetta.com/commercial/novetta-advanced-analytics...


> Even a regular user with limited privileges can launch it, allowing it to intercept traffic and run commands on infected machines.

Huh, how do they do that?

> The underlying executable file is written in the C and C++ languages and contains code from previously written libraries, a property that gives the malicious file self-reliance.

Does that mean something? I don't get it.

I thought arstechnica usually was written for a technical audience.


> The underlying executable file is written in the C and C++ languages and contains code from previously written libraries, a property that gives the malicious file self-reliance.

I think they mean that the executable is statically linked.


If they're going to use a description that probably sounds cryptic to average readers, they should at least use a description that's meaningful for the more technically knowledgeable.


>> Even a regular user with limited privileges can launch it, allowing it to intercept traffic and run commands on infected machines.

> Huh, how do they do that?

They must have some pretty powerful zero-day vulnerability they're exploiting. Expect patches.


Except a few paragraphs up they say:

"The trojan is able to run arbitrary commands even though it requires no elevated system privileges"

This is just a terribly written article. Few if any tech details, and any tech language just doesn't make sense/contradicts itself.

Sadly, this is another example of a larger downward trend in the quality of articles on Ars. They should use some of that sweet Condé Nast money and clone John Siracusa a few times!


I think they must be talking about privilege escalation via an undisclosed vulnerability. It "requires no elevated system privileges" to run, but if it is running arbitrary commands, it must have gotten those privileges somehow.


Details via: https://securelist.com/blog/research/67962/the-penquin-turla...

Notably, the C&C domain has been sinkholed by Kaspersky.

This has been linked to the complex "Turla" industrial espionage malware, as it shares a C&C server. (Turla: http://securelist.com/analysis/publications/65545/the-epic-t... )


The Turla malware sends data back using PHP proxies running on hacked servers. The same PHP proxy script is used by MiniDuke.

MiniDuke in turn screams Russia in its target selection and spear phishing related to the Ukrainian bid to join NATO.


Ukraine is trying to join NATO? That seems like a recipe for disaster; why not let Russia have a buffer zone of countries it controls?



It's a userland trojan and it's "one of the most complex APTs in the world"?

One wonders what these people would think if they found MosDef in the wild.


"This is sophisticated nation-state malware" doesn't sound quite as stupid as "we found this a year ago but didn't know what it was".


"This is ... nation-state malware"

I was wondering why Symantec thought it was from a culturally-homogenous country, and why that would be relevant...


I believe they mentioned the fact it couldn't be detected by netstat as an example of its sophistication.

idk how this got 127 upvotes.


The irony for me is that if you wrote a trojan that could only be detected by netstat I'd be boned, because I find netstat incomprehensible and tend to use tcpdump to solve those kinds of problems.


I share your snarky reaction, but a more depressing way of looking at it is to remember that these attackers are using ancient techniques and are still infecting large numbers of systems. It's not true that the last couple of decades have made no progress, but it's somewhat sobering that the bar for a successful attacker is still set this low…


> last couple decades have made no progress

Who has made no progress?

Malware forensics experts are not using netstat to detect malware, nor are sysadmins. There are plenty of more modern techniques.

The problem is that no one has cared about security (especially governments and many big corps) until recently, not that the toolsets have been weak. Which is why all the news this year is coming out: they finally decided to check, for the first time, whether they've been compromised.


Note that I actually said it's not true that there's been no progress.

My point, rather, was that progress has been unevenly distributed so there's a disturbingly large range in practice: it's certainly true that actual experts are not using netstat to detect malware but it's also true that most system administration is not performed by security experts. The same places which waited until this spring to upgrade from Windows XP or where security updates are blocked behind long review processes also tend to be the places where someone learned how to use netstat 20 years ago and doesn't want to learn a new skill.


I can't shake the feeling that current security measures are designed in the wrong way. Antiviruses are fundamentally flawed (blacklist instead of whitelist; mostly curing instead of preventing). Filtering traffic is difficult (it is relatively easy to hide information in heavy legitimate traffic).

Maybe the way ahead is in ensuring that files (and images in memory, flash,...) don't get changed. Maybe we should have some external device which monitors computer components for change? It should have access to all the computer parts and should be without any interfaces except for physical ones (typing directly on its touch screen). Just an idea...


The problem is that we currently rely on user discretion. Users are really bad at preventing malware from infecting their system.

We can do some things, sure -- sandboxing by default, etc. But when it comes down to it, if the user is able to click an 'allow access to my banking information' button, then that user will be getting screwed.

The only response I can think of is taking that power out of the hands of the user, and putting everyone in a walled garden. That is something that I find distasteful.

Is there a solution that leaves users in power over their own computers? I don't know.


In practice I have power over my computer but there is no 'allow access to my banking information' button for most of my bank stuff because the banks don't allow it. The info is on their servers and in the most extreme case I have to use a physical security device to generate a code to access it. Dunno if that's the way forward?


A tripwire monitor with "immutable" data (btrfs/zfs snapshots or nix pure packages) to rollback in case of suspicious modification. ChromeOS does this at the OS update level. There's also the genode way http://genode.org/ for strong isolation (AFAIK it's different from VM and Containers)
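The detection half of that can be sketched in a few lines (paths are placeholders; a real deployment would store the baseline off-host or on immutable media):

```shell
# Baseline SHA-256 hashes of files you care about, then verify later;
# on a mismatch, the snapshot/rollback machinery would kick in.
mkdir -p /tmp/demo
echo "original contents" > /tmp/demo/app.bin
sha256sum /tmp/demo/app.bin > /tmp/baseline.sha256
echo "tampered" > /tmp/demo/app.bin     # simulate a malicious modification
sha256sum -c /tmp/baseline.sha256 2>/dev/null || echo "MODIFIED: roll back to snapshot"
```

The hard part isn't the hashing, it's keeping the baseline and the checker themselves out of reach of whatever you're defending against, which is exactly what the external-device idea above is about.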


Microsoft has tried to do security in the way you have mentioned. It only pisses pleb users off, and power users don't really need it.

The real solution is not in prevention but in recovery. If the recovery option is easy and fast, then prevention doesn't really matter - detection and recovery will be key.


How would you recover if the intent of the attack was to exfiltrate something, say payment information?


The financial system is great at recovering from that sort of thing.

1) Transactions have to reconcile somewhere--monitor at that point for unexpected activity.

2) Use insurance policies to pay back losses due to theft.

Payment info is among the data I worry about the least; I check my card statements every month, and flag any unknown activity. Under my card and bank agreements, I'm not liable for losses if I catch them within the defined window (30-60 days depending on card), even if I did something stupid like set a weak password or lose my own wallet.


I only meant for my statement to apply to consumer devices. Servers are a completely different matter, and IMHO you should never store that kind of data on a laptop or desktop. That's what VPNs are for.


Well, just when we discussed that : https://news.ycombinator.com/item?id=8723693


Is there a quick and dirty script/one liner I can run to check my VPS right now?


This may take a while depending on the amount of data you have and the speed of your disk(s):

  grep -R -e 'TREX_PID=%u' -e 'Remote VS is empty !' /
Alternatively you could create ClamAV signatures based on those strings.
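For reference, such signatures in ClamAV's .ndb format would look roughly like this (signature names are mine; the format is Name:TargetType:Offset:HexString, with 0 meaning any file type and * meaning any offset):

```
Unix.Trojan.Turla-Penquin-1.UNOFFICIAL:0:*:545245585f5049443d2575
Unix.Trojan.Turla-Penquin-2.UNOFFICIAL:0:*:52656d6f746520565320697320656d7074792021
```

The hex strings are just the ASCII encodings of "TREX_PID=%u" and "Remote VS is empty !".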


I suggest running it with nice/ionice:

  ionice -c 3 nice -n 19 grep -R -e 'TREX_PID=%u' -e 'Remote VS is empty !' /
You can also check the whole drive:

  ionice -c 3 nice -n 19 grep -ab -e 'TREX_PID=%u' -e 'Remote VS is empty !' /dev/sda


This seems to get stuck for me after a few minutes (grep stops taking up CPU cycles). I thought maybe it was getting stuck trying to read something it shouldn't, but lsof gives no clues.


Perhaps try strace-ing the process? It should give you some clue as to what is going on.


Ah, it's getting stuck on /var/spool/postfix/public/pickup

Rerunning with "-D skip"


Where does this come from?


I can't confirm that it works but I was going off this:

> Administrators who want to check for Turla-infected Linux systems can check outgoing traffic for connections to news-bbc.podzone[.]org or 80.248.65.183, which are the addresses of known command and control channels hardcoded into the Linux trojan. Admins can also build a signature using a tool called YARA that detects the strings "TREX_PID=%u" and "Remote VS is empty !"
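A YARA rule over those two strings would look something like this (the rule name is mine; whether to require one or both strings is a judgment call):

```
rule penquin_turla_strings
{
    strings:
        $pid = "TREX_PID=%u"
        $vs  = "Remote VS is empty !"
    condition:
        any of them
}
```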


Thanks for this. Before I set off a search like this on my server I like to know why :)


Thanks.

Being able to provide a simple, easily verified command on a public forum to detect the most stealthy malware is testament to the brilliant design of Unix-style systems. If someone offered a Windows utility to do the same thing on a forum, only fools would run it.


They would just need to paste a command line using findstr: http://technet.microsoft.com/en-us/library/cc732459.aspx

but don't let facts get in the way of your platform wars...


That's why I won't use any OS that's not Unix-like.

grep for this and that in this directory. Brilliant!


Because you can't type a search string into the search box in explorer on windows?


The search box in explorer searches binaries these days? Honest question, I haven't used it in years.

That seems like it would be pretty counterintuitive for users though. Someone tries to search for the string 'program' and it returns all binaries that have 'This program cannot be run in DOS mode' in them. (Which I think is pretty much all PE binaries)


I recall "search all files" looking inside pretty much everything, including exe and unknown file types. (it's been a while for me, too.)


I recall "search all files" only searching indexed folders, which by default leaves a lot of room for the virus to install itself somewhere that is not being searched, without even trying.


And then you click "search all folders (may be slow)".


I appreciate what you're saying but you could have re-phrased without the negative sarcasm.


Why exactly are you checking your VPS for this particular piece of malware? Look at grsecurity; everything it stops is table stakes for Linux exploit code.


"It can't be detected using the common netstat command."

How is this possible? I thought netstat would show any program which is listening for connections on a port, regardless of whether it's actively doing anything.


The program doesn't set up a listening socket (which would show up in netstat), it's doing what is normally the kernel's job (analyzing traffic on the interface and parsing IP packets) all by itself in userspace.


And it's using libpcap to do it, making it (in that regard) asymptotically as sophisticated as dsniff.

It appears to literally be using cd00r, which FX wrote almost 15 years ago. It's like they assembled it out of junkyard parts from PacketStorm.



But it still needs root or CAP_NET_RAW?


The article does not mention anything about privilege escalation, but this backdoor requires root.


From what the article says (and it's not very concise, as you can see), it's not listening for connections, or at least it's not listening until it's "awoken" by something external.


> it's not very concise as you can see

The article is very concise. It's just not very correct. :)


Assuming the trojan has a rootkit, it can patch the kernel so that netstat does not report it.


Is there any evidence that it patches the kernel? If it infiltrates the kernel, you'd think that'd be the most important detail Kaspersky could reveal; forget about whether the authors ran "strip" on the binary or not.


You're right. I misread the article where it said that the Windows malware had a rootkit. Checking the linked technical description [0], it looks like the Linux version does not require privilege escalation.

[0] https://securelist.com/blog/research/67962/the-penquin-turla...


Honestly, I was trying to figure out how this is something new. Rootkits have been around for a long time now.


How can it run packet dumping as a non-root user?


LD_PRELOAD? ptrace?

Updated: Actually, I have no idea. The securelist link says "It uses techniques that don't require root access" but then later says "The module statically links PCAP libraries, and uses this code to get a raw socket".

I have no idea how one gets a raw socket without root, but I'm not in the business of creating raw sockets on linux...
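You don't, as far as I know: the kernel checks CAP_NET_RAW when the raw socket is created, regardless of whether libpcap is statically linked. The usual non-root route is for the capability to have been granted to the binary beforehand, e.g. via file capabilities (the path is hypothetical; running setcap itself needs root):

```shell
# Grant CAP_NET_RAW to a binary so it can open raw sockets without full root.
setcap cap_net_raw+ep /usr/local/bin/sniffer
getcap /usr/local/bin/sniffer   # shows the granted capability
```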


Not sure I follow, can a non-root user observe raw packets (like SYN packets and sequence numbers) through these facilities?

Edit: well, a statically linked pcap is still just a bunch of user-mode assembly code. I didn't think linux kernel security hinged on keeping libraries secret :P


I tried interpreting the Ars article (rookie mistake) and assumed it was stealing traffic from other programs running as the same user.


Aha, now I see why you brought up ptrace.

But they were talking about magic syn packets etc (in the securelist post linked from another comment, I got the sources mixed up)


In addition to tedunangst's point: since we're talking malware, consider also kernel exploits. Even if the malware doesn't ship with an exploit, a normal UNIX user has enough privileges to examine the kernel version and to download and execute code that may exploit the given kernel.


That's slightly less interesting. If you gain root, there's not much to dumping packets, or to hiding processes, open files, and sockets, or anything, really. Why would the article highlight "intercept network packets as a non-root user" of all things (paraphrased)?


My words "in addition to" were not extraneous.


No idea.. I've found a reference to cd00r and the source documents various techniques quite well: https://www.phenoelit.org/stuff/cd00r.c


Does there not yet exist a debugger which runs as a hypervisor?


Would the attacker not need to have access to the target machine to install this?


"The malware may have sat unnoticed on at least one victim computer for years, although Kaspersky Lab researchers still have not confirmed that suspicion."

So it might not have? And they're not sure? And "at least one" means it might only be one.

All this makes me highly suspicious of the article.


What a worthless article.


With a comment to match



