The fact that the binary was infected, I can somewhat understand. However, the way communication happened/is happening on this issue is very disconcerting and makes it basically impossible to know whether it's currently safe to download 2.92 from their site.
Questions like
- How did the compromised binary get there? Was the source code hijacked, or was the binary altered after it had been built?
- Were the SHA256 hashes on the site also compromised? (Btw: having hashes on the site is good enough for making sure you're not installing a corrupted binary. It doesn't do anything against intentional alterations of the binary, though. These hashes need to be stored on an external site.)
- How did the compromise happen?
- What steps were taken to ensure that the same compromise doesn't happen to newly posted binaries?
- Did the attacker leave any foothold on the compromised system(s)?
- How were such footholds removed?
All questions that need to be answered before it's safe to upgrade Transmission, either from the website or with the AutoUpdate feature. A red warning telling me that one binary was infected and that I have to download another binary isn't good enough.
I know the Transmission people are volunteer developers, not PR people, and I can totally accept that, but there are some things that just need to be made clear before we can safely update to later versions (and thankfully, 2.8 keeps running just fine).
It will probably take time to get all of the answers, but in this case, automatic updates are safe.
Although I'm not a Transmission developer, I develop software that uses the same automatic update mechanism (Sparkle). It appears that the hacker did not update the MD5 present in the automatic update mechanism. Thus, when the automatic update mechanism downloaded the hacked version of Transmission, it reported it as a corrupted download.
> It appears that the hacker did not update the MD5 present in the automatic update mechanism (Sparkle). Thus, when the automatic update mechanism downloaded the hacked version of Transmission, it reported it as a corrupted download
Yeah. But not knowing how the attacker got access, we have no idea whether they have changed the current 2.92 binary again, this time remembering to update the hash in the appcast, or whether this time around the binary is actually pristine.
The fact that the site was never down between this happening and the red warning text appearing makes me suspect that only a hasty cleanup was performed and that the actual security flaw might still exist.
An attacker would need the private key to update the signature in the appcast. It's possible the devs store their private key on the server, although that would be silly.
That doesn't discount the recent MITM vulnerability Sparkle had, though, or the possibility that Transmission is still using an old version of the framework.
Edited to add: If anyone has a copy of the DMG, sha1 5f8ae46ae82e346000f366c3eabdafbec76e99e9, please link me a copy via email (brendandg@nyu.edu) or twitter DM (@moyix).
Maybe take a look around https://build.transmissionbt.com/ - but then again maybe the svn repo wasn't compromised? I tried a "svn diff svn://svn.transmissionbt.com/Transmission/tags/2.90 svn://svn.transmissionbt.com/Transmission/tags/2.91" and didn't see anything suspicious on a fast scroll-through
Side topic: it's probably not a good idea to expose Jenkins externally, especially if you don't keep Jenkins up to date all the time (for Transmission it is up to date right now). This Jenkins probably contains the key to the svn server, so if someone finds a hole...
I do not think it was the build server, though. According to this analysis[1], the attacker used a different key to sign the build. (All Mac apps need to be signed, or the default behavior is to reject the app. You can permanently disable this behavior in settings, or bypass it for one app by holding Control while opening the app, which a lot of users who use Transmission probably do, because not all legitimate apps are signed.) Anyway, since the app was signed with a third party's certificate (which was approved by Apple), chances are only the website was compromised. If the build server had been compromised, the attacker would have had access to the developer's certificate and would most likely have used that.
Users who directly downloaded the Transmission installer from the official website after 11:00am PST, March 4, 2016, and before 7:00pm PST, March 5, 2016, may have been infected by KeRanger. If the Transmission installer was downloaded earlier or from any third-party website, we also suggest users perform the following security checks. Users of older versions of Transmission do not appear to be affected as of now.
We suggest users take the following steps to identify and remove KeRanger before it holds their files for ransom:
1. Using either Terminal or Finder, check whether /Applications/Transmission.app/Contents/Resources/General.rtf or /Volumes/Transmission/Transmission.app/Contents/Resources/General.rtf exists. If either does, the Transmission application is infected and we suggest deleting this version of Transmission.
2. Using “Activity Monitor”, preinstalled in OS X, check whether any process named “kernel_service” is running. If so, double check the process: choose “Open Files and Ports” and check whether there is a file name like “/Users/<username>/Library/kernel_service” (Figure 12). If so, the process is KeRanger’s main process. We suggest terminating it with “Quit -> Force Quit”.
3. After these steps, we also recommend users check whether the files “.kernel_pid”, “.kernel_time”, “.kernel_complete” or “kernel_service” exist in the ~/Library directory. If so, delete them.
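The three manual checks above are easy to script. Here's an illustrative sketch in shell (the paths and process name come from the advisory; this is not an official removal tool):

```shell
#!/bin/sh
# Check for the KeRanger indicators described in steps 1-3 above.
infected=0

# Step 1: the General.rtf marker inside the app bundle
for f in "/Applications/Transmission.app/Contents/Resources/General.rtf" \
         "/Volumes/Transmission/Transmission.app/Contents/Resources/General.rtf"; do
    if [ -e "$f" ]; then
        echo "infection marker found: $f"
        infected=1
    fi
done

# Step 2: the malware's main process
if pgrep -x kernel_service >/dev/null 2>&1; then
    echo "suspicious process 'kernel_service' is running"
    infected=1
fi

# Step 3: leftover state files in ~/Library
for f in .kernel_pid .kernel_time .kernel_complete kernel_service; do
    if [ -e "$HOME/Library/$f" ]; then
        echo "leftover file: ~/Library/$f"
        infected=1
    fi
done

[ "$infected" -eq 0 ] && echo "no KeRanger indicators found"
```

A clean machine prints only the last line; any indicator found should be followed by the manual removal steps above.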
"It will then sleep for three days. Note that, in a different sample of KeRanger we discovered, the malware also sleeps for three days, but also makes requests to the C2 server every five minutes."
Isn't it possible to fire a takedown notice to that server? I mean, KeRanger's operators committed a felony, and Amazon (assuming you mean Amazon's EC2 servers) might react quickly if they realize what has happened. It might save a lot of computers from getting destroyed. As long as the server is somewhere in the Western world, it should not be a problem.
The server isn't on EC2; it's hosted on Tor. The malware uses an HTTP-to-Tor gateway service (onion.nu and onion.link) to pull down the encryption key and README file from one of three different hidden services. In theory you could try to get the gateways to block the connections, but I'm not sure they're likely to be cooperative.
Do the developers have an explanation anywhere as to how this happened? The homepage ( https://transmissionbt.com/ ) has a big red warning to upgrade to 2.91, but I can't find any info about how someone went about putting malware in the download.
Yep, this deserves a more detailed explanation (or maybe they still don't know what happened). I updated from the previous version to 2.90 through the app built-in update, and I don't seem to have any "kernel_service" process running. Can someone that has that process in their system tell us where they downloaded the program?
Back when Apple still made Mac OS X Server as a separate operating system, they included ClamAV¹ to scan for malware in mail. They don’t include it anymore, but ClamXav² (been around since 2004³) is a nice GUI for ClamAV that I’ve been using for a while now.
I run a private mail server and swear by ClamAV to help reduce noise and pollution that accumulates and spreads through my server, but I don't think I've ever had any luck with it being a good front line defense against up-and-coming malware, whether it targets Windows or Mac. I don't think I would recommend it as a primary malware scanner for a Mac, or Windows.
If you release commercial or popular open-source software, it's probably a super-bad idea to keep your signing key on a notebook computer you use outside of the office.
Have a trusted machine kept in a secure location to sign it for you if that's practical.
All that stuff - bittorrent, soulseek, calibre etc - lives in a vm, with access to the host only via samba shares. I'll decide what you see and where you can write. Yes, it's great you download stuff. No, you can't write to the stuff I'm sharing. Yes, having a web-server serving up books to the outside world is great. No, you can't serve up anything from my filesystem to anyone who feels like it.
When you can't (be bothered to) vet the source code, stick it in a vm. On a sensible machine with an ssd it's only 10 seconds away. Why risk it. Especially if the software you want/need to run only works under windows.
Or the Mac App Store itself. Its enforced sandboxing would have provided a decent first line of defense against this, but torrent clients can't be submitted to the App Store due to Apple not liking the legal aspects, not to mention the other issues people have with it. (Outside the store, apps can still opt into sandboxing, but that wouldn't help with a malicious installer.)
It should be able to sandbox Windows Apps, except for Metro/Modern UI Apps and Microsoft Edge.
Too many programs have a backdoor or Trojan in them now. It is a good idea to run any app that accesses the Internet in a sandbox first to see what it does.
Just a warning: by default it doesn't protect your documents from being read.
It isolates the process: all writes (filesystem, registry) go to the sandbox instead of the host filesystem, so malicious software can't easily install itself. But reading data is mostly unprotected by default, so malware run in a sandbox may still steal sensitive data. To protect such data you have to configure the sandbox manually beforehand.
Yes. Back when I still used Windows five years ago, the app was an essential tool for me. All web browsers, downloaders and basically anything else that I consider high risk must be run in a sandbox, which is routinely emptied. Less risky apps are sometimes also installed in a sandbox, which is emptied much less frequently. And it's great for trying out trial versions of software before I fully trust their publishers. I didn't run any antivirus on that machine at all and I'm that confident. Nowadays I've switched to OS X but I still miss the easy sandboxing of basically any app.
This makes no sense. VMs are by far the most secure form of isolation. No one is going to get infected with malware that escapes VMs - it is far too valuable.
Sure, in theory. Are there any current exploits for VirtualBox?
The way I see it, they're more secure than running the same apps on bare metal. Ubuntu host running a Fedora VM; the latter (with Transmission etc.) only running when I need the apps - seems an almost entirely painless way of providing a lot of security.
Yes! Some reliable ways to extract an RSA key, and some less reliable ways to swap two cache lines. Virtualization on x86 is a helpful tool for configuration management, but should not be mistaken for a security feature.
Yes, but if you run a compromised app in VM A and have your sensitive data in VM B, then they have to break out of VM A and then break into VM B. It's no longer worth the effort. There are often easier ways, like phishing.
You can also lease a VPS (anonymously even) and use Deluge with a webGUI. Said webGUI can be a Tor onion service, for better isolation.
But still, this is about malware in Transmission itself, not anything downloaded using it. So the fact that it's a BitTorrent client is rather beside the point, I think.
It's a very old P2P file sharing network modeled after Napster. Its main draws (in my opinion) are its active users, who share harder-to-find electronic music, and its discussion groups, which are available inside the app.
I didn't know Soulseek is still alive - brings me back to the days of Direct Connect / DC++ and hunting down rare live sets and (as you mentioned) electronic music over ISDN/dial up.
Along with the recent Linux Mint hijack, this really illustrates the need for people to verify programs they download. Though I think most people can't be bothered to verify the checksum on a file every time they download it.
On the other hand, the Windows and OS X App Stores are awful. Linux package managers are looking like one of the only straightforward ways to distribute applications securely.
> Along with the recent Linux Mint hijack, this really illustrates the need for people to verify programs they download. Though I think most people can't be bothered to verify the checksum on a file every time they download it.
Barring a situation where a CDN hosting the download is compromised but the main site is not hosted on the CDN, it's extremely unlikely that someone would have the ability to inject malware into the download and not have the ability to make the checksum match. Posting checksums is actually pretty useless. It used to be done to deal with the possibility of malicious mirrors, but it doesn't provide any security against MITM attacks (unless the main site is secure but the downloads aren't, which is idiotic by 2016 standards anyway), the site getting hacked, etc.
Digital signatures are a little better if the key is kept safe, since hacking the site and replacing the binary won't allow a random person to produce a valid signature (although the ability to modify the source code would still allow someone to introduce backdoors into the next version). But there's still a huge problem: you need some way to determine what key was supposed to be used to sign the binary in the first place, so just posting a signature on a website is also basically useless.
Digital signatures can work if there's some sort of centralized distribution method, or for safely updating software that's already installed.
In Debian and Ubuntu at least, all published files containing binary executable files (ISOs, .deb packages, etc.) are hashed and the hash signed by a well-known system pre-installed PGP key.
Given trust in the protection of the private key used to sign the hash list file, the integrity of the executable content can be proved (assuming useful SHA1 collision creation is prohibitively expensive).
Coincidentally I was writing a Bash script this weekend to auto-install (Ubuntu) releases into LVM volumes and it includes the following code to verify the download:
set -e
# ...
ISO="${NEW_DIST}-desktop-${ARCH}.iso"
for F in SHA1SUMS SHA1SUMS.gpg "${ISO}"; do
    if [ ! -r "$F" ]; then
        wget "http://cdimage.ubuntu.com/${FLAVOUR}/daily-live/current/$F"
    fi
done
if ! gpg --verify --keyring /etc/apt/trusted.gpg SHA1SUMS.gpg SHA1SUMS; then
    echo "Error: failed to verify the hash file list signature; files may have been tampered with"
    exit 2
fi
if ! grep "${ISO}" SHA1SUMS | sha1sum -c; then
    echo "${ISO} is corrupted; please try again"
    exit 1
fi
You'd think there would be some sort of global torrent network that simultaneously distributes binaries and signatures.
Doesn't seem like a horrible idea to me: you could just add the developer's key to your client, have your client broadcast interest, receive a _signed_ list of available software with appropriate magnet info... Download servers could serve as initial trackers until enough information has propagated through the network for downloads to be trackerless.
Checksums? Guaranteed. Signatures? Acquired. Checking? Performed automagically.
Granted, this just moves the point of failure to the developer's key. (Key acquisition needn't necessarily take place on the developer's site, a friend in the network could pass you a link containing the dev's key and the application's magnet info.)
> (unless the main site is secure but the downloads aren't
> which is idiotic by 2016 standards anyway), the site
> getting hacked, etc.
It's not idiotic at all. You let anyone who wants to spread the load by providing downloads, but you use checksums - behind https - to ensure they can be trusted.
I thought only apps signed by "identified developers" are run by default on Macs with Gatekeeper now. Shouldn't code-signing have prevented this? Unless they inserted the malware before the signing process.
>I thought only apps signed by "identified developers" are run by default on Macs with Gatekeeper now. Shouldn't code-signing have prevented this?
"By default". Most developers don't bother to register, and lots of people change the default (and after that, they can right click to open the app and bypass the warning).
Anyone can sign up for the Apple Developer Program to become an "identified developer", so there's nothing that stops an attacker from signing their malware.
And according to the analysis [0], this is exactly what they did. They used a different cert to sign their malware.
I have to admit that Windows' UAC is better in that regard, as it shows the signer's name. But of course this is only useful if you know the "right" name.
Yeah, I think this is a major issue on OS X. For the average user it is impossible to tell who signed an app, whether it is sandboxed, and what permissions it has. Hell, using the codesign command to extract entitlements from all binaries in a package is hard even for advanced users...
(There is a third-party tool named RB App Checker which does make these tasks a bit easier, though)
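For reference, the raw commands are short once you know them. A sketch assuming an OS X machine, with Transmission's usual install path as the example target (these are Apple's standard tools, but output formats vary by OS version; the script falls back to a message elsewhere):

```shell
APP="/Applications/Transmission.app"   # example target

if command -v codesign >/dev/null 2>&1; then
    # Who signed it? The "Authority" lines show the certificate chain.
    codesign -dv --verbose=4 "$APP" 2>&1 | grep Authority
    # Gatekeeper's verdict on the bundle:
    spctl --assess --verbose "$APP"
    # Dump the entitlements (the step described above as hard):
    codesign -d --entitlements :- "$APP"
else
    echo "codesign not available (not running on OS X)"
fi
```

None of this is discoverable from the GUI, which is the commenter's point.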
“The two KeRanger infected Transmission installers were signed with a legitimate certificate issued by Apple. The developer ID in this certificate is “POLISAN BOYA SANAYI VE TICARET ANONIM SIRKETI (Z7276PX673)”, which was different from the developer ID used to sign previous versions of the Transmission installer. In the code signing information, we found that these installers were generated and signed on the morning of March 4.”
What. That's interesting -- Polisan is a relatively well-known paint company in Turkey. I don't think they have a part in this -- maybe they did not store their private keys well enough?
For the end user? No, it wouldn’t. As thesimon and jakobegger, respectively, said:
“And according to the analysis, this is exactly what they did. They used a different cert to sign their malware.
I have to admit that Windows' UAC is better in that regard, as it shows the signer's name. But of course this is only useful if you know the "right" name.”
“Yeah, I think this is a major issue on OS X. For the average user it is impossible to tell who signed an app, whether it is sandboxed, and what permissions it has. Hell, using the codesign command to extract entitlements from all binaries in a package is hard even for advanced users...
(There is a third-party tool named RB App Checker which does make these tasks a bit easier, though)”
There is also the web of trust for PGP, which sort of solves the problem of needing a central store of the key. It does require being inside the web though (bootstrapping). But once you are, you can construct how much you trust a key from someone you haven't met.
>unless the main site is secure but the downloads aren't which is idiotic by 2016 standards anyway
https adds a performance hit. The security of "checksum over https and actual file over http", if the checksum is checked, is the same as "actual file over https", barring preimage attacks.
Granted, the project I saw this reasoning on (https://www.whonix.org/wiki/Download_Security) is one where users are especially likely to do security checks, and they generally aren't satisfied with the security of SSL anyway.
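The split pattern is easy to demonstrate in miniature. A local simulation with no network, where the two files stand in for the https-served checksum list and the http-served download (filenames are invented):

```shell
# What the https-protected page would publish:
printf 'release data\n' > image.iso
sha256sum image.iso > SHA256SUMS

# What the client does after fetching the big file over plain http:
if grep ' image.iso$' SHA256SUMS | sha256sum -c -; then
    echo "download verified"
else
    echo "download tampered with or corrupted" >&2
fi
```

An attacker who can only tamper with the http mirror can't make the check pass without a second preimage; an attacker who controls the https page defeats it entirely, which is the caveat discussed above.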
> just to save a negligibly small number of CPU cycles
The link above says they can't afford the additional cost. If it's so negligible, would you sponsor the cost of those extra cycles? I'm sure they would host on SSL if someone covered the cost.
Quote from a Google engineer in 2010 (it's only gotten cheaper in the last six years with advances in CPU tech) regarding SSL overhead:
> On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10KB of memory per connection and less than 2% of network overhead. Many people believe that SSL takes a lot of CPU time and we hope the above numbers (public for the first time) will help to dispel that. [0]
None of this affects the point they're making, which is that they can't find SSL mirrors that aren't more expensive. If you find one, let them know and I'm sure they'll be happy to switch over.
It is 2016. SSL is not slow anymore. The only case where it could be deemed slow would be a webpage where the browser has to download a ton of small files like images. Each image would require a new connection, and each connection would require a full SSL handshake. Even then the fix is not to skip SSL but to bundle all the images/files into one.
Keep in mind the topic at hand is downloading a single large file, the TLS handshake is a rounding error of the total time, regardless of where you are in the world.
>Practically it is difficult to provide SSL protected downloads at all. Many important software projects can only be downloaded in the clear, such as Ubuntu, Debian, Tails, Qubes OS, etc. This is because someone has to pay the bill and SSL (encryption) makes it more expensive. At the moment we don't have any mirror supporting SSL. We're looking for SSL supported mirrors to share the load.
Is it not true that mirrors supporting SSL are more expensive?
No, it's not true anymore. From the link you replied to:
"On our production frontend machines, SSL/TLS accounts for less than 1% of the CPU load, less than 10 KB of memory per connection and less than 2% of network overhead. Many people believe that SSL/TLS takes a lot of CPU time and we hope the preceding numbers will help to dispel that."
- Adam Langley, Google
Getting an SSL certificate used to be a cost, but that's taken care of now by https://letsencrypt.org/.
So can you recommend a mirror for them that supports SSL?
There are multiple named projects there that aren't using SSL, and I don't think it's just laziness. If you know of a way for them to use SSL mirrors for no additional cost, I'll work on getting them to switch over.
This is only true for Intel and AMD x86_64 servers that have hardware-accelerated AES via the AES-NI instruction set. Software implementations of AES and the other ciphers are much, much slower than AES with hardware acceleration. RC4 was the fastest decent software cipher for a while, but it has been found to be insecure and its use is discouraged. The fastest possible replacement would probably be ChaCha20, but that cipher is not widely supported yet. The other software ciphers are very slow, and certainly couldn't be considered fast.
Most people download software from websites using GUI browsers, while performing a checksum generally requires opening a terminal, changing directories to where the file was downloaded, and running the checksum program there. Maybe the web browser should provide a UI for doing checksums directly in the download manager. For example, each download entry could have a blank "checksum" text box where you can paste in the checksum given on the page.
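The comparison such a text box would perform is a one-liner. A sketch with a demo file standing in for the download and a hash that would normally be pasted from the page (the filename is made up; the hash is simply that of the demo content):

```shell
printf 'hello\n' > download.dmg            # demo stand-in for the downloaded file
expected="f572d396fae9206628714fb2ce00f72e94f2258f"   # value pasted from the page
actual=$(sha1sum download.dmg | awk '{print $1}')

if [ "$actual" = "$expected" ]; then
    echo "checksum matches"
else
    echo "checksum MISMATCH: got $actual" >&2
fi
```

The browser already has both inputs (the downloaded bytes and the pasted string), so the UI cost is just the text box.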
> In the case where the attacker has direct control over the website then you're right, it doesn't help at all.
I was pretty sure that's the threat model we were discussing: Software authenticity.
The only way to automatically know if a piece of software is legitimate is to have a trusted public key that can verify a signature.
Also, HTTPS is implied these days. If you're not using HTTPS, you are either malicious, negligent, incompetent, or working for someone who is some or all of the above.
> If you're not using HTTPS, you are either malicious, negligent, incompetent…
Or poor. Hosting large amounts of binaries over https isn't cheap. I just priced Amazon S3 and cloudfront and for the amount of data that I serve it would cost $300 per month. That's a lot to commit for a GPL-ed binary that brings in practically zero revenue. Maybe there's a cut rate VPS out there that can handle 150GB of data and 3TB of bandwidth per month on the cheap, but I haven't found it yet.
Right. As a malicious software distributor, all I have to do is distribute the correct hash for my binary, because there's no authenticity verification at all, only a check that the bits in my binary blob match a certain pattern.
That would be a useful extension/plugin for browsers actually.
Maybe, as pointed out in another reply, not for checksums but for signatures. So you just copy/paste the signature after selecting a file, and then it can verify its validity.
Is there no such extension yet? It seems like there should be one already.
Maybe something like:
- have a database of common downloads and all their crypto info, which developers can update once they are validated
- have browser extensions that will check packages on download and alert if suspicious
You could pay for it with some sort of sponsorship from apps themselves, who have an interest in not getting compromised like this (it's terrible publicity).
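The core check such an extension or database-backed plugin would automate is ordinary detached-signature verification. A self-contained sketch with a throwaway GnuPG key (all names here are invented for the demo; in reality the key would come from the validated developer database):

```shell
# Isolated keyring so the demo doesn't touch the real one.
export GNUPGHOME=$(mktemp -d)

# Generate an unprotected throwaway key (stands in for the developer's key).
gpg --batch --gen-key <<'EOF' 2>/dev/null
%no-protection
Key-Type: RSA
Key-Length: 2048
Name-Real: Demo Developer
Name-Email: demo@example.org
%commit
EOF

# The "developer" signs a release...
printf 'release contents\n' > app.bin
gpg --batch --yes --detach-sign --local-user demo@example.org app.bin

# ...and the "extension" verifies the download against the trusted key.
gpg --batch --verify app.bin.sig app.bin 2>/dev/null && echo "signature OK"
```

Everything above is mechanical; the hard part the thread keeps circling back to is deciding which key to trust in the first place.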
>Though I think most people can't be bothered to verify the checksum on a file every time they download it.
This wouldn't help anyway. If the malicious party had access to alter the downloads (as they did here), they could just as well change the checksum shown on the page too.
>On the other hand, the Windows and OS X App Stores are awful.
Haven't used the Windows one, but what's "awful" about the OS X one? Quick one-click installations, isolated, signed, easy updates.
Might be bad for the application developers somehow, but I don't see anything much bad about it from a user perspective -- except maybe the lack of trials. Then again I've been able to get a refund any time I bought an app that was subpar and written to Apple (that was 2 times).
In the original thread, the initial reporters specifically pointed out that the files they had downloaded did not match the checksums on the Transmission page. My guess would be that the attackers compromised a mirror, but not the web server serving up the user-visible page with the checksum.
Generally, Linux package maintainers grab the upstream source, while most of these compromises seem to be of the binaries. And, of course, the maintainers generally review the changes before publishing them.
No, Linux distributions offer packages and operating systems that are the result of painstaking work in which all upstream code is reviewed, patched for any inconsistency, and often blocked from going into public archives until known bugs are fixed.
Actually, most have scripts that pull the upstream source and build new binaries without any manual intervention. It is the responsibility of the package maintainer to review every change in code.
The app made it onto the OSX App Store and the author's certs were revoked. This isn't a case of verify source, verify application. This is a case of anything can be infected and it's damn near impossible to check everything.
Transmission wasn't on the Mac App Store, though the app was signed. Apple offers developers the ability to sign their apps distributed outside the Mac App Store to certify them as an Apple-identified developer https://developer.apple.com/library/ios/documentation/IDEs/C...
As such, checking the source is still very much relevant here since this wasn't a compromised app in the Mac App Store, it's an app distributed outside it.
Linux package managers are looking like one of the only straightforward ways to distribute applications securely.
Unless you are a small independent app developer. Virtually no distribution wants to take proprietary software. And you have to package for a wide variety of different distributions.
On the other hand, the Windows and OS X App Stores are awful.
The Mac App Store works pretty much effortlessly for me. It's sometimes a bit slow, but other than that it's pretty trivial to use.
It's not at the time of installation, but prior to updates the package management system will check signatures of the packages. (And it will only accept packages signed with your key, so the attack used against Transmission wouldn't work)
The question is whether we should trust proprietary software even if it is downloaded securely. I consider "hard to get proprietary software into the official repos" as a feature. Unfortunately it's not as hard as you make it sound in most distributions.
- The APIs exposed to Mac App Store apps are more limited (because the OS X sandbox is not completely comprehensive in what it provides). This limits the types of apps that can be sold on the store.
- There's no means of providing paid upgrades. E.g. for a major version bump, which a lot of developers rely on to keep their business afloat.
- The store interface and navigation are also much slower than the iOS counterpart.
- Recently some certificate issues rendered users unable to open their apps.
- Not 100% sure on this one: You can't download older app versions if your OS is no longer supported.
- There's no means of providing paid upgrades. E.g. for a major version bump, which a lot of developers rely on to keep their business afloat.
Apple and others do this by simply numbering the names of apps. They don't allow you to specify special "upgrade" pricing, but the effect of this is that developers no longer really have full retail pricing and everything is just set to the upgrade price.
Logic Pro 8 for instance used to retail at $499. The upgrade price was $199. Now Logic Pro X on the Mac App Store is just $199 regardless of whether you are first time user or someone who had the previous version.
- The store interface and navigation are also much slower than the iOS counterpart.
I haven't really found that the Mac Store is any slower. I've found that they are both slow.
- Not 100% sure on this one: You can't download older app versions if your OS is no longer supported.
I don't believe it will even show you newer versions of the apps as long as the developer properly specifies the minimum OS version.
There's no completely secure way, except for getting the public key directly from the developer over a trusted channel (or in person). And even that won't protect you in case the developer's keys gets compromised.
But there are a number of things that can be done:
- always verify the checksum (if available), in case the download mirror (but not the web site itself) got compromised.
- check for strange strings in the binary (use "strings" and "grep"). E.g. URLs
- scan the downloaded file on Jotti or VirusTotal.
- unpack the binary manually with 7-zip or similar if it's a self-extracting file.
- check installation scripts, build files, etc. (if applicable).
- if downloading source code, check a couple of files at random. Will most likely not protect you, but if everyone does it, it helps detecting embedded malware (or bugs) early.
- run "strace" (Linux/Unix) or "FileMon" (Windows) or similar software and log what the software does when you install and run it for the first time.
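A couple of the checks above in concrete form (the file name and the embedded URL are invented to demonstrate the idea):

```shell
# Demo stand-in for a downloaded binary with an embedded C2 URL
# (\0 produces NUL bytes so the file looks binary-ish).
printf 'some code\0connect http://evil.example/payload\0more code\n' > suspicious.bin

# Look for embedded URLs or hostnames:
strings suspicious.bin | grep -E 'https?://' || echo "no URLs found"

# First-run tracing (Linux; FileMon is the rough Windows analogue):
# strace -f -e trace=network -o first-run.log ./suspicious.bin
```

A hit like this isn't proof of malice on its own (plenty of legitimate software embeds update URLs), but unexpected hosts are a cheap red flag.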
I've become increasingly paranoid lately, given that things like these happen and major bugs are uncovered in software that I use almost every day.
It's good that the Transmission developer reacted quickly and made waves so that people can at least be aware that they might have been exposed.
But I wonder how many more applications from the hundreds that I have installed on my machines contain weird stuff - either intentional (for money) or unintentionally (result of a hack).
Open source software is especially vulnerable to this kind of stuff.
If a hacker gets access to a server holding the binaries for an open source app (which most people download), the hacker can just compile the program from sources and add his own code in there and place the installer online.
Given that many big governments are now involved in the information wars, this scenario is quite likely.
"Open source software is especially vulnerable to this kind of stuff."
I'm not sure I follow on this front. Proprietary software could be compromised (whether intentionally by the vendor or unintentionally by some outsider working on the software) effectively forever with no one noticing. At least with OSS, the number of eyes on the source makes it less likely that an exploit will exist for long (though the definition of "long" could vary wildly dependent on popularity and the skill level of the software's normal users).
"Given that many big governments are now involved in the information wars, this scenario is quite likely."
Again, this one seems to point more to proprietary software than OSS. A government only needs to compromise a single company to make an exploit happen in commercial software. OSS exploits can be caught by the Linux distribution vendors that package the software, the users, the developers themselves (who are often working at different companies and in different nations), etc.
So, it may seem easier to compromise an OSS project, by attacking the distribution server and uploading a compromised binary built from source with patches...but, there are many good ways to guard against that (though any single mitigation, like signing with developer keys, can be compromised, the more eyes the less likely it is to succeed for long). But, if a government compromises a company, or someone within that company, all bets are off, and the problem literally may never be found.
I was thinking more about the users on Macs and Windows who use open source software.
The risk is not in the sources, but in the server which hosts the installers.
A hacker could just build the software from sources (adding his backdoor) and replace the original installers with his own.
> "Open source software is especially vulnerable to this kind of stuff."
I am sorry, what? Why would open source contain more bugs/hacks than closed source specifically? It is more often in the news for a few reasons, including that many projects are widely used. However, it's against companies' PR interests to have their security issues disclosed the way they are in open source, so they try to minimize the exposure. See [1]
The risk is not in the software itself, but in the server which hosts the installers. A hacker could just build the software from sources (adding his backdoor) and replace the original installers with his own, if the server is not properly secured.
The risk is exactly the same with proprietary software. A hacker can unpack the installer and create a new one with his changes. Or, as they often do, create a wrapper which installs their malware and then calls into the original unmodified installer.
I don't follow, what does it matter for the "distribution model" if the software is open- or closed-source? The problem with SourceForge were its malware-riddled installers, how would it be any better if the downloads were proprietary software?
If a hacker gets access to a server holding the binaries for an open source app (which most people download), the hacker can just compile the program from sources and add his own code in there and place the installer online.
Code signing is used to prevent this. So, either the attacker has an Apple developer account (and is hopefully traceable through their credit card information), the Transmission project was sloppy with their signing key, or the machine of the developer with the signing key was compromised.
People have been ranting negatively about the Mac App store. But this is exactly why we need sandboxed applications by default (which is what the Mac App Store enforces). A sandboxed application cannot take your data hostage.
(Yes, I understand that App Store distribution is probably not possible for a Bittorrent Client.)
So, either the attacker has an Apple developer account (and is hopefully traceable through their credit card information), the Transmission project was sloppy with their signing key, or the machine of the developer with the signing key was compromised.
Sorry, I forgot another possibility: some other developer's key was compromised.
Or, as is the case here, the malicious party was simply issued a key by Apple (for apps that are downloaded from places other than the Mac App Store, developers can get a unique Developer ID from Apple (for free) and use it to digitally sign their apps, the purpose being that Apple can revoke it after the fact if it turns out to be malware):
“The two KeRanger infected Transmission installers were signed with a legitimate certificate issued by Apple. The developer ID in this certificate is “POLISAN BOYA SANAYI VE TICARET ANONIM SIRKETI (Z7276PX673)”, which was different from the developer ID used to sign previous versions of the Transmission installer. In the code signing information, we found that these installers were generated and signed on the morning of March 4.”
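The ten-character team identifier in parentheses is what ties a signature to a Developer ID, and what Apple revokes. It can be picked out of the string quoted above with plain text tools:

```shell
# Extract the team ID from the certificate string quoted in the report.
cert='POLISAN BOYA SANAYI VE TICARET ANONIM SIRKETI (Z7276PX673)'
team=$(printf '%s\n' "$cert" | sed -E 's/.*\(([A-Z0-9]{10})\).*/\1/')
echo "team id: $team"

# On macOS itself you would compare this against the installed app, e.g.:
#   codesign -dvv /Applications/Transmission.app 2>&1 | grep TeamIdentifier
```

If the team ID on your installed copy differs from the one the project normally signs with, that is the red flag this incident demonstrated.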
OK so let's imagine sandboxing is on by default, then "Transmission" pushes an update that asks for read/write access to the whole home directory. You don't know if Transmission has some legitimate need for that or not, so you just shrug and click Allow. Boom—infected.
So I think sandboxing is basically useless against these kinds of attacks. Either they allow apps to elevate their entitlements in an update, or they don't and developers will always opt out from the start (or pick the widest set of entitlements available).
If sandboxing is forced with no opt-out, then users will have to jailbreak their computers so they can install Parallels...
If you are talking about the 'open directory' dialog. Well, if a user is careless enough to just give a sandboxed app access to their complete home directory - tough luck.
So I think sandboxing is basically useless against these kinds of attacks.
No, it's not, because no entitlement allows blanket access to the user's home directory. Hence, an application cannot just encrypt all of the user's data.
> but why bother when you can compile the app from sources?
Well, imagine that you are the attacker here. Would you rather keep your malware source in sync with the upstream code and build every target every update, or just use an off the shelf binary wrapper. Would you answer the same if you were targeting more than one app (in the case of a CDN attack)?
The only scenario in which source level malware makes sense to me is this: you are targeting a specific application and you are able to get your code into the project's SCM. In this scenario OSS is no more vulnerable than closed source.
> Open source software is especially vulnerable to this kind
> of stuff.
Give it 2 or 3 years and stories will come trickling out about how most OS apps have had commits from hackers, governments etc. So far most source checking - to the extent that it happens at all - is all about buffer overruns and the like; micro stuff that's easily catchable. Well, you say that, but, you know, heartbleed etc. But what about whole modules designed with two purposes in mind?
> > Open source software is especially vulnerable to this kind of stuff.
> Give it 2 or 3 years and stories will come trickling out about how most OS apps have had commits from hackers, governments etc. So far most source checking - to the extent that it happens at all - is all about buffer overruns and the like; micro stuff that's easily catchable. Well, you say that, but, you know, heartbleed etc. But what about whole modules designed with two purposes in mind?
Free Software has existed for over 20 years. Not to mention the fact that the same problem you describe is far more trivial for proprietary software. There's no straightforward way to find out if it's backdoored (although, luckily quite a few backdoors are done badly so we can find out). The point is that if you assume that all free software is compromised, you have to assume all proprietary software is compromised. I'd prefer to have some free software be compromised because then I'm not at the mercy of the vendor to fix it.
Open Source software has existed for more than 2 or 3 years, you know.
All software is vulnerable to bad actors writing malicious code. What makes it any safer if its proprietary software? In any case, in a pessimistic scenario you'd have to change your sentence to "give it 2 or 3 years and stories will come trickling about how ALL apps, open or closed source, were tampered with by hackers, the government, etc".
>Allow downloading files from http servers (not https) on OS X 10.11+
Mac version affected in OP was 10.10, though.
Maybe it had something to do with
>Change Sparkle Update URL to use HTTPS instead of HTTP (addresses Sparkle vulnerability)
?
Edit: it appears the infection was downloaded from a website, in which case this doesn't help. But one did say the in-app update failed on incorrect signature first.
>Allow downloading files from http servers (not https) on OS X 10.11+
This reads like they disabled Apple's "App Transport Security", which only allows HTTPS connections unless a program explicitly makes an exception. Introduced in iOS 9 and OS 10.11 (El Capitan). I bet the failing HTTP connections caused a bug in Transmission, and it was an easier fix to disable ATS than to transition whatever connection to HTTPS.
It's probably for "web seeds" or similar, where a torrent file's author specifies alternate URIs where the content can also be found. Transmission has no control over whether the torrent file's author specifies http or https, it has to allow both (and http is actually safe, since the downloaded file goes through the same piecewise checksum as if it were downloaded from a peer).
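The piecewise checksum mentioned above is worth spelling out, since it's why plain-http web seeds are safe here. A tiny sketch (file contents and piece size are made up; real torrents use pieces of e.g. 256 KiB, hashed with SHA-1 in the classic protocol):

```shell
# Each fixed-size piece is hashed and compared against the hash list
# stored in the .torrent file, so a web seed cannot slip in altered data.
sha1() { if command -v shasum >/dev/null 2>&1; then shasum -a 1; else sha1sum; fi; }

piece_len=16                                   # tiny for the demo
printf 'piece-one-here!!piece-two-here!!' > payload.bin

expected=$(printf 'piece-one-here!!' | sha1 | awk '{print $1}')   # from the .torrent
actual=$(dd if=payload.bin bs=$piece_len skip=0 count=1 2>/dev/null | sha1 | awk '{print $1}')

if [ "$actual" = "$expected" ]; then echo "piece 0 ok"; else echo "piece 0 corrupt"; fi
```

The catch, of course, is that the .torrent file itself (or, in this incident, the installer) has to come from somewhere trustworthy in the first place.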
> and it was an easier fix to disable ATS than to transition whatever connection to HTTPS.
Pretty sure this is for arbitrary downloads. Unless you want to prevent transmission to download from http based sources out of principle it makes no sense to do anything other than opting out of this behavior.
Right, being a web connected app based on a distributed community of other clients, it's very possible that the encryption isn't possible to implement on their end. IIRC it's only blocking HTTP connections, so the torrent transfers themselves aren't affected (unless it's masking that as HTTP traffic to avoid easy inspection?), but there may be other things that require HTTP. Connections to trackers maybe?
On the other hand, El Capitan came out last September. If this just changed in 2.9.0, the restricted HTTP connections can't have been that big of a problem.
Looking more at this issue, it seems like the problem may have been (hard to tell, not a lot of information) a compromise of a third-party mirror to which https://www.transmissionbt.com/ redirected users; the checksum on the HTTPS site was unaltered, and was used to identify the altered download.
Perhaps a defense against this kind of attack would be an altered version of HSTS - one that protected the content of download links, and not just of sub-resources included on the page.
It might be worth updating the title to specify the vulnerable version (2.90) and the platform (OS X - from what I can tell, this is not a vulnerability on Linux or Windows).
Isn't it quite popular on Debian and derivatives too? It's preinstalled with GNOME there, as far as I know.
Fair enough, it's extremely interesting. I never saw such an infection in the "free world", outside the laboratory.
I hope they can find the source.
At least Linux distributions usually compile from source. I wonder if the source was also modified, or only the binaries.
EDIT: I downloaded the Transmission 2.90 and 2.91 source code and took a look. The diff between them is quite small, with nothing suspicious being removed, and the 2.90 .tar.xz MD5 matches what Fedora used (according to http://pkgs.fedoraproject.org/cgit/rpms/transmission.git/com...). So, unless there was also a malicious source code change the developer didn't catch, Fedora's package should be clean.
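That cross-check can be done mechanically. A sketch, with a stand-in tarball and the checksum pretended to come from Fedora's dist-git "sources" file (in reality you'd copy the value out of the linked repository):

```shell
# Stand-in for the real source tarball.
printf 'stand-in for transmission-2.90.tar.xz' > transmission-2.90.tar.xz

md5_of() { if command -v md5sum >/dev/null 2>&1; then md5sum "$1"; else command md5 -r "$1"; fi; }

recorded=$(md5_of transmission-2.90.tar.xz | awk '{print $1}')   # pretend: value from dist-git
local_sum=$(md5_of transmission-2.90.tar.xz | awk '{print $1}')

if [ "$local_sum" = "$recorded" ]; then
  echo "matches the distro's recorded checksum"
else
  echo "MISMATCH - investigate before building"
fi
```

Since the distro recorded its checksum at packaging time, a later compromise of the upstream download server shows up as a mismatch.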
Threw away Transmission as soon as I read this (even though I was running an old version); my trust is pretty much gone now, never installing it again.
Shame because it really was a nice app.
I don't understand this attitude, the Transmission team responded immediately to the problem. There's no indication that this was the result of some problem specific to the application or its developers.
Transmission was and remains my favorite torrent client on OS X, although I still like the torrent information presentation uTorrent had, and I still will run my old pre-adware 1.6.4 if I want more detail on a swarm.
Given that older versions still work, it seems exceptionally silly to delete an old version because of a site compromise.
I don't even trust websites and emails, so not sure why you would trust a BitTorrent client. I still use these tools, but with some caution. Your level of caution is up to you. Other posters suggested things such as verifying checksums and using virtual machines.
Not an official comment, but from other parts of the Hacker News thread it sounds like one of the mirrors the main site redirects to was hacked, not the main site itself. The SHA sums on the main site were apparently unaltered. So it sounds like the only fault on the developers' side is trusting that mirror.
Same here, into the garbage it goes. What did you switch to by the way, Deluge? I switched TO Transmission because it was open source and supposed to be pure.
This is a good illustration of why you should not install apps as administrator. Specifically, you should not install Mac OS packages, which allow for arbitrary pre- and post- install scripts to be executed as root.
Same is true for Windows and Linux.
There are privilege escalation bugs in any OS, but exploiting one is usually not a given. Throw the application into ~/Applications as a Mac bundle, and the worst that will happen is your account gets compromised. That is much easier to detect and clean. Most trojans won't even succeed.
We are going to have these problems until the developer community realizes that executing a randomly downloaded package installer as a privileged user is giving away the keys to the kingdom.
App stores are one solution, but not a really open one. I'd rather see apps distributed in a form similar to Apple app bundles, where a non-privileged user can just install the app into their home directory.
I think it's a poor illustration. You could install and run this app as a regular user (and never escalate to administrator) and the app's bundled malware would still absolutely destroy anything of value on your computer.
It's the stuff inside $HOME (and $HOME/Documents) that's valuable. Not system binaries in {/bin,/sbin,/Applications} that can be re-downloaded in a second.
The problem is that any non-sandboxed app runs with the same uid and full read/write permissions to all of $HOME as well as all the other running processes, even if it only needs read/write access to $HOME/Documents/Appname/ and none of the other pids.
First, obviously you can make an account for running the untrusted software, like Bittorrent clients (which are known to carry malware frequently).
Second, most malware requires and counts on having admin privileges on target machine. The task of auditing, cleaning and finding out that malware is present is significantly easier if malware is limited to a non-privileged account. With malware running as a non-privileged user you still have to clean up and recover, but you can easily switch an account, compare, audit and trace. The anti-malware tools also still have a chance when OS is not compromised, otherwise it's all lost the moment you ran a malicious post-install script.
The more common problem, however, is a regular app install. The goal of the application packager is to make their application work first, and preserve your environment second. So, in many cases even non-malicious apps do bad things to your OS. The scripts are usually written by devs who are fairly clueless, which leads to some pretty awful stuff in them. Almost 100% of the time the install/uninstall action is not idempotent, although it should be.
What really needs to happen is a shift in a mentality that accepts the idea that apps need to be installed as an administrator (unless the apps are a part of the main OS distro).
> most malware requires and counts on having admin privileges on target machine.
If you really believe this, run rm -rf ~ on your computer right now. Also rm -rf /Volumes/* (on OS X) or wherever your network/external drives are mounted on your OS. Since you don't have admin privileges, nothing bad happened right? Because that is the primary goal of ransomware.
Anyway, this specific malware doesn't even attempt to acquire root; it operates entirely as your local user. And there's no installer package, so why are you complaining about them?
His comment went right past you. What you care about the most on your computer is your personal data, and all of it sits under $HOME. Any script running as $USER can steal sensitive data, wipe out personal and work files, maybe even cloud storage services. None of that requires admin rights.
For things that are likely to carry malware, use a separate account. Probably a good idea for a Bittorent client in any case.
In practice, however, it is much easier to deal with malware if there are no admin rights. It matters even for a clueless user, since the OS mechanisms of detection can't be altered, and much more so for a power user.
This specific malware installs a kernel module, as far as I can tell. I am guessing it would be harder to encrypt data and not be noticed and removed quickly.
Of course, there are even more obvious reasons, like sharing a computer with... kids that tend to bring malware at every turn.
We really need to educate the devs and change the culture. There's no reason for something like a word processor and file sharing app to require full access to the system. That's why we have access controls in the first place.
That is only true if you have no interest in recovery post compromise. A user-level account shouldn't be able to put the system in such a state that online recovery is impossible, whereas a system-level account easily can - think loadable kernel modules. Only offline recovery works once you lose trust in the kernel. That is the difference between "Alright grandma, lemme remote in" and "Sorry old lady, better start looking for the factory install CDs". Let's not even get into how screwed we are with UEFI...
If you have to wipe the user account anyway, then wiping the system at the same time hardly adds any more effort -- in fact it's probably easier. Your system files are the easiest part of your system to recover, because the originals are readily accessible from the vendor.
I guess it depends. In the grandma scenario it adds a lot more effort. A corporate laptop in a standard AD environment, no problem. In a situation where you've customized the system (custom packages, sshd.conf tuning, flags in rc/csh/sysctl/resolv/loader/randomsbinutilityinstalled2yearsago.conf) it would be a lot more work than just reinstalling the OS. Use backups you say? What if I told you that you could use the very same backups to rollback changes to the user's home directory, in 5 minutes, and not have to reimage the entire machine? I'm just saying: even on a single user setup - there is a world of difference in what options you have open to you, depending upon whether you let the malware hit ring 0 or not.
Not sure why anyone with a compromised machine would rather have the risk of a lingering backdoor just to save 1-2 hours clean formatting and reinstalling
Because unless an unknown method of privilege elevation was used, it doesn't make sense. Do you throw a pinch of table salt over your shoulder as well? It also has a very strong Microsoft smell to it, where instead of doing root cause analysis on why Windows is misbehaving - you just reboot and cross your fingers.
> Because unless an unknown method of privilege elevation was used
You seem to think that is unlikely. Why? New privesc bugs are found on a monthly basis in Linux and Windows. Does Grandma stay on top of kernel patches?
Nobody who does sandbox security (hint: I do sandbox security) thinks UID separation is sufficient to cordon malware anymore.
Because of the single user pc context. I've never seen a dropper that didn't have ring 0 later pull down a payload that escalated privilege. I'm not saying that it isn't possible, but at best it is very uncommon. I understand the better safe than sorry position, but with the context in mind, what safety are you getting by just assuming UID separation failed and going through the rigmarole of reinstallation? The user data has already been exposed.
I just don't agree with the simplified decision tree of "Infected --> reinstall", which disregards your work in sandboxing. Why should I even bother with the additional complexity of capability mode in my software, if we're all just assuming our defense has no depth.
> I've never seen a dropper that didn't have ring 0 later pull down a payload that escalated privilege.
Probably because it isn't very useful to the attacker. Pwning the single user's account is sufficient. But I wouldn't bet on that being the case if it happened to me.
> what safety are you getting by just assuming UID separation failed and going through the rigmarole of reinstallation? The user data has already been exposed.
Most user data is not executable, so can probably safely be copied over. But if you don't wipe all executable software on the system it's hard to tell if some of it is still infected.
> Why should I even bother with the additional complexity of capability mode in my software, if we're all just assuming our defense has no depth.
Well, I'm talking about today's legacy desktop (and, to some extent, server) OSs, which have not prioritized user isolation because it hasn't really mattered ever since people stopped using timeshare systems.
Modern OSs that sandbox applications (e.g. ChromeOS, Android, iOS) are another story. I would expect that one Android app being malicious does not mean you have to wipe your phone, just uninstall the app. I would expect that ChromeOS can even recover from a full sandbox breakout, given secure boot.
But I don't trust Linux, Windows, or Mac OS desktops to be suitably hardened nor able to recover. And as wiping the whole system does not add very much cost over the cost of wiping the user account, it seems to me worth it to go all the way.
> Probably because it isn't very useful to the attacker.
Ring 0 is really important for building a botnet, which provides a very real incentive for the folks that actually write the droppers. Ideally (for the botnet owner) they establish persistence, then sell access by directing the bots to download additional malware under the control of botnet customers. Long story short: you don't get paid as much if you don't have ring 0.
> Most user data is not executable...
I was speaking from the perspective of the real purpose behind all this, protecting user data - and that the horse is already out of the barn. As far as cleanup, you are presupposing a loss of ring 0. If ring 0 is secure then killing all the user processes and performing a snapshot rollback of user space will definitely clear the malware.
> Well, I'm talking about today's legacy...
Ah, well then I agree. If your platform does not have user isolation, then you shouldn't rely on user isolation for security.
> And as wiping the whole system does not add very much cost...
Well we've got a catch-22. Because implementing security practices that do harden the system add a lot more cost to a hamfisted wipe. For example: On my laptop I've got five jails, a maze of netgraph nodes that result in a complex ruleset, host IPS, kerberos authentication and authorization, encrypted filesytems, close integration with TPM and various certificate based credentials. Just assuming that none of that works and doing a system wipe is a lot more work than simply popping in the latest Ubuntu dvd iso... consider the labor of rekeying alone.
So the advice to do a system wipe isn't bad, but it should be prefixed with: "If you've made no effort to secure your system and are completely relying upon the distro provider for security".
Sure, if you've set all that up and know what you're doing, then you're in a position to make your own judgment call and maybe you don't need to wipe the system. Earlier we were talking about "grandma" which I assumed was a metaphor for "person who doesn't know computers".
Don't forget the grandson part of the metaphor; he is the one who will be making that judgement call. Do you remember how everybody would blow into the Nintendo cartridges, even after Nintendo explained why it was a bad idea? I have a feeling that helpdesk folks will continue to advise a system wipe, even if you're running the latest Windows 25 with its formally proven microkernel... just to be safe.
I went to a Mac developer's group a few years ago in Toronto. One of the devs was working on Mac antivirus software but basically had the attitude that it was unnecessary, and spent most of the meetup trashing Windows. Just really bizarre and inept behaviour. Not sure I trust his anti-virus software. Too bad I can't remember which one he was working on.
The strength of a chain is the strength of its weakest link, and the more "apps" are provided with the system, the longer and more vulnerable the chain.
When it comes to checksums, we have a chicken-and-egg problem, plus MD5's vulnerability to collision attacks.
MD5 has been the standard for too long (it has been deprecated as a cryptographic checksum for ten years). And for the next generation of software that doesn't use modern checksums, how can users trust the download of whatever package is required to verify the new format?
Plus, the new format is less likely to be checked without errors. An off-by-one character could easily be missed, given the number of packages that now need to be installed and the limits of human focus.
Humans are the limiting factor, and security models the user as a kind of grotesque caricature of a robot that can check thousands of pieces of information perfectly and remember 20-character passwords for tens of appliances.
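The off-by-one worry is real: two hashes differing only in the last character look identical at a glance, which is why the comparison should be done by the machine, not by eye. The hash values below are made up for illustration:

```shell
# Two nearly identical hashes; a human skimming them would likely call
# these a match. The shell comparison does not.
published="5f4dcc3b5aa765d61d8327deb882cf99"
computed="5f4dcc3b5aa765d61d8327deb882cf90"    # note: last character differs
if [ "$published" = "$computed" ]; then echo "hashes match"; else echo "MISMATCH"; fi
```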
There is a tyranny of computer engineers regarding what is safe for people having a life not concerned about geeky technology that is a tad annoying.
People have the right to be human, and to fail is human. The burden put on humans to make the system safe, in order to avoid human interactions that are costly for the bosses, is way too high.
And since computer security always blames failure on human behaviour, I am beginning to positively dislike it.
> There is a tyranny of computer engineers regarding what is safe for people having a life not concerned about geeky technology that is a tad annoying.
You know you can make that complaint about any tool or technology, right?
"Gosh why do I have to follow all these rules and observe traffic lights to drive a car?" (something that actually intimidates me, in fact, because I've never driven a car.)
"Why do I have to worry about cutting or burning myself or someone else while trying to cook a meal?"
"Why are all these procedures and protocols, like schools and banks and taxes, required to function at all in contemporary society?"
Until computers advance to the point of being artificially intelligent familiars that can figure out exactly what we want from a simple vocal command and do something even better, we're gonna have to put in a little effort from our end to make them work the way we want them to.
> You know all engineers do not always blame users?
> There are fields of engineering where an accident even due to human causes is systematically seen as an engineering problem.
In those other fields, such as automobiles, an accident by a person may cause death of another.
On a computer, your carelessness may not cause someone else to outright die (which tends to cause a lax attitude on the part of users), but it can still cause someone else harm, like inadvertently leaking someone's financial information or causing malware on your device to participate in a DDoS attack on someone. Time and again it's been proven that users are often the weakest link in this field, no matter how tight the security is. It's only understandable for the engineers to be annoyed.
Additionally, what does the malware do? "OSX.KeRanger.A" appears to be a name that Apple assigned it in their malware definitions, but Google doesn't know anything except the pages about Transmission.
I'm curious what sort of malware we're looking at. Botnet? General remote access/control? Harvesting keychains?
Of course, and if you only have the one backup drive, be wary of connecting it to an infected computer with read/write access. Link posted in another comment suggests that this malware has encryption of Time Machine backups in development (should be safe this time around?).
Safer option would be to create a write-only network share on another computer and copy files to that.
That feels like a pretty weak standard for knowing whether your machine is infected. I will look for a virus scanner myself, and seriously think about reinstalling if it finds anything.
Yeah I know, it is the lowest effort. But I'm not running it on my main system. Worst case I'll have to reinstall (unless it messes with the hardware, firmware changes for example).
If you installed/updated via Homebrew-Cask [1], you should not be affected. 2.90 was not always compromised, and looking at Caskroom history, the checksum was only updated for the 2.84 -> 2.90 bump once [2].
Homebrew Cask is awesome, but I still think security is an issue here because you still have to trust the upstream binaries are safe, each built and hosted by totally different people. Verifying checksums is certainly better than not checking them, but you still haven't escaped from the trust-whatever-binary-you-downloaded-from-the-internet-style of doing things. I really wish package managers like Homebrew Cask offer some level of trust by building applications from source and signing them, like Debian.
You are absolutely correct. Homebrew-Cask favors convenience and availability of as many applications as possible, though we make reasonable efforts to avoid malicious actors by verifying checksums, download links, and (soon) GPG verification where possible.
You may be interested in https://www.macports.org for a build-from-source solution for OSS projects.
Can someone explain to me what XProtect.plist contains? Are those malware signatures that are recognized by Apple and blocked and dealt with?
I saw a post on a forum where a dude said his XProtect now contains an OSX.KeRanger.A entry at the top, and said it means he got infected. That didn't make much sense to me, but I checked mine this morning and found the same entry. Does that mean I am infected too?
But I didn't download anything from their website like 3 months back; I just did the update to 2.90 on Thursday or Friday, can't remember. And yesterday, as soon as I saw the news, I updated everything and checked for malicious files and processes, which weren't present on my machine.
If I recall correctly, they are file signatures that OS X uses to identify and remove malware. Not sure when and how the files on your drive are checked (potentially right before they are opened?), but the XProtect.plist file is automatically updated by Apple, and that's what you're seeing.
The entry doesn't exactly mean you're infected, but just that your copy of the file was updated.
I have the entry for instance, but I was never infected. You can check your copy of XProtect with this command (I have 2076):
defaults read /System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/XProtect.meta Version
Edit: Looks like you do get a message before launching an app, if it's identified by XProtect/File Quarantine.
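Checking whether the local definitions already include the KeRanger signature can be done with a grep. The path below is the one Apple used through OS X 10.11 (the same one the `defaults read` command above points at); on any other system this simply reports that the file is absent:

```shell
plist=/System/Library/CoreServices/CoreTypes.bundle/Contents/Resources/XProtect.plist
if [ -r "$plist" ] && grep -q KeRanger "$plist"; then
  xprotect_status="KeRanger signature present"
elif [ -r "$plist" ]; then
  xprotect_status="KeRanger signature absent - let the definitions auto-update"
else
  xprotect_status="XProtect.plist not found (not on macOS, or Apple moved it)"
fi
echo "$xprotect_status"
```

As noted above, the signature being present means your definitions were updated, not that you were infected.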
Yeah, I have 2076 too. Now after the update of Xprotect you get the message, but what if you ran the app for example on Friday (4th) and got infected then?
Checked on IRC; it seems that Sparkle prevented infection for those who updated the app in-app, like I did. Screw all this: as I read in one of the comments here, I will run Transmission in a Docker container on an RPi running FreeBSD.
Now it bugs me because I can't find info on when the file was edited... stat and defaults read display 20th January. The weird thing is that I ran the app during the problematic time interval but haven't found a single process or file that was mentioned on the Palo Alto Security page: no kernel_service process or anything else misbehaving. Anyway, I did a Time Capsule backup 10 days ago and haven't plugged the drive into my computer since, so if anything happens I will roll it back. We shall see, but it is kind of uncomfortable to keep using a "maybe infected" machine. Who knows what else may be left, supposedly nothing... Thanks anyway for the help! Cheers
"Everyone running 2.90 on OS X should immediately upgrade to and run 2.92, as they may have downloaded a malware-infected file. This new version will make sure that the “OSX.KeRanger.A” ransomware (more information available here) is correctly removed from your computer.
Users of 2.91 should also immediately upgrade to and run 2.92. Even though 2.91 was never infected, it did not automatically remove the malware-infected file.
"
Can anyone tell me if this also applies to brew cask's builds? I needed to download CentOS the other day and wanted to go with a torrent. I got pretty pissed after I realized that BitTorrent installed some adware called Spigot. I tried to remove it as thoroughly as possible (I mainly killed the process, removed `Library/Application Support/Spigot`, and ran a `sudo find / | grep -i Spigot`).
Ironically, I decided to use the good ol' trusted open-source alternative, Transmission, because I had just read on HN that Transmission is getting updated again...
Looks like it. The homebrew cask download url is using http instead of https, which was one of the problems stated in the forum discussion. There's currently an open PR to fix it https://github.com/caskroom/homebrew-cask/pull/19506/files.
For anybody stumbling upon this, installation via homebrew-cask was _always_ safe[1] thanks to checksum verification. The caskfile has been updated to https[2] and version 2.92[3].
Just an anecdatum: I got infected by this yesterday when I installed Transmission to download a Debian install CD. When I read about this at MacRumors I checked and had the kernel_service process running and the two hidden files hiding in Library.
I've unplugged and archived the Time Machine backup disk and done the prescribed cleanup actions to remove the malware. I guess time will tell if it had any other tricks up its sleeve.
Do a search for any files in /Users and connected volumes which are suffixed with .encrypted. Apparently that's the interim filename suffix used by the malware.
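A sketch of that search, demoed against a throwaway directory so it is safe to run anywhere; on a real Mac you would point `find` at `/Users` and any mounted volumes instead.

```shell
# Throwaway directory standing in for a user home / mounted volume.
mkdir -p /tmp/keranger_demo
touch /tmp/keranger_demo/photo.jpg.encrypted /tmp/keranger_demo/clean.txt

# The actual check: list files with the ".encrypted" interim suffix.
# On a real Mac: find /Users /Volumes -type f -name '*.encrypted' 2>/dev/null
find /tmp/keranger_demo -type f -name '*.encrypted'
```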
One good thing to come out of this mess is that I realized I only had one Time Machine backup going on that machine. I had turned off the remote backup a month ago while shoving backups around on the remote server to make more room and hadn't restored it. One backup is too few.
The weakness exposed is that if the remote were mounted, this malware would have nailed it too. I'll have to look at having the remote make filesystem snapshots on its end so malware can't corrupt my older backups.
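The snapshot idea can be sketched as a scheduled job on the backup host; this is a hypothetical example assuming a ZFS-backed server with a dataset named "tank/backups" (both the path and schedule are assumptions, not anything from the thread).

```shell
# Hypothetical crontab entry on the backup host: take a nightly read-only
# snapshot so a compromised client that mounts the share cannot rewrite
# older backups. Snapshots are made server-side, out of the client's reach.
0 3 * * * /sbin/zfs snapshot tank/backups@auto-$(date +\%Y\%m\%d)
```

The design point is that the snapshot is created by the server, so ransomware on the client can only damage the live copy, not the history.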
This is really bad, but two good security defenses came out of that forum thread (which is better than not having them at all).
1. Apple has already revoked the signing certificate, so people who have Gatekeeper on are safer.
2. Sparkle (the auto-updater) rejected the malware-infected update. So downloading from the main website is not necessarily safer, even with the recent MITM Sparkle vulnerability.
I used to use Transmission on Linux but switched to qBittorrent when I switched to Windows 10. It has an OS X version if you don't trust Transmission anymore.
Popular Mac rumour/news site 9to5mac (which is rapidly decreasing in quality) actually posted about this malicious update a few days ago.
Somehow I found it out of place, especially as they have never posted about TransmissionBT before. They sure did get lots of people to update after putting it on the front page.
The headline should probably say "at least Mac". I hope we soon learn the source of the compromise, but nothing so far indicates that Linux distributions' packages would be affected by a Mac malware.
The headline should definitely say "at least Mac". It's so annoying to hear about "computer viruses"... (protip: it's a Windows, OS X, well, maybe even Linux virus).
I updated in-app and don't seem to be infected. An official confirmation of how this happened and why in-app updates were seemingly not affected would go a long way.
I checked my version of Transmission and I'm still on 2.84. I guess I dodged a big bullet, but tonight I'll go through the diagnostics to see if any versions prior to 2.90 were infected. I may do it sooner if I get a quiet moment at work.
I'm also running the usual litany of tools to check for activity (Wireshark on my WAN Tap, Anti-virus, etc)
My Synology NAS uses transmissiond for its BT Client, so I will be contacting them to see if they are affected by this issue.
Something that is not entirely clear: does updating to 2.92 attempt to clean up KeRanger automatically, or is some manual cleanup still needed after updating?
Checked my install of 2.9.0 from auto-update, it's clean (none of the suspect files are in Contents/Resources). According to a post on the Transmission forums, when a person was (probably) delivered an infected binary, there was a checksum failure as you'd expect. So it seems as though you won't be infected if you used the auto-updater.
Oh dear god. I used 2.90 this past week; when I saw the news I updated immediately, checked for all the files, found nothing. I hope my MacBook will stay fine tomorrow. I have it backed up on Time Machine anyway. Where do we go from here? Since I've lost trust, what are the alternatives? And from now on, I'll go with Brew Cask for everything possible.
If they indeed used a legit code signing certificate, what is the fix? It seems very difficult to just blindly trust signed binaries anymore. Short of setting up a registry of vetted code signing certificates, it seems that signed code is just as easily manipulated as unsigned code. And even then, the keys to the certificate could be mishandled.
This article[1] says Transmission is going to offer a way to check, but I'm not sure it's on the site yet. Apparently tomorrow is the ransomware activation date for people who installed the infected version on Friday.
I can't think of a more hostile opponent than the HIV virus. And we're still not sure if Transmission was spreading the viruses intentionally, making the condom analogy even more fitting.
But humans and viruses aren't competing in the same game. A better metaphor for the adversary in that situation is the person you're having sex with poking a hole in your condom.
> from Greek analogia "proportion," from ana- "upon, according to" (see ana-) + logos "ratio," also "word, speech, reckoning" (see logos). A mathematical term used in a wider sense by Plato.
So, where do you think logic comes from? I'll spare you the effort:
> [...] from properly feminine of λογικός (logikós, “of or pertaining to speech or reason or reasoning, rational, reasonable”), from λόγος (lógos, “speech, reason”).
There is no analogy without logic. I even fail to recognize a difference between speech and logic; speech without logic would, by analogy, be just noise.
Your extremely ignorant argument immediately implies that the clubs used to hit baseballs are necessarily related to flying mammals.
The facts that the English word analogy descends in a complicated manner from a Greek word referring to mathematical proportions, that the English word logic descends in a somewhat less complicated manner from a Greek word referring to speech, and that those Greek words shared their pronunciation tell us nothing about the relationship between analogies and logic.
A standard Chinese term for "analogy" (also "metaphor") is 比喻 biyu. The term for "logic" is 逻辑 luoji (a loan word from English). Are you prepared to grant that, while analogies and logic are necessarily intertwined for people who speak English or Greek, they are unrelated for people who speak modern Mandarin Chinese?
The similarity between "according to" + "ratio" and "of or pertaining to speech or reason or reasoning, rational, reasonable" is just too simple and striking to be missed and forgotten.
No linguistic reasoning is needed, however, to see that an analogy essentially needs logic to work in any language. It helps, though. E.g., in English, "tongue" still means language in an idiomatic, metaphorical sense, analogous to the original meaning of logos metaphorically referring to speech.
My argument is not ignorant, anyway; it's arrogant.
I don't know about the second part of your argument, where you claim you don't need any linguistic reasoning to see that analogy needs logic (I'm personally not convinced). However, your argument about the connection, in the context you are putting it in, between the words logic and analogy is absolutely false.
Αναλογία means the comparative association of two (antithetical) objects in quantity, size, etc.
I'm sorry; the initial analogy, that a condom is easy to pierce, was just an example of the general assumption that security is easy if the threat is not purposefully malignant, for varying measures of malignant.
Someone who doesn't even understand that, and that the linguistic argument is off the table, and still keeps going, shouldn't try to talk about logic, as right as some of the individual arguments might be. Ana-logically, I shouldn't be writing any more about this.
> There is no analogy without logic. I even fail to recognize a difference between speech and logic, speech without logic, by analogy, would be just noise.
Sure. That's why we sometimes see such bad analogies: the logic doesn't match the actual idea so well. When the idea you are trying to convey isn't that complicated at all, you're probably better off using just pure logic and not complicated constructs such as analogies.
As for speech, indeed, there's a deep logic bounding the language structures we use. The problem with language (logic and meaning wise) is that language is relative to the environment and to the speakers of the conversation. You have to figure out the "language game" where the conversation is being held. As for logic, we expect no subjectivity (if there is subjectivity it's bad logic).
Running software in VMs to stay safe is not a new idea. So is it effective in this case?
Just because attackers can break out of VMs doesn't mean that they always do. I wager most malware out there isn't set up to do that.
Locking your car door won't keep a dedicated attacker out of your car. Simple ceramic shards from a sparkplug will get them through the window with barely any effort at all. Nevertheless, locking your car doors is an effective way to reduce your risk, as it dissuades more opportunistic attackers.
As far as I know, malware generally won't run in VMs, because its enemies use VMs to study it. The side benefit is that VM breakout is rare. Except from the NSA ;)
Maybe closer to "everyone knows safes can be cracked". Compared to lockpicking, or the other, more brute force ways to get past doors, the skill needed to find and exploit a new (0day) vulnerability is relatively specialized, albeit abundant enough for worry.
Am I the only one who saw the app and thought "Why the heck is TPB releasing an app?" It makes them more of a target, a less stable platform, more easily interfered with, etc.