> "Every single electrical and radiological device can be utilized to transmit data. What's the baud of opening and closing HVAC vents and reading them from space satellites?"
Also reminds me of the stories of reverse engineers using lights, motors, speakers, etc. to exfiltrate boot images.
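For fun, a back-of-envelope sketch of that vent-channel "baud" question. All the numbers here are made up for illustration; the point is that the channel is bounded both by how fast the vent can change state and by how often an observer can read it:

```python
# Back-of-envelope: capacity of an HVAC-vent covert channel.
# Assumed (invented) numbers: a vent takes ~30 s to open or close,
# and an overhead observer only resolves its state a few times a day.

def channel_bits_per_day(symbol_period_s: float, observations_per_day: int) -> float:
    """One bit per observed open/closed state, capped by how often
    the vent can physically change (the symbol period)."""
    max_transitions = 24 * 3600 / symbol_period_s
    return min(max_transitions, observations_per_day)

# A vent cycling every 30 s, observed by ~15 satellite passes a day:
print(channel_bits_per_day(30.0, 15))  # observation-limited: 15 bits/day
```

Slow, but nonzero, which is the whole point of these exfiltration stories.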
> "If hacking a Jeep is as straightforward as hacking a server, and servers are routinely breached, then where are all the hacked cars? It's a bit like the Fermi Paradox."
One barrier to entry is the actual cost of acquiring a vehicle. You can buy a lot of iPhones to hack on for the price of a Tesla.
> "We think of Teslas as cars just like we think of an iPhone as a phone, but a more accurate account of reality is that they're both just computers."
It's actually kind of worse than that: cars are rolling datacenters with multiple computers and multiple networks (CAN bus[es], Ethernet, proprietary networks).
//Disclaimer: I work for GM, but not on any of this.
> One barrier to entry is the actual cost of acquiring a vehicle. You can buy a lot of iPhones to hack on for the price of a Tesla.
Assuming you're just a random person and not a serious organization (which would have money to buy cars), can't you just get a job at an auto repair shop to fool around with their computer systems in the evening?
That's not all that practical. Modern cars are rolling datacenters. Most have 50-100 computers in them that are on several different networks. Features are distributed over several nodes ("ECUs") and everything is statically linked and scheduled. (i.e. almost no OS akin to what you would find on a PC or smartphone) It's very difficult to do anything complex without having special tools and know-how. Which someone at a shop just won't have, because a shop does not debug the software. They just fix hardware issues, which as far as ECUs are concerned just means changing faulty parts for new ones.
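For a sense of what that ECU traffic looks like on the wire, here's a minimal sketch of decoding a classic CAN frame. The ID (0x0C1) and the "wheel speed" signal layout are invented for illustration; real layouts come from the manufacturer's (usually proprietary) DBC files:

```python
import struct

# Classic CAN frames are just an 11-bit identifier plus up to 8 data
# bytes; all meaning lives in out-of-band signal definitions.

def decode_frame(can_id: int, data: bytes) -> dict:
    if can_id == 0x0C1:  # hypothetical "wheel speed" broadcast
        # two big-endian uint16 raw values, scaled to km/h
        front, rear = struct.unpack(">HH", data[:4])
        return {"front_kmh": front * 0.01, "rear_kmh": rear * 0.01}
    return {"raw": data.hex()}

print(decode_frame(0x0C1, struct.pack(">HH", 8050, 8023)))
# front ~80.5 km/h, rear ~80.23 km/h
```

Without the signal database, all you see is the `raw` branch, which is exactly why the special tools and know-how matter.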
Truth. Most of the time, you just get the factory reset code if you think there is something borked in the OBCs. That or just replace the computers entirely. Alldata is the HN of techs (kinda).
Probably easier and faster to just buy a used car yourself to tinker with. A used Ford Taurus can be ~4k.
You can dive into polylith forums and find guys that tinker with the code in the OBC to 'improve' some of the aspects of the car. Think Dieselgate, but more hacker-y/backfire-y.
I know things change over time, but the old advice on hackers was that most of them are not as malicious as they seem. For instance, many viruses have been accidentally more damaging than the creator intended.
Hacking a car to kill people is a lot different than kicking someone out of a video game. Even people who SWAT don't anticipate their rival/victim being killed.
They want the person to be miserable, not dead. It's cruel and vindictive, but it's not homicidal. Or at least not most of the time.
Could be paranoid nonsense. But white hat researchers have demonstrated a disturbing number of attack vectors that could be used for this sort of thing.
Do we know that it isn't happening all the time? We are already so conditioned to driving being one of the most dangerous things yet we absolutely take it for granted. It would be easy enough for it to more or less blend in. Especially as I'd assume the current traffic investigation work force in the field isn't going to have the chops to do this kind of work.
This may be happening, and just being written off as user error like all the rest of the massive number of auto accidents we live with every day.
Alan Turing was doing statistical modeling to make sure we didn’t blow up more U boats than could be mistaken for bad luck. That was more than seventy years ago so who knows.
> Hacking a car to kill people is a lot different than kicking someone out of a video game. Even people who SWAT don't anticipate their rival/victim being killed.
> I know things change over time, but the old advice on hackers was that most of them are not as malicious as they seem. For instance, many viruses have been accidentally more damaging than the creator intended.
That hasn't been true in a while. Most malware these days is about money or political goals.
The world isn't short of homicidal maniacs. All that is needed is a perverse ideological motivation and a single person could, theoretically, cause unprecedented harm.
I think this is a really important issue, and I'm glad someone is working on awareness. It extends far beyond cars, but those are probably the most 'frightening' example. We need to
(1) Admit we have a big problem.
(2) Admit it will be difficult to solve and there aren't easy solutions.
(3) Start working on solutions.
Unfortunately, these steps never seem politically popular, and it will be tough. Except in very rare scenarios (NASA), software security and correctness aren't treated as important in the scheme of things, relative to functionality and features. Nobody wants to invest in actually secure, carefully-audited code. (They might claim they do, but their standards for "secure" and "careful" would be litigation-worthy in any field but software.)
For example, I believe in freedom of software and ownership rights, which seems to mean that I believe people should be allowed to run open-source code on their own cars' computers. But this impacts public safety. How do we develop reasonable regulations similar to those for physical "street-legal" modifications?
I have reason to believe that a competent hobbyist programmer is less dangerous than the electronic engineers pretending to be programmers who wrote the original firmware on the vehicle.
Based on years of working with electronic engineers' code. (Apologies to those of you who are competent in both)
I appreciate your point and am inclined to agree. But my agreement doesn't matter. Your point is only one starting point for a challenging and nuanced discussion over the legality and safety of people installing their own software on their own cars.
My broader point is that this (among others) is a discussion we should already be having and eventually must have -- if not amongst "ourselves", then with legislators and laypeople. Yet the state of software security, awareness, and incentives is such that it could be years or a decade until we do...
Many years ago, I knew some of the people involved with the Ford Electronic Engine Control IV, the "EEC IV", used in 1980s Fords. This used a custom CPU, an Intel 8061, which was an Intel 8051 with some extra timer features. The program was etched onto the CPU silicon. There is no way to change that program without replacing the whole unit. There's an external ROM with some tables, different for each engine model, but it's a true ROM, not something that can be rewritten. If you want to replace it, it takes a wrench.
The level of paranoia and testing that went into that program was very high. Any bug meant bringing back hundreds of thousands of Fords in a recall. That didn't happen, and there are still many EEC IV vehicles running today, 30 years later. That's what it was like in the days of hardcore embedded programming.
I'd argue that safety critical vehicle software should not be downloadable. If a fix is needed, the vehicle has to go back to the dealer for a new memory module. This would discourage "shit early, shit often" software development, and encourage manufacturers to keep the safety critical systems, like ABS, totally disconnected from the entertainment system.
Autonomous vehicles should not communicate with each other. Waymo's cars don't. Most of the schemes for "car to car communication" are motivated by marketing or surveillance, not driving. Aircraft systems don't communicate with the ground much, and when they do, the uploaded data is very simple. (It's common in commercial aircraft to hardware disable maintenance functions unless the "weight on wheels" switch is on, indicating the plane is on the ground and parked.)
The deeper problem is Internet access. Firmware in ROM can still have remotely-exploitable security bugs. That Ford team only had to test for states that the software(/hardware) would end up in on its own, rather than defending against malicious external stimulus pushing it into a previously-unexplored state.
Given that it seems like a foregone conclusion that these things are going to end up with some WAN interface [0], pure ROM is out of the question. The best we could probably do would be to assert that the flash could only be updated through a local debug port, with protections against eg full RCE allowing remote re-flashing.
Given physical access, it's trivially easy to sabotage a car - the new problem here is having it done remotely. More important would be some kind of auditing so that a local debug device could be assured of reading out the exact code image, knowing how many times it had been flashed and ideally checksums, and being sure that a reflash reset the system completely.
(btw, loved "shit early, shit often")
[0] I mean, I'd personally rather they not. But it seems like a foregone conclusion of trends/marketing/sloppy design/implementation shortcuts/can't stop the signal.
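The auditing idea above could be sketched roughly like this in Python. Everything here is invented for illustration (the function names, the idea of a readable monotonic flash counter, the record layout); it just shows the shape of "read back the exact image, compare digests, check the reflash count":

```python
import hashlib

def audit_flash(image: bytes, flash_count: int,
                expected_sha256: str, expected_count: int) -> bool:
    """Local debug device reads back the full flash image plus a
    monotonic flash counter and compares against published values."""
    digest = hashlib.sha256(image).hexdigest()
    if digest != expected_sha256:
        return False          # image differs from the signed release
    if flash_count != expected_count:
        return False          # an unexpected reflash happened
    return True

release = b"\x90" * 1024      # stand-in for the real firmware image
good = hashlib.sha256(release).hexdigest()
print(audit_flash(release, 3, good, 3))   # True: image and count match
print(audit_flash(release, 4, good, 3))   # False: extra reflash detected
```

The hard part in practice is trusting that the device doing the readback isn't itself lied to by compromised firmware, which is why a hardware-enforced read path matters.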
True story: I looked into a company in the vehicle automation field that had OTA upgrades enabled and didn't bother to check whether the vehicle was in motion during the upgrade. You really can't make this stuff up.
I did not mean that to illustrate rock-bottom, merely as a datapoint. Yes, things are much worse than that. But this one is pretty easy to understand without further context.
My car is >10 years old, and thankfully still running like a champ. It'd be nice to get a new one someday, if only for the updated safety features like curtain airbags, but I honestly don't feel very comfortable with anything much more recent, for the exact reasons you've described.
My inner curmudgeon is on board with the idea of only allowing important firmware updates to happen at the dealer. But the reality is that with so many cars on the road, OTA update is a better way to make cars safer.
In my case, for example, I have a 2014 and a 2015 car which are both more than a year overdue for software updates. One is related to the braking system; the other to a problem where if you stomp on the gas in an emergency (like to avoid a crash), the car stalls.
The problem is that the dealers in my area don't have the capacity or the will to do software updates. One dealership only has maintenance hours between 9am and 4pm Monday through Thursday, and the ONE guy who is allowed to do software updates is only in on Saturdays from 9-noon occasionally.
The other dealership doesn't take appointments at all. It's first-come, first-served, 8am-5pm Monday through Saturday. I stopped going there because even at 7am, there's a huge line of people waiting for service. The final straw was when I sat there from 7am until 3pm for a very simple non-engine part swap that was covered under warranty.
The next nearest dealer is 138 miles away.
tl;dr version: OTA updates are good because there are places where you can't get a software update done at the dealer in a reasonable time.
If we’re talking about self-driving cars like those mentioned in the article, the cars could go to a dealer / workshop on their own. You’ll only have to tell the car “I don’t need you for the next x hours”, and it goes ahead and has its firmware updated.
Unfortunately there’s no quick fix for the current cars built on the “guided” self-driving principle.
Unfortunately, the current climate is quite opposite, what with computers randomly* deciding "you don't need me for the next couple of hours, I'll try the install-revert-install-revert-install-revert with this week's update, NOW."
(randomly, as in "where do you want to go today? meh, nevermind.")
> I'd argue that safety critical vehicle software should not be downloadable.
I'm not sure I'd go that far. OTA "automatic update" probably not a good idea. Downloadable/flashable at the dealer OK, no need to go as far as swapping out ROM modules.
Following avionics software engineering standards should be absolutely required. Entertainment/nav systems that require network access should be 100% air-gapped from the vehicle control systems.
> Autonomous vehicles should not communicate with each other. Waymo's cars don't. Most of the schemes for "car to car communication" are motivated by marketing or surveillance, not driving.
ehhh, cars shouldn't be communicating with cars they can't see. but i'd like them to communicate over free-space optical, aimed at the level of other vehicles. brake lights could be modulated to carry current braking force, and the reason for braking. turn signals could carry the destination lane. head/tail lights the current speed, and target speed.
Communicating a few bits over radars is a possibility, like the way TCAS works for aircraft. There's a simple fixed-format message, agreed upon internationally, which is used to negotiate "I go up, you go down" decisions to avoid collisions. One advantage of data over radar is that you have range and bearing info on who's talking to you. At least you know where the info is coming from.
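To get a feel for how small such a fixed-format message can be, here's a toy encode/negotiate sketch. The field layout and intent codes are invented, not the real TCAS format; it only illustrates the "I go up, you go down" handshake described above:

```python
import struct

# Toy coordination message: sender id (uint32) + intent (uint8)
# intent codes (invented): 0 = maintain, 1 = climb, 2 = descend
FMT = ">IB"

def encode(sender: int, intent: int) -> bytes:
    return struct.pack(FMT, sender, intent)

def resolve(my_id: int, their_msg: bytes) -> int:
    """Pick the complementary maneuver to whatever the other party
    announced; break ties deterministically by id."""
    their_id, their_intent = struct.unpack(FMT, their_msg)
    if their_intent == 1:      # they climb, so we descend
        return 2
    if their_intent == 2:      # they descend, so we climb
        return 1
    return 1 if my_id < their_id else 2  # tie-break: lower id climbs

print(resolve(7, encode(42, 1)))  # they climb -> we descend (2)
```

Five bytes, a fixed layout agreed in advance, and a deterministic tie-break: that's the whole protocol surface, which is a big part of why it's auditable.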
Tesla can already change its cars' performance with OTA updates. You know what that means, right? It means that all software can be changed with a single OTA update. And much of this system's security depends on how secure Tesla's servers are.
How cars were designed 30 years ago is completely irrelevant, because that's not how all or most new self-driving cars will be designed. They will be designed so that the software can be changed on the go.
> Autonomous vehicles should not communicate with each other. Waymo's cars don't. Most of the schemes for "car to car communication" are motivated by marketing or surveillance, not driving. Aircraft systems don't communicate with the ground much, and when they do, the uploaded data is very simple. (It's common in commercial aircraft to hardware disable maintenance functions unless the "weight on wheels" switch is on, indicating the plane is on the ground and parked.)
You seem to be out of touch with what happens today. Car2Car or Car2Infrastructure communication works much like aircraft-to-aircraft communication in TCAS. It is about which car (object) is at which position, so that collisions may be detected as early as possible and prevented.
Also, next-generation cars will have their maintenance modes locked down as far as possible. OBDII will be closed down as far as possible. All those dongles, which are hip right now, will be rendered useless. But of course the tuning scene still has a lot of incentives, like in the past, to thwart all this and to reverse engineer. It will be a cat-and-mouse game, like it always was.
Because the cars are self-driving they require updates to improve the autonomous software that controls them, including improvements related to security.
As for surveillance related to V2V communications, I fail to see how it meaningfully adds any surveillance over just monitoring the other electronic emissions from the car like those that emanate from the media centre or even the phones of those within the car.
> Because the cars are self-driving they require updates to improve the autonomous software that controls them
This doesn’t follow.
> including improvements related to security.
This should not, under any circumstances, be a reasonable justification. Cars should not have any external attack surface, period. The ECU and any other critical electronics should not be networked at all. Unfortunately, that ship has long since sailed.
It should be possible for cars to be updated, but it should require physical access, and any mandatory update (e.g. due to a safety failure) should result in the manufacturer incurring heavy costs, e.g. you have to give every owner $1000. We have to internalize the externalities of shitty software, especially in stupendously dangerous machines like cars.
I don't understand why it doesn't follow. If there is an adversarial attack that causes the autonomous software that controls the self-driving cars to crash then they need to be updated.
> This should not, under any circumstances, be a reasonable justification.
That presupposes that humans cannot make mistakes, which they do. Whether it's a hackable software-defined radio or just plain oversights.
The levels of abstraction that programmers deal with do not allow for the necessary security you propose.
> Cars should not have any external attack surface, period.
What you advocate for is a car with no ports, no radios, and a perfectly secure component manufacturing chain. It isn't doable. There is a reason the ship sailed.
There's a very simple update mechanism for any car (or anything that doesn't get put in space or dropped to the bottom of the ocean, really). Recalls.
They're expensive, annoying, etc. But they're safe (or safer, at least), and if you screw up badly enough that a safety-critical system needs an update, they're necessary.
I don't think anyone is arguing that the infotainment system should be disconnected from the world, but the core driving systems (steering, brakes, transmission, etc) should be air-gapped and inaccessible without physical access to the interior of the car. Also, that access could (and maybe should) be disabled when the car's engine is running, preventing an attacker from plugging in a wireless device and then subverting the system with the car in motion.
I agree that that's a better solution. But in what utopia is that going to happen? It would have to be a government mandate to stop all over-the-air updates, because no single company would take the extra cost when the competition isn't. This would only be done by a government after an attack takes place.
Well, in a true utopia it'd be unnecessary, because no one would hack the cars. Which would be nice, but... unlikely.
In any case, I agree that it's unlikely unless there's a severe disaster related to OTA updates, but it's still the best solution from a security and safety perspective.
I don't think so, just that the radio and other non-critical components mustn't be connected to the critical components. Essentially the radio becomes like any other portable device, that just happens to be physically installed in the dashboard.
> If there is an adversarial attack that causes the autonomous software that controls the self-driving cars to crash
If it is discovered that any model of car is vulnerable to this, it needs to be recalled and scrapped.
> The levels of abstraction that programmers deal with do not allow for the necessary security you propose
This is true of the kind of shit pumped out by low-quality commodity hardware and consumer-focused software companies. Anything controlling millions of multi-ton missiles should ideally be formally verified top to bottom. Exceeding NASA vehicle code standards is the sort of thing that would be appropriate here. However, just the plain old standards we apply to the embedded code in ECUs plus full hardware isolation (i.e. no “start from your phone” bullshit) would be sufficient.
> What you advocate for is a car with no ports, no radios,
No radio attached in any way to the critical hardware is sufficient. The goal here is to defend against bulk remote attacks (the kind that are actually dangerous in a noticeable economic sense) and bugs, not evil maids.
If anyone cares to, I’m curious about this and it would be great to benefit from your insight. I haven’t had anything to do with autonomous systems, but from my layman perspective it seems as though there would be legitimate benefits to inter-vehicle communication assuming a sufficiently sophisticated network / platform—
For example, cars being able to broadcast their immediate path and do some dynamic route mapping based on which lanes and areas will be in use by the time it arrives; being able to agree with nearby vehicles about routes that are partially colocated so they could join together bumper to bumper for that leg and realize fuel efficiencies (I have no idea if that would actually be useful but I’m brainstorming possibilities); being able to negotiate a car parking space in advance of arrival...
Is it that there actually aren’t many applications where inter-car communication is useful, or is it more that the benefits of communications like these are outweighed by risks to security, or am I missing important pieces of the puzzle?
> I'd argue that safety critical vehicle software should not be downloadable. If a fix is needed, the vehicle has to go back to the dealer for a new memory module. This would discourage "shit early, shit often" software development, and encourage manufacturers to keep the safety critical systems, like ABS, totally disconnected from the entertainment system.
Better yet, the software should be uploaded to an intermediate agency, that is responsible for the testing of cars (could be part of the US Department of Transportation for example). Only this agency can (after sufficient testing) upload the software to the actual cars.
With most car brands you're lucky if the car repair shop even bothers to tell you when they installed an ECU update. There is, obviously, no change log available anywhere. Sometimes people whisper behind backs, though.
While I agree with the author about the undervalued risks of autonomous vehicles in the light of careless security practices, his proposed solution is just insane.
In my opinion the demand for a government controlled kill switch in every piece of hardware that is somehow able to harm people is much more threatening in so many ways than the insecurity the author is trying to reduce. Just a few:
1. Based on the assumption that one's current government/state works for the good of all people, what gives him the confidence that this will stay that way? What if your beloved government goes rogue? That's a lot of power for an autocratic regime.
2. Why even trust the government in the first place with that kill switch? They are the same people who have been careless about infrastructure-critical ITSec for decades.
3. A universal security module which is highly standardized is a very profitable target. While the author is aware that finding an attack vector in one particular vehicle can mean that all vehicles of that type can be compromised, he doesn't come to the conclusion that the same logic applies to his security module.
I’m the author. I was nodding in agreement with you for your first two points. I don’t think I’ve come to a perfect solution, but I want you to know that I actually have many of the same reservations that you have and that I’m open to changing my mind.
Here is where I still sit, however: the government had predator drones and will soon have killbots able to take out individuals based on facial recognition software.
If we can’t trust them to turn off our cars we’re fucked anyway.
Let's say a security vulnerability has been discovered. An attacker has wormed their way into as many cars as they can. They send a 'go' signal from their command and control servers. Across the world, cars start looking for opportunities to kill their passengers and bystanders.
We send the shutdown signal. The safety modules wake up and take over from the compromised computers. What do they do?
They could brake. However, the car might be in a turn, on a rainy day. Braking could send the car into a skid and kill its passengers.
Maybe they brake, but slowly. However, the car might be behind someone who just suddenly changed lanes and slammed on the brakes. Braking hard might be the right move in that circumstance.
Maybe they do something conditional on the sensor input they get. However, if the control computer can do something to 'blind' the safety computer, that doesn't help. For example, can the control computer issue firmware updates to the camera sensors? Can the control computer fill a sensor's bus line until it can't respond?
I can't think of any solution to this that doesn't involve duplicating a substantial part of the control computer.
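To make the dilemma concrete, here is a toy fallback policy. Every threshold, state name, and action here is invented; the point it illustrates is that the "safe" action depends on inputs the safety module may not be able to trust:

```python
def fallback_action(speed_kmh: float, steering_deg: float,
                    road_wet: bool, sensors_trusted: bool) -> str:
    """Pick a least-bad action after the shutdown signal."""
    if not sensors_trusted:
        # The compromised computer may be feeding us garbage;
        # the least-bad blind option is a gentle, fixed deceleration.
        return "coast_and_gentle_brake"
    if road_wet and abs(steering_deg) > 10:
        return "gentle_brake"      # hard braking mid-turn could skid
    if speed_kmh < 20:
        return "full_brake"        # low speed: stopping now is safest
    return "gentle_brake"

print(fallback_action(90.0, 25.0, True, True))   # wet turn: gentle_brake
print(fallback_action(90.0, 0.0, False, False))  # blind: coast_and_gentle_brake
```

Note that every branch except the "blind" one depends on sensor data, which is exactly the channel a compromised control computer could poison, so in practice you end up duplicating sensing and judgment, i.e. most of the control computer.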
1. Have an emergency brake button somewhere in the car that the user can press at any time to make the vehicle brake. Note that we already have those in all trains, buses etc.
2. Have a warning light next to the button asking the user to press the brake button as soon as they feel it is safe to do so. Note that we already have those service lights in all regular cars. Simply add a buzzer as well, for people asleep etc.
3. Have the circuit that brakes upon button activation be sealed off from the internet (make it hydraulic). The only action of the "good C&C" is to turn a light on.
Thanks for sharing your concerns. But regarding your last two sentences, I'm sorry that I've to tell you that I think we are already fucked. I don't think it's a good idea to speed up the process of being fucked even more.
Unpopular opinion ahead. Having worked in the security industry for a few thousand years (computer years), I can say I would never own a car that can talk to the internet. I plan to move far away from cities very soon for this and several other reasons. People will argue about this and meanwhile the "impossible" will happen, repeatedly. I just replaced the engine and transmission in my non internet vehicle and hope to get another 500k miles.
I hear you, I just bought a new car and internet connectivity was a deal-breaker for me. Part of the reason I was in the market was I wanted to make sure to get one before it was impossible [1].
The dealers don't know about the cars at this level of detail. I had to use these questions to pry the data out of them:
1. Is this a connected car? Can I unlock it or start the engines from my smartphone?
2. Is that feature an option or is the feature part of the base model?
3. Can I get the connected feature later? Would I have to bring the car in to get something installed or can you enable them from your computer?
#3 is a really important question. Subaru (and perhaps others) ship all their cars with the hardware for connectivity, but the actual feature requires a subscription. The car is always connected to the internet, because the dealer can start your subscription remotely, but the salesmen don't understand that implication. They're thinking solely in terms of features you're getting, not what hardware the car has.
[1] It's almost impossible now to buy a new car that isn't a bottom-tier economy model without passive keyless entry, despite the well-documented security problems with many of those protocols.
My parents had cars for a long time that you could steal by breaking the window and doing something moderately easy with a commonly available tool and the steering column. They never had their car stolen and if they did, their insurance would have replaced it. At a time when crime was higher in the US and it was significantly easier to fence cars and successfully live on illicitly obtained monies than it is today.
Given the above, convince me I should care about how hard it is to break into passive keyless entry cars?
The difference is in how easy it is to scale an attack.
Once an attacker can remotely hack a single car, they can hack all cars that have an identical configuration, with little additional cost.
What happens then? Even if insurance companies could replace all affected cars simultaneously (very unlikely), they’d have to replace them with a model that isn’t affected.
Passive keyless entry is not remotely hackable; it's locally hackable, and that's not scalable. Besides, cars sold today (with very few exceptions) cannot self-drive, so even if you could remotely unlock one you'd still need someone local to drive away with it.
You’re of course right about passive keyless entry and perhaps the GP has that confused with other features that do require an internet connection.
Anyway, even if it isn’t autonomous, suppose a car has a smartphone app that allows you to turn on the heating before getting in. And then someone exploits that and gains control over the heating. They could then proceed to drain the batteries or the fuel tank by leaving it on over night, let’s say.
Not exactly a threat to national security, but still a major inconvenience.
Not sure it was confusion, but rather intended as an example of how "high end" features spread to the bottom of the market quickly, such that in a few years nearly all new cars may be internet-connected.
> You’re of course right about passive keyless entry and perhaps the GP has that confused with other features that do require an internet connection.
I did not confuse anything. I only mentioned passive keyless entry in a footnote, as an example of an insecure technology that you can't really avoid anymore. You still have a chance to avoid "connected car" features, but in my estimation the days are numbered for that.
Numerous internet-connected cars are remotely hackable, and you can take over engine controls, steering, braking. This was performed on live highways multiple times. DOT investigated at least one of the incidents involving some SUVs.
You don't have to convince them it's hacked, you just have to convince them that your car is not where you left it and that you really don't know where it is. Which is all you would actually know, like with any case of auto theft.
> Given the above, convince me I should care about how hard it is to break into passive keyless entry cars?
I don't really care if you care or not, it's your car, but I care.
Also, the issue with passive keyless entry isn't just theft of the car itself, it's more often theft of its contents. It makes break-ins much easier to do undetected.
I ended up getting a 2017 Honda Accord V6 with Sensing. I got everything that I wanted, but had to compromise and get passive keyless entry.
The Toyota Camrys also seemed good, except I couldn't find a V6 to test drive and they didn't have Android Auto.
Just a note: I'm not super-paranoid about my car getting hacked. I didn't want a cellular modem because I know I won't use whatever features it enables, and I didn't want my car to be ransomed for a bitcoin. I didn't want passive keyless entry because it didn't seem like much of a convenience, and it weakens security against petty theft.
My criteria, in priority order:
Must haves:
* V6 engine
* No cellular modem
* No passive keyless entry (not really available anymore, so I had to drop it as a dealbreaker)
Toyotas. Toyotas are really dumb cars with wheels and an engine. Granted, they’ve got smarter recently, but I like that they haven’t gone too fancy. It’s still a dumb car that survives anything, like a Nokia.
Your idea is sound, hopefully you have plans to service your car entirely by yourself?
From my personal experience, the service departments will happily download data from your car and sell it to the highest bidder. My case was simple: the dealer updated the mileage record in Carfax using the odometer reading from my warranty-provided oil change. The car is leased, so I'm 99% sure I had no way to opt-out.
Sounds innocent, but my insurance company was watching. They extrapolated the mileage and decided I would cross the 7,500 mi/yr threshold - which triggered a premium hike. Funny thing - I didn't exceed 7,500 miles that year, but they already have my money now.
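The extrapolation the insurer likely did is simple arithmetic. A sketch with illustrative numbers (the 7,500 mi/yr threshold is from the story above; the odometer reading and elapsed days are invented):

```python
def projected_annual_miles(miles_so_far: float, days_elapsed: int) -> float:
    """Scale miles driven so far up to a full 365-day year."""
    return miles_so_far / days_elapsed * 365

# e.g. 4,000 miles on the odometer 180 days into the policy year:
proj = projected_annual_miles(4000, 180)
print(round(proj))            # 8111: projected over the 7,500 threshold
print(proj > 7500)            # True: premium hike triggered
```

Which is exactly how you end up paying for miles you never actually drove: one mid-year snapshot, linearly extrapolated.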
What else could a dealer read off your non-internet connected car when you bring it in?
This is just my own methodology, but I ask the local police / sheriff who the most reliable mechanic is. I then validate the number of tattoos on their team members. They must have a lot of tattoos and they have to be grumpy and their shop must be in a state of disarray. If they meet my criteria, then I have them do the work I can not do myself. This usually works out to be much less costly than working with a car dealership.
This is just my own unorthodox methodology. Your mileage may vary.
This is a good method. My mechanic has giant WE THE PEOPLE and eagle tattoos, the shop's office is a disaster zone, and he does a hell of a great job at a reasonable price. Nice guy though, so he'd fail the grump test.
Heh. I want a self-driving car. But I don't want an Internet-connected self-driving car.
This trend of doing everything over the Internet for no good reason other than business model is growing from just ridiculous and user-hostile into something that's actually dangerous to people's lives.
Last I heard, Alphabet's solution to the problem of self-driving was tying the cars very tightly to Google Maps - reliably using just camera and lidar to make driving decisions apparently still isn't sufficient for self-driving, but having a map showing the logic of traffic flows makes the process much easier (quick googling seems to indicate many self-driving solutions are similar here).
Thus it is going to be hard to avoid a network connected car and it seems likely that network will be the Internet.
> Thus it is going to be hard to avoid a network connected car and it seems likely that network will be the Internet.
Well, I agree that maps as a second source of information can be important for autonomous vehicles. However, I don't understand why "map" would imply "network connected". Offline navigation systems with detailed maps have existed for more than twenty years now and still exist. I fear that the "offline autonomous vehicle" will fail to manifest solely because of business decisions (online being "more convenient" for both the end-user and the company), not because of technical limitations.
Well, sure, keeping maps offline could allow the connection to not be constant, but the maps would require very frequent updating, since being out of date could have dangerous consequences, and the updating would be most easily done over a network.
Edit: which is to say, maps aren't really secondary parts of current systems but more like "co-primary" parts. The cars aren't planning to identify traffic lights where they don't expect them.
Relying on a map at runtime for identifying traffic lights is preposterous. A self-driving car that ignores any temporarily modified or new signaling would be a disaster.
Hey, I can understand not liking it but so far, maps are absolutely "integral" to the operations of self-driving cars. Maybe they'll be less "preposterous" later.
"Almost all of the fully autonomous vehicles currently allowed on public roads are still under the direct supervision of human pilots, and they’re only driving on roads that have been heavily studied and mapped in three dimensions."
or
"Cars will only be able to drive themselves if they have access to high-precision maps. The digital material contained in today’s navigation systems is not enough. To be able to drive itself safely, a car needs to know its position on the road down to the centimetre. "
It's integral to a certain class of self-driving cars. Waymo relies on centimeter-level mapping of the environment and, sure, could not possibly operate without a map. The car establishes its exact position in the world using the maps, and I presume it then looks for things in that environment to track.
Comma.ai, on the other hand, feeds their AI the camera feed and the sensor signals from the car, and it responds, as far as I know, almost entirely based on that stimulus. Of course, Comma.ai's car is presumably less predictable, since it relies on a black box to "think", but you could feed it the general concept of what path to take from A to B, even a set of waypoint GPS coordinates of where to turn, and hypothetically, such a car could navigate to that destination otherwise offline, or with the grade of maps reasonably available offline. It's intended to drive like a human drives: based on the information it perceives in the world around it.
Comma.ai doesn't appear to have cars that even approximately drive themselves (it's got adaptive cruise control, sure, but so does every car company now). It has a camera. Sure they have a proposal to not use maps, but they don't have a result.
For me it would be I only want my vehicle to update when I say so on a network I trust that is probably firewalled. I would prefer this rather than my car updating OTA via someone else's wifi or even the cellular network. This would reduce the risk of a hostile actor taking control of the vehicle when you are most vulnerable. When you're actually driving it.
In a similar way I want an assistant in my phone, but I don't want it to be internet connected either. It's going to know the most personal things about me so I want absolute discretion.
Do you really need AI though? I am using a simple branching tree structure for commands and queries I know I want, and since it's for my use I already know those commands, and they tend to match my conversational style to begin with.
For the purposes of outside knowledge queries you might not be able to come up with in advance, there's good cause to outsource those rare requests out to the Internet: Just do it intelligently. Require a prefix instruction for an outside request.
For instance, I went ahead and implemented Wolfram's API for knowledge queries. They have a great "spoken answer" endpoint, which replies with a string meant to be piped straight to speech output. So I "ask wolfram how tall abraham lincoln was", my program hands everything AFTER "ask wolfram" to the Wolfram API, and Wolfram's API gives me a string back with exactly what I asked.
Now sure, I'm not entirely offline at that point, but everything regarding my personal data, home automation devices, etc. is under my control, and any time I reach out, it's specifically using a command authorizing it to do so.
Of course, caveats before you think my project sounds impressive: A. It's written in Visual Basic. B. Speech recognition isn't working (yet).
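The routing described above (local branching tree first, outside requests only behind an explicit prefix) can be sketched in a few lines. This is a hypothetical Python sketch, not the commenter's Visual Basic project; `handle_utterance`, `LOCAL_COMMANDS`, and the placeholder `APPID` are all illustrative, though the Wolfram|Alpha "Spoken Results" endpoint it targets is real.

```python
# Prefix-routed assistant dispatch: local commands stay local; only an
# explicit "ask wolfram ..." utterance builds a remote API request.
from urllib.parse import quote_plus

APPID = "DEMO-APPID"  # placeholder; a real Wolfram|Alpha app ID is required

LOCAL_COMMANDS = {
    "lights on": "home.lights.on",
    "lights off": "home.lights.off",
}

def handle_utterance(text: str):
    """Route a spoken command. Returns ("local", action),
    ("remote", url), or ("unknown", None)."""
    text = text.strip().lower()
    if text.startswith("ask wolfram "):
        query = text[len("ask wolfram "):]
        # Everything AFTER the prefix goes to the remote API; nothing
        # else ever leaves the machine.
        url = ("https://api.wolframalpha.com/v1/spoken"
               f"?appid={APPID}&i={quote_plus(query)}")
        return ("remote", url)
    if text in LOCAL_COMMANDS:
        return ("local", LOCAL_COMMANDS[text])
    return ("unknown", None)
```

The point of the design is that the privacy boundary is syntactic and auditable: you can grep the dispatch table and know exactly which utterances can ever trigger a network request.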
Old Microsoft Speech API would be a good fit here. I miss it. Back in 2007 I made myself a voice control interface for changing music playback. Trainable, completely off-line. Worked like a charm.
Well, that's the point of the article: "connected" is a spectrum, not a binary option. Iranian centrifuges weren't "connected", and yet the virus destroyed them.
Specifically, look in the article:
>First allow me to address what I think won't work:
I understand your concern but right now 30k Americans die in car accidents. Will (potentially) hackable self-driving cars be any more dangerous than today? If you're that concerned about being involved in a car crash it seems to me that you should never leave the house.
The issue isn't the 30k people dying in one set of crashes. It is the systemic issue caused if all of our transportation and shipping capacity went offline at the same time as all ICE refueling capacity in the nation.
If that happens, car crashes aren't the problem - it's the widespread famine that follows in a week's time.
It's not even the tinfoil-hat issue, it's standard emergency issues. Think of the damage a hurricane does to wide swaths of coastal land. Think of being trapped in Napa last fall during the firestorms. Think of a bad blizzard or a lucky lightning bolt to the right transformer. At least once a year, I think there is a sizable portion of the US population, let alone world population, that needs unconnected emergency ready transport in under 2 days notice. Expecting people to pay $30k+ for a car and not have that baked-in is a no-go.
This is why I think it's crazy that AT&T is trying to take down all their copper POTS lines which have traditionally been seen as an important asset in a regional emergency.
Potentially, yes. Instead of 30k spread across one year, the very possible opportunity for 30k in one mass-hack exists. I will defer to others to debate this. Perhaps financial regulators, safety departments, transportation regulators, insurance companies, etc. This is a very complex topic that would quickly turn into banter here. Everyone will have to decide for themselves the risk factors as it pertains to them.
> the very possible opportunity for 30k in one mass-hack exists
It's fortunate that terrorists are both very incompetent and very low in number. How else do you explain the fact that there's been exactly 1 very serious and successful foreign terror attack on US soil (the highest value target in the world) in the past 50-ish years?
Currently there's the very possible opportunity of a power grid/infrastructure attack that could kill tens of thousands. But nobody should be truly worried about it.
Terrorists just aren't that good at what they do. Why would they somehow be better at hacking cars than they are at anything else?
Most news organizations report on foreign and domestic terror differently. When the average American thinks of a terrorist it isn't a white guy with a U-Haul.
Terrorists? Try 12 year old angst filled kids that get bullied in school and/or at home. How many 10 to 17 year old angst filled kids have access to the internet?
That's 30k potential deaths compared to 30k deaths right now, every year. If terrorist attacks on cars killed 20k/year it would still be much safer than our current situation. We should mandate that every vehicle death be broadcast on the front page of every national media outlet[1]; that might change our perceptions.
I contend that 40 years from now our grandchildren will be astounded that we got into such dangerous vehicles before self-driving cars.
Crash all the self-driving ambulances, crash all the gasoline tanker trucks, crash all the trucks that ferry containers out of ports. You might not kill many people today, but when there's no shipping and no ambulance service people will start dying.
If you wanted to really disrupt society you don't need this level of sophistication or a lot of cash. A few thousand $ and a couple of months preparation time would be more than enough. Keep in mind that destruction and creation are extremely asymmetrical when it comes to the level of effort required.
Facetiousness aside, this is where I hope car crashes would get to. They should be so rare that they do make the local news. They should be so rare we can treat them like airplane crashes and investigate each case to the same degree.
> I understand your concern but right now 30k Americans die in car accidents.
And a few thousand died on September 11th, nearly 20 years ago. The collective fear from that made the world a far worse place than the actual death toll. If there's a terrorist attack that targets internet-connected cars, what do you think the collective reaction might be?
You'll have a much easier time not owning a car that can talk to the internet if you stay in the city. I currently walk, bike, and take public transit. I could not do that if I moved out of the city.
Unpopular? Isn't this the take-away from the article?
I thought that the article taught me nothing new at all, but then realized it did: it taught me how little the issue is understood by the regulators.
I'd say that an attack like that is just a matter of time, if I didn't think that a mass-destruction scenario that leads to a legislative change wouldn't happen sooner due to a bug.
I'm sure you're aware of this, but for others: the OBD-II port generally connects to completely insecure internal networks - so anything that physically connects to it may compromise it.
Yup, and some insurance companies are convincing people to connect a cellular fob to their OBD-II port in exchange for lower insurance rates. In return, they get access to your car's computer, GPS coordinates, basically anything you do, they track. That also puts your CAN bus on their network.
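To make concrete how little stands between such a dongle and the vehicle's data: OBD-II responses are just a few bytes with publicly documented formulas. A sketch, using the standard mode 01 / PID 0x0C (engine RPM) encoding; the frame bytes below are a made-up but format-correct example, not captured traffic.

```python
# Decode engine RPM from an OBD-II mode 01, PID 0x0C response frame.
# Response layout: 0x41 (mode 01 reply), 0x0C (PID), then data bytes A, B.
# Standard formula: RPM = (256*A + B) / 4.
def decode_rpm(data: bytes) -> float:
    assert data[0] == 0x41 and data[1] == 0x0C, "not a PID 0x0C response"
    a, b = data[2], data[3]
    return (256 * a + b) / 4

# e.g. response bytes 41 0C 1A F8 decode to 1726.0 RPM
```

There is no authentication step anywhere in that exchange; whoever is plugged into the port can query (and on many vehicles, write to) the bus.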
Since organizations do not observe an attacker's failure, the market generally does not reward extreme competence in cyber defense...
Blackhats, in stark contrast, are sexy.
Bounty hunters are sexy. Why doesn't the government start paying bounties? Paying bounty hunters to run honeypots would convert many black hats into hunters of black hats. This could well create an ecosystem where the more knowledgeable hackers directly prey on the script kiddies for fun and profit. Taking useful idiots out of the ecosystem strikes me as desirable. Such a program would also be useful for recruitment.
In a way, this is analogous to such bounties in the transition of the wild west into a more normal society.
The analogy falls down a bit, because while rats are born rats, a hacker is first a person. People can witness people becoming examples, and decide not to do that. That said, problems can arise, due to national borders, limited jurisdictions, and differences in access to socioeconomic resources.
I'd be curious to see the manual vs automatic accident rates per miles driven. I have a feeling automatic are a lot higher. I mean, these days, if somebody's driving a manual either they want to or had no other choice. I wonder if that correlates with driving ability at all.
Although I can't access the article, the abstract says that teens with ADHD had higher attention to driving when using a manual transmission vs an automatic.
That's funny you mention ADHD. I was diagnosed as a teenager and I've always felt more focused while driving a manual than an automatic—it's not as easy to zone out. I've also heard of doctors "prescribing" manuals for people with ADHD. That's just second-hand, though.
Cars are different though, they aren't general-purpose computers (even if they are based on them). So you can have a chip that checks the checksum of every file, checks if it's all signed with the correct certificate, and shuts down the car even if one bit is off.
That's quite different from your laptop or a smartphone, which can run pretty much any code.
It would be very, very hard to hack something like that. As in, you would have to steal the signing keys from Tesla without being noticed. I wouldn't be surprised if their signing server is air-gapped.
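The verify-before-boot idea described above can be sketched briefly. One loud assumption: real ECUs verify an asymmetric signature (e.g. RSA or Ed25519) against a burned-in public key; the HMAC below is a stdlib-only stand-in so the sketch stays self-contained, and `SIGNING_KEY`, `sign_image`, and `boot_allowed` are illustrative names.

```python
# Sketch: refuse to boot unless the firmware image verifies against a
# manufacturer signature. HMAC stands in for an asymmetric signature here.
import hashlib
import hmac

SIGNING_KEY = b"factory-secret"  # placeholder for the OEM's key material

def sign_image(image: bytes) -> bytes:
    """What the (ideally air-gapped) signing server would produce."""
    digest = hashlib.sha256(image).digest()
    return hmac.new(SIGNING_KEY, digest, hashlib.sha256).digest()

def boot_allowed(image: bytes, signature: bytes) -> bool:
    """Shut down even if one bit of the image is off."""
    expected = sign_image(image)
    return hmac.compare_digest(expected, signature)
```

The security of the scheme then reduces entirely to key custody, which is exactly why "steal the signing keys without being noticed" becomes the attack.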
Cars are also different in that software updates are harder to pull off. There are millions of cars out there on the road right now that have internet connections, and only minimal security between human-convenience internet like map software, and the embedded systems that do things like steering and brakes. And the average lifespan of a car is what, 15 years? Whatever the vulnerabilities, they're going to be there for a long time.
And even if there's a firmware update mechanism available, the manufacturers' abilities to maintain older software will also degrade, like legacy systems do everywhere.
Car software is designed with safety in mind. While there is some overlap in techniques with security, they are not the same goal. It is possible to be (and I think most cars are) very safe, but not very secure.
> What I learned was that there were not only no regulations there were no plans for them either. While I think my MP took my concerns seriously and did what she could, I came to understand that political will lags public outcry
This interested me more than the idea of car-bombs, are we moving towards a post-law society? Thinking about it, I haven't seen a single piece of legislation regarding self driving cars, cryptocurrencies, or shared computing in my country; and are transport, currency, and communication not the underpinnings of civilisation?
Then I noticed he'd recently published an article on Zero Width characters being used for fingerprinting which got some attention on HN in the last 30 days: https://news.ycombinator.com/item?id=16046329
"car crash" is the only non-debated aspect of the situation. No scare quotes needed.
(The crash wasn't caused by remote control hacking -- that's an absurdly complicated and high-profile way for well-funded professional murderers to murder a single relatively minor person. If he was murdered, it was by some kind of mechanical sabotage or by impairing his mental state with some drug.)
tldr: Cyber attack is 1000x easier than defense. There are enough Tesla cars on the road today that can be hacked into and turned into WMDs. Lots of ideas added in on how to bolster defense.
Car bombs are bad, but they are not "WMDs." Nevermind that it is cheaper to find a suicidal driver, or trick a driver into going on a suicide mission than it is to buy a Tesla and hack it into a car bomb.
Many imagined terrorist threats are threats only in a vacuum. A rifle and a tall building resulted in 50+ deaths and 500+ wounded. Any technologically complex attack has to beat those numbers.
The only notable attack with an actual WMD was the sarin gas attack on the Tokyo subway. It killed about one fourth the number of people as a man with a rifle in Las Vegas. It took a secretive cult organization to implement. That's why nobody uses WMDs for terror attacks.
Trucks and rifles do enough damage without even having to acquire bomb-making skills. Autonomous vehicles that are programmed to avoid hitting pedestrians are likely to make it harder to use vehicles as a weapon, not more deadly.
Fast forward 10 years; the features in the Model S have trickled down to whatever car is 2027's Toyota Camry.
You figure out how to hack it over the internet and load a program that uses its cameras to identify a pedestrian and drive straight towards it at full speed. You turn on this program at some busy time of day. There are times of day in which there are tens of thousands of late-model Camrys on the road in the US, and this is the very first example I could think of when considering the problem raised in the article.
The article is not talking about car bombs. It's talking about attacks like "make every car of model X that's currently on the road accelerate to 120mph and then ram the nearest wall". This would kill a number of people on the order of magnitude of the number of cars of model X in existence (assume each one only kills one person, the driver), but even that can easily be on the order of hundreds of thousands to millions. For example, according to https://en.wikipedia.org/wiki/Toyota_Corolla there were 40 million Corollas sold over the span of 47 years, which means there are certainly model years out there with close to a million vehicles sold.
More than 250 million Windows PCs are sold every year and the installed base is over 1.5 billion. The largest botnets number in low single digit millions, or less than 0.5%.
This is an interesting data point, and I agree that it's hard to explain. Why doesn't a newly-discovered 0-day in Windows lead to pretty much every internet-connected Windows PC being infected?
(That said, there are issues around actually reaching these machines, because a lot of them don't have routable IPs and may not load your attack web page. Similar issues may arise for cars, which sure would be nice as a mitigation.)
Even so, 0.5% of 1 million is about 5000 people. Not quite WMD territory, but also way out of typical "car bomb" territory... Maybe the mitigating factors would mean the actual fraction would be even less. But it would be nice if we didn't have to hope so.
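The fractions in this exchange are worth checking explicitly. A back-of-envelope version, using the figures quoted in the thread (none independently verified):

```python
# Botnet penetration as a fraction of installed base, per the thread.
installed_base = 1_500_000_000     # Windows PCs in use
botnet_size    = 3_000_000         # "low single digit millions"
fraction = botnet_size / installed_base
assert fraction < 0.005            # under 0.5%, as the comment claims

# Apply the same 0.5% compromise rate to one popular car model year.
fleet = 1_000_000
casualties = fleet * 0.005
# ~5000: far beyond "car bomb" territory, short of WMD territory
```

The open question the thread raises is whether the car fraction would resemble the PC fraction at all, given the differences in reachability and patching.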
Well, it's not car bombs we're talking about here.
Assume you have a fleet of self driving cars on the road deployed throughout the nation. A vulnerability is discovered that allows an attacker total control. They deploy that vulnerability, and cause every single car in the fleet to accelerate and aim for pedestrians.
So the next bit we have to ask is:
How many self driving cars would be on the road at the time this attack gets deployed.
Multiply this with:
How likely the driver/occupant will die in this event, how likely is this event going to kill or harm pedestrians, and how much structural damage would be done?
Play with those figures for a bit. Personally, what I'd consider reasonable guesses at these numbers results in some very large bodycounts.
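Here is one worked version of the multiplication the comment invites. Every input is loudly hypothetical, a guess to play with, exactly as the comment suggests; only the arithmetic structure comes from the thread.

```python
# Hypothetical mass-hack body-count estimate. All inputs are guesses.
fleet_size        = 200_000  # self-driving cars deployed nationally
share_in_motion   = 0.3      # fraction actually driving when triggered
occupant_fatality = 0.2      # chance the occupant dies per hijacked car
pedestrian_deaths = 0.5      # expected pedestrian deaths per hijacked car

active = fleet_size * share_in_motion
deaths = active * (occupant_fatality + pedestrian_deaths)
print(f"{active:.0f} cars in motion, ~{deaths:.0f} expected deaths")
```

Even with these modest per-car rates, the product lands in the tens of thousands, which is the point the comment is making: the guesses can be off by a lot and the total is still very large.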
I know this is somewhat off topic, but what's the deal with "Look no further than the Clinton email breach to see how much a single hacker can change the world."?
To my knowledge, there's no evidence that Clinton's email was breached. The DNC, sure, but the Clinton email controversy wasn't based on any actual known breach. It's sort of alarming to see this weird retconning of history.
>because obscurity is a valid defensive measure. It's how passwords and authorization tokens work.
This is extremely dangerous thinking. Passwords and tokens are not in the 'security by obscurity' category because you can observe how the entire system works, review its source code, read its deployment configuration, and do packet captures of the encrypted valid traffic and still not have a way to gain access to the system.
Security by obscurity refers to hiding how the system actually works, and that's a lot harder to keep a secret because you are one compromised device away from revealing all of that and it can't be easily changed.
Do not call passwords and tokens "security by obscurity" or claim security by obscurity practices (e.g. Running on non-standard port numbers) is on the same level as passwords/tokens/keys.
There is a reason the strongest crypto is using public algorithms and only private keys. Obscuring a system just means that your silly bugs don't get found and exposed early.
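The distinction is Kerckhoffs's principle, and it fits in a few lines of code. This is an illustrative sketch (the names `API_TOKEN` and `authorized` are made up): the entire source below can be published and the system stays secure, because the only secret is a value, which can be rotated in seconds, not a design, which cannot.

```python
# A secret-based check that survives full disclosure of its source code.
import hmac
import secrets

API_TOKEN = secrets.token_hex(32)  # the only secret in the system

def authorized(presented: str) -> bool:
    # Constant-time comparison; reading this code gives an attacker
    # no advantage, unlike an obscured mechanism (e.g. a hidden port).
    return hmac.compare_digest(presented, API_TOKEN)
```

Contrast a non-standard port number: once observed a single time, it is known forever and is painful to change. That asymmetry is why the two belong in different categories.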
> They rebutted that individual cars are much easier to hack and after they are first used in a terror attack we will get the political will to fix the problem.
Was that meant to be a joke? Sure, they can establish a new standard that will apply to all the new car models coming out four years later. But who actually expects hundreds of millions of cars that are already on the market to receive a software overhaul with a new architecture?
This is why it has always bothered me that almost no one seems to bring this problem to the forefront - certainly not carmakers. They're all too focused on how awesome self-driving technology will be and how it will save us from drunk drivers, thus disregarding the fact that once we have 100 million to 2 billion self-driving cars on the road, that will be a huge market for cyber criminals, from ransomware and cryptojacking (hello, powerful GPU computer + free solar charging!) to assassinations.
And before anyone says "how much harder it is to hack a car than a PC", consider the fact that most cars today aren't actually connected to the internet. And most of those that are, only have their entertainment systems connected to the internet. Self-driving cars will be able to receive OTA updates that will improve their engine, steering, and brake performance = the OTA software has access to everything.
Combine this level of access with the high level of recklessness in the name of profits that carmakers seem to be showing today, when they advertise features such as "unlocking your doors through an app".
The EFF's former chair and someone who worked on Google's Waymo have some decent ideas about how to protect self-driving cars, if only carmakers would listen:
Airgapping was my first instinct too, but the problem is we're dealing with state-level actors. Airgapping doesn't work against them. They're patient, well-funded organizations. Trying to rely on never having a single type of car (any of which could have a hundred thousand copies on the road) hacked is a fool's errand.
We need ways of disabling autonomous devices and detecting when they get hacked, not trying to win an impossibly hard game.
The disabling system becomes another attack surface, and unless it is pretty independent, is itself disabled by a sophisticated attack. But scared as we are of external attack, allowing the government to shut off all cars is like letting Mubarak shut off the internet in Egypt. That's a bigger danger than foreign enemies in many countries.
The human mind is also hackable... it just takes considerably longer to hack compared to installing software on a computer. I'd argue that the internet and social media have increased the speed at which the mind can be hacked by a malicious actor. I'm not so sure which is worse now, autonomous vehicles or human drivers? I think it's much harder to detect when a human has been "hacked" than a fleet of cars. Cars could have subsystems built in for detecting anomalies, auxiliary computers, and manual override settings. Detecting when a human has been "hacked" requires invasion of privacy, constant surveillance, reliable friends and family, and a whole bunch of other variables. Preventing a human from being hacked might require arduous and expensive changes and experiments in legal systems, incentive mechanisms, censorship, education, etc.
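One concrete form the "subsystems for detecting anomalies" idea could take: most CAN traffic is strictly periodic, so an auxiliary monitor can flag a message ID whose arrival rate deviates from its learned baseline (a common heuristic in published CAN intrusion-detection work). This is a hypothetical sketch; `RateMonitor` and its parameters are invented for illustration.

```python
# Flag CAN frames whose inter-arrival time deviates from the expected
# period -- injected frames typically break the sender's rhythm.
class RateMonitor:
    def __init__(self, expected_interval_ms: float, tolerance: float = 0.5):
        self.expected = expected_interval_ms   # nominal period for this ID
        self.tolerance = tolerance             # allowed relative deviation
        self.last_ts = None

    def observe(self, timestamp_ms: float) -> bool:
        """Record one frame arrival; return True if it looks anomalous."""
        anomalous = False
        if self.last_ts is not None:
            gap = timestamp_ms - self.last_ts
            if abs(gap - self.expected) > self.tolerance * self.expected:
                anomalous = True
        self.last_ts = timestamp_ms
        return anomalous
```

A real system would track per-ID statistics and payload plausibility too, but even this crude timing check catches naive frame injection.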
> One of the problems I've had over the past year and a half is how to communicate this idea without:
> 1. Sounding like a crank.
> 2. Giving ideas to terrorists or hostile foreign governments.
#1 is this article's biggest problem. These problems are real, but I think the author sounds like a crank. (Based on the way this article is written, I suspect that the author actually is a crank--obsessed with security, but fundamentally lacking in the relevant skills to do anything about it.)
If the author is serious about this, here's what you do.
1. Become/join a non-profit organization. You want to be frequently quoted in the press, but not as "well-known software security expert Zach Aysan" but as "Zach Aysan, president of the Organization for Global Security."
2. Build your reputation by finding and earning credit for security problems. Do ethical reporting, but when the issues get fixed, exhibit them in a flashy way.
3. White papers, not blog posts. This "article" starts with a subtitle "with apologies to Elon Musk", followed by a personal dedication to Zach's father. This is not the tone of a white paper from a think tank.
The subtitle of the post should be the thesis statement of the article. "Self-driving cars lack adequate security protections." The first paragraph should be an executive summary of the argument of the post. Each paragraph should support the thesis statement.
Have someone read your articles; update your work based on their feedback. Thank them in the footer.
4. Separate "how to" articles from arguments. This article is long because it is both attempting to persuade the reader that we haven't invested enough effort into securing critical systems and also to give a list of proposals. These should be separate articles, with one article arguing why security is important, and another article giving a list of proposals.
> Separate "how to" articles from arguments. This article is long because it is both attempting to persuade the reader that we haven't invested enough effort into securing critical systems and also to give a list of proposals.
The solutions are also very very wrong, he proposes a state enforced kill switch for every device capable of autonomous operation. Abusing that system seems like a much greater source of havoc than individual compromises of certain brands.
His main point is pretty insightful and well argued, giving life and death powers to very fragile software on a massive scale will end in disaster. The computer industry is woefully unprepared to deliver hardware, software and systems with the provable integrity many of these applications require.
> giving life and death powers to very fragile software on a massive scale will end in disaster.
I dunno, we already use software to run trains and it works OK. Gone are the days of track circuits and physical protuberances on the track to open the brake tube when a train tries to go where it shouldn't. Modern positive train control uses 802.11-based wireless networks communicating between trains, and as scary as hacked-up WiFi routers sound for life-and-death applications, it has a better safety record than the easier-to-understand previous-generation technologies.
Complex software systems _can_ be successfully implemented. I might even be inclined to trust them more than cars driven by people that got an hour of sleep last night, or people whose financial incentives line up in such a way as to make driving 60mph on residential streets a smart business decision.
I don't know, but I had the same idea years ago, and I later read about an author using the idea in a 2006 book (Daemon, by Daniel Suarez). I'd say this shouldn't be an issue, and it also solves number two: if we can think of it, so can hostile governments* and mad men.
That said, the article does (after he gets to the point of "weapons of mass destruction") seem to get a little obsessive and, yeah, crank-y. But that has little to do with the concern itself.
* Not "hostile foreign governments" because every government is foreign (and perhaps even hostile) to someone.
Reminds me of Snowden leaks - most of the stuff was pretty much obvious to anyone with any actual understanding of technology, and yet saying it out loud would get you labeled as a crackpot.
I think you have to be paranoid, really paranoid, to be able to think about this kind of thing. "Normal" humans seem to have psychological stability mechanisms that operate pathologically when confronted with events that are too far outside of their world-view. It's fine to disbelieve Bigfoot. It's dangerous to disbelieve e.g. Stuxnet.
Even the author:
> It wasn't until I'd read through the code [of Stuxnet] myself that I finally believed that it had actually happened.
Is it a failure of imagination? Head-in-the-sand-ism? Just not paranoid enough?
To everyone reading this, if this article sounds like a crank, please re-evaluate your world-view. This is the clearest and most well-written description of the problem I've yet read.
Dude thank you. You're a freakin' hero. You've identified what I call the "Maximum Overdrive" problem and actually gotten people to take it seriously. That is amazing and I applaud your efforts. Thank you.
I'm not sure why this is being downvoted. It seems entirely reasonable to assume that people just can't bring themselves to think about some things logically.
The NSA warrantless surveillance program, something much more egregious than anything revealed by Snowden, was reported in the plain old regular press - the New York Times, back in 2005.
James Bamford has made a long and distinguished career of writing about the NSA with his first book coming out in 1982.
I guess a few years ago anyone who assumed that Facebook could be used to propagate fake news and influence presidential elections would've been called a crank. I wish we had more "cranks" like the author. And of course, when it comes to the chance of a massive coordinated terrorist attack, it's better to have many false positives (no matter how annoying they are) than to overlook a real problem.
I'm not an expert, but I've known for over half my life not to trust Intel/Microsoft et al - but many mainstream "industry experts" felt safe saying the opposite, until the proverbial "laying in the street" came to fruition.
Defense is harder than attack, but the attackers can be attacked back. Jail isn't worth a few million dollars vs. working in Silicon Valley.
I think that is also a big reason why many potential black hats do not become black hats. The only 'sustainable' black hats are ones associated with a criminal organization or nation states.
I don't think the author is wrong about the threat. Unfortunately, his apparent solution--a government mandated standardized "safety module" to intermediate the Internet-connected bits of the vehicle from the non-connected bits--is likely even more risky. Monocultures can be devastated by discovering a single flaw. And there are guaranteed to be many flaws in any non-trivial computer system. Imagine if a straightforward exploit for Spectre that totally compromised a system leaving a hard-to-detect back door via some simple Javascript had existed on January 1, and had been injected into a few popular websites via ad networks... for that matter, how do we know such a thing didn't happen?
First step would be to stop public funding of 0-day purchase and development by the NSA and co. People believe that more 'cyberweapons' would be good against cyberthreats from 'others', but meanwhile everyone uses the same systems. If these systems don't get fixed because the NSA wants to use its purchased 0-days, then everybody has a problem. What is needed is a joint effort toward secure open source solutions.
Also, for computer security, programmers need to overcome convenience practices and start behaving responsibly, but people in management/politics should also not make technical decisions if they can't understand the implications.
During the late 90's and early 00's I often wondered why, with all of the Microsoft hate, someone didn't create a simple virus that did something destructive like just formatting Windows hard drives. There must have been other incentives at play that caused this to never occur. Perhaps by the same logic, if you can infect every Tesla, it is more valuable not to crash them, but instead to scrape the data and sell it back to [insert company] or something.
If there was ever something that would cause a formal programming guild to sprout I would be willing to bet that it would form its roots around security.
There were plenty of destructive viruses. Some (e.g. CIH) would erase the MBR of the disk making it unbootable. Others like ILOVEYOU would overwrite user's data directly. Blaster specifically had a message about Windows' poor security and tried to DDoS Windows Update.
Sometimes viruses are written to be self-limiting. MyDoom, which at one point accounted for about 10% of all email traffic, was programmed to deactivate on a certain date. Also, once viruses reach a certain level of infamy, they get a lot of attention: Blaster was mitigated in just days, so by the time it started its DDoS it was already mostly wiped out.
A lesson from actual virology: a virus that outright kills its host doesn't get very far. A virus that keeps the host limping along, creating copies of the virus in the process, is far more likely to spread... which is, IIRC, precisely what happened in the early 2000s. (And indeed, what would be the incentive for disabling Windows hosts? "W1nd0z3 suxx0rz" would barely count as one, given the lack of general-public alternatives at the time - the afflicted would just pay a tech grunt to repave with the same Windows again. Been there, done the repaving.)
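The "kill date" mechanism described above is almost trivially simple; here is a hypothetical sketch in Python (the function name and the cutoff date are illustrative, not taken from any actual worm's code):

```python
from datetime import date

# Illustrative only: MyDoom-style worms stopped spreading once the
# system clock passed a hard-coded cutoff. The date below is made up.
KILL_DATE = date(2004, 2, 12)

def should_keep_running(today: date) -> bool:
    """Return False once the built-in deactivation date has passed."""
    return today < KILL_DATE
```

The interesting part isn't the code but the strategy: capping your own lifespan limits the attention you attract, which is exactly the self-limiting behavior that lets an infection persist.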
Still, cars are already computers, but so far nothing like that is happening... In theory, being autonomous should itself be a protection, because you could add a control system that can't be updated and can make choices regardless of what the other systems tell it to do.
Am I the only one who thinks it is ironic that Elon Musk gives so much money to research regarding the existential threat of AI to humanity, while also basing a business on putting AI in devices that can so easily kill humans?
Pretty sure he has a guilty conscience. He wants to build an autonomous self driving car, which will have to get pretty close to general AI and he wants to put it in multi-ton robots. He knows this could be dangerous and scary. At least he is giving money to research the problem rather than just forging ahead with the potentially dangerous part.
Personally I think the robot uprising concern is at least 50 years premature. But the "what if someone could take the controls" concern is one we should already have been considering yesterday.
With the continued reporting of these possible attack vectors, it makes me feel that there's a lot to be said about having a carburettor and points running your car, instead of a computer with network connectivity.
Some solutions are strangely misguided. UDP is insecure but TCP is somehow magically secure? Certificates are not to be trusted? Can't agree with all the conclusions but the specific paranoia is well founded.
> This article is geared towards people with a STEM background. For something shorter try this article in The Weekly Standard.
From personal experience I can tell you that people with STEM backgrounds can also be too impatient to read an article this long.
edit:
> (Bio) After years of building startups and advising organizations large and small on data science and cyber security, I'm turning my focus to improving public literacy on technology.
Starting your article like this is a bad way to do it.
You may have a point or two, but you're being downvoted because of the tactless way you've stated them. Also since when is a longer read only for "STEM people"?
https://www.wired.com/2017/02/malware-sends-stolen-data-dron...
Also reminds me of the stories of reverse engineers using lights, motors, speakers, etc. to exfiltrate boot images.
> "If hacking a Jeep is as straightforward as hacking a server, and servers are routinely breached, then where are all the hacked cars? It's a bit like the Fermi Paradox."
One barrier to entry is the actual cost of acquiring a vehicle. You can buy a lot of iPhones to hack on for the price of a Tesla.
> "We think of Teslas as cars just like we think of an iPhone as a phone, but a more accurate account of reality is that they're both just computers."
It's actually kind of worse than that: cars are rolling datacenters with multiple computers and multiple networks (CAN bus[es], Ethernet, proprietary networks).
//Disclaimer: I work for GM, but not on any of this.
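To make the "rolling datacenter" point concrete: classic OBD-II diagnostics are just 8-byte CAN frames sent to a well-known broadcast ID (0x7DF). A rough sketch of building such a frame's payload in Python (the helper name is mine; actually putting the frame on a bus needs hardware and something like the third-party python-can package):

```python
def obd2_request(service: int, pid: int) -> bytes:
    """Build the 8-byte payload for a classic OBD-II query.

    Byte 0 counts the meaningful bytes that follow (2), then come the
    service and PID; the remaining bytes are padding.
    """
    return bytes([0x02, service & 0xFF, pid & 0xFF, 0, 0, 0, 0, 0])

# Service 0x01 ("show current data"), PID 0x0C = engine RPM
print(obd2_request(0x01, 0x0C).hex())  # 02010c0000000000
```

Of course, this only reaches the diagnostic bus; the interesting ECUs often sit on other, gatewayed networks, which is part of why poking at a real car is so much harder than poking at a phone.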