I'm quite skeptical of Tesla's reliability claims. But for exactly that reason, I welcome a company like Lemonade betting actual money on those claims. Either way, this is bound to generate some visibility into the actual accident rates.
I was curious what the break-even is where the insurance discount covers the $99/mo FSD subscription. I got a Lemonade quote around $240/mo (12k mi/yr lease on a Model 3), so 50% off would save ~$120/mo - i.e. it would cover FSD and still leave ~$21/mo net. Or, "free FSD if you use it".
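If it helps, here's that break-even as a tiny sketch - the $240/mo quote, 50% discount, and $99/mo subscription are the figures above; everything else is just arithmetic:

```python
# Does a 50% insurance discount cover the $99/mo FSD subscription?
base_premium = 240.0      # $/mo Lemonade quote without the discount (my quote above)
discount = 0.50           # advertised discount for mostly-FSD driving
fsd_subscription = 99.0   # $/mo FSD subscription

savings = base_premium * discount        # ~$120/mo off the premium
net = savings - fsd_subscription         # what's left after paying for FSD
breakeven = fsd_subscription / discount  # the premium at which FSD becomes "free"

print(f"savings ${savings:.0f}/mo, net ${net:.0f}/mo, break-even premium ${breakeven:.0f}/mo")
# -> savings $120/mo, net $21/mo, break-even premium $198/mo
```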
I believe, at the end of the day, insurance companies will be the ones driving FSD adoption. The media will sensationalize the outlier issues of FSD software, but insurance companies will set the incentives for humans to stop driving.
If it's autonomous or self-driving, then why is the person in the car paying for the insurance? Surely if it's Tesla making the decisions, they need the insurance?
Generally speaking, liability for a thing falls on the owner/operator. That person can sue the manufacturer to recover the damages if they want. At some point, I expect it to become somewhat routine for insurers to pay out, then sue the manufacturer to recover.
The product you buy is called "FSD Supervised". It clearly states you're liable and must supervise the system.
I don't think there's a law that would allow Tesla (or anyone else) to sell a passenger car with an unsupervised system.
If you take Waymo or Tesla Robotaxi in Austin, you are not liable for accidents, Google or Tesla is.
That's because they operate under limited state laws that allow them to provide such a service, but the law doesn't allow selling such cars to people.
That's changing. Quite likely this year we will have a federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not the person present in the car.
If the car that did a hit-and-run was operated autonomously, the insurance of the maker of that car should pay. Otherwise it's a human, and the situation falls into the bucket of what we already have today.
You can sell autonomous vehicles to consumers all day long. There's no US federal law prohibiting that, as long as they're compliant with FMVSS as all consumer vehicles are required to be.
My current FSD usage is 90% over ~2000 miles (since v14.x). Besides driving everywhere, everyday with FSD, I have driven 4 hours garage to hotel valet without intervention. It is absolutely "Full Self-Driving" and "Autonomous".
FSD isn't perfect, but it is everyday amazing and useful.
Liability is a separate matter from autonomy. I assume you'd consider yourself autonomous, yet it's your employer's insurance that will be liable if you have an accident while driving a company vehicle.
If the company required a representative to sit in the car with you and participate in the driving (e.g. by monitoring and taking over before an accident), then there's a case to be made that you're not fully autonomous.
> it's your employer's insurance that will be liable if you have an accident while driving a company vehicle
I think you're mixing some concepts.
There's car insurance paid by the owner of the car, for the car. There's workplace accident insurance, paid by the employer for the employee. The liability isn't assigned by default, but by determining who's responsible.
The driver is always legally responsible for accidents caused by their negligence. If you play with your phone behind the wheel and kill someone, even while working and driving a company car, the company's insurance might pay for the damage but you go to prison. The company will recover the money from you. Their work accident insurance will pay nothing.
The test you can run in your head: will you get arrested if you fall asleep at the wheel and crash? If yes, then it's not autonomous or self driving. It just has driver assistance. It's not that the car can't drive itself at all, just that it doesn't meet the bar for the entire legal concept of "driver/driving".
"Almost" self driving is like jumping over a canyon and almost making it to the other side. Good effort, bad outcome.
Disagree. I appreciate their viewpoint tethering corporate claims to reality by illustrating that Tesla is obfuscating the classification of its machines as autonomous when they actually aren't. Their comments in other thread chains proved fruitful when not derailed by agitators looking to dismiss critique by citing website rules - like the post adding detail on how Tesla muddles legal claims by cooking up cherry-picked evidence against the driver despite being the insurer.
> Quite likely this year we will have federal law that will allow selling cars with fully unsupervised self-driving, in which case the insurance/liability will obviously land on the maker of the system, not person present in the car.
This is news to me. This context seems important to understanding Tesla's decision to stop selling FSD. If they're on the hook for insurance, then they will need to dynamically adjust what they charge to reflect insurance costs.
Without LIDAR and/or additional sensors, Tesla will never be able to provide "real" FSD, no matter how wonderful their software controlling the car is.
Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.
Waymo and others are providing a taxi service where the driver is not a human. You don't pay insurance when you ride Uber or Bolt or any other regular taxi service.
> Also, self driving is a feature of a vehicle someone owns, I don't understand how that should exempt anyone from insuring their property.
Well practically speaking, there’s nothing stopping anyone from voluntarily assuming liability for arbitrary things. If Tesla assumes the liability for my car, then even if I still require my “own” insurance for legal purposes, the marginal cost of covering the remaining risk is going to be close to zero.
It isn't fully autonomous yet. For any future system sold as level 5 (or level 4?), I agree with your contention -- the manufacturer of the level 5 autonomous system is the one who bears primary liability and therefore should insure. "FSD" isn't even level 3.
(Though, there is still an element of owner/operator maintenance for level 4/5 vehicles -- e.g., if the owner fails to replace tires below 4/32", continues to operate the vehicle, and it causes an injury, that is partially the owner/operator's fault.)
Wouldn't that requirement completely kill any chance of a L5 system being profitable? If company X is making tons of self-driving cars, and now has to pay insurance for every single one, that's a mountain of cash. They'd go broke immediately.
I realize it would suck to be blamed for something the car did when you weren't driving it, but I'm not sure how else it could be financially feasible.
No? Insurance costs would be passed through to consumers in the form of up-front purchase price. And probably the cost to insure L5 systems for liability will be very low. If it isn't low, the autonomous system isn't very safe.
The way it works in states like California currently is that the permit holder has to post an insurance bond that accidents and judgements are taken out against. It's a fixed overhead.
Because the operator is liable? Tesla as a company isn't driving the car, it's a ML model running on something like HW4 on bare metal in the car itself. Would that make the silicon die legally liable?
Yeah, Tesla gets to blame the “driver”, and has a history of releasing partial and carefully curated subsets of data from crashes to try to shift as much blame onto the driver as possible.
And the system is designed to set up drivers for failure.
An HCI challenge with mostly autonomous systems is that operators lose their awareness of the system, and when things go wrong you can easily get worse outcomes than if the system was fully manual with an engaged operator.
This is a well known challenge in the nuclear energy sector and airline industry (Air France 447) - how do you keep operators fully engaged even though they almost never need to intervene, because otherwise they’re likely to be missing critical context and make wrong decisions. These days you could probably argue the same is true of software engineers reviewing LLM code that’s often - but not always - correct.
It's neither self-driving, nor autonomous, and eventually not even a car! (as Tesla slowly exits the car business). It will be 'insurance' on Speculation as a Service, as Tesla skyrockets to a $20T market cap. Tesla will successfully transition from a small-revenue to a pre-revenue company: https://www.youtube.com/watch?v=SYJdKW-UnFQ
The last few years of Tesla 'growth' show how this transition is unfolding. S and X production is shut down; just a few more models to shut down.
The point is if the liability is always exclusively with the human driver then any system in that car is at best a "driver assist". Claims that "it drives itself" or "it's autonomous" are just varying degrees of lying. I call it a partial lie rather than a partial truth because the result more often than not is that the customer is tricked into thinking the system is more capable than it is, and because that outcome is more dangerous than the opposite.
Any car has varying degrees of autonomy, even the ones with no assists (it will safely self-drive you all the way to the accident site, as they say). But the car is either driven by the human with the system's help, or is driven by the system with or without the human's help.
A car can't have 2 drivers. The only real one is the one the law holds responsible.
The coder and sensor manufacturers need the insurance for wrongful death lawsuits -
and Musk does too, for removing lidar, so the car keeps jumping across high-speed traffic at shadows because the visual cameras can't see true depth.
99% of the people on this website are coders and know how even one small typo can cause random fails, yet you trust them to make you an alpha/beta tester at high speed?
Not an expert here, but I recall reading that certain European countries (Spain???) allow liability to be put on the autonomous driving system, not the person in the car. Does anyone know more about this?
That is the case everywhere. It is common when buying a product for the contract to include who has liability for various things. The price often changes by a lot depending on who has liability.
Cars are traditionally sold as the customer has liability. Nothing stops a car maker (or even an individual dealer) from selling cars today while taking on all the insurance liability, in any country I know of - they don't, for what I hope are obvious reasons (bad drivers would be sure to buy those cars since it's a better deal for them, and in turn a worse deal for good drivers), but they could.
Self-driving is currently sold with the customer holding liability because that is how it has always been done. I doubt it will change, but only because I doubt there will ever be enough advantage to make it worth it for someone else to take on the liability - but I could be wrong.
Tesla has its own insurance product, which is already very competitive compared to other providers. Not sure if Lemonade can beat them. Tesla's insurance product already has a similar objective in place, where it rewards self-driving over manual driving.
Tesla is cooperating with Lemonade on this by providing them the necessary user driving data.
If Tesla didn't want Lemonade to provide this, they could block them.
Strategically, Tesla doesn't want to be an insurer. They started the insurance product years ago, before Lemonade also offered this, to make FSD more attractive to buyers.
But the expansion stalled, maybe because of state bureaucracy or maybe because Tesla shifted priority to other things.
In conclusion: Tesla is happy that Lemonade offers this. It makes Tesla cars more attractive to buyers without Tesla doing the work of starting an insurance company in every state.
> But the expansion stalled, maybe because of state bureaucracy or maybe because Tesla shifted priority to other things.
If the math was mathing, it would be malpractice not to expand it. I'm betting that their scheme simply wasn't workable, given the extremely high costs of claims (Tesla repairs aren't cheap) relative to the low rates that they were collecting on premiums. The cheap premiums are probably a form of market dumping to get people to buy their FSD product, the sales of which boosts their share price.
It was not workable. They have a loss ratio of >100% [1], as in they paid out more in claims than received in premiums before even accounting for literally any other costs. Industry average is ~60-80% to stay profitable when including other costs.
They released the Tesla Insurance product because their cars were excessively expensive to insure, increasing ownership costs, which was impacting sales. By releasing the unprofitable Tesla Insurance product, they could subsidize ownership costs, making the cars more attractive to buy right now, which pumped revenues immediately in return for an "accidental" write-down in the future.
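For anyone unfamiliar with the term, loss ratio is just claims paid divided by premiums earned. A minimal sketch with made-up dollar amounts (only the >100% and ~60-80% figures come from the comment above):

```python
# Loss ratio = claims paid out / premiums earned.
# Illustrative numbers only; the point is what ">100%" means for an insurer.
def loss_ratio(claims_paid: float, premiums_earned: float) -> float:
    return claims_paid / premiums_earned

healthy = loss_ratio(claims_paid=70, premiums_earned=100)      # ~0.70, leaves room for expenses + profit
underpriced = loss_ratio(claims_paid=110, premiums_earned=100)  # >1.0: losing money before any overhead

print(f"typical carrier: {healthy:.0%}, underpriced carrier: {underpriced:.0%}")
# A ratio above ~0.80 usually wipes out profit once operating costs are added;
# above 1.00 the premiums don't even cover the claims themselves.
```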
That is not true. Since Tesla was losing money on their insurance to boost sales, the customers were not paying for it - they were receiving a service below cost.
The people paying were actually the retirement funds that fronted Tesla's cash reserves when they purchased Tesla stock, and the US government, which paid in the form of more tax credits on sales that would not have otherwise materialized without this financial fraud. But do not worry: retirement funds and the US government may have lost, but it boosted Tesla sales and stock valuation so that Elon Musk could reach his KPIs and get his multiple tens of billions of dollars of payout.
The math should've mathed. Better data === lower losses right? They probably weren't able to get it to work quite right on the tech side and were eating fat losses during an already bad time in the market.
It'll come back.
Lemonade or Tesla, if you find this, let's pilot. I'm a founder in Sunnyvale, insurtech vertical at pnp.
You'd be very surprised. Distribution works wonders. You could have a large carrier taking over Tesla's own vehicles in markets they care about. The difference then would be loss ratios on the data collection, like does LIDAR data really beat Progressive Snapshot?
The two are measuring data for different sources of losses for carriers.
I own a Model Y with hardware version 4. FSD prevented me from getting into an accident with a drunk driver. It reacted much faster to the situation than I could have. Ever since, I'm sold that in a lot of circumstances, machines can drive better than humans.
Hacker News likes to keep conversations focused on the topic at hand. I doubt anyone here thinks politics are irrelevant. We just understand basic courtesy. If your goal is indeed to influence change, you do a massive disservice to the cause by acting immature and injecting your politics into other conversations.
Well, as everyone points out: Musk uses Tesla’s stock to fund things and Tesla’s stock is decoupled from fundamentals like revenue so that means that buying his car is decoupled from funding things. Practically a syllogism.
> mass human displacement campaign (a.k.a. Genocide)
genocide /jĕn′ə-sīd″/
noun
The systematic and widespread extermination or attempted extermination of a national, racial, religious, or ethnic group. The systematic killing of a racial or cultural group.
If you have some point to make about deporting 5 years olds or whatever, don't you think it would be more persuasive without provoking a tangential discussion about your idiosyncratic definition of genocide regardless of whatever organizations agree with you?
> Multiple humanitarian organizations define mass displacement as genocide and/or ethnic cleansing.
You're mixing two things here to your advantage. Genocide is (or can be) ethnic cleansing but ethnic cleansing is not genocide. So your "and/or" does some work for you there and makes you correct. However, you said genocide not "genocide and/or ethnic cleansing". You've moved the goalposts.
It'd be odd to redefine any word that ends in '-cide' from actual killing.
> The holocaust literally started with mass deportations/detentions.
Which was ethnic cleansing.
> Then the nazis figured out that it was easier to kill detainees.
AEB has been around for ages. Even my 2010 Mazda had it. It's nowhere near Tesla's capabilities, though. Not sure what you're trying to achieve with such dunks.
Hmmm. The source for the "FSD is safer" claim might not be wholly independent: "Tesla’s data shows that Full Self-Driving miles are twice as safe as manual driving"
I would be surprised if that was what they were actually looking at. They are an established insurance company with their own data and the actuaries to analyze it. I can't imagine them doing this without at least validating a substantial drop in claims relating to FSD capable cars.
Now that they are offering this program, they should start getting much better data by being able to correlate claims with actual FSD usage. They might be viewing this program partially as a data acquisition project to help them insure autonomous vehicles more broadly in the future.
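A rough sketch of what that correlation could look like on the data side - the record layout, field names, and numbers here are hypothetical, not anything Lemonade or Tesla has published:

```python
# Hypothetical per-policy records: miles driven on FSD vs manually, plus claims filed.
# Field names and numbers are invented for illustration only.
policies = [
    {"fsd_miles": 9000, "manual_miles": 1000,  "claims": 0},
    {"fsd_miles": 7000, "manual_miles": 3000,  "claims": 0},
    {"fsd_miles": 500,  "manual_miles": 9500,  "claims": 1},
    {"fsd_miles": 200,  "manual_miles": 11800, "claims": 2},
]

def claim_rate(records):
    """Claims per million miles across a group of policies."""
    miles = sum(r["fsd_miles"] + r["manual_miles"] for r in records)
    claims = sum(r["claims"] for r in records)
    return claims / miles * 1_000_000

mostly_fsd = [r for r in policies if r["fsd_miles"] > r["manual_miles"]]
mostly_manual = [r for r in policies if r["fsd_miles"] <= r["manual_miles"]]

print(f"mostly-FSD policies:    {claim_rate(mostly_fsd):.0f} claims per million miles")
print(f"mostly-manual policies: {claim_rate(mostly_manual):.0f} claims per million miles")
# Whether the mostly-FSD group really files fewer claims is exactly the
# question this program would start to answer with real data.
```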
They are a grossly unprofitable insurance company. Your actuaries can undervalue risk to the point you are losing money on every claim and still achieve that.
In fact, Tesla Insurance - the people who already have direct access to the data - already loses money on every claim [1].
> "Tesla’s data shows that Full Self-Driving miles are twice as safe as manual driving"
Teslas only do FSD on motorways where you tend to have far fewer accidents per mile.
Also, they switch to manual driving if they can't cope, and because the driver isn't paying attention this usually results in a crash. But hey, it's in manual driving, not FSD, so they get to claim FSD is safer.
FSD is not and never will be safer than a human driver.
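To make the road-mix point concrete, here's a toy calculation with invented numbers: even if FSD were exactly as risky as a human on each road type, the blended per-mile rate flatters it when its miles skew toward motorways:

```python
# Invented numbers illustrating the road-mix confound described above.
crash_rate = {"motorway": 1.0, "city": 5.0}    # crashes per million miles (illustrative)

fsd_mix    = {"motorway": 0.9, "city": 0.1}    # share of FSD miles by road type
manual_mix = {"motorway": 0.3, "city": 0.7}    # share of manual miles by road type

def blended_rate(mix):
    """Per-mile crash rate weighted by the share of miles on each road type."""
    return sum(share * crash_rate[road] for road, share in mix.items())

print(f"FSD blended rate:    {blended_rate(fsd_mix):.1f} per million miles")
print(f"manual blended rate: {blended_rate(manual_mix):.1f} per million miles")
# -> 1.4 vs 3.8: "more than twice as safe" purely from where it drives.
# Crashes booked to "manual" after a last-second disengagement would skew it further.
```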
Successful enough for me and many other people I know. End to end from my house to grocery store, kids schools, friends houses, etc. Multiple times per day for the past year.
It’s not perfect but I’d consider it a smashing success for something I rely on for safely transporting my family every day.
Lemonade purchased Metromile and significantly increased prices - 2.5x, if I recall correctly. This forced me to move to Geico. Now, since prices have increased and the new self-driving car insurance is giving a discount, are you effectively paying the same old rate?
Just curious about this - this was Lemonade's insurance integrated with the Tesla, right? How's Geico for you? Probably just fine, right? Any differences?
The whole point of self-driving cars (to me) is I don't have to own or insure it, someone else deals with that and I just make it show up with my phone when I need it.
Imagine this for a whole neighborhood! Maybe it'd be more efficient for the transport to come at regular intervals though. And while we're at it, let's pick up other people along the way, you'll need a bigger vehicle though, perhaps bus-sized...
Half-jokes aside, if you don't own it, you'll end up paying more to the robotaxi company than you would have paid to own the car. This is all but guaranteed based on all SaaS services so far.
> if you don't own it, you'll end up paying more to the robotaxi company than you would have paid to own the car
Maybe for you; I already don't own one and have not found that to be true. I pretty much order an Uber whenever I don't feel like riding my bike or the bus, and that costs <$300 most months. Less than the average used car payment in the US before you even consider insurance, fuel, storage, maintenance, etc.
I also rent a car now and then for weekend trips, that also is a few hundred bucks at most.
I would be surprised if robotaxis were more expensive long term.
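For what it's worth, the rough monthly math I'm doing looks like this - the ~$300/mo rideshare figure is mine from above, the ownership numbers are just illustrative averages, not a careful TCO model:

```python
# Rough monthly cost comparison; ownership figures are assumed averages.
rideshare_per_month = 300          # uber/bus top-ups on a heavy month
weekend_rentals_per_month = 100    # occasional rental car, amortized

ownership = {
    "car payment (used)": 550,
    "insurance": 150,
    "fuel": 120,
    "maintenance + tires": 80,
    "parking/registration": 50,
}

not_owning = rideshare_per_month + weekend_rentals_per_month
owning = sum(ownership.values())
print(f"not owning: ~${not_owning}/mo, owning: ~${owning}/mo")
# The comparison flips for people who drive a lot or must commute at peak hours,
# which is the caveat raised elsewhere in the thread.
```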
Also, a real nightmare for the municipal trade unions. (Do you know why every NYC subway train needs to have not one but two operators, even though it could run automatically just fine?)
First, these cities should be fixed by removing the traffic magnets. We're far past the point where the old, obsolete ideology of trying to supply as much traffic capacity as possible makes sense.
But anyway, your statement is actually not true anywhere in the US except NYC. Even in Chicago, removing ALL the local transit and switching to 6-seater minivans will eliminate all the traffic issues.
This only works in neighborhoods that are veritable city blocks, with buildings several stories tall standing close by. Not something like northern Houston, TX; it barely works for places like Palo Alto, CA. You cannot run buses on every lane, at a reasonable distance from every house.
The point of a car is that it takes you door to door. There's no expectation to walk three blocks from a stop; many US places are not intended for walking anyway. Consider heavy bags from grocery shopping, or similar.
Public transit works in proper cities, those that became cities before the advent of the car, and were not kept in the shape of large suburban sprawls by zoning. Most US cities only qualify in their downtowns.
Elsewhere, rented / hailed self-driving cars would be best. First of all, fewer of them would be needed.
Focusing only on price, renting a beefy shared "cloud" computer is cheaper than buying one and replacing it every 5 years. It's not always an issue for idle hardware, though.
Cars are mostly idle and could be cheaper if shared. But why make them significantly cheaper when you can match the price and extract more profits?
Cars and personal computers have advantages over shared resources that often make them worth the cost. If you want your transport/compute at busy times you may run into limitations. (Ever gotten on the train and had to stand because there are no seats? Ever had to wait for your compute job to start because the machines are all busy? Both of these have happened to me.)
You made too many false assumptions if you came up with those routes. Experts have run real numbers, including looking at what happens in the real world: https://humantransit.org/category/microtransit - (as I write this, you need to scroll to the second article to find the useful rebuttal of your idea)
Yeah, yeah: "Major US Public Transit Union Questions “Microtransit”" Read it. Go on. It's pure bullshit.
The _only_ issue with the old "microtransit" is the _driver_. Each van ends up needing on average MORE drivers than it moves passengers. It does solve the problem of throughput, though.
But once the driver is removed, this problem flips on its head. Each regular bus needs around 4 drivers for decent coverage. It's OK-ish only when the average bus load is at least 15-20 people. It's still much more expensive and polluting than cars, but not crazily so.
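A back-of-envelope version of why the driver dominates this - the ~4 drivers per vehicle for full-day coverage is from above; the wage and load numbers are assumptions:

```python
# Back-of-envelope for the "the driver is the problem" argument.
drivers_per_vehicle = 4          # shifts needed to cover a full service day (from above)
driver_cost_per_year = 80_000    # fully loaded cost per driver (assumed)

def driver_cost_per_rider(avg_riders_on_board: float) -> float:
    """Annual driver cost spread across the average on-board load."""
    return drivers_per_vehicle * driver_cost_per_year / avg_riders_on_board

print(f"bus with 20 riders on average: ${driver_cost_per_rider(20):,.0f} per rider-seat/yr")
print(f"6-seat van with 3 riders:      ${driver_cost_per_rider(3):,.0f} per rider-seat/yr")
# -> $16,000 vs ~$107,000: with human drivers, small vehicles are hopeless;
#    remove the driver and that entire cost column goes to (near) zero for both.
```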
That's how some people feel about airplanes. Presumably you're not one of them. For some people, the inconvenience of being responsible for a car would outweigh the benefit of setting up their stuff inside of one.
For the vast majority of people who own a car, continuing to own the car will remain the better deal. Most people need their car during "rush hour", so there isn't any savings from sharing, and worse, some people have "high standards" and will demand the rental be a clean car nicer than you would accept - thus raising the costs (particularly if you drive used cars). Any remaining argument for a shared car dies when you realize that you can leave your things in the car, and you never have to wait.
For the rest - many of them live in a place where not enough others will follow the same system and so they will be forced to own a car just like today. If you live in a not dense area but still manage to walk/bike almost everywhere (as I do), renting a car is on paper cheaper the few times when you need a car - but in practice you don't know about that need several weeks in advance and so they don't have one they can rent to you. Even if you know you will need the car weeks in advance, sometimes they don't have one when you arrive.
If you live in a very dense area such that you regularly use transit (but sometimes walk or bike), but need a car for something a few times per year, then not owning a car makes sense. In this case the density means shared cars can be a viable business model despite not being used very much.
In short, what you say sounds insightful, but the reality of how cars are used means it won't happen for most car owners.
Or, if they are Hertz, they might have one but refuse to give it to you. This happened to my wife. In spite of payment already being made to Hertz corporate online, the local agent wouldn't give up a car for a one-way rental. Hertz corporate was less than useless, telling us their system said there was a car available, and suggesting we pay them hundreds of dollars again and go pick it up. When I asked the woman from corporate whether she could actually guarantee we would be given a car, she said she couldn't. When I suggested she call the local agent, she said she had no way to call the local office. Unbelievable.
Since it was last minute, there were... as you said, no cars available at any of the other rental companies. So we had to drive 8 hours to pick her up. Then 8 hours back, which was the drive she was going to make in the rental car in the first place.
This is the nightmare scenario for me. A forever subscription for the usage of a car.
A subscription for self-driving will almost certainly be a given with so many bad actors in tech nowadays, but never even being allowed to own the car is even worse.
I think this is purely psychological. The notion of paying for usage of some resource that you don't own is really rather mundane when you get down to it.
Subscription for changes to maps and the law makes sense. I'd also pay for the latest safety improvements (but they better be real improvements). However they are likely to add a number of unrelated things and I object to those.
If OSM is up to date - in many places it is very outdated (in others it is very good).
Law - when a government changes the driving laws. Government can be federal (I have driven to both Canada and Mexico. Getting to Argentina is possible, though I don't think it has ever been safe. Likewise it is possible to drive over the North Pole to Europe) or state (or whatever the country calls its equivalent). When a city changes the law they put up signs, but if a state passes a law I'm expected to know it even if I have never driven in that state before. Right-turn-on-red laws are the only ones I can think of where states differ - but there are likely others.
Laws also cover new traffic control systems that may not have been in the original program. If the self driving system can't figure out the next one (think roundabout) then it needs to be updated.
Yeah I'm actually very curious about this, it's the first I've heard.
I'd like to know what data this is based on, and if Tesla is providing any kind of subsidy or guarantee.
There's also a big difference between the value of car damages and, well, death. E.g. what if FSD is much less likely to get into otherwise common fender benders that don't harm you, but more likely to occasionally accidentally drive you straight into a divider, killing you?
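A toy expected-loss comparison (all numbers invented) shows why frequency alone isn't enough - rare severe outcomes can dominate the math:

```python
# Frequency vs. severity: two risk profiles can have very different expected
# costs even if one "crashes less often". All numbers are invented.
def expected_annual_loss(events_per_year: float, avg_cost_per_event: float) -> float:
    return events_per_year * avg_cost_per_event

# Profile A: frequent fender benders, rarely serious.
human_like = expected_annual_loss(0.05, 8_000)            # $400/yr expected
# Profile B: far fewer crashes, but a sliver of them are catastrophic.
fsd_like = (expected_annual_loss(0.01, 8_000)             # minor crashes
            + expected_annual_loss(0.0002, 2_000_000))    # rare fatal/serious claim
print(f"profile A: ${human_like:.0f}/yr, profile B: ${fsd_like:.0f}/yr")
# -> $400 vs $480: the rare severe outcome dominates, which is why
#    "half the accidents" alone doesn't justify half the premium.
```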
We don't know if 50% makes it actually cheaper than other car insurance companies, or the coverage is comparable, or if they have comparable service. Or if they sell your location information to marketers.
No it does not. A 50% discount and the insurance still having industry average profit, or at least being profitable at all, would tell you that. Selling at a loss does not indicate your costs are actually lower. You need to wait until we learn if it is actually at a loss.
Ah yes, posting well documented video evidence of reality is bias. How silly of me. The only unbiased take is to ignore my lying eyes and make logically unsound arguments in favor of endangering the public. That is what unbiased people do.
I also like how you completely avoided addressing my argument in favor of an attempted ad hominem.
I am utterly baffled by what you are trying to argue with this post except for distracting from the weakness of your original argument with more attempted(?) ad hominems. Must be my bias showing.
Lowering by $200? Full coverage on two recent model cars here and that's nearly three quarters of my monthly insurance bill. Insane what people are paying for insurance these days.
It would be interesting to see if Lemonade requires a Driver Monitoring System (DMS) to see if the driver/operator is actually paying attention (or, like sleeping / watching Netflix / whatever) while at the driver's seat.
Anybody know??
Tesla FSD is still a supervised system (= ADAS), afaik.
This (instant torque) is exciting for about the first week of electric car ownership; it gets old very fast. I have far more fun driving my much slower gas-engined cars.
There's much more to enjoying cars than speed in a straight line, though I don't disagree at all that most EVs are exceptional at that.
Booting the go pedal at every stop sign or light just feels like being a bit of a childish jerk after a short while on public roads once the novelty wears off.
> automatically tracking FSD miles versus manual miles through direct Tesla integration.
No thanks. I unplugged the cellular modem in my car precisely because I can't stand the idea that the manufacturer/dealer/insurance company or other unauthorized third parties could have access to my location and driving habits.
I also generally avoid dealers like the plague and only trust the kind of shops where the guy who answers the phone is the guy doing the work.
I have Lemonade for my home insurance. It's been reliable for several years and the customer service is great. I don't have a self-driving car but I wouldn't hesitate to sign up. Their rates are very affordable.
I've had their Home Insurance since they started up and grabbed their car insurance a couple years ago. Competitive price, excellent customer service, no notes.
One's first thought is that they ought to be running away from underwriting this as fast as they can go. But then one realizes that it is all profit -- they need never pay a claim, because in accidents involving autonomous vehicles, it will never be possible to establish fault; and then one sees that the primary purpose of most automations is to obscure responsibility.
I think there's a narrow unregulated space where this could be true. I'm exercising my creativity trying to imagine it - where automations are built with the outcome of obscured responsibility in mind. And I could understand profit as a possible driving factor for that outcome.
As an extreme end-of-the-spectrum example, there's been worry and debate for decades over automating military capabilities to the point where it becomes "push button to win war". There used to be, and hopefully still is, a lot of restraint toward heading in that direction - in recognition of the need for ethics validation in automated judgements. The topic comes up now and then around Teslas, and the impossible decisions that FSD will have to make.
So at a certain point, and it may be right around the point of serious physical harm, the design decision to have or not have human-in-the-middle accountability seems to run into ethical constraints. In reality it's the ruthless bottom line focused corps - that don't seem to be the norm, but may have an outsized impact - that actually push up against ethical constraints. But even then, I would be wary as an executive documenting a decision to disregard potential harms at one of them shops. That line is being tested, but it's still there.
In my actual experience with automations, they've always been derived from laziness / reducing effort for everyone, or "because we can", and sometimes a need to reduce human error.
You're not making any sense. In terms of civil liability, fault is attached to the vehicle regardless of what autonomous systems might have been in use at the time of a collision.