
Here are my thoughts, which are not fully formed because AI is still so new. But taking this line of thought to its reductio ad absurdum, it becomes apparent that the elites have a critical dependency on us plebs:

Almost all of their wealth is ultimately derived from people.

The rich get richer by taking a massive cut of the economy, and the economy is basically people providing and paying for services and goods. If all the employees are replaced and can earn no money, there is no economy. Now the elite have two major problems:

a) What do they take a cut of to keep getting richer?

b) How long will they be safe when the resentment eventually boils over? (There's a reason the doomsday bunker industry is booming.)

My hunch is that, after a period of turmoil, we'll end up in the usual equilibrium where the rest of the world is kept growing economically just enough to (a) keep us stable enough not to revolt and (b) keep them getting richer. I don't know what that looks like; it could be UBI or something. But we'll figure it out because our incentives are aligned: we all want to stay alive and get richer (for varying definitions of "richer", of course.)

However, I suspect a lot will change quickly, because a ton of the things that made up the old world order are going to be upended. Like, you used to need millions in funding to hire a team to launch any major software product; this ultimately kept power in the hands of those with capital. Now a single person with an AI agent and a cloud platform can do it themselves for pocket change. This pattern will repeat across industries.

The power of capital is being disintermediated, and it's not clear what the repercussions will be.


Important snippet that has bearing on the adoption of AI as a whole:

> “Disorganized data silos” have been an issue for Copilot, analysts wrote.

This is true in almost every large organization, and will affect every enterprise AI product out there. There was a relevant subthread just a couple of days ago recounting this exact same dynamic: https://news.ycombinator.com/item?id=46861209

In fact, Palantir's secret sauce may not be their tech, but their "Forward Deployed Engineers" model (i.e. a rebranding of "partner engineers embedded within their customers' organizations"). Because it turns out a lot of what they do is navigating these bureaucratic and political hurdles to unlock access to the data: https://nabeelqu.co/reflections-on-palantir

It gets even worse if you consider that this data is going to be extremely messy, with multiple bespoke, partially duplicated, overlapping, potentially conflicting versions of varying staleness scattered across these silos. (I would know: in a past life, I worked on a months-long project called, self-explanatorily enough, "Stale Docs".)

Yeah, untangling these bureaucratic webs and data horrors is not a quarter-long or year-long project, so investors are gonna be waiting a long time for the impact of AI to be visible. On the bright side, as TFA also hints at, AI providers themselves have been severely capacity-constrained. So hopefully, by the time these issues get sorted out, enough new capacity will be coming online to actually serve that traffic.

In the meantime, I expect a prolonged period of AI companies feverishly splurging on AI CapEx even as Wall Street punishes them repeatedly because the impact of AI isn't reflected anywhere.


> The TSA was NEVER about security. It was designed as a jobs program and make it look like we were doing something for security.

To a great extent, it is security, even if it's mostly security theater: it's the security theater that people want.

A large portion, maybe even the majority, of travelers simply won't feel safe without it. I've had and overheard multiple conversations at the airport where somebody felt uncomfortable boarding a plane because they saw the screening agent asleep at the desk. Pro-tip: trying to explain security theater to the concerned passenger is not the right solution at this point ;-)

Even Bruce Schneier, who coined the term "security theater," has moderated his stance to acknowledge that it can satisfy a real psychological need, even if it's irrational.

We may be more cynical and look upon such things with disdain, but most people want the illusion of safety, even if deep down they know it's just an illusion.


> A large portion, maybe even the majority, of travelers simply won't feel safe without it. I've had and overheard multiple conversations at the airport where somebody felt uncomfortable boarding a plane because they saw the screening agent asleep at the desk.

I’d hazard that this may be true now, but this feeling was created by the same “security measures” we’re discussing.

Anyway, such major population-wide measures shouldn’t be about stopping people from being “uncomfortable”; they should be about minimising risk, or not exist at all. If you start imposing laws or other practices every time a group of people feel “uncomfortable”, the world will quickly grind to a halt.


> I’d hazard that this may be true now, but this feeling was created by the same “security measures” we’re discussing.

Slight tangent, but I recall travelling within the Schengen Zone for the first time and just walking off the plane and straight into a taxi. When I explained what I'd done to someone, she asked "but what about security? How do they know you've not got a bomb?" I don't think I had the words to explain that, if I had managed to sneak a bomb onto the plane to Madrid, I was probably not going to save it for the airport after I landed...


Er, I don't get it. I do the same thing at every airport in the US: walk off the plane and straight into a taxi.

I think they're talking about international travel and not having to go through border control within the Schengen space even though you're traveling to different countries.

Yes, but border control isn't security. I don't go through security when I arrive in the US either. (I do have global entry but that just means I usually go through immigration faster.) If I have a connecting flight after arriving in the US I do sometimes have to go through security again with my carryon but that's a function of airport layout.

Looks like even OP was confused about it, so I guess it wasn't really meant to make sense.

Just to be clear: I understand the difference. What I couldn’t do was explain to someone who has no concept that customs are not a security check. Or that you don’t need customs for (effectively) internal flights. I suspect part of this is that in the UK, we don’t get many internal flights (beyond connections), so people don’t have an experience of just walking off a plane and out of the airport.

Yes, I meant you were confused about the nature of the comment/question (like you mentioned in a sibling response somewhere). :)

I flew once from Iraq to Sweden (in a private capacity). There were zero controls other than stamping the passport: passport control but no customs inspection. No check of bags, and no questions about what I might have been doing in Iraq or why I would go from there to Sweden. It was shocking. Just "welcome to Sweden" and off to the street.

Hopefully they haven't changed. It's nice to see a place still left without the paranoia.


Border entry at airports is concerned with a) smuggling and b) immigration control. Passport control may have been all you saw but there was almost certainly heavy profiling and background checking going on behind the scenes. If you had matched a more suspicious pattern than "high-power passport without suspicious history flying an unusual route", you likely would have faced more scrutiny.

I think the point is that some people expect security even where it would be pointless.

Basically this. She was confusing Customs with Security, I think.

Neither did I, thus why I didn’t really know how to respond.

> If you start imposing laws or other practices every time a group of people feel “uncomfortable”, the world will quickly grind to a halt.

I mean, yes, quite an apt description of our reality. This has basically been the modus operandi of the whole of American society for the last 3 decades.

Can't have your kids riding bikes in the neighborhood. Can't build something on your own property yourself without 3 rounds of permitting and environmental review. Can't have roads that are too narrow for a 1100 horsepower ladder truck. Can't get onto a plane without going through a jobs program. Can't cut hair without a certificate. Can't teach 6 year olds without 3 years of post grad schooling + debt. Can't have plants in a waiting room because they might catch on fire. Can't have a comfortable bench because someone who looks like shit might sleep on it.

Can't can't can't can't ...


It's an interesting thought experiment to consider how you would organise your ideal society.

I lived in Switzerland for a time and there are many notorious rules (e.g. don't shower or flush your toilet after 10pm; don't recycle glass outside working hours) governing day-to-day behaviour which initially seem ridiculous and intrusive. However, what you quickly realise is that many of these are rooted in a simple cultural approach of "live your life as you wish, just don't make other people's lives worse" - an approach I came to appreciate.


This is it. It’s amazing how accepting people are of this reality and how resigned they are to it.

Yeah, those people are welcome to drive if it makes them feel safer. Meanwhile let's focus on actually making sure planes are safe.

The problem with allowing "feels unsafe" to drive policy is that you get this: https://news.ycombinator.com/item?id=46866201 ; a lot of Americans (and other nationalities) get that "feels unsafe" feeling when they see a visible minority. Or a Muslim. Or someone who isn't a Muslim but (like a Sikh!) is from the same hemisphere as the Middle East.

You get one set of people's rights compromised to salve the feelings of another set, and this is not right.

The worst thing is that indulging it doesn't lessen the fear either. It just means people reach for something else to be "afraid" of.


Oh for sure, as a non-white, bearded person I've had more than my fair share of "random" screenings!

My dad, with similar features, had the additional (mis)fortune of several work trips to the Middle East and China on his passport. He was "randomly" selected pretty much every time on his US trips, until about 10 years ago.

Hmmm, now I wonder. Like most other people, I had suspected that the "random" screenings of people fitting a certain profile were just the agents' biases creeping in. But could it be that, given the whole process is rather public and in view of the rest of the people in line, this is also part of that security theater... i.e. maybe the agents are sometimes pandering to the biases of the travellers?


[flagged]


The moment you encode your biases in policy, you create vulnerabilities.

What I’m hearing is that if I want to get something past your security policy, I need to route it through the Netherlands, possibly via a travel agency.


You don't have to profile people to police them, and profiling is very poor policing anyway. You need to assess actual risk, not a fake proxy for risk like the color of people's skin.

The problem with profiling is that it sucks both ways. People who are regular degular get fucked for the sake of fake policing, and then real threats are more likely to slip through.


> Nah I disagree. A charter plane with 200 Dutch tourists is lower risk than a flight coming out of Bolivia.

What a weird and random thing to say. There's literally no data that supports it either way, and neither country has a history of terrorism.

Ironically, KLM ('Dutch') has had more terrorism per flight than any Bolivian airline. Both minimal (Bolivia 1954; KLM 1973, 1994). There's literally no other piece of data about these countries that I could find to support this "lower risk".

Further, the travel advisories for Denmark and the Netherlands cite terror risk, while Bolivia's cites civil unrest.

While being woke is not helpful, neither is 'winging it' based on 'what feels white, ahem, right'. At least do a Google search.


How about a flight coming out of Bolivia with 200 Dutch tourists on board? Is it more or less risky than a flight coming out of the USA with 200 Donald Trumps on board? Is there a list?

It is mostly security, but not for residents of the country. Residents can enforce their rights. In my country, I can argue with airport security, and win. Foreigners can’t, so they follow whatever rules. A few times when landing in the US, security was extremely rude, I think just looking for an excuse (things like throwing your laptop a few feet away while staring at you, etc.). You take it because you’re not home, and the cost of ruining your vacation is not worth it.

What I’m trying to say is that, while a lot of it is theater, the TSA may be more effective security against foreigners; you as a resident just don't notice because you can opt out. Try going to the UK and telling them, as a US citizen, that you can't raise your arms.


Reasonable hypothesis but not correct in the US.

The point where you present your ticket+ID is before and separate from the physical screening. It could be anywhere from a few meters to dozens of meters separating them.

At the screening stage, the agents do not know who you are or your nationality.


It's not about being recognized, it's about when you are asked to be patted down, having the courage to lie "I can't raise my arms over my head", knowing the risk of being caught is at worst not making this flight. For a foreigner it might be getting banned permanently from the country. Same concept as self censorship. You do what you're told and then you go enjoy your vacation.

Understood and reasonable but one correction:

> when you are asked to be patted down, having the courage to lie "I can't raise my arms over my head"

You only get a pat down if you trigger additional screening or opt out. Not being able to raise my arms is NOT opting out. Therefore, no pat down.


I don't think I've ever made it through the physical screening without betraying my accent at some point. Sure, you can work your way out of an accent, but it's not easy and requires years of practice, and it's probably the most reliable (but fuzzy) low-scrutiny indicator of someone who "ain't from around here" in a multicultural society where looks are ~useless for such determinations.

I tried to opt out in the UK last time I was there a few years ago. The agent looked at me, confused, and said "so... you don't want to get on the plane?". She told me the UK didn't allow opt-outs.

This was the only time I've gone through the machine since they were introduced.


Airport security in India is particularly infuriating on this point. Everything gets scanned and fed through over and over again, and everyone gets wanded and patted down over and over again, with a maximum ‘fuck you’ to any passenger who dares to question the sanity of restarting your entire screening because you left your belt on.

Meanwhile, I haven’t had a western airport’s metal detector even fire on the same belt in years.


Most western countries also haven't had multiple attempted [0][1][2] and carried-out [3][4] mass-casualty terror attacks, nor a direct conventional conflict that for all intents and purposes was a war [5], in the past 2 years.

And airport security in Israel makes Indian airport security feel like a breeze, and I found Turkish airport security to be similar to India's (I remember landing in IST a couple of years ago post-COVID, when the news monitors all blared about the 3-6 Turkish soldiers who had died in Turkish-controlled Syria the day before).

All three are in very tenuous neighborhoods where the risk of mass-casualty terror attacks remains a very real possibility, and no on-duty officer wants to be the one whose name comes up in an inquiry should such an attack happen.

Also, from what I remember, you are either a Chinese national or someone who has travelled to China significantly. It's the equivalent of a Russian national or Russian-origin person traveling to Poland or Estonia post-2022. Anyone with that profile falls under stricter scrutiny in India due to the reciprocal treatment of Indian nationals and Indian-origin people from Arunachal [6][7] and Ladakh [8], as well as the multiple recent India-China standoffs.

[0] - https://en.wikipedia.org/wiki/2025_Delhi_car_explosion

[1] - https://en.wikipedia.org/wiki/2025_Nowgam_explosion

[2] - https://en.wikipedia.org/wiki/2024_Bengaluru_cafe_bombing

[3] - https://en.wikipedia.org/wiki/2024_Reasi_attack

[4] - https://en.wikipedia.org/wiki/2025_Pahalgam_attack

[5] - https://en.wikipedia.org/wiki/2025_India%E2%80%93Pakistan_co...

[6] - https://indianexpress.com/article/world/who-is-prema-thongdo...

[7] - https://idsa.in/publisher/comments/china-ups-the-ante-in-aru...

[8] - https://www.indiatoday.in/news-analysis/story/why-china-is-e...


> And airport security in Israel makes Indian airport security feel like a breeze

Not just in Israel, but even at other airports for flights to Israel! I was surprised to find that flights to Israel from JFK and EWR actually have a secondary security screening at the gate. In fact, the entire waiting area is walled off with only 1 or 2 controlled entries and exits. If you have to leave the area to go to the bathroom, well, you're just going to get screened again when you come back.

And they are very thorough. They WILL rummage through your carry on and purse and shoes.

(I wasn't even traveling to Israel, I was at an adjacent gate but got in the wrong line by mistake, haha!)


India's airport "security" is one of the best examples of underemployment and security "theatre".

The needless repetition and duplication of tasks achieves little actual "security" and is more a jobs program for a population that is desperately underskilled, underemployed and borderline unemployable. Never mind the fact that airports like Bombay are literally meters away from slums, which are a far greater security risk than actual passengers.

Your list of citations is entirely meaningless because Indian airports are no more or less secure than the average airport in the west. What India manages to do extremely well is annoy the daylights out of travellers for mindless bureaucratic reasons.

Please can you explain how security stamping the back of your boarding pass meaningfully adds to "security" and how fifteen checks of your passport could have avoided a single one of the incidents you list?


Note that for the most part, air travel into/out of the UK is international, so the constraints are stricter.

> The agent looked at me, confused, and said "so... you don't want to get on the plane?"

Brit here.

That's simply the British way of "calling you out" on your bullshit. Had you given a legitimate reason not to be scanned (and I can't think of one offhand), then I assure you, they would have been quite nice and helpful; certainly so in comparison to American standards of airport security staff!


I've felt more uncomfortable with the UK Border Force than with US CBP. It's been a few years since I've been to the UK, but there was usually more tension for non-European foreigners at the airport than for non-Americans at US airports.

One issue is that security theater creates demand for itself. Do things that induce worry and a tendency towards paranoia in the more susceptible parts of the population and then you will gradually raise the general alertness of the population. This then manufactures a desire for these measures. It largely rides off of people's general unwillingness to entertain just how many of the measures are ineffective or nonsensical. "It can't all be pointless. Surely some of it must make us safer." It's not an unreasonable belief in itself, but everyone having that attitude lets security theater grow cancerously.

We are extremely poor judges of our own emotions, particularly in hypotheticals.

Normalize having two lines: one with TSA, one without. See which airplane people actually board after a while. Let us put our time and money on the line and we’ll see what we really think. It’s the only way to tell.

I’m sure that in a world with a TSA for buses and trains, some people would say the same things they do now about our TSA.


Let's not mix "emotions" with "thinking". If I am afraid (emotion) of something happening, I will be afraid wherever the maximum damage can be done: in the queue before the security check (thinking). Most airports have optimized that to reduce the queues, but there are still at least tens of people in a very narrow space.

But I personally do not care that much, because I think most terrorists are dumb or crazy, and you can't fix all dumb or crazy. Some of the dumb and crazy become terrorists, some become CEOs, some do maintenance of something critical. If something really bad happens, I would not feel much better whether it was a "dumb CEO" or a "dumb terrorist" that caused it.


> Even Bruce Schneier, who coined the term "security theater," has moderated his stance to acknowledge that it can satisfy a real psychological need, even if it's irrational.

What about the real psychological need of not wanting to be surveilled, which quite a lot of people also have?


Personally I agree with that sentiment. But unfortunately, as the success of Facebook and Google have shown, most people really don't care about their privacy.

> But unfortunately, as the success of Facebook and Google have shown, most people really don't care about their privacy.

In particular concerning Facebook:

The very radical stances that people have concerning Facebook (i.e. both the success of Facebook and the existence of people who are radically opposed to social media sites) rather show that both positions are very present in society, and the trenches between them run deep.


The situation re: psychological safety becomes very apparent when you mention to foreigners how often guns accidentally make it through TSA in people's bags, only to get discovered on screening for the return flight.

Saucers for eyes, saucers! Hah

The reality is that screening raises the bar enough that most casuals won’t risk it unless they’re crazy, which is worth something, and makes most people feel comfy, which is also worth something.

It’s like using a master lock on your shed, or a cheap kwikset on your front door.


Here we are specifically discussing the gold star on a US driver's license, when the whole TSA kwikset fiasco is already in place. The gold star indicates that a person provided a very busy DMV clerk with some pieces of paper that may have been fabricated. This is somehow meant to prove they would never do anything malicious.

Or... you could slip the TSA person a $50 and say "keep the change". Legally.

There is no risk in submitting false documents. They reject valid documents all the time. They don't report you to authorities when they reject your documents.

So neither avenue is like even a cheap lock. They are more like door knobs that keep the door closed until you twist the knob that is designed to be easy to twist.


> no risk in submitting false documents

Except the risk you'll miss your flight, which in most cases is the screw that is turned.

My wife and I both have RealID driver's licenses. She had to get a replacement, and apparently the machines used to print them for mailing out later (as opposed to going down to their office and getting a replacement in person) are just ever so slightly off, so her license won't scan. She was given a surprising amount of harassment on a flight not long ago over this. She got me to photograph her passport and send it to her so she could show it on the return trip, where her license again failed to scan. This is a fairly well-documented problem: reports come from all over the country, and it always seems to be certain license printers that just fail.

So now she carries her Global Entry card, which is otherwise only used for access to the expedited line for land and sea border crossings but is a valid RealID in itself, for domestic flights. It scans correctly.


So there are two kinds of security: one is stopping innocents who mistakenly bring things like a gun or a flammable liquid such as gasoline. The other is stopping people who actually want to do harm, like terrorists. There is no doubt the TSA is effective for the first group. However, the evidence for the second group is kind of murky, as no country has ever caught anyone in the second group at screening so far.

I think it's human nature to point at something you don't like and, if it isn't 100% perfect, say it's flawed and must be taken down.

Repeated examples on HN

- TSA effectiveness

- AI-written code not being bug-free

- Self driving cars get into accidents


You are missing an important element. You can decide for yourself whether AI-produced code is worth the price. You don't get to decide whether the TSA is effective enough to pay for it.

Maybe you are willing to pay 15% for AI that saves you 20%. Even if it isn't very effective, you come out slightly ahead. Or maybe you pay 85% for something you deem to be 90% effective.

With TSA you pay 300% for something you might judge to be 2% effective and you don't have a choice.
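
To make the comparison concrete, here's a trivial back-of-the-envelope sketch. The numbers are the purely rhetorical ones from above (cost and benefit expressed as a percentage of whatever baseline you care about), not real data:

    # Net value of a measure: what it gives you back minus what it costs you.
    # The figures are the illustrative ones from the comment above, not real data.
    def net_value(cost_pct: float, benefit_pct: float) -> float:
        return benefit_pct - cost_pct

    print(net_value(15, 20))    #  +5   AI you chose to pay for: slightly ahead
    print(net_value(85, 90))    #  +5   pricier but effective: still ahead
    print(net_value(300, 2))    # -298  TSA-style: deeply negative, and you can't opt out

The asymmetry isn't really in the numbers, though; it's that only in the first two cases do you get to make the trade yourself.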


- TSA fails its own Red Team exercises 95% of the time.

- Self driving cars have measurably fewer accidents.

If you're confusing the two, I suggest you look into the data.

*Not sure on AI code yet.


If you offer the public FDA-inspected cinnamon for a 20% premium over not-inspected-and-may-contain-dangerous-levels-of-lead cinnamon, a lot of people will pay the premium. But a large percentage of people will opt for the cheaper cinnamon.

If you let it be known that the FDA inspection amounts to a high school dropout trying to read a manifest on a shipping container full of imported cinnamon, a lot more people will opt for the cheaper cinnamon. But a significant percentage will still pay the premium.

There is very little about that inspection that protects people, and just because something is not inspected doesn't mean it has lead in it. If you really want to be safe, you should run your cinnamon through your own detection lab.

What we need is an iPhone app that can detect guns, explosives, anthrax, covid, Canadians, and any other airplane hazard. Then let people carry that personal TSA sniffer onto the plane. They can feel safe and secure and the rest of us can save a fortune in taxes.


> What we need is an iPhone app that can detect guns, explosives, anthrax, covid, Canadians, and any other airplane hazard.

No doubt! Then strap it to our arms and call it a Pip Boy.

https://thedirect.com/article/fallout-season-2-us-canada


I would just let the airlines pick if they want TSA screening or not. Customers could buy flights with whatever security level they want.

If you fly intrastate in Alaska there is no screening on commercial flights (it seems TSA must not be required on non-interstate flights). Technically it's still illegal to bring a gun but no one would know one way or the other. It really didn't bother me that there was no security, in fact, it felt great, and at least I could be sure if a bear met us on the tarmac someone would probably be ready.

I know of one other story, heard secondhand from someone who experienced it, of a small regional airline in the South where, if you checked a gun, the pilot would just hand it back to the passenger...


Security is a classic example of a public good where this doesn't work well. The cheapest ways to secure an airport (sharing queues, staff, protocols, machines, training, threat models) are going to also benefit those who opt out, creating a tragedy of the commons.

>small regional airline in the South, where if you checked a gun, the pilot just gives it back to the passenger...

If the passenger is white. They would call the cops on anyone else. The state dept of terrorism would get involved if they were 1/1000 middle eastern.


White people and passenger planes tend to get along well. They invented them.

Effectiveness and theatricality aside, that wouldn’t work: the risk that the TSA ostensibly controls for is primarily that of planes being used as weapons against non-passengers, and only secondarily passenger security/hijacking.

If it's about satisfying a psychological need, then it should be compared as such to satisfying other psychological needs. Like, say, not getting groped by strangers.

Security Theater Blanket

Taxpayers haven’t agreed to fund theater; they agreed to fund safer travel. The failed audits of the TSA are totally unacceptable.

The purpose of the system is what it does.

If enough people actually cared about the failed audits, we’d invest in making sure they didn’t fail.

As it is, it’s settled into this funky middle ground that seems to maximize cost/incompetence/hassle, which is generally the picture of America overall.


Taxpayers don't universally agree it's ONLY theater; HN is a biased echo chamber just like any other group.

> A large portion, maybe even the majority, of travelers simply won't feel safe without it.

Nonsense. Most of that is just because it’s been normalised - because it exists and the people manning it make such a song and dance about it. Going from that to nothing would freak some people out, but if it were just gradually pared back bit by bit people wouldn’t need it anymore.

Here in Australia there’s no security for a lot of regional routes (think like turboprop (dash-8) kind of routes) starting from small airports, because it’s very expensive to have the equipment and personnel at all these small airports, and on a risk-benefit analysis the risk isn’t high enough. Some people are surprised boarding with no security, but then they’re like, “Oh, well must be OK then I guess or they wouldn’t let us do it”…

We also don’t have any liquid limits at all for domestic flights, and we don’t have to take our shoes off to go through security domestically or internationally, and funnily enough we aren’t all nervous wrecks when travelling.


I've been applying this principle of behavior to... ahem... current events. I feel like this helps contextualize the behavior of the majority during the current economic and political turmoil. People can't help but pretend this wasn't coming for years, and they certainly can't admit to having a part in it.

Yeah, security people (computer or otherwise) are mostly crypto-fascists with hard-ons for humiliating people and telling them what to do.

It's been proven from time to time that the strength of a security system is mostly determined by its strongest elements and defense in depth, and that making people jump through hoops contributes comparatively little.

That's why you can go reasonably anywhere on the web and have your computer publicly reachable from any point in the world, yet be reasonably safe, provided you don't do anything particularly dumb, like installing something from an unsafe source.

That's why these weird security mitigation strategies, like password rotation every two weeks with super complex passwords, and scary click-through screens about how you'll go straight to jail if you misuse the company computer, are laughable.


A growing part of me doesn't care, and doesn't want to coddle fascist mental illness.

If it was "Glass Iraq or make people take off their shoes", then I'll take the shoes...

But honestly? Fuck these people. We have extended them unlimited credit to make social change, and they always want more and worse changes. Their insecurities are inexhaustible. We need to declare them bankrupt of political capital. We need to bully them and make it clear their views aren't welcome, frankly.

We are 25 years deep into "Letting the terrorists win", and I'm fucking sick of it.


Fascinating study where they trained an AI model on a large dataset and then used interpretability analysis to figure out what biomarkers it had "learned" to look for.

"This is the contribution we want to highlight: interpretability as a tool for hypothesis triage. Foundation models learn from data at a scale humans can't match. If we can inspect what they've learned we can use that to guide experimental priorities, turning AI models into a source of testable hypotheses rather than a black box."


> astronomical overprovisioning

???

Literally all the cloud providers have been reporting severe capacity crunches for the past few quarters -- to the tune of backlogs of triple-digit billions each. As a reminder, a backlog or "Remaining Performance Obligation" (RPO) is money their customers have committed to them but they could not realize because they didn't have enough capacity to serve their workloads. Which is why they are all committing to double-digit billions each in AI CapEx spend over the next few quarters.

And most of them (aside from Oracle, which is trying to borrow its way into this gold rush) are investing money from their double-digit billions in profit (per quarter!) into this spend... money that they could have otherwise comfortably held on to for something more palatable to shareholders.

Revenue and return on investment is a valid concern to bring up in this whole GenAI shebang; demand is not.


Hmm, I wonder if it would be cheaper to hire a couple of software engineers to vibe-code custom SaaS apps on top of the company's existing data layer instead of paying for a hundred different SaaS subscriptions.

Financial considerations aside, one advantage of having in-house engineers is that you can get custom features built on-demand without having to be blocked on the roadmap of a SaaS company juggling feature requests from multiple customers...


I'm at a large company that is building connections between all of its different financial systems. The primary problem being faced is NOT the speed at which things can be coded; the primary problem at large companies is getting business aligned with tech (communication) and getting alignment across all the different orgs on data ownership, access, and security. AI currently doesn't solve any of this. Throw in the need to deal with regulation/SOX compliance, and all the progress you think AI might make just doesn't align with the problem domains.

Totally makes sense. Turns out that a lot of what Palantir's "Forward Deployed Engineers" do is navigating these bureaucratic and political obstacles to get access to the data: https://nabeelqu.co/reflections-on-palantir -- which may be Palantir's real secret sauce, rather than the tech itself.

Agreed. The SWEs already receive a steady supply of conflicting demands from every possible business unit; the value add for these teams is a working PMO to prioritize the requests coming in.

This is also generally true of all the mid-to-large businesses I've ever worked at.

The code they write is highly domain-specific, implementation speed is not the bottleneck, and their payroll for developers is nothing compared to the rest of the business.

AI would just increase risk for no reward.


> getting business aligned with tech (communication) and getting alignment across all the different orgs

This is what a CEO is supposed to do. I wonder if CEOs are the ones OK with their data being used and sent to large corps like MS, Oracle, etc.


I haven't seen what you're suggesting from a CEO at a large company whose primary business is not software-related. At some point in a business's life there's an accumulation of so many disparate needs and systems that there can be many, many layers of cross-org needs for fulfilling business processes. This stuff is messy.

I think I saw it asserted that it's easier for a new company, which definitely makes sense as you don't carry along all the baggage.


I work on large projects like this; the CEO doesn't get involved in the little "computer project" except during the project kickoff. Even then, it's just to "say a few words about the people I admire on this team". In large global companies these projects are delegated 3 or 4 levels below the CEO, at the highest.

Makes me wonder if they are getting ripe for disruption. Not by a new business model, but by a new operating model where the CEO is tech/AI-aware and pushes these kinds of things through.

There's definitely a market for on-prem solutions that don't involve sending all your data to someone else, while reaping the benefits.

> one advantage of having in-house engineers is that you can get custom features built on-demand without having to be blocked on the roadmap of a SaaS company juggling feature requests from multiple customers...

Many larger enterprises do both – buy multiple SaaS products, and then have an engineering team to integrate all those SaaS products together by calling their APIs, and build custom apps on the side for more bespoke requirements.

To give a real world example: the Australian government has all these complex APIs and file formats defined to integrate with enterprises for various purposes (educational institutions submitting statistics, medical records and billing, taxation, anti-money laundering for banks, etc). You can't just vibe code a client for them – the amount of testing and validation you have to do with your implementation is huge–and if you get it wrong, you are sending the government wrong data, which is a massive legal risk. And then, for some of them, the government won't let you even talk to the API unless you get your product certified through a compliance process which costs $$$. Or, you could just buy some off-the-shelf product which has already implemented all of that, and focus your internal engineering efforts on other stuff. And consider this is just one country, and dozens of other countries worldwide do the same thing in slightly different ways. But big SaaS vendors are used to doing all that, they'll have modules for dealing with umpteen different countries' specific regulations and associated government APIs/file formats, and they'll keep them updated since they are forever changing due to new regulations and government policies. And big vendors will often skip some of the smaller countries, but then you'll get local vendors who cover them instead.


Hmm, 8M paid M365 Copilot users leaked in August, and at last week's earnings call the number was 15M.

Assuming the leak was accurate, almost doubling usage in 4 months for an enterprise product seems like pretty fast growth?

Its growth trajectory seems to be on par with Teams so far, another enterprise product bundled with their M365 suite, though to be fair Teams was bundled for free: https://www.demandsage.com/microsoft-teams-statistics/


I mean, they're literally pointing out the negative effects of AI-assisted coding?

> We found that using AI assistance led to a statistically significant decrease in mastery. On a quiz that covered concepts they’d used just a few minutes before, participants in the AI group scored 17% lower than those who coded by hand, or the equivalent of nearly two letter grades. Using AI sped up the task slightly, but this didn’t reach the threshold of statistical significance.

This also echoes other research from a few years ago that had similar findings: https://news.ycombinator.com/item?id=46822158


Dude, you falling for such obvious corpo-psyops is so sad. Tobacco companies literally published research that said cigarettes were dangerous too; that didn't stop them from lying to Congress and saying cigarettes were totally safe.

Some of you are the reason there needs to be a new Luddite movement. (Fun fact: the Luddites were completely correct in their movement; they fought against oppressive factory owners who treated their fellow humans terribly, smashing the very same machines they used themselves. Entrepreneurs were literally ushering in a new hell on Earth where their factories were killing so many orphans (because many people originally refused to work in such places, until forced to by the prospect of dying in the streets or dying from their labor there) that they had to ship the bodies of children across towns to avoid drawing suspicion. Until the entrepreneurs started killing them and convincing the prince regent to kill them with the state, the Luddites had massive support. Support so high that when suspected Luddites were escaping from the "police", you could hear entire towns cheering them on and helping them escape.)

People rightfully hate this stuff and you refuse to see it. The evidence says it's terrible, but hey, let's sell it anyway; what's the worst that can happen?


Well, this is what Anthropic's CEO told Congress in 2023; the message was not quite "AI is just peachy": https://www.judiciary.senate.gov/imo/media/doc/2023-07-26_-_...

Or here's his more recent statements on the potential disruption from AI: https://www.cnbc.com/2026/01/27/dario-amodei-warns-ai-cause-...

Anthropic is pretty much the only major frontier AI lab that keeps saying "AI is dangerous, we should proceed with caution." It sounds like you're in violent agreement.

If your stance is that AI development should not be continued at all, well, the history of the Luddites should tell you what happens when an economic force meets labor concerns in a capitalistic world.

The genie is out of the bottle and there's no putting it back. Our only choices now are to figure out how to tame it, or YOLO it and FAFO.


Another study from 2024 with similar findings: https://www.mdpi.com/2076-3417/14/10/4115 -- a bit more preliminary, but conducted with undergrad students still learning to program, so I expect the effect would be even more pronounced.

This similarly indicates that reliance on LLMs correlates with degraded performance in critical problem-solving, coding, and debugging skills. On the bright side, using LLMs as a supplementary learning aid (e.g. clarifying doubts) showed no negative impact on critical skills.

This is why I'm skeptical of people excited about "AI native" junior employees coming in and revamping the workplace. I haven't yet seen any evidence that AI can be effectively harnessed without some domain expertise, and I'm seeing mounting evidence that relying too much on it hinders building that expertise.

I think those who wish to become experts in a domain would willingly eschew using AI in their chosen discipline until they've "built the muscles."


Interestingly, this observation holds even when you scale AI use up from individuals to organizations, only at that level it amplifies your organization's overall development trajectory. The DORA 2025 and the DX developer survey reports find that teams with strong quality control practices enjoy higher velocity, whereas teams with weak or no processes suffer elevated issues and outages.

It makes sense considering that these practices could be thought of as "institutionalized skills."

