Hacker News
Software Survival 3.0 (steve-yegge.medium.com)
91 points by jaybrueder 14 hours ago | 78 comments




Here's what I don't get about the "AI can build all software" scenario. It extrapolates AI capabilities up to a certain, very advanced point, and then, inexplicably, it stops.

If AI is capable enough to "build pretty much anything", why is it not capable enough to also use what it builds (instead of people using it) or, for that matter, to decide what to build?

If AI can, say, build air traffic control software as well as humans, why can't it also be the controller as well as humans? If it can build medical diagnosis software and healthcare management software, why can't it offer the diagnosis and prescribe treatment? Is the argument that there's something special about writing software that AI can do as well as people, but not other things? Why is that?

I don't know how soon AI will be able to "build pretty much anything", but when it does, Yegge's point that "all software sectors are threatened" seems to be unimaginative. Why not all sectors full stop?


AI doesn't "want to" control air traffic. It doesn't have any desire or ambition. That's what the humans are for.

It is merely a tool like a hammer. The hammer doesn't build the house, it is the human who wields the hammer that builds the house.


I didn't say it wants to do anything. It also doesn't want to build software. But why would it be the case that an AI told to build air traffic control software could successfully do that, but an AI told to make sure planes arrive where they're supposed to, safely and on time, won't be able to figure out the rest?

Now, I'm not saying it's impossible for there to be something that makes the first job significantly easier than the second, but it's strange for me to assume that an AI would definitely be able to do the former soon, yet not the latter. I think it could be reasonable to believe it will be able to do neither or both soon, but I don't understand how we can expect the ability line to just happen to fall between software and pretty much everything else.


This somewhat falls apart the second you realize that current models are already choosing which tools to use all the time. You can argue that's not "desire" but I'm not sure you'd convince me.

Frankly - even the other end of your argument is weak. Humans don't particularly want to control air traffic either (otherwise why are we having to pay those air traffic controllers their salaries to be there?). They do it as a function of achieving a broader goal.


I’ve heard this called “technical deflation” and it works similarly to how economic deflation can play out, causing you to forego actions in the present because you think they’ll be easier in the future (or in this case, possibly not needed at all). Time will tell if this results in a “software deflationary spiral” or not.

Yes, this is closely related to the question of why AI services insist on writing software in inefficient high-level languages intended for humans to read, such as Python, which then needs to be compiled, uses large standard libraries, etc. Why not output the actual software directly, intended for a computer to run?

Same reason humans write in higher-level languages instead of machine code? Each additional unit of program text costs energy at write time, so there's a bias toward more compact _representations_ of programs, even if they're less efficient at runtime.

Because the written code, variable names, structure, and comments also serve as context for the LLM.

This is why LLM written code is often more verbose than human written code. All of those seemingly unnecessary comments everywhere, the excessively descriptive function names, the way everything is broken down into a seemingly excessive number of logical blocks: This is all helpful to the LLM for understanding the code.
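To illustrate with a made-up Ruby example (mine, not from the parent): both methods below do the same thing, but the second is the style LLMs tend to emit, and the verbosity doubles as context the model can lean on the next time it touches the code.

    # Terse, human-style version.
    def disc(p, r)
      p * (1 - r)
    end

    # Verbose, LLM-style version.
    # Calculates the final price after applying a percentage discount.
    def calculate_discounted_price(original_price, discount_rate)
      # How much the discount removes from the original price.
      discount_amount = original_price * discount_rate
      # Final price is the original minus the discount.
      original_price - discount_amount
    end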


Maybe for existing models, but I don't think that's necessarily the case. AI tools that generate or manipulate images or video don't need human words in the format. Also, I'm not sure how much meaningful names help beyond a certain program size. Assembly is quite understandable locally - even for humans, but especially if you're an AI model trained specifically on it - and globally, even codebases in high-level programming languages are too large to grasp just as code.

There are two ways to answer your questions. You are asking how we choose between (1) generate+run (AI generates software for some task, then we run the software to do that task) and (2) agentic execution (AI simply completes the task).

First way to look at this is through the lens of specialization. A software engineer could design and create Emacs, and then a writer could use Emacs to write a top-notch novel. That does not mean that the software engineer could write top-notch novels. Similarly, maybe AI can generate software for any task, but maybe it cannot do that task just as well as the task-specialized software.

Second way to look at this is based on costs. Even if AI is as good as specialized software for a given task, the specialized software will likely be more efficient since it uses direct computation (you know, moving and transforming bits around in the CPU and the memory) instead of GPU or TPU-powered multiplications that emulate the direct computation.


> Similarly, maybe AI can generate software for any task, but maybe it cannot do that task just as well as the task-specialized software.

Yes, maybe, but assuming that is the case in general seems completely arbitrary. Maybe not all jobs are like writing software, but why assume software is especially easy for AI?

> Even if AI is as good as specialized software for a given task, the specialized software will likely be more efficient since it uses direct computation

Right, but surely an AI that can "build pretty much anything" can also figure out that it should write specialised software for itself to make its job faster or cheaper (after all, to "build pretty much anything", it needs to know about optimisation).


> Friction_cost is the energy lost to errors, retries, and misunderstandings when actually using the tool. [...] if the tool is very low friction, agents will revel in it like panthers in catnip, as I’ll discuss in the Desire Paths section.

This is why I think Ruby is such a great language for LLMs. Yeah, it's token-efficient, but that's not my point [0]. The DWIM/TIMTOWTDI [1] culture of Ruby libraries is incredible for LLMs. And LLMs help to compound exactly that.

For example, I recently published a library, RatatuiRuby [2], that feeds event objects to your application. It includes predicates like `event.a?` for the "a" key, and `event.enter?` for the Enter key. When I was building an app with the library, I saw the LLM try `event.tilde?`, which didn't exist. So... I added it! And dozens more [3]. It's great for humans and LLMs, because the friction of using it just disappears.
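A hypothetical sketch of how such key predicates could be generated (my own illustration, not RatatuiRuby's actual implementation), showing why adding something like `event.tilde?` can be a one-line change:

    class KeyEvent
      # Assumed name-to-character mapping; the real library's is much richer.
      KEYS = { "a" => "a", "enter" => "\r", "tilde" => "~" }.freeze

      def initialize(char)
        @char = char
      end

      # Define one predicate per named key: #a?, #enter?, #tilde?, ...
      KEYS.each do |name, char|
        define_method("#{name}?") { @char == char }
      end
    end

    KeyEvent.new("~").tilde?  # => true
    KeyEvent.new("~").a?      # => false

Under a scheme like this, making the LLM's hallucinated predicate real is just another entry in the mapping.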

EDIT: I see that this was his later point exactly! FTA:

> What I did was make their hallucinations real, over and over, by implementing whatever I saw the agents trying to do [...]

[0]: Incidentally, Matz's static typing design, RBS, keeps it token-efficient even as type annotations are added. The types live in different files than the source code, which means they don't have to be loaded into context. Instead, only static analysis errors get added to context, which saves a lot of tokens compared to inline static types. (A small sketch of this separation follows these footnotes.)

[1]: Do What I Mean / There Is More Than One Way To Do It

[2]: https://www.ratatui-ruby.dev

[3]: https://git.sr.ht/~kerrick/ratatui_ruby/commit/1eebe98063080...
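Referring back to [0], a minimal sketch of what that separation looks like (file names and signatures here are my own hypothetical example). The .rb source stays annotation-free; the types live in a separate .rbs file, and only the errors a checker such as Steep reports against them would need to reach the model's context:

    # greeter.rb -- the source stays free of inline annotations
    class Greeter
      def initialize(name)
        @name = name
      end

      def greet
        "Hello, #{@name}!"
      end
    end

    # sig/greeter.rbs -- the type signatures live in a separate file
    class Greeter
      @name: String

      def initialize: (String name) -> void
      def greet: () -> String
    end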


Stuff like this makes me feel like I'm living in a different reality than the author

I wonder if interacting with AI all day, whether you work with it or just talk to it, has a negative impact on your perception of reality...

>"Stuff like this makes me feel like I'm living in a different reality than the author"

A quick look at gastown makes me think we all are.


His article introducing it [1] was very interesting...

[1] https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...


Steve is quite clear in his posts on Gas Town about how seriously he expects you to take it, and how much he thinks it represents the real world we live in.

This post is trying to talk to the real world.

I see a ton of truth & reality in this post. This post does what I love from Yegge: it gives us points to calibrate by. I think it incredibly insightfully identifies a very real shift in sellable software value that is underway, now that far more people can suddenly talk to computers.


Ya, the post is almost laser focused on "SaaS" as a concept. Which is fine, but there are MANY other types of software that will see different effects.

Some interesting long-term, directional ideas about the future of software dev here, but the implied near-termness of SaaS being disintermediated ignores how management in large orgs evaluates build-vs-buy SaaS decisions. 'Build' getting 10x cheaper/easier is revolutionary to developers and quite possibly irrelevant or only 'nice-to-have' to senior management.

Even if 10x cheaper, internally built SaaS tools don't come with service level agreements, a vendor to blame/cancel if it goes wrong, or a built-in defense of "But we picked the Gartner top quadrant tool".


I've made many business cases for internally-built SaaS tools, and they always rest on the idea that our probability of success is higher if we staff a team and build the _exact thing_ we need versus purchasing from a vendor and attempting an integration into our business.

It's far more challenging to win the 'build' argument on a cost savings approach, because even the least-savvy CIO/CTO understands that the price of the vendor software is a proof point grounded in the difficulty for other firms to build these capabilities themselves. If there's merit to these claims, the first evidence we'll see is certain domains of enterprise software (like everything Atlassian does) getting more and more crowded, and less and less expensive, as the difficulty of competing with a tier-1 software provider drops and small shops spring up to challenge the incumbents.


Agreed. I was going to add (but didn't) that the first evidence of an 'AI unlock' on SaaS wouldn't be internal builds but many new, much cheaper competitors appearing for leading SaaS tools. Your point about the best arguments to internally build SaaS being 1) integration savings, and 2) better fit, is spot on. But senior management has to balance those potential benefits (and the risk around whether an internal effort fully delivers on time and on budget) against sticking with 'the devil we know', which works (imperfectly) today.

In my experience, a bigger blocker to C-level approving internal SaaS development is it diverts capital and scarce attentional bandwidth to 'buying an upside' that's capped. Capped how? Because, by definition, any 'SaaS-able' function is not THE business - it's overhead. The fundamental limit on a SaaS tool's value to shareholders is to be a net savings on some cost of doing business (eg HR, legal, finance, sales, operations, support, etc). No matter how cheap going in-house makes a SaaS-able activity, the best case is improving margins on revenue. It doesn't create new revenue. You can't "save your way" to growth.


Yeah, I don't buy the build-internally argument. It's not necessarily the building of an internal tool that is the problem; it's the maintenance and service level guarantees you get from a vendor that are arguably more valuable, so you can focus on the thing that matters for your company. Product-market fit is now more important than ever and there are so many additional options now; focus on the "right" thing is more valuable now than ever before.

Isn’t this a form of what he labels the “human coefficient”?

Some businesses prefer tools built by other businesses for some tasks. The author advocates pretty plainly to identify and target those opportunities if that’s your strength.

I think his point is to recognize that’s moving toward a niche rather than the norm (on the spectrum of all software to be built).


> implied near-termness of SaaS being disintermediated

Also, is this even true? The author's only evidence was to link to a book about vibe coding. I'd be interested to hear anecdotes of companies who are even attempting this.

Edit: wow, and he's a co-author of that book. This guy really just said "source: me"


> Gas Town has illuminated and kicked off the next wave for everyone

That sounds pretty hyperbolic. Everyone? Next “wave”?


Sounds like a descent into madness to me, and I'm somewhat pro AI.

The guy is high on his own supply. This entire thing reads like a fever dream.

Sounds like a cult to me.

> First let’s talk about my credentials and qualifications for this post. My next-door neighbor Marv has a fat squirrel that runs up to his sliding-glass door every morning, waiting to be fed.

Some of the writing here feels a little incoherent. The article implies progress will be exponential as a matter of fact, but we will be lucky to maintain linear progress, or even to avoid regressing.


> If you believe the AI researchers–who have been spot-on accurate for literally four decades

LOLWUT?

Counter-factual much?


I feel like this isn't adequately accounting for the fact that existing software is becoming easier to refactor as well. If someone wants a 3D modelling program but is unsatisfied with the performance of some operation in Blender, are they going to vibe code a new modelling program, or are they going to just vibe-refactor the operation in Blender?

I'm not a business dude but even I can see one problem in his argument: He tacitly equates "agents use your software" with "your software survives". But having your software invoked by an agent doesn't magically enrich you. Up to now, human users have made software a success in one of two ways:

1. Paying money for the software or access to it.

2. Allowing a fraction of their attention to be siphoned off and sold to advertisers while they use the software.

I don't think advertisers want to pay much for the "mindshare" of mindless bots. And I'm not sure that agents have wallets they can use to pony up cash with. Hopefully someone will figure out a business model here, but Yegge's article certainly doesn't posit one.


Microsoft CEO said in a podcast they are preparing for the moment the biggest paying customers for Windows, Office and their cloud products will be agents, not humans.

They looove big hyperbolic quotes. If they don't do those, what is their real value? It's purely narcissistic. It's also a survival technique: come up with a wow factor so the board doesn't think you're behind.

Reads like this, and that social network for AI agents, make me depressed. The authors need to cut the LLM dosage.

I'm not convinced of this post's hopeful argument near the end. If you are doing SaaS as a way of making money and don't have a deep moat aside from the code itself, it will probably be dead in a few years. The AI agents of the future will choose free alternatives as a default over your paid software, and by the way said free alternatives are probably made using reliable AI agents and are high-quality and feature complete. AI agents also don't need your paid support or add-on services from your SaaS companies, and if everyone uses agents, nobody will be left to give you money.

As a technical person today, I wouldn't pay a $10/month SaaS subscription if I can log in to my VPS and tell claude to install [alternate free software] self-hosted on it. The thing is, everyone is going to have access to this in a few years (if nothing else it will be through the next generation of ChatGPT/Claude artifacts), and the free options are going to get much better to fit any needs common enough to have a significant market size.

You probably need another moat like network effects or unique content to actually survive.


This is my take as well.

He spends a lot of words talking about how saving cognition is equivalent to saving resources, but seems to gloss over that saving money is also saving resources.

Given that the token-per-dollar exchange rate is likely only going to get better over time...

If his predictions come true it seems clear that if your software isn't free, it won't get used. Nothing introduces friction like having to open up a wallet and pay. It's somewhat telling that all of his examples of things that will survive don't cost money - although I don't think it's the argument he meant to be making given the "hope-ium" style argument he's pushing.

---

Arguably, this is good long term. I personally think SaaS style recurring subscriptions are out of control, and most times a bad deal. But I also think it leaves a spot where I'm not sure what sort of career will exist in this space.


Yes, I feel like AI was first gunning for software engineers, but we're seeing it shift towards SaaS replacements and small apps.

>I debated with Claude endlessly about this selection model, and Claude made me discard a bunch of interesting but less defensible claims. But in the end, I was able to convince Claude it’s a good model

Convinced an LLM to agree with you? What a feat!

Yegge's latest posts are not exactly half AI slop, half marketing spam (for Beads and co), but close enough.


A thought I had after reading that sentence: So many people that are very pro-AI also increasingly seem to speak with near infinite confidence. I wonder how much of that comes from them spending too much time chatting with AI bots and effectively surrounding themselves with digital yes-men?

You have discovered a fact. Now trace this to its ultimate conclusion. Where does all this lead? Keep in mind also the whole "AI is God" movement that's emerging, where the non-AI-worshippers are depicted as being cockroaches. Who is creating and encouraging all this and what is their endgame?

I wish people would stop up voting AI Nostradamus articles...

"If you believe the AI researchers–who have been spot-on accurate for literally"

I do not understand what has happened to him here... there was an entire "AI winter" in the 90's to 2000's because of how wrong researchers were. Has he gone completely delusional? My PhD supervisor has been in AI for 30 years and talks about how it was impossible to get any grant money then because of how catastrophically wrong earlier predictions had been.

Like, honest question. I know he's super smart, but this reads like the kind of ramblings you get from religious zealots or scientologists, just complete revisions of known, documented history, and bizarre beliefs in the inevitability of their vision.

It really makes me wonder what such heavy LLM coding use does to one's brain. Is this going to be like the 90's crack wave?


Yeah, I full-stopped on that sentence because it was just so bizarre. I can understand making a counter-to-reality claim and then supporting the claim with context and interpretation to build toward a deeper point. But he just asserts something obviously false and moves on with no comment.

Even if he believes that statement is true, it still means he has no ability to model where his reader is coming from (or simply doesn't care).


Maybe it's not for you as a human to understand.

Why presuppose that a human wrote this, as opposed to a language model, given the subject?


Mmm, good point!

I didn't read the article. And I do think that if you read words from someone who did a crypto rugpull, you don't value your time and intelligence.

I know this doesn't 'contribute to the discussion.' But seriously this guy's latest contribution to the world was a meme coin backed project...


While I have zero interest in defending or participating in the financialization of all things via crypto, there is a bit of nuance missing here.

BAGS is a crypto platform where relative strangers can make meme coins and nominate a recipient to receive some or all of the funds.

In both Steve Yegge and Geoffrey Huntley's cases, tokens were made for them but apparently not with their knowledge or input.

It would be the equivalent of a random stranger starting a Patreon or GoFundMe in your name, with the proceeds going to you.

Of course, whether you accept that money is a different story but I'm sure the best of us might have a hard time turning down $300,000 from people who wittingly participate in these sorts of investment platforms.

I don't immediately see how those left holding the bag could have ended up in that position unknowingly.

My point is that my parents would likely have a hard enough time figuring out how to buy crypto, let alone find themselves rugpulled by a meme token. So while my immediate read is that a pump and dump is bad, how bad it is relative to who the participants are is something I'm curious to know if anyone has an answer for.


If someone anonymous starts a Patreon to support your software project I'll assume that someone is you and it will take very strong evidence to change my mind.

It's so funny tho. If you post on reddit saying "my friend had a fight with his wife last night..." absolutely no one would believe it's really your friend. But somehow you say "uh so there is someone anonymous who launched a meme coin for my project..." people believe it's really someone anonymous.


That's a reasonable assumption to make and one I wondered about too!

I'm just saying that there's no evidence I'm aware of that would prove or disprove that the creators were involved.

Personally, I think crypto types are bizarre enough that I could believe they would do something like that unannounced.

In my mind, it's the same behaviour as the infamous Kylie Jenner "get her to $1 billion" GoFundMe from a few years back: https://www.businessinsider.com/kylie-jenner-gofundme-fans-c...


In big orgs, 'agents can build it' rarely changes the buy vs build decision. The pragmatic moat I see isn’t the code, it’s turning AI work into something finance and security can trust. If you can’t measure and control failure-cost at the workflow level, you don’t have software.

I read that MoltBook website and now I can't help but see the similarities in responses on posts like this to just general chatter in MoltBook. I'm not really sure what to make of that yet.

It was trained on it so it's purely just doing a good job.

> I debated with Claude endlessly about this selection model, and Claude made me discard a bunch of interesting but less defensible claims. But in the end, I was able to convince Claude it’s a good model

This is not a good way to do anything. The models are sycophantic; all you need to do in order to get them to agree with you is keep prompting: https://www.sfgate.com/tech/article/calif-teen-chatgpt-drug-...


> Let me know what you think by complaining that AI sucks on HN!

At least you complied with the next sentence :)

EDIT: Whoa, I didn't check your link before I posted. That's terribly sad. While I agree that LLMs can be sycophantic, I don't think Yegge was debating with Claude about drug use in this situation. Other references might have worked better to support your claim, like this first page result when I search for "papers on llm sycophancy": https://pmc.ncbi.nlm.nih.gov/articles/PMC12592531/


I don't think that AI sucks. I just think that it is something that is too malleable to debate with.

You need to explicitly tell it to debate you. Also, you cite an example of someone using ChatGPT to discuss personal issues. We are talking about technical discussions here.

I'm frankly exhausted by AI takes from both pessimists and optimists; people are applying a vast variety of mental models to predict the future during what could be a paradigm shift. A lot of the content I see on here is often only marginally more insightful than the slop on LinkedIn. Unfortunately the most intelligent people are the most susceptible to projecting their intelligence onto these LLMs and not seeing it: LLMs mirror back a person's strengths and flaws.

I've used these tools on-and-off an awful lot, and I decided last month to entirely stop using LLMs for programming (my one exception is if I'm stuck on a problem longer than 2-3 hours). I think there is little cost to not getting acquainted with these tools, but there is a heavy cognitive cost to offloading critical thinking work that I'm not willing to pay yet. Writing a design document is usually just a small part of the work. I tend to prototype and work within the code as a living document, and LLMs separate me from incurring the cost of incorrect decisions fully.

I will continue to use LLMs for my weird interests. I still use them to engage on spiritual questions since they just act as mirrors on my own thinking and there is no right answer (my side project this past year was looking through the Christian Gospels and some of the Nag Hammadi collection from a mystical / non-dual lens).


Yep. I've been around long enough to not give a fuck about any technology until it has been around for at least a decade. We're not there yet.

I think that's a very extreme take in the software industry. Sure, you don't need to pick up every new trend, but a ridiculous amount has changed in the past 10 years. If you only consider stuff that already existed in 2016, you're missing some incredible advancements.

You'd be missing stuff like:
- Containers
- Major advancement in mainstream programming languages
- IaC

There's countless more things that enable shipping of software of a completely different nature than was available back then.

Maybe these things don't apply to what you work on, but the software industry has completely changed over time and has enabled developers to build software on a different scale than ever previously possible.

I agree there's too much snake-oil and hype being sold, but that's a crazy take.


Weeeeelllll...

Post-CFEngine (Puppet, Ansible, Terraform) and cloud platform (CloudFormation) infrastructure-as-code is over a decade old.

Docker's popularisation of containers is just over a decade old.

But containers (and especially container orchestration, i.e. Kubernetes) are still entirely ignorable in production. :-D


It's not that I refuse to acknowledge they exist, just don't give a fuck. I mean do I really care about Kubernetes CNI? Nope it doesn't actually make any money - it's an operational cost at the end of the day. And the whole idea of Kubernetes and containers leads to a huge operational staffing cost just to keep enough context in house to be able to keep the plates spinning.

And it's not at all crazy. We sold ourselves into over-complex architecture and knowledge cults. I've watched more products burn in the 4-5 year window due to bad tech decisions and vendors losing interest than I care to remember. Riding the hype up the ramp and hoping it'll stick is not something you should be building a business on.

On that ingress-nginx. Yeah abandoned. Fucked everyone over. Here we go again...


Where these tools really shine is in the hands of someone who knows what they want soup-to-nuts, knows what is correct and what is not, but just doesn't want to type it all out and set it all up. For those people, these tools are a breath of fresh air.

I remember reading a comment a few days ago where someone said coding with an agent (claude code) made them excited to code again. After spending some time with these things I see their point. You can bypass the hours and hours of typing and fixing syntax and just go directly to what you want to do.


I'm constantly reminded how software is all around us, we don't even notice it.

Operational excellence survives, no matter the origin.


I've been using Claude and it's a game changer in my day to day. The caveat being, of course, that my tasks are at a small "feature" level and all interactions are supervised. I see no evidence that this is going to change soon...

My other thought, which I can't articulate that well, is: what about testing? Sure, LLMs can generate tons of code, but so what? If your two-sentence prompt is for a tiny feature, that's one thing. If you ask Claude to "build me a todo system", the results will likely rapidly diverge from what you're expecting. The specification for the system is the code, right? I just don't see how this can scale.


My take, and also perhaps hope, is that the software with the best survival chances is the kind developed by reasonable, down-to-earth people who understand human needs and desires well, have some overall vision, and create tools that just work and don't waste the user's time. Whether it is created with the help of AI or not might not matter much in the end.

On a side note, any kind of formula that contains what appears to be a variable on the left hand side that appears nowhere on the right hand side deranges my sense of beauty.


I feel like we are in universal paperclips, a game about turning all matter in the universe into paperclips.

We are entering the absurd phase where we are beginning to turn all of earth into paperclips.

All software is gonna be agents orchestrating agents?

Oh how I wish I would have learned a useful skill.


>If you believe the AI researchers–who have been spot-on accurate for literally four decades

Is this supposed to be a joke?


Steve Yegge used to be a decent engineer with a clear head and an ability to precisely describe problems he was seeing. His "Google Platforms Rant" [1] is still required reading IMO.

Now his bloviated blogposts only speak of a man extremely high on his own supply. Long, pointless, meandering, self-aggrandising. It really is easier to dump this dump into an LLM to try to summarize it than spend time trying to understand what he means.

And he means very little.

The gist: I am great and amazing and predicted the inevitable orchestration of agents. I also call the hundreds of thousands of lines of extremely low quality AI slop "I spent the last year programming". Also here are some impressive sounding terms that I pretend I didn't pull out of my ass to sound like I am a great philosopher with a lot of untapped knowledge. Read my book. Participate in my meme coin pump and dump schemes. The future is futuring now and in the future.

[1] https://gist.github.com/chitchcock/1281611


Also in a recent interview he implied that anyone who disagrees is an “effing idiot”

This is my take as well.

Steve Yegge has always read a bit "eccentric" to me, to say the least. But I still quote some of his older blog posts because he often had a point.

Now... his blog posts seem to show, to quote another commenter here, "a man's slow descent into madness".


Holy fuck is this guy blowing smoke up his own ass.

He needs an editor, I’m sure he can afford one.

I look forward to him confronting his existence as he gets to be as old as his neighbor. It will be a fun spectacle. He can tell us all about how he was right all along as to the meaning of life. For decades, no less.


This guy was into the bagscoin BS that was going on on X this past month. Wouldn't trust a word he says.

This is one of those instances where bullshit takes more effort to debunk than it does to create.

We already went over how Stack Overflow was in decline before LLMs.

SaaS is not about build vs. buy, it's about having someone else babysit it for you. Before LLMs, if you wanted shitty software for cheap, you could try hiring a cheap freelancer on Fiverr or something. Paying for LLM tokens instead of giving it to someone in a developing country doesn't really change anything. PagerDuty's value isn't that it has an API that will call someone if there's an error, you could write a proof of concept of that by hand in any web framework in a day. The point is that PagerDuty is up even if your service isn't. You're paying for maintenance and whatever SLA you negotiate.
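To make that concrete, here is roughly what such a proof of concept could look like. This is my own sketch, with Sinatra and the twilio-ruby gem as assumed choices; the code is the easy part, which is exactly the point.

    # Toy "call someone when an error comes in" endpoint (sketch only).
    # Credentials and phone numbers are assumed to live in env vars.
    require 'json'
    require 'sinatra'
    require 'twilio-ruby'

    post '/alert' do
      alert = JSON.parse(request.body.read)
      warn "incoming alert: #{alert['summary']}"

      client = Twilio::REST::Client.new(ENV['TWILIO_SID'], ENV['TWILIO_TOKEN'])
      # Ring whoever is on call; the TwiML URL (hypothetical here) tells
      # Twilio what to say when they pick up.
      client.calls.create(
        to:   ENV['ON_CALL_NUMBER'],
        from: ENV['TWILIO_FROM_NUMBER'],
        url:  'https://example.com/twiml/read_alert'
      )
      status 202
    end

What you can't vibe-code in a day is this endpoint staying up when your own infrastructure is down, escalation when nobody answers, and deduplication so it doesn't ring twice at 3am. That's the maintenance and SLA the subscription is buying.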

Steve Yegge's detachment from reality is sad to watch.


The human capacity for self-delusion will never cease to amaze me.

So much Noise...

Too many people are running an LLM or Opus in a code cycle or on a new set of Markdown specs (sorry, Agents), getting some cool results, and then writing thought pieces on what is happening to tech... it's just silly and far too driven by the immediate news cycle (moltbot, gastown, etc., really?)

Reminds me of how the current news cycle in politics has devolved into hour-by-hour introspection with no long view or clear-headed analysis - we lose attention before we even digest the last story - oh the nurse had a gun, no he spit at ICE, masks on ICE, look at this new angle on the shooting, etc. Just endless tweet-level thoughts turned into YouTube videos and 'in-depth' but shallow thought pieces.

It's impossible to separate the hype from the baseline chatter, let alone figure out what the real innovation cycle is and where it is really heading.

Sadly this has more momentum than the actual tech trends and serves to guide them chaotically in terms of business decisions - then, when confused C-suite leaders who follow the hype make stupid decisions, we blame them... all while pushing their own stock picks...

Don't get me started on the secondary Linkedin posts that come out of these cycles - I hate the low barrier to entry in connected media sometimes.. it feels like we need to go back to newspapers and print magazines. </end rant>


Steve Yegge is hella smart, and I've spent many hours digging into his recent work on GasTown and Beads, but he needs to read up on business strategy.

I'd recommend starting with Stratechery's articles on Platforms and Aggregators[0], and a semester-long course on Porter's Five Forces[1].

[0]https://stratechery.com/2019/shopify-and-the-power-of-platfo...

[1]https://en.wikipedia.org/wiki/Porter%27s_five_forces_analysi...


"These systems are, in a meaningful sense, crystallized cognition, a financial asset, very much like (as Brendan Hopper has observed) money is crystallized human labor."

The latter part of this sentence is basically the labor theory of value. Capital Vol. 1 by Karl Marx discusses this at length deriving the origin of money, though I believe others like Ricardo and Smith also had their own versions of this theory.



