
The logo is slightly creepy

Neat study, but I always chuckle at these, because has there ever been verified science that shows exercise is unhealthy? (besides overtraining)

The general consensus should just be exercising is good for you, that's it, done.


>... has there ever been verified science that shows exercise is unhealthy?

Yes, the extremes of endurance have certainly been shown to have a negative effect on heart health, and possibly also colon health, but the amount of exercise required to get into the danger zone here is so high almost no one that isn't a competitive athlete would achieve it. (Although, amateur marathon runners might.)


You could look at the inverse: Not exercising causes the brain to look older. Knowing all of the ways not exercising is harmful is probably a good thing.

But I agree, it would be better if everyone exercised!


And I chuckle at these types of comments.

We know that exercise is good for us, but studying it is how we better understand the different ways it is beneficial for us in a controlled setting.

I see these comments online a lot. Just because something is common knowledge doesn't mean we fully understand it, nor should we stop studying it.


Good point, but has anyone shown that gravity stops working under general-relativity conditions? Each new study moves the needle of proof, so the burden to disprove grows. That seems fine by me; it's nice to see the benefits of exercise confirmed again and again.

True, you can do almost anything if find is allowlisted.

find / -exec sh -c 'whatever u wanna do' \;
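To make that concrete, here is a benign sketch of why an allowlisted `find` is effectively an allowlisted everything (the paths are just placeholders; run against `.` so it is harmless):

```shell
# find's -exec runs arbitrary commands, so an allowlist that permits
# find implicitly permits any program on the system.
find . -maxdepth 0 -exec sh -c 'id && uname' \;

# Or drop into an interactive shell; -quit stops after the first match
# so you get exactly one shell instead of one per file.
find . -maxdepth 0 -exec /bin/sh \; -quit
```

If `find` is run via sudo, the spawned shell inherits those privileges, which is the usual reason this shows up in privilege-escalation write-ups.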


It's true that most problems can be solved with context + prompt. I have seen teams within large organizations inflate it into complex "agentic orchestration" just to impress leadership who lack the expertise to realize it isn't even necessary. Hell, there are various startups that make this their moat.

Good for promo projects though, lol


The metaphor is broken. The snowball grows passively over time naturally, but being a founder requires you to actively create value in your startup. Snow doesn't choose to stick to your ball based on PMF, and the entire piece romanticizes grinding without once mentioning customers, revenue, or whether you're solving a real problem people will pay for.

I think it's a dangerous sentiment to say that if you create a snowball (startup) and just keep pushing it forever, it is guaranteed to grow into something large. Some might say "duh, of course", but I still think a lot of people don't understand this.


Yeah, but it's a metaphor for the creation process. It's perhaps a bit light on obstacles, but it's not a bad metaphor for the business-creation journey.

I would perhaps point out that this is not a VC business journey; that snowball looks very different.

And sure, the business starts in an easy environment (lots of snow on the ground), but the idea of starting alone resonates.

And it leaves out the sun. That pesky sun which causes 9 out of 10 snowballs to melt. The sun, which melts the snow around you even as you struggle to push. Your direction is meaningless if you insist on pushing away from the snow.


> The snowball grows passively over time naturally

Only if you push it down the mountain. Then it’s also susceptible to crashing and breaking down.

Normally, you have to push the snowball manually. The bigger it gets, the more people you need pushing it in a coordinated manner.

I think it’s an excellent metaphor.


You’re entitled to your opinion, but I don’t think that’s what I wrote.

> I think it's a dangerous sentiment to say that if you create a snowball (startup) and just keep pushing it forever, it is guaranteed to grow into something large.

You first have to find somewhere that involves pushing it mostly downhill instead of uphill. Otherwise this turns into the tale of Sisyphus.


Yes. But a snowball is easy to push uphill when it’s small and the snow sticks to it. As it gets bigger, you can still go uphill, you just have to be strategic about it (as mentioned in the story). But small snowballs can go uphill all day long; they just have to make it to the top of the hill before they get too big.

> But a snowball is easy to push uphill when it’s small and the snow sticks to it.

Depends how steep it is. In this metaphor, I guess we're talking about something like product market fit.


Stay away from metaphors. They operate above your level of reasoning.

It's ironic that doomscrollable social media feeds are built for low attention spans, because this website is the opposite. Gave up after 20 seconds.

Yann LeCun is a legend, no doubt, but his critique is starting to become outdated. He wants the whole world to slow down and wait for his preferred paradigm, but everyone else is out shipping instead. We have LLMs that pass bar exams now and do multi-step "reasoning".

I am an LLM reasoning skeptic too, but the problem is that he's dismissing real, measurable progress while not yet demonstrating his own alternative approach.


He's starting to sound like the stochastic parrot crowd, shaking fists from the sidelines while everyone and their dog throws these systems into a pot and sees what they can cook up.

> The bifurcation is real and seems to be, if anything, speeding up dramatically. I don't think there's ever been a time in history where a tiny team can outcompete a company one thousand times its size so easily.

Slightly overstated. Tiny teams aren't outcompeting because of AI, they're outcompeting because they aren't bogged down by decades of technical debt and bureaucracy. At Amazon, it will take you months of design, approvals, and implementation to ship a small feature. A one-man startup can just ship it. There is still a real question that has to be answered: how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.


> how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.

Ultimately, it's the same way you ship human-generated code at scale without causing catastrophic failure: by only investing trust in critical systems to people who are trustworthy and have skin in the game.

There are two possibilities right now: either AI continues to get better, to the point where AI tools become so capable that completely non-technical stakeholders can trust them with truly business-critical decision making, or the industry develops a full understanding of their capabilities and is able to dial in a correct amount of responsibility to engineers (accounting for whatever additional capability AI can provide). Personally, I think (hope?) we're going to land in the latter situation, where individual engineers can comfortably ship and maintain about as much as an entire team could in years past.

As you said, part of the difficulty is years of technical debt and bureaucracy. At larger companies, there is a *lot* of knowledge about how and why things work that doesn't get explicitly encoded anywhere. There could be a service processing batch jobs against a database whose URL is only accessible via service discovery, and the service's runtime config lives in a database somewhere, and the only person who knows about it left the company five years ago, and their former manager knows about it but transferred to a different team in the meantime, but if it falls over, it's going to cause a high-severity issue affecting seven teams, and the new manager barely knows it exists. This is a contrived example, but it goes to what you're saying: just being able to write code faster doesn't solve these kinds of problems.


> There is still a real question that has to be answered: how do you safely let your company ship AI-generated code at scale without causing catastrophic failures? Nobody has solved this yet.

It's very simple. You treat the AI as a junior and review its code.

But that awesomely complex method has one disadvantage: having to do so means you can't brag about the 300% performance improvement your team got from committing AI code straight to the master branch without looking.


I swear that in a month at a startup I used to build what takes a year at my current large-corp job. AI agents don't seem to have sped up the corporate process at all.

> AI agents don't seem to have sped up the corporate process at all.

I think there's a parallel here between people finding great success with coding agents vs. people swearing it's shit. When prodded, it turns out that some are working on good code bases while others work on shit code bases. It's probably the same with large corpos. Depending on the culture, you might get such convoluted processes and so much "assumed" internal knowledge that agents simply won't work out of the box.


I’ve driven an EV for 5 years now, and I still occasionally think something is wrong with my car, instinctively lol

Great application of first principles. I think it's totally reasonable too, even at most production loads. (Example: my last workplace had a service that constantly roared along at 30k events per second, and our DLQs would at most have on the order of hundreds of messages in them.) We would get paged if a message sat in the queue for more than an hour.

The idea is that if your DLQ has consistently high volume, there is something wrong with your upstream data, or data handling logic, not the architecture.


What did you use for the DLQ monitoring? And how did you fix the issues?

We strictly used AWS for everything and always preferred AWS-managed services, so we always used SQS (and its built-in DLQ functionality). It made it easy to configure throttling, alerting, buffering, concurrency, retries, etc., and you could easily use the UI to inspect the messages in a pinch.

As far as fixing actual critical issues: usually the message inside the DLQ had a trace that was revealing enough, although it wasn't always that trivial.

The philosophy was either:

1. fix the issue, or

2. swallow the issue (rarer),

but either way, make sure that message never comes back to the DLQ again.
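For reference, the SQS setup described above can be sketched with the AWS CLI. The queue names, account ID, region, and SNS topic here are hypothetical placeholders, not anything from the original comment:

```shell
# Attach a DLQ to the main queue via a redrive policy: after 5 failed
# receives, SQS automatically moves the message to the DLQ.
aws sqs set-queue-attributes \
  --queue-url https://sqs.us-east-1.amazonaws.com/123456789012/events \
  --attributes '{"RedrivePolicy":"{\"deadLetterTargetArn\":\"arn:aws:sqs:us-east-1:123456789012:events-dlq\",\"maxReceiveCount\":\"5\"}"}'

# Page on-call when the oldest message in the queue is over an hour old,
# using the built-in ApproximateAgeOfOldestMessage metric.
aws cloudwatch put-metric-alarm \
  --alarm-name events-message-age \
  --namespace AWS/SQS \
  --metric-name ApproximateAgeOfOldestMessage \
  --dimensions Name=QueueName,Value=events \
  --statistic Maximum --period 300 --evaluation-periods 1 \
  --threshold 3600 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:oncall
```

Pointing the same age alarm at the DLQ itself is one way to catch messages that keep failing and landing there.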

