Hacker News

It's a logical presumption. Researchers discover things. AGI is a researcher that can be scaled, research faster, and requires no downtime. Full stop. If you don't find that obvious, you should probably figure out where your bias is coming from. Coding and algorithmic advance does not require real world experimentation.


> Coding and algorithmic advance does not require real world experimentation.

That's nothing close to AGI though. An AI of some kind may be able to design and test new algorithms because those algorithms live entirely in the digital world, but that skill isn't generalized to anything outside of the digital space.

Research is entirely theoretical until it can be tested in the real world. For an AGI to do that it doesn't just need a certain level of intelligence, it needs a model of the world and a way to test potential solutions to problems in the real world.

Claims that AGI will "solve" energy, cancer, global warming, etc all run into this problem. An AI may invent a long list of possible interventions but those interventions are only as good as the AI's model of the world we live in. Those interventions still need to be tested by us in the real world, the AI is really just guessing at what might work and has no idea what may be missing or wrong in its model of the physical world.


If AGI has human capability, why would we think it could research any faster than a human?

Sure, you can scale it, but if an LLM takes, say, $1 million a year to run an AGI instance, but it costs only $500k for one human researcher, then it still doesn’t get you anywhere faster than humans do.

It might scale up, it might not, we don’t know. We won’t know until we reach it.

We also don’t know if it scales linearly, or if its learning capability and capacity will be able to support exponential capability increase. Our current LLMs don’t even have the capability of self-improvement or learning: they can accumulate additional knowledge through the context window, but the models are static unless you fine-tune or retrain them. What if our current models were ready for AGI but these limitations are stopping it? How would we ever know? Maybe it will be able to self-improve, but it will take exponentially larger amounts of training data. Or exponentially larger amounts of energy. Or maybe it can become “smarter” but at the cost of being larger, to the point where the laws of physics mean it has to think slower: 2x the thinking but 2x the time, could happen! What if an AGI doesn’t want to improve?

Far too many unknowns to say what will happen.


> Sure, you can scale it, but if an LLM takes, say, $1 million a year to run an AGI instance, but it costs only $500k for one human researcher, then it still doesn’t get you anywhere faster than humans do.

Just from the fact that the LLM can/will work on the issue 24/7 vs a human who typically will want to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.


This assumes that all areas of research are bottlenecked on human understanding, which is very often not the case.

Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.

An LLM would not be able to do 24/7 work in this case, and would only save a few hours per day at most. Scaling up to many experiments in parallel may not always be possible, if you don't know what to do with additional experiments until you finish the previous one, or if experiments incur significant cost.

So an AGI/expert LLM may be a huge boon for e.g. drug discovery, which already makes heavy use of massively parallel experiments and simulations, but may not be so useful for biological research (perfect simulation down to the genetic level of even a fruit fly likely costs more compute than the human race can provide presently), or research that involves time-consuming physical processes to complete, like climate science or astronomy, that both need to wait periodically to gather data from satellites and telescopes.


> Imagine a field where experiments take days to complete, and reviewing the results and doing deep thought work to figure out the next experiment takes maybe an hour or two for an expert.

With automation, one AI can presumably do a whole lab's worth of parallel lab experiments. Not to mention, it would be more adept at creating simulations that obviate the need for some types of experiments, or at least reduce the likelihood of dead-end experiments.


Presumably ... the problem is this is an argument that has been made purely as a thought experiment. Same as gray goo or the paper clip argument. It assumes any real world hurdles to self improvement (or self-growth for gray goo and paper clipping the world) will be overcome by the AGI because it can self-improve. Which doesn't explain how it overcomes those hurdles in the real world. It's a circular presumption.


What fields do you expect these hyper-parallel experiments to take place in? Advanced robotics aren't cheap, so even if your AI has perfect simulations (which we're nowhere close to) it still needs to replicate experiments in the real world, which means relying on grad students who still need to eat and sleep.


Biochemistry is one plausible example. DeepMind made huge strides in protein folding, satisfying the simulation part, and in vitro experiments can be automated to a significant degree. Automation is never about eliminating all human labour, but about how much of it you can eliminate.


Only if it’s economically feasible. If it takes a city sized data center and five countries worth of energy, then… probably not going to happen.

There are too many unknowns to make any assertions about what will or won’t happen.


> ...the fact that the [AGI] can/will work on the issue 24/7...

Are you sure? I previously accepted that as true, but, without being able to put my finger on exactly why, I am no longer confident in that.

What are you supposed to do if you are a manically depressed robot? No, don't try to answer that. I'm fifty thousand times more intelligent than you, and even I don't know the answer. It gives me a headache just trying to think down to your level. -- Marvin to Arthur Dent

(...as an anecdote, not the impetus for my change in view.)


>Just from the fact that the LLM can/will work on the issue 24/7 vs a human who typically will want to do things like sleep, eat, and spend time not working, there would already be a noticeable increase in research speed.

Driving from A to B takes 5 hours; if we get five drivers, will we arrive in one hour or in five? In research there are many steps like this (in the sense that the time is fixed and independent of the number of researchers, or even of how much better one researcher can be than the others), so adding something that neither sleeps nor eats isn't going to make the process more efficient.

I remember when I was an intern and my job was to incubate eggs and then inject the chicken embryos with a nanoparticle solution to look at under a microscope. In any case, incubating the eggs and injecting the solution weren't limited by my need to sleep. Additionally, our biggest bottleneck was getting the process approved by the FDA, not the fact that our interns required sleep to function.
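The fixed-duration-step argument above is essentially Amdahl's law: adding workers (human or AGI) only speeds up the parallelizable fraction of a pipeline, so total speedup is capped by the serial part. A minimal sketch; the 50% serial fraction and worker counts below are illustrative numbers, not figures from this thread:

```python
def amdahl_speedup(serial_fraction: float, workers: int) -> float:
    """Overall speedup when only the parallelizable part benefits from more workers."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / workers)

# If half of a research pipeline is fixed-duration (incubation, regulatory
# approval), even an unlimited supply of tireless researchers can at most
# double throughput.
print(amdahl_speedup(0.5, 5))          # five workers: well under 2x
print(amdahl_speedup(0.5, 1_000_000))  # approaches, but never reaches, 2x
```

The point mirrors the driving analogy: no matter how many drivers you add, the trip still takes five hours.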


If the FDA was able to work faster/more parallel and could approve the process significantly quicker, would that have changed how many experiments you could have run to the point that you could have kept an intern busy at all times?


It depends so much on scaling. Human scaling is counterintuitive and hard to measure - mostly way sublinear - like log2 or so - but sometimes things are only possible at all by adding _different_ humans to the mix.


My point is that “AGI has human intelligence” isn’t by itself enough of the equation to know whether there will be exponential or even greater-than-human speed of increase. There’s far more that factors in, including how quickly it can process, the cost of running, the hardware and energy required, etc etc

My point here was simply that there is an economic factor that trivially could make AGI less viable over humans. Maybe my example numbers were off, but my point stands.


This is fundamentally flawed. There are upper bounds of efficiency that are laws of nature. To assume AI would be supernatural is magical thinking.


Natural intelligence appears supernatural from our current understanding, so it's not surprising that AGI also appears so.


Neither appears supernatural from a scientific understanding.


And yet it seems to be the prevailing opinion even among very smart people. The “singularity” is just presumed. I’m highly skeptical, to say the least. Look how much energy it’s taking to engineer these models, which are still nowhere near AGI. When we get to AGI it won’t be immediately superintelligent, and perhaps it never will be. Diminishing returns surely apply to anything that is energy-based?


Perhaps not, but what is the impetus of discovery? Is it purely analysis? History is littered with serendipitous invention; shower-thoughts lead to some of our best work. What's the AGI-equivalent of that? There is this spark of creativity that is a part of the human experience, which would be necessary to impart onto AGI. That spark, I believe, is not just made up of information but a complex weave of memories, experiences and even emotions.

So I don't think it's a given that progress will just be "exponential" once we have an AGI that can teach itself things. There is a vast ocean of original thought that goes beyond simple self-optimization.


This sounds like a romanticization of creativity.

Fundamentally discovery could be described as looking for gaps in our observation and then attempting to fill in those gaps with more observation and analysis.

The age of low-hanging-fruit, shower-thought inventions draws to a close when every field requires 10-20+ years of study to approach a reasonable knowledge of it.

"Sparks" of creativity, as you say, are just based upon memories and experience. This isn't something special; it's an emergent property of retaining knowledge and having thought. There is no reason to think AI is incapable of hypothesizing and then following up on those hypotheses.

Every AI can be immediately imparted with all expert human knowledge across all fields. Their threshold for creativity is far beyond ours, once tamed.


> It's a logical presumption. Researchers discover things. AGI is a researcher that can be scaled, research faster, and requires no downtime.

Those observations only lead to scaling research linearly, not exponentially.

Assuming a given discovery requires X units of effort, simply adding more time and more capacity just means we increase the slope of the line.

Exponential progress requires accelerating the rate of acceleration of scientific discovery, and for all we know that's fundamentally limited by computing capacity, energy requirements, or good ol' fundamental physics.
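The slope-vs-exponent distinction above can be made concrete. In this illustrative sketch (the rates and growth factor are made-up numbers), doubling researcher capacity only doubles the slope of a linear curve, while exponential progress requires the discovery rate itself to compound:

```python
def discoveries_linear(rate_per_researcher: float, researchers: int, years: int) -> float:
    """More researchers (or AGI instances) raise the slope, not the exponent."""
    return rate_per_researcher * researchers * years

def discoveries_compounding(rate: float, growth: float, years: int) -> float:
    """Exponential progress: each year's discoveries accelerate next year's rate."""
    total = 0.0
    for _ in range(years):
        total += rate
        rate *= 1 + growth  # the rate of discovery itself grows
    return total

# Doubling headcount doubles output, but the curve stays a straight line.
assert discoveries_linear(1.0, 200, 10) == 2 * discoveries_linear(1.0, 100, 10)
```

Only the compounding model ever bends the curve upward, and whether the `growth` term can stay positive is exactly what compute, energy, and physics may limit.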


Prove it.



