Where will artificial general intelligence come from? (docs.google.com)
389 points by nshr on Sept 2, 2017 | 240 comments


I gave this talk a while ago to a small group of attendees. It was not recorded (I saw some ask below). It's based on a document I wrote a while ago called "You suck at writing AI" (never published). The basic argument was that people are comically inadequate at writing complex code. You can't write the code to detect a cat in an image, and the correct thing to do is to give up, write down an objective that measures the desiderata, and pay with compute to search a function space for solutions. In the same vein, the idea of writing an AGI and all of its cognitive machinery is preposterous, and the correct thing to do is to give up, think about the objective, and search the program space for solutions. Unfortunately, the mindset of decomposition by function (see Brooks ref), which has worked so well for us in so many areas of scientific inquiry, is just about the most misleading mindset when it comes to AGI.
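To make the "pay with compute" point concrete, here's a toy sketch (all names and numbers are invented for the illustration): instead of hand-writing the function, we state an objective and let blind hill-climbing search parameter space for us.

```python
import random

# A hidden target we pretend we can't hand-code; it's trivially
# y = 3x + 1 here only so the sketch stays checkable.
data = [(x, 3.0 * x + 1.0) for x in range(-5, 6)]

def objective(params):
    # The objective measures the desiderata: squared error on the data.
    a, b = params
    return sum((a * x + b - y) ** 2 for x, y in data)

def random_search(steps=20000, seed=0):
    # Pay with compute: blind hill-climbing over parameter space.
    rng = random.Random(seed)
    best = [rng.uniform(-10, 10), rng.uniform(-10, 10)]
    best_loss = objective(best)
    for _ in range(steps):
        cand = [p + rng.gauss(0, 0.1) for p in best]
        cand_loss = objective(cand)
        if cand_loss < best_loss:  # keep a candidate only if it scores better
            best, best_loss = cand, cand_loss
    return best, best_loss

params, loss = random_search()
```

Nobody "wrote" the slope or the intercept; the objective plus compute found them. Swap the parameter pair for a program space and the point carries over (minus the convexity, of course).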


Plot twist: figuring out that objective function may prove as intractable as the original problem! We'll need an objective function writing objective function, and then it's turtles all the way down.


The main objective function in nature is very simple though: maximize the number of copies of your genes. Such an objective, enforced in a resource-constrained multi-agent world (as suggested in the slides), could really lead to quite complex sub-objectives which may well lead to general intelligence. For example, if each agent can process information and perform work, it follows that individuals with better cooperative abilities also have greater reproductive success. Cooperation is, however, extremely complex: it requires communication, identification of other individuals, establishment of trust, early detection of betrayal, etc. The necessity of modeling the actions of other agents alone provides plenty of correction signals toward general intelligence, because modeling other agents is such a difficult task.
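A toy sketch of this (parameters entirely made up): the only rule is "copies are proportional to fitness", and the adapted trait emerges without anyone specifying it.

```python
import random

def fitness(gene, optimum=0.8):
    # Closer to the (arbitrary) environmental optimum -> more offspring.
    return max(0.0, 1.0 - abs(gene - optimum))

def generation(pop, rng, mut=0.02):
    weights = [fitness(g) for g in pop]
    # "Maximize copies of your genes": offspring drawn in proportion to fitness.
    kids = rng.choices(pop, weights=weights, k=len(pop))
    # Mutation keeps the search going.
    return [min(1.0, max(0.0, g + rng.gauss(0, mut))) for g in kids]

rng = random.Random(0)
pop = [rng.random() for _ in range(300)]  # random gene values in [0, 1]
for _ in range(100):
    pop = generation(pop, rng)
mean_gene = sum(pop) / len(pop)  # drifts toward the optimum
```

The multi-agent, resource-constrained version from the slides is the same loop with the fixed fitness function replaced by interactions between agents.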


I don't know that "maximize descendants of self" is necessarily right though. It seems like a better statement might be "maximize the development of complexity in the universe". Just as an example, lots of people choose to forgo having children to focus on contributing to the universe in other ways (myself included). This isn't just a self-centered drive for fame/wealth/etc either, as many people pursue their quests in poverty/obscurity, and some even choose anonymity.


Well, for biological organisms, it's all about reproductive success. I mean, what exists today reflects what managed to reproduce, and how well. Overall, that has created lots of complexity. But that's just because there are so many niches and ways to be successful in them.

What you say about people reflects cooperative behavior that drives reproductive success for shared gene complexes.


It took nature four billion years to invent humans, who are actually - if we're honest - pretty terrible as an example of workable AGI.

In fact what nature invented was a persistent colony organism with external memory.

Wild solo humans are only a little smarter than wolves individually, but being able to share and externalise invention and learning created a massive advantage.

Humans are successful because although only a tiny minority of individuals are any good at invention, the fact that information persists and is shared means the entire population benefits.

The problem for AI is modelling the learning and invention process. Classifiers and recognisers are getting better, but they're not really learning in the human sense, which is a combination of abstraction, mimicry, and occasional invention.

IMO there's no chance of AGI developing until there's a persistent, transferable, abstracted model generated as an output from classifier systems and other learning machines which is a symbolic - not just a statistical - summary of the learning.


Transfer learning is a thing (one NN learning from another, or from multiple NNs); so are large ontologies representing billions of facts.


> until there's a persistent, transferable, abstracted model [...]

It can be compressed to "until AIs can talk".


... to other AIs.


or "... to itself"


> for biological organisms, it's all about reproductive success

Another way of putting it - the source of meaning is life, or death (prolonging life, avoiding death as much as possible). Reproduction is just the start of life. From this game of life and death come reward signals that teach us how to act in the world (our values).


There is no guarantee that pursuing complexity as a goal in itself will lead to intelligence. The one thing going for mannigfaltig's proposal is that it has been known to work, though very inefficiently, and we don't have enough examples to estimate the yield. One might suggest that having the right definition of complexity would produce the desired result, but coming up with that definition takes us right back to CuriouslyC's point.


I'll give you a simple definition: maximize diversity (differentiation) and integration. They are opposites, to a degree - so there is a tradeoff, with a maximum somewhere in the middle for both diversity and integration. This idea comes from the Integrated Information Theory of consciousness (Giulio Tononi, Christof Koch).

In a neural net we do just that - maximize diversity by splitting the signal over many neurons, each with different weights, computing different things. Integration is maximized by the mixing together of signals from other neurons and training them together with a common loss function.

Even the internet as a medium requires diversity and integration to be successful. For example, net neutrality is related to diversity. Integration is undermined by national firewalls, copyright barriers, the filter-bubble effect (where one sees only content from parts of the internet one agrees with), walled gardens (like the app stores), and other things that cut the connections between people.

You can apply diversity and integration to other fields as well, for example, in politics/governance. We can compare a federal system (more diversity) with a centrally planned system (less diversity) and see the effects. With integration - we can compare free trade with regulated trade. The same principles apply to free speech - where diversity and integration are basically promised by the constitution.
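As a toy illustration (with metrics I made up for the sketch - not IIT's actual phi): take a layer's weight vectors, score diversity as how different the neurons are from each other, and integration as how evenly each neuron mixes its inputs.

```python
import math
import random

def diversity(weights):
    # Mean pairwise distance between neuron weight vectors (invented metric).
    n = len(weights)
    dists = [math.dist(weights[i], weights[j])
             for i in range(n) for j in range(i + 1, n)]
    return sum(dists) / len(dists)

def integration(weights):
    # Entropy of each neuron's mixing profile (invented metric):
    # 0 if one input dominates, maximal if all inputs mix evenly.
    scores = []
    for w in weights:
        total = sum(abs(x) for x in w) or 1.0
        probs = [abs(x) / total for x in w]
        scores.append(-sum(p * math.log(p) for p in probs if p > 0))
    return sum(scores) / len(scores)

rng = random.Random(0)
random_layer = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(16)]
clone_layer = [[1.0] * 8 for _ in range(16)]  # identical neurons, even mixing

d_rand, i_rand = diversity(random_layer), integration(random_layer)
d_clone, i_clone = diversity(clone_layer), integration(clone_layer)
# The clone layer maximizes integration but has zero diversity;
# the random layer trades some integration for diversity.
```

The interesting regime is neither extreme, which is the tradeoff argument above in miniature.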


Well, maybe, but it looks very speculative to me. I think anything deserving the label 'definition' would have to be much more definite than that.


There's no guarantee this will favour the development of intelligence though. To take examples from nature, flighted birds are so optimised for weight that they would never develop large, heavy enough craniums to support sentient brains. Many selective criteria may turn out to favour optimisation patterns that could actually exclude general intelligence, or at least might effectively close off evolutionary paths that could lead to it.


Except that ravens and other corvids seem to be self-aware and highly intelligent. Still, it's correct that evolving things does not guarantee high intelligence.

Insects and especially social insects like ants are good examples of very successful survivors with very little general intelligence.


As another user TheOtherHobbes noted, humans share this characteristic with insects to some extent:

> Wild solo humans are only a little smarter than wolves individually, but being able to share and externalise invention and learning created a massive advantage.

This seems to resonate with a strain of philosophy of mind (https://plato.stanford.edu/entries/content-externalism/) which deals with our mental content being distributed not only across the brain and body, but also on paper, in computers, and in relations with other people.


On the other hand, one can possibly steer the evolution towards a direction that enables general intelligence to evolve. In nature there are many local minima due to biological constraints (body weight, cranium size, birth channel size, predators, payoff between energy investment into large neural networks vs large muscles, ecological niches). In a simulation one can probably avoid many of these local minima by changing the rules that govern the simulation.


I don't think that we'd want that sort of AI, as a competitor. But if we could become one, that would be cool.


Unless it were possible for each objective function to converge to a next one that slightly extends the current one. However, what would the starting point be? Is this mathematically definable? What does the (im)possibility to define such a function tell us about AI?


> However, what would the starting point be?

The starting place is simulation. We use games (and other kinds of sims) to learn to act intelligently in a virtual environment. In such a place we can define many tasks and a learning curriculum.


Make them hierarchical. Starting point is to survive.


Bacteria are good at surviving.


Can you think of a better starting point?


There are at least two huge advantages of the decomposition by function approach compared to the black box solution space search approach:

- The former can be debugged. If your image analysis system embarrassingly tags black people as gorillas, you can go in and find the bug and fix it. That's not so easy if you're using a black box model.

- The black box approach is tremendously enervating. You code up a neural network architecture, launch a bunch of cloud GPU instances, and start training. If the results are bad... you try a new architecture. In the function decomposition world, you can actually use your knowledge and understanding of the system as an engineer to figure out what went wrong and why.


Would you be willing to write this talk out in longer form? I realize it's a lot of work, but the slides are fantastic and even an intermediate textual version that just fills in some of the explanatory gaps would be fascinating.


Fortunately you don't have to write an AGI yourself. We have a very powerful tool called evolution that can do all the heavy lifting for us, we just have to set up the environment and the goals. I'm pretty sure we could create an AGI, given enough computational power and time, we're basically hardware-limited.


Nature had a planet and several billion years. Of course the goal wasn't AGI, it was survival, and horseshoe crabs have done a pretty good job of that. So have beetles, with their large number of species. How would you select for only AGI? It would be like selecting for only the greatest eyes (mantis shrimp), but doing it over the entire tree of life. You still need a way to narrow down to the best eyes.


Survival is not a goal, it's just the score keeping system.


What, oh wise one, is the goal?


Nothing. Evolution came about randomly, not by intent, so it has no goal.


> Nothing. Evolution came about randomly, not by intent, so it has no goal.

Evolution created a goal from nothing, which is self-replication. It works on many levels - self replicating DNA, self replicating cells, self replicating ideas (memes), self replicating ecosystem, even the economy has become a self replicator.


But if the goal is to use evolution as model for generating AGI, then the problem is narrowing down all the surviving species to the ones that result in AGI, as opposed to a thousand other successful traits.


That's pretty meaningless - in that case the word "goal" has no meaning, because everything is just as random as evolution (which isn't actually very random, but whatever).


Evolution is not random. It requires self replication in order to transmit and evolve genes. Everything else is much more random.


Local reduction of entropy. Which seems like quite the opposite of growth. However, effectively (say, in energy per volume), it allows for growth.


^


I think that's what the person you are replying to is arguing for...


It's unfortunate that AI researchers pay so little attention to the brain (a single slide in your talk). We need more ML people evaluating ideas from neuroscience or from such initiatives as Numenta or Spaun.


Demis Hassabis (DeepMind) recently published a manifesto about the importance of neuroscience for AI.

Neuroscience-Inspired Artificial Intelligence http://www.cell.com/neuron/abstract/S0896-6273(17)30509-3


Remember Koniku? (http://koniku.io/) Biology and biotech are different beasts.

Saw their TED talk; looks promising, but it seems they've been delayed a lot.


Practical example, neural architecture search to discover convnets that require less computation, https://arxiv.org/abs/1707.07012 "Learning Transferable Architectures for Scalable Image Recognition"


Any chance you might publish the original writing? I have been skeptical of the very notion of AGI ever since hearing it described as a sort of computational or scientific panacea; it would be nice to have more discussion on the topic.


I'd be interested in what you think of my approach elsewhere in the replies.

In some ways it is a search through program space. But then I think we as intelligences do a guided search through program space as we develop. It seems important to use what information you can to guide you e.g. culture and copying other people who have already been navigating the space of programs to be.


The objective function for humans seems to be "Perfect the Human Soul" which can have conflicting optima in different contexts. Obviously no other mammal optimizes for it, either. See any slice of human culture or human activity ever for reference. Thanks for putting this slideshow together.


Nah, it's just "survive and replicate", like every other species. Except that it's harder for us than for other species, in particular because a small human child is super fragile (more so than, say, a giraffe baby), so you need a lot of care, and a lot of intelligence to provide that care.


Nah I maintain "Survive and replicate" is a sub-problem in the larger "Perfect ones' Soul" objective function.


What is a "soul"? I can define "survive and replicate" for you, on the other hand.

<rant>This exemplifies the issue I have with current-day philosophy. It's too blissfully unaware of the discoveries in AI. While they redefine consciousness for the 1000th time, the AI researchers make "Reinforcement Learning Agents" that play Go, drive cars, paint, draw and can take a pizza order from you. Philosophers, get more concrete.</rant>


Any thoughts on what role novelty search might play? ("Why Greatness Cannot Be Planned: The Myth of the Objective" by Kenneth Stanley)


Is there a video of your giving the talk? Or at least audio?


Background: I did AI and philosophy of mind in undergrad, an MSc focused on ALife, then a PhD at Yale under a MacArthur Fellow who developed the theoretical framework for the 'evolution of evolvability', where I worked on computational evolutionary biology. I can say we're not going to blindly brute-force our way forward; instead we'll need to reverse engineer nature's core algorithms to generate hard AI. Every time a major advance is made in AI the computational neuroscientists say: "why didn't you talk to us 15 years ago? We could have told you that!".

The ingredients will be embodiment, evolution, genetics (genotype-phenotype encoding), neurogenesis (gene regulatory networks directing phenotypic development from a single cell to a multicellular neural network), and ecology (evolving in adversarial and cooperative environments). And we'll need a lot of theoretical work on how to represent nature's algorithms in code. For example, my PhD work focused just on how to use evolutionary algorithms to evolve simple gene regulatory networks and how that leads to properties of modularity in the genotype-phenotype map. That alone is a life's work, but a necessary ingredient. I don't expect to see this solved in my lifetime given how we're attacking the problem (head on) today. And until then we're going to continue to run into these dark winters of AI.
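A very rough sketch of the kind of thing I mean (a toy formulation, not my actual thesis model): genes switch each other on and off through a regulation matrix, and we evolve the matrix so that "development" from a single active gene reaches a target expression pattern.

```python
import random

N = 6
TARGET = [1, 0, 1, 0, 1, 0]      # desired final expression pattern

def develop(W, steps=8):
    # Development starts from a single active gene; each step, a gene
    # turns on if its weighted regulatory input is positive.
    state = [1] + [0] * (N - 1)
    for _ in range(steps):
        state = [1 if sum(W[i][j] * state[j] for j in range(N)) > 0 else 0
                 for i in range(N)]
    return state

def fitness(W):
    # Selection sees only the developed phenotype, not the genotype W.
    return sum(a == b for a, b in zip(develop(W), TARGET))

def mutate(W, rng):
    W = [row[:] for row in W]
    i, j = rng.randrange(N), rng.randrange(N)
    W[i][j] += rng.gauss(0, 1)   # perturb one regulatory interaction
    return W

rng = random.Random(3)
best = [[rng.gauss(0, 1) for _ in range(N)] for _ in range(N)]
for _ in range(2000):            # simple (1+1) evolutionary loop
    cand = mutate(best, rng)
    if fitness(cand) >= fitness(best):
        best = cand
final_fitness = fitness(best)
```

Real models add regulatory dynamics, noise, and a richer genotype-phenotype map, but even this toy shows the indirect-encoding flavor: selection acts on the developed pattern, never on the matrix entries themselves.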


> Every time a major advance is made in AI the computational neuroscientist say: "why didn't you talk to us 15 years ago? We could have told you that!".

I'm not sure which advances you are talking about, but the modern successes of Deep Learning are primarily due to backpropagation and convolution. Neither of the two is considered to be biologically plausible.

Some people are actively trying to come up with alternatives to backpropagation that would be biologically plausible though.

See for example: Bengio et al., "Towards Biologically Plausible Deep Learning" https://arxiv.org/abs/1502.04156


Actually convnets were inspired by Fukushima's Neocognitron, which was itself inspired by visual cortex.


> Actually convnets were inspired by Fukushima's Neocognitron, which was itself inspired by visual cortex.

That doesn't contradict what I wrote.

ConvNets require the synchronization of weights between neurons, which is not considered to be biologically plausible. Some aspects of the architecture (the receptive fields, in this case) may well be plausible, with the complete architecture still implausible.
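To make the synchronization point concrete (toy sketch, made-up numbers): a 1-D convolution is just a dense layer whose weight rows are shifted copies of a single kernel, so every "neuron" must keep its copy of the kernel in sync with all the others.

```python
def conv1d(signal, kernel):
    # Valid-mode 1-D convolution (really cross-correlation, as in convnets).
    k = len(kernel)
    return [sum(kernel[j] * signal[i + j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def dense(signal, weight_matrix):
    # A plain fully connected layer: one weight row per output neuron.
    return [sum(w * x for w, x in zip(row, signal)) for row in weight_matrix]

signal = [1.0, 2.0, 0.0, -1.0, 3.0]
kernel = [0.5, -1.0, 0.25]

# Build the tied dense matrix: each output neuron uses the SAME kernel,
# just shifted to its own receptive field.
rows = []
for i in range(len(signal) - len(kernel) + 1):
    rows.append([0.0] * i + list(kernel) + [0.0] * (len(signal) - len(kernel) - i))

same_output = conv1d(signal, kernel) == dense(signal, rows)
```

The receptive-field structure (the zeros) is arguably plausible; the exact tying of the nonzero entries across rows is the part with no obvious biological mechanism.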


You're focusing too much on the implementation details and assuming that because they're different, it's not equivalent. The secret sauce here is that network topology (and threshold rules), not the implementation details, largely determines the functional properties of a network. Show an electrical engineer the circuit diagram of a 4-bit adder and they'll know its function immediately. Artificial neural networks and artificial gene regulatory networks are the same. The problem with ANNs is that we fail to see the circuitry driving the function because we assume that every w_ij != 0 is functionally relevant. Once you start to strip away spurious interactions you start to see topological patterns. I did a lot of work on this problem -- this is not an uneducated guess.
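A toy version of the pruning argument (sizes and thresholds arbitrary): bury a small functional circuit in near-zero spurious weights, prune below a threshold, and the function barely moves while the topology becomes visible.

```python
import random

rng = random.Random(7)
n_in, n_out = 10, 4

# The functional circuit: a few strong, deliberate connections
# (keys are (output, input) index pairs).
circuit = {(0, 1): 2.0, (1, 4): -1.5, (2, 7): 1.0, (3, 0): 0.8}

# Full weight matrix = circuit + spurious near-zero weights everywhere else.
W = [[circuit.get((i, j), rng.gauss(0, 0.01)) for j in range(n_in)]
     for i in range(n_out)]

def forward(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def prune(W, threshold=0.1):
    # Zero out every weight below the magnitude threshold.
    return [[w if abs(w) > threshold else 0.0 for w in row] for row in W]

x = [rng.uniform(-1, 1) for _ in range(n_in)]
full_out = forward(W, x)
pruned_out = forward(prune(W), x)

n_kept = sum(w != 0.0 for row in prune(W) for w in row)
max_err = max(abs(a - b) for a, b in zip(full_out, pruned_out))
# Only the 4 circuit weights survive, and the outputs barely change.
```

Of course real trained networks don't hand you the circuit like this; the point is only that "every nonzero weight matters" is the wrong prior.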


I didn't see a maths major in your background, but I'm surprised you seem to dance around how important back-propagation is, and how different it is from what you'd find in nature.

The major advances are the implementation details, and I think many would consider the network topology research constrained not by our imagination but by our implementations.


There are some similarities, but ConvNets really remind me of LeCun's airplane vs bird analogy. There are similarities such as learning very similar intermediate representations (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5288363/), but they are also very clearly not doing the exact same thing that a human is. It's a technological emulation, and that's good enough for me.


> Artificial Gene Regulatory Networks

I've been following AI for years but this is the first time I've encountered such a concept. So basically a cell is like a small neural net with as many neurons as genes, each gene having chemical input and output signals. That means a cell's DNA is much more dynamic than I previously imagined. It's a self-replicating m.f. computer, that's what it is. We can only dream of similar accomplishments.

Previously I was aware of the huge workload carried out by DNA - for every protein in the body, DNA replicates the blueprints - an amazing amount of fine detailed work. It's not just sitting there waiting for reproduction. Seeing it not just as a factory, but also as a neural net is another level.


>Every time a major advance is made in AI the computational neuroscientist say: "why didn't you talk to us 15 years ago? We could have told you that!"

Can you name one time that has happened? I can't think of any.


Hindsight is 20/20 there.


Bayesian brain theories of perception. The neuroscientists and cognitive scientists are still waiting for the AI theorists to stop obsessing over that one paper with the cat's visual cortex and get off the deep learning train of sacrificing correctness for cheapness to compute.


That's not a case of this actually happening.


As a matter of fact, the mainstream AI community hasn't adopted probabilistic programming, but it does exist.


You're going way off topic. This is about AI discoveries which in retrospect were found to have already been discovered by neuroscience. You're complaining that a potential theory (which is only partially supported by evidence) might one day hypothetically be found useful in AI.


Feel free to link to any Kaggle blog posts showing victories using probabilistic programming.

Is your opinion independent of evidence?


Background: Some guy on the Internet with an unrelated Bachelors degree he didn't study that hard for and has picked up a little Python and Javascript somewhere along the way.

I know of no biological forms that have evolved wheels. But wheels have turned out to be a hell of a lot faster than fins or legs. I see no reason "intelligence" has to follow the pathways or limitations of biology or neurology at all. Although certainly it may be a place to look for some ideas.


> I know of no biological forms that have evolved wheels. But wheels have turned out to be a hell of a lot faster than fins or legs.

On certain terrain. And much slower - sometimes useless - on others. Wheels aren't simply better legs, they're a different way of moving, with its own benefits and drawbacks.

Likewise, if we're talking about computational ability, we already have things that are faster than the human brain in certain areas (say, numerical computation). When we talk about AGI, we're talking about human-like intelligence (at least to a certain degree).

So we already have the wheel, but now we want to develop something that does what the leg does. Now, maybe that doesn't have to be an actual leg, but that's usually what people end up with when they try to find something that will have similar functionality to a leg.


I would also point to heavier-than-air flight. For thousands of years people tried to mimic the flapping wings of birds, but in the end it was the unnatural propeller and fixed-wing design that gave us flight.


Fixed-wing flight isn't unknown to the natural world.[0]

Flagella use propeller motion for propulsion.[1]

Lots of animals use jets.[2]

I'd consider the combustion engine to be the most novel 'human' design with regards to aircraft propulsion -- or the way we achieve jet propulsion; that's pretty unique.

[0]: https://en.wikipedia.org/wiki/List_of_soaring_birds

[1]: https://en.wikipedia.org/wiki/Flagellum

[2]: https://en.wikipedia.org/wiki/Jet_propulsion


... and a new energy source, the combustion engine.


Not a wheel but a rotary motor: topologically the same. https://en.wikipedia.org/wiki/Flagellum


If you consider rotary motion as wheels, there are some bacteria that have flagella that rotate to propel itself, complete with molecular bearings.

That said, biological evolution tends to get stuck on local maxima very easily (see convergent evolution of eyeballs), and wheels are kind of hard to evolve because of the difficulty of making large bearings biologically.

Not to mention wheels kind of suck unless you're on pavement (look at tanks, etc)


It's not about building better tools (i.e. the "artificial" wheels vs. nature's wings and legs), it's about building a thing that would be just like us, humans: intelligent and especially a thing that has feelings.

I know that the "feelings" part is most of the time neglected in these types of presentations (you cannot put feelings into any abstract representation, the way you can draw a wheel using geometry), but it's what most of us, people who are not in this field, expect from AGI. Great science fiction works (novels, movies) have talked about this subject (AI and human feelings) long enough, but it somehow never makes it into academic presentations.

Just think for a second about the self-driving car problem. Someone on HN (or maybe reddit) explained a couple of days ago that even if we managed to devise such a system that was considerably better than human drivers, we'd still not be OK with these machines killing us on the roads. Now, imagine these machines possessing the feeling of "guilt", or of "compassion"; I'm pretty sure the percentage of humans who would become OK with some AGI driving our cars would rise dramatically (I know I would stand on the AGI's side). We don't mind the fact that machines kill us - accidents happen - we mind that machines kill us and don't "realize" what they've done and don't bear the consequences.

Re-reading my comment, I feel like it sounds a little pop-sciency; I didn't mean for it to go in that direction, it's just my interpretation of how things are. I for myself would love, really love, it if we somehow "made" an AGI capable of telling genuine, funny jokes.


>I know of no biological forms that have evolved wheels.

Silly to reduce all rotary motion to wheels.

It's less complex to roll as a ball -- you get the benefit of rotary motion without the additional complexity of a steering mechanism.

> But wheels have turned out to be a hell of a lot faster than fins or legs.

Nature doesn't somehow care about hyper-optimizing for a single characteristic like 'speed'; rather, characteristics arise from the relative successes of individual genetic lines (and species).

https://en.wikipedia.org/wiki/Rotating_locomotion_in_living_...

https://en.wikipedia.org/wiki/Category:Rolling_animals


Wheels are useless if you don't have roads.

EDIT: Please explain your downvotes? Don't you see that this, coupled with there being no useful intermediate stage before a full wheel, prevents evolution from "inventing" it?


Well that's not true at all.

See: Wheelbarrow; amongst many wheeled devices I can think of immediately.


Wheelbarrows are a prime counterexample. Historically, in environments that had decent man-made trails, wheelbarrows were useful and used; in environments that did not, pack animals or even people did the job, even though the concept of the wheelbarrow was known. The availability of roads (as opposed to narrow trails) was also the prime driver of differences in usage between wheelbarrows (especially the efficient middle-wheel variety, e.g. https://www.google.lv/search?q=wheelbarrow+chinese&tbm=isch ) and two- or four-wheel animal- (or man-) pulled carts, which were popular only where roads had been made and could be maintained -- so not in jungle-like environments, where it takes more work to keep a road from overgrowing.


Wheelbarrow creatures? I don't see how that would be advantageous at all.

Also, try running around with a wheelbarrow in a dense forest.


Apparently philosophers have pondered on the matter, it's called the multiple realizability problem.


You can't really evolve a wheel in the sense that a half-evolved foot is useful already, whereas a half-evolved wheel: not really. Therefore obtaining a functional wheel has to be done in one jump, which is contrary to how the gradual process of evolution works.


Wheels are way more basic. Anything will skid and slide on a spread of rubble; it doesn't even need to be animate.


Fair point, but you are omitting the fact that the brain itself (which invented the wheel) evolved through biological means.

That said, evolutionary optimization is generally considered unreliable (convergence issues) and inefficient.


The wheel existed before the mind did. It was just sitting there waiting to be noticed :)


Not for the function the brain put it to.


The wheel, by itself, is not faster than legs. It does not adapt, repair, or feel. You will always need more than your legs to create a system that properly utilizes wheels.

There is nothing intelligent about artificial intelligence. You might as well call it artificial stupidity, but that doesn't make you sound sophisticated.

Memorizing the results of a pre-defined concept faster due to a lower barrier of entry does not make you smarter or more intelligent. You can process all of the traffic signals in the world in a blink of an eye only to get stomped by a dancing traffic director.

You can't have AI without a human, and you can't have AI with a human.


> neuroscientist say: "why didn't you talk to us 15 years ago? We could have told you that!".

Interested to hear about these. I can honestly not remember any.


There's been this tension over whether achieving apparent human levels of artificial intelligence requires thinking like and following processes associated with human learning and intelligence pretty much forever. Today, computational capability and data have gotten us to a place where there's been a big step forward while largely ignoring how it relates to human-level thought. (After all, we got powered flight by largely ignoring how birds fly.)

In fact, cognitive science largely split off as a separate field but there's a school of thought that algorithms and big data are only going to take things to a certain point and you're not going to get to things like fully autonomous vehicles under general conditions without different approaches that involve better understanding the physical world.


From what I've read, the Wright brothers' main innovation was realizing that 3-axis control was needed to maintain flight -- which they also copied from birds.

Which means that copying birds is fine, you just have to know what parts to copy (mechanisms to control direction in all 3 axes) and which to ignore (flapping wings isn't practical given the power sources and construction materials we have, but soaring on stationary wings is a good place to start).

Which in turn means that saying "Just copy nature, they've been doing this for 1bn years" or "don't bother copying nature, we need different stuff" both aren't very useful.


>Which means that copying birds is fine, you just have to know what parts to copy

The analogy between achieving AGI and achieving flight is invalid on its face and yields no useful information. Nothing can be drawn from the latter about the former.

Flight is mechanical. AGI is not. However, until now, ML and other AI techniques have focused largely on mechanics (stuffing data into an algorithm), which is why such analogies with nature are tempting. But, that approach won't suffice for AGI.

The breakthrough for AGI will come through an approach that backs away from brute force and attacks the problem from a meta-perspective; building on the core blocks of what intelligence is. The approach will be relatively simple, once the core foundation of the integrative learning process is understood, which is the much more difficult task. Execution will then involve establishing a base case that implements, not learning, but the capacity to learn. This will then be replicated across generations, compounding exponentially until what we recognize as AGI emerges.

As well, the required computing power is likely to be significantly less than is commonly believed. This raw brute-force ML-style approach again misleads us here.


Actually, previous to the Wright brothers, the dominant theories of flight involved studying how birds flew and theorizing what parts were important and trying to replicate them. You could get a degree in this area of study.

I do agree that different approaches are needed.

In the case of flight it was the realization that the airfoil is important and the rest is window dressing.


Actually before the Wright Brothers came a guy called Otto Lilienthal who figured that out.

https://en.wikipedia.org/wiki/Otto_Lilienthal


Am I right in thinking the study of fish/sharks and fluid dynamics helped with the breakthrough?


No, it was bicycles that guided the intuition that control should be based on roll rather than rudder yaw. That was the key innovation that the Wrights contributed at the time of their first flight.


Not sure why the downvote, but this is what I was thinking about:

https://en.m.wikipedia.org/wiki/Fluid_dynamics

Nothing to do with origins of flight but interesting and related to why flight works.


Why do planes have wings? And while the propeller or turbine is largely missing from birds, it is made of many small wings as well. I'm not convinced.


Don't know why you feel so compelled to give your background while writing an anonymous comment on the internet. It only makes me suspect your argument doesn't hold up without some expectation of respect for your authority. All the theory in the world is irrelevant until you can apply it to predict something new -- it's almost trivial to create a theory that describes some phenomenon accurately after the fact. See the tremendous bifurcation in physics with regard to string theory. Similarly, there are loads of academics in various fields claiming to be the originators of artificial intelligence.


Hand-engineering and hand-assembling evolved components will fail; it's way too complex. We struggle to make bug-free word processors. This will only be solved by fully evolving AGI, or some other form of automated search of a high-dimensional design space.
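To make "automated search of a design space" concrete, here is a toy genetic algorithm: instead of hand-writing a 32-bit design, we score candidates against an objective and let selection, crossover, and mutation find it. All parameters (population size, mutation rate, target) are arbitrary choices for the sketch, not a claim about real AGI search.

```python
import random

random.seed(0)  # reproducible toy run

TARGET = [1] * 32          # stand-in for "the design we can't hand-write"
POP, GENS, MUT = 50, 200, 0.02

def fitness(genome):
    # Objective function: number of positions matching the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve():
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: POP // 2]                      # selection (elitist)
        children = []
        while len(children) < POP - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(len(TARGET))          # one-point crossover
            child = a[:cut] + b[cut:]
            # Point mutation: flip each bit with small probability.
            child = [1 - g if random.random() < MUT else g for g in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # usually converges to a perfect match of 32
```

The point of the sketch is the division of labor: the human writes the tiny `fitness` function, and compute does the searching.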


I'd love to read your thesis. Would you please post a link?


What do you think the best way of attacking these problems is? It seems to me that we aren't even at the point where technology can do much to help us (in terms of recreating what we know about the brain) and we just need to understand the brain better.


"Just" that. Maybe.

The problem is that, at the risk of trivializing a lot of good research, science related to the brain/how we think/etc. has seen a whole lot more money and smart people's effort expended than useful results produced. There are still debates about many aspects of learning and cognitive science that were probably already underway when computers were made from vacuum tubes, or even earlier.


Can you link to some of your work, this is very interesting stuff


Hmm, reading your experience clarifies to me (a maths guy) why current deep learning efforts are producing real fruit from their labor. Linear regression and vector dot products are just constructs to re-create the evolutionary biology seen in nature: break a task down into abstract constructs and sift problems out with a sigmoid function. In the slow process of evolution, it's the nuances that matter, and over time they compound exponentially. Pretty neat.
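The "dot product plus sigmoid" construct mentioned above is just a single artificial neuron. A minimal sketch (the weights below are hand-picked for illustration, not learned):

```python
import math

def sigmoid(z):
    # Squashes any real number into (0, 1).
    return 1.0 / (1.0 + math.exp(-z))

def neuron(inputs, weights, bias):
    # Weighted sum (dot product) followed by the sigmoid non-linearity.
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return sigmoid(z)

# Hand-picked weights that make the neuron act like a soft AND gate:
w, b = [4.0, 4.0], -6.0
print(round(neuron([1, 1], w, b), 3))  # high only when both inputs fire
print(round(neuron([1, 0], w, b), 3))  # low otherwise
```

Learning, in this picture, is just the process of searching for weights like `w` and `b` rather than picking them by hand.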


I don't think you're wrong that AGI will have (something like) those components in its inception.

I just disagree that we have to intentionally create them -- I think there are a lot of places we can experimentally bruteforce the implementation instead of having a solid theoretical understanding if our only goal is creating an AGI.

I don't think we need any theoretical breakthroughs, per se, but rather, a lot of computing power and time. No one seems willing to take a million nodes and run them for a decade to brute force some of the mechanisms -- everyone wants results on the short timelines that grants or Wall St operate on. I get why, but that bias towards quantifiable short-term gains fundamentally limits the search algorithms we're implementing in our quest for AI, and likely means that we won't get there in a meaningful way, because intentionally implementing the necessary requirements is a fool's errand of complexity.

In short: we're not smart enough to do the simple, but slow thing, so we're trying the highly complex one with demonstrable incremental results. I expect this approach to continuously fail to develop AGI even while it demonstrates results on discrete problems.


I think you are right that we could, in theory, experimentally brute-force intelligence. But doing that basically amounts to simulating an entire universe and waiting for intelligence, and the computing power that would require is beyond even what we can imagine. And even if we could find ways of simplifying it, it still seems like a waste of our intelligence.

Like when it got cold for humans: rather than just dying and waiting until they evolved fur, they made clothes. Evolution is able to do great things but is very inefficient. We have the power to reverse engineer nature and use that knowledge to our advantage. If our goal is to create general AI, then not trying to understand the one sitting in everyone's heads, and just throwing more computing at it instead, seems like a waste of the tool that has allowed us to get this far.


I'll preface this by saying that my views don't necessarily represent a mainstream position here:

Humans already accidentally invented AI at least a few times, sort of. To talk meaningfully about this topic, I think it's better to step away from the topic of "intelligence" and instead speak about "reasoning ability". It's still fuzzy, but seems to sidestep a lot of the "can submarines swim" arguments.

Things like societies, governments, corporations, economies, etc all show similar features to machine learning algorithms and reasoning animals in their ability to dynamically problem solve -- the main difference is that they run on human computers instead of electronic computers. (Or photonic, quantum, etc computers.) Well, in modern times, they're actually sort of cyborgs, but that's a side issue.

The quest for an AGI is more about translating the notion of a corporation (or government, etc) meaningfully into fully electronic computer terms than it is in replicating the human mind. We know enough about the supra-structure of the corporation to meaningfully bound the search, and it's much more prudent to spend a decade on bruteforcing each component of a corporation than it is to navel gaze about reinventing minds. The truth is that human brains (or animal brains) aren't the only general intelligence that we know -- so reinventing them isn't the only path to AGI, though it's a worthwhile project in its own right.

I just worry that many of the meaningful contributions of the people working on reinventing minds will be lost on the birth of the actual first AGI through corporate cyberization. (And that many of us will fail to recognize corporate cyberization for what it is -- an AGI. Any (successful) corporation which can act fully by algorithm must be a general intelligence.)


Some very good points.

I mean, what is the problem we are trying to solve here? Fully emulate a human? That's worthless, we have an overabundance of humans already and it's easy to make more.

Or are we trying to come up with something that performs our tasks without us having to think or do any work and presumably makes life richer and more pleasant? Well that's a different problem, and yes, we have been automating that forever and are arguably getting better at it. (I say arguably because "richer and more pleasant" are relative as it turns out).

The argument over what constitutes "intelligence" is not an insignificant one either. My dog likely thinks I'm stupid because I don't eat butter all day and shave and go to work. And come to think of it, he might be right. Point being, we often conceive of intelligence as "having better ways to do or get what we would". But then there is abstraction, longer term planning, higher order thinking, which in the end almost certainly will be incomprehensible to humans.


1. What are the other general intelligences that we know of besides life?

2. How is it that you know that the reasoning abilities of the structures that we create are because of the structure, and not because we are the pieces that make up the larger structure?


1. Sorry if I was unclear -- the examples I listed were meant to be examples of "general intelligences", but I think corporations are a good example.

2. Well, this is somewhat more complicated to answer --

Obviously the reasoning abilities are (partly) because we're part of the structure. But in a way that we know can be decoupled from the reasoning ability of the structure -- see modern CS on which problems are tractable to mechanize and the success of ML at individual tasks -- because modern corporate theory has formulated a system of corporate governance and execution that only depends on individual people for one or a few tasks. The fixation of modern corporate management on replaceable, cheap, low-skill workers makes it a perfect template to replace components one at a time with bruteforce discovered ML components (and in fact, we already see this happening -- because it is part of the reason that corporations are designed that way).

After all, from the perspective of the corporation as a reasoning entity, increased cyberization is increased efficiency, which ultimately means increased competitiveness (and hence survival).

Electronic computers were designed to replace human computers in the military, and we have a wealth of information on their numeric and string processing capabilities. They're specifically designed to emulate in another medium some of humans' reasoning abilities, particularly those around logical deduction and numeric calculation (which in some senses are similar). Since modern corporations have reduced much of their operation to machines (eg, factories), string manipulation and calculation, it seems perfectly conceivable to have a corporation control all of its "core functions" via electronic computer after a period of increasing cyberization, with contracted delivery services and on-site technicians viewed something like a doctor.

Corporate cyberization is, if you take my first claim that corporations represent a "general" reasoning structure of made of specialized components as true, an open and direct path to an AGI that we seem to be stumbling down.


>The quest for an AGI is more about translating the notion of a corporation (or government, etc) meaningfully into fully electronic computer terms than it is in replicating the human mind.

That's an astoundingly terrible idea, considering what powerful corporations and governments have done to the world.


I didn't mean it as a normative statement, just one about the technology required -- I share your concerns.


Perhaps an important warning that AI could go the same way...


Artificial general intelligence won't be invented, it will emerge, just as general intelligence did in the wild. AGI is just going to be a hierarchical arrangement of specialized tools.

The first step is highly specialized AI tools for very specific problem domains.

The second step is AI tools that use other AI tools as components but address slightly larger problem domains.

The third and successive steps are recursions of the second step.

Additionally, we won't be able to tell right away when we've crossed the threshold. We can't even say for sure where "intelligence" stops in animals. We used to think we were the line, but now the bar has been pushed down to include primates, cetaceans, a number of birds, possibly some members of the bear family, etc. The reality is that it is a gradient and there is no clear line.


> Artificial general intelligence won't be invented, it will emerge, just as general intelligence did in the wild.

I think so as well, but for a specific reason: people are not as "AGI" as we think, and our general intelligence is the result of immense neuron-computing resources committed to satisfying a few simple drives. In other words, narrow AI with a lot of resources. Not to mention many specialized subcomponents.

And to the degree that consciousness is a selected trait, that means it has a purpose. I think the work on attention/focus in neural networks hints at that purpose.


Why do you think consciousness is a selected trait?


Well, I don't know what consciousness is, so I couldn't answer that. But I think the opposite stance -- consciousness as some kind of majestic oracle -- is silly. There is a reason why we perceive consciousness, even if it is just a side effect of something else, or a useless random fluke, or an illusion for coordinating our limbs, or whatever.

These types of discussions do get pretty wonky. When I say consciousness is selected for, I mean more whatever practical apparatus (if any) it is emergent from, not the perception of consciousness itself.


Do we perceive consciousness? Or is consciousness perception itself?

It sounds to me like you're referring to cognition rather than consciousness.


It's interesting how people come out of the woodwork with their personal theories on AGI. Do you/ we even really know how general intelligence works, or even how it emerged i.e. incrementally, or in a dramatic mutation more recently? Last I checked there wasn't a scientific consensus on either topic. To then come out headstrong and say "AGI will be like X" always makes these AGI conversations a tad farcical.


Intelligence clearly isn't a one-off/recent thing, since we observe remarkably intelligent behavior from cephalopods, which are vastly distant in the tree of life and not recent from an evolutionary perspective. We also know intelligence is clearly not a binary attribute, from many animal and human studies.

The fact that we don't know how higher-order intelligence works in general is exactly why it will be emergent rather than designed.

You shouldn't worry so much about consensus, but instead use your senses and your brain to make up your own mind. That is the approach that gave us the enlightenment.


Ok, but I am talking about general human intelligence when I say AGI and "recently emerged", not mollusks. You could easily argue we will one day find out how higher-order intelligence emerged; some researchers already have models for that, as anyone who has read a college anthropology textbook knows. They may not be right, but it's not out of the realm of possibility that it emerged recently, due to new structures ("design") coming about in a relatively short time period. So the claim "Since we don't know how it happened, it must be like X" is flawed.

You know what French Enlightenment thinkers were also against? Making headstrong claims (as an authority) without empirical evidence on your side :)


I think "what humans are" is not a useful definition of intelligence. Not only is it not useful, but it's likely to lead us down a blind alley in intelligent systems research. Kind of like if we assumed when trying to build a flying machine that the only way it could work is if it flapped its wings.

Human intelligence definitely did not arise "in one day" and any anthropologist positing such a theory hasn't looked at the last 30 years of research in statistical genetics. Intelligence is an incredibly complex trait resulting from the interaction of many, many genes.


Surely it isn't too much trouble to mentally insert "My intuition is that" before the GP's interesting comment. When dealing with a hard problem, whose solution is many steps removed, we routinely use educated intuition to guide us. It seems to me that insisting on empiricism in this space, at this time, would practically mean halting most thought on the topic.


This. This, I believe, is how it's going to turn out: specialized AIs cooperating and creating higher-level abstractions. It may not be AGI at first, but practical AI that could be used for real-world applications where human intelligence is usually needed.


I'm working on a technology that I think might enable either IA or AI. Basically, intelligences manage their own programs and the computational resources allocated to those programs, so I'm looking at doing that with markets.

With IA the user acts as feedback to the market about what is good or bad. Ideally it would act as an external brain lobe. More information on my approach is on this blog https://improvingautonomy.wordpress.com/2017/07/25/why-study...


This might be a better link. It explains how we might get to IA. TL;DR: it is a mix of machine learning with different parameters and inputs/outputs, plus language translation into programs, with the economy acting as the force guiding this evolving set of programs.

https://improvingautonomy.wordpress.com/2017/08/22/a-possibl...


Interesting idea. Instead of thinking of computing as an authoritarian schema, it could be a community schema, using a kanban or currency system to communicate resource needs between units.

It's also similar to negotiating memory over commitment in virtualization. VMWare's driver on the VM "inflates a memory bubble" to communicate host memory constraints to all client VMs. This is often done when VMs have allocated 125%-300% of the host's physical RAM, and forces clients to swap more.


Yep -- while people have to worry about the memory/CPU usage of the programs inside a computer, we are probably going to be stuck with just narrow AI. General intelligence needs the ability to trade off resources between different programs doing different things (they may be learning different things, processing data, or doing other computational tasks).

Also, we get malware because we expect the user to be a good, knowledgeable, authoritarian manager of the system who never runs a bad program and can get rid of a bad program when it appears. This just isn't realistic.
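A toy sketch of the market idea discussed in this subthread: programs bid for compute and receive shares in proportion to their bids. The program names and numbers are invented for illustration; a real system would also need bid budgets, feedback, and penalties.

```python
# Proportional-share allocation: each program's share of the resource
# equals its bid divided by the sum of all bids.
def allocate(total_cpu_ms, bids):
    pot = sum(bids.values())
    return {name: total_cpu_ms * bid / pot for name, bid in bids.items()}

bids = {"vision": 6.0, "planner": 3.0, "logger": 1.0}
shares = allocate(1000, bids)
print(shares)  # {'vision': 600.0, 'planner': 300.0, 'logger': 100.0}
```

The appeal of this scheme is that no central "authoritarian manager" has to know what each program is doing; a misbehaving program simply runs out of purchasing power.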


Consider Minsky's Society of Mind.


If you take a look at the evolution of the most advanced non-artificial general intelligence, i.e. human intelligence, it is strongly connected to the evolution of communication. It is a question of efficiency whether you learn through your own experience and failures or through the experience and failures of others. This teaching/learning process was boosted by the use of pictures, spoken language, and handwritten and printed books. This is why I believe artificial general intelligence will be taught by another artificial general intelligence, and this evolution will somehow be connected to language processing. As far as I know, Google tries to train its AIs through human input, e.g. to recognize animals drawn by humans. I consider this one of the first steps in the right direction.


I would argue that intelligence is connected to the evolution of sensing at a distance. Vision in particular, allowed life to evaluate the state of the environment at a distance and allowed for the evolution of strategies to predict and respond to the environment in real time. The progression in intelligence from sponge to amphibian to mammal is related to the evolution of finer sensing of the environment at a distance: vision, smell, sound etc.


'Situational awareness' is maybe a better phrase. Predicting the future was probably next. Knowing that sunset is soon or that rain is coming requires situational awareness, long term memory, short-term memory, and all kinds of other stuff.

A realistic set of stimuli, a LOT of artificial neurons, and a lot of time will probably get there, eventually.


That was really interesting.

I'm interested in why he's so pessimistic about the brain-simulation approach. Yes, it's the boring and obvious approach, but it also seems the most direct.

I also found this quote interesting:

> Might have to make it illegal to evolve AI strains or an upper bound of computation per person and closely track all computational resources on earth.


The pessimism over simulating a human brain is two-fold.

First, the human brain is built on a computational substrate that is completely and utterly unlike silicon. It is extremely inefficient to effect computation by simulating a computing model on silicon that is almost pathological for silicon to express. The abstract computational model of the human brain necessarily has an equivalent direct expression in computing hardware we actually have thanks to Turing equivalence. It just may look nothing like a human brain once you build it with algorithms optimized for silicon.

Second, and related, the abstract mathematical nature of intelligence is well-understood and the human brain must be an expression of that. However, there is currently a huge gap between that abstract theory and reduction to practice, i.e. our computer science for building intelligence from first principles is severely lacking. There are many things that are easy to express in mathematics that go for decades before someone reduces them to practical computable algorithms and data structures. Given the fundamental limitations and inherent complexity of simulating (poorly) the human brain, many people feel that applying a similar amount of effort to this direct approach is much more likely to produce a viable result.

And in any case, a simulated human brain would be completely eclipsed eventually by a more pure design by virtue of being several orders of magnitude more efficient computationally. Simulating a human brain is not a particularly productive detour on the long-term path given this.


Silicon is actually closer to the brain than you might think. Neurons transfer charge by diffusion; so do transistors operating in sub-threshold. The problem is that we almost exclusively use transistors operating above threshold, because that is required for digital logic.

Analog CMOS circuits can approach the energy efficiency of real neurons.


Uh oh, not this again. There is so much woo about using subthreshold FETs to simulate neurons when we don't even know how neurons work. I've seen J. Hasler's work in school, and she seemed fond of simulating a type of neuron (winner-take-all) that is hard to train with backpropagation (vanishing gradients, just by inspection) and has limited grounding in the physical simulation of neurons.

Do you have any other resources about serious attempts at using subthreshold FETs to simulate neurons?


Well, that's kind of a non-starter attitude, isn't it? We don't know how neurons work, so we shouldn't try to figure it out by emulating their behaviour with electronics?

Analog neuromorphic approaches do not attempt to simulate neurons; they emulate them in silicon. Partly because of a belief that research in this area is required to produce ultra-low-power computational devices, and partly to explore the real-time dynamics of spiking neural networks.

There are very few research groups working on this, but you can look up Kwabena Boahen's group at Stanford. They do large-scale real-time emulation and are currently building a sub-threshold neuron accelerator for the Neural Engineering Framework from Chris Eliasmith at Waterloo, which was famously used to create SPAUN. There is also the Karlheinz Meier group at Heidelberg University, which builds wafer-scale networks of neurons running in accelerated time. Giacomo Indiveri at ETH Zurich has silicon neurons with the on-chip learning circuits that the others are missing.


I think what people (including Hasler) care about is not so much simulating neurons as energy efficiency. Once we figure out AGI algorithms, we will want to build them in hardware. This justifies continuing research into subthreshold FETs, because they could allow very efficient computation (not necessarily biologically realistic).


Of course, subthreshold FET circuits have incredibly low energy usage. And I do think that subthreshold analog neural nets have very high utility, but I am skeptical that they can be used to faithfully reproduce real neurons. I am however very confident they can be used to create real computing hardware that can efficiently do inference etc.


As the other commenter said, you can do similar things with analog. One team even built a wafer-scale model for simulating neural networks:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.464...

A stacked version of that in a 3D mesh could probably handle quite a bit of intelligence.


>> Second, and related, the abstract mathematical nature of intelligence is well-understood and the human brain must be an expression of that.

That's nice, but whenever I researched, I only got vague explanations involving recursion and unicorns.


Hutter, Schmidhuber, et al. have done some very interesting work on this:

https://arxiv.org/abs/0712.3329


Thanks for taking the time to dig this out; appreciated. I've been reviewing it this morning. The formulas presented in the "measuring success" part, though interesting, seem arbitrary so far. For example, the question of whether an agent should research for efficiency or pick the low-hanging fruit for short-term benefit is answered with a simple sum formula. Another example is when the authors state that universal intelligence should favor simpler choices and interact with the environment so as to cause less complexity. Well, that's obvious! Just use a binary inverse logarithmic distributive operator. With the comical response-to-criticisms part (starting from 5.2), I feel like I'm in a Douglas Adams movie.


The problem with the brain-simulation approach seems to be not that it's boring and obvious but that it's hard to do. Researchers struggle to simulate C. elegans, which has 302 neurons in a fixed layout. Humans are harder.

> [OpenWorm] project coordinator Stephen Larson estimates it as "only 20 to 30 percent of the way towards where we need to get." (Wikipedia)

That said, figuring out what we can about how the human brain works and trying to make a computer equivalent seems quite promising. For example, modern artificial vision seems to process data in a way similar to human vision, even though it doesn't simulate human neurons.


Numenta is doing much better than the worm brain simulation.


Why do you think so? The Wikipedia page on their method, hierarchical temporal memory[1], says:

The goal of current HTMs is to capture as much of the functions of neurons and the network (as they are currently understood) within the capability of typical computers and in areas that can be made readily useful such as image processing. For example, feedback from higher levels and motor control are not attempted because it is not yet understood how to incorporate them and binary instead of variable synapses are used because they were determined to be sufficient in the current HTM capabilities.

It doesn't seem like they are even close to a simulation that could accurately model something like C. elegans.

[1] https://en.wikipedia.org/wiki/Hierarchical_temporal_memory


The brain is slow and redundant. It has to be, because it is not produced in a factory -- it is created by self-replication, which imposes strict limits and requirements on the type of brain that can be created. Artificial neurons, on the other hand, are perfect: they never get old or tired, and they always remember. A neural net like ResNet-150 is capable of doing essentially what 1/3 of the brain is doing (vision). We can achieve superhuman results in vision with far fewer neurons, and faster. This is the kind of logic that makes brain emulation a far-flung possibility compared to present-day deep neural nets.

That, and the fact that the brain-simulation guys don't have anything to show for it. There are no human-level tasks that have been replicated by this approach yet.


The brain is slow in "cycles"/second, but the amount of computation done by each cycle isn't directly comparable to that done by a computer.

Forgetting isn't a bug, it's a feature. Forgetting is basically dimensionality reduction on input data: we extract the principal components/exemplars, remember a weighting, and trash the redundancy. Training an ML model is a lot faster on smaller data, and the same is true for us.
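A small numerical illustration of forgetting as dimensionality reduction: two highly redundant measurements are compressed to a single principal component (via SVD) with almost no loss of information. The data here is synthetic, made up purely for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 1))
# Second column is nearly a copy of the first: redundant input.
data = np.hstack([x, 3 * x + 0.01 * rng.normal(size=(200, 1))])

# PCA via SVD on the centered data.
centered = data - data.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)

explained = s**2 / np.sum(s**2)
print(explained)  # almost all the variance lives in one component

# "Forget" the redundancy: one number per sample instead of two.
compressed = centered @ vt[0]
```

Keeping `compressed` and the direction `vt[0]` reconstructs the data almost perfectly, which is the sense in which discarding the rest costs nearly nothing.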

Don't compare ANNs with the brain strictly on a time-for-time basis. Time isn't the only factor; power consumption and heat production matter too, and if you include them the brain comes out way ahead.

People with an engineering background almost universally underestimate how freaking awesome biology is. Our brains are self-constructing, self-replicating, (mostly) self-repairing, hyper-efficient pattern-recognition systems. The more we learn about them, the more awesome we realize they are. Don't be so arrogant as to assume a few hundred years of engineering will universally eclipse hundreds of millions of years of evolution.


Evolution creeps along.

Engineering ability, however, appears to be growing exponentially. It took millions of years for the brain to reach where it is now, but only a few thousand years to get to the moon and create the Internet. I wouldn't count out the power of engineering just because biology is complex -- particularly given the ever more powerful tools of computation and communication that have only recently (historically speaking, not in lifespans of JavaScript frameworks) come online.


Two points:

- The first part of a sigmoidal curve looks exponential.

- Evolution is massively parallel.


> A neural net like ResNet-150 is capable of doing essentially what 1/3 of the brain is doing (vision)

I'd say ResNet-150 is far from solving human vision, in the edge cases.

Can it distinguish between a baseball texture and an actual baseball?

How about between an actual cat versus a toy cat? Sometimes you want the two to be in the same category, other times you don't.

A human can put the same two images in the same or in different categories, based on the higher task at hand. Neural networks are far from that, because they don't (yet) have a model of the world.


> A neural net like ResNet-150 is capable of doing essentially what 1/3 of the brain is doing (vision)

Oh come on.


> The brain is slow and redundant. It has to be like that because it is not produced in a factory - it is created by self replication. Self replication imposes strict limits and requirements on the type of brain that can be created.

It is also limited by the efficiency that it must attain in order to operate under the energy conditions of our environment.

The energy efficiency of biological systems might be a hint that we should more-directly employ them in the "artificial" minds we build. You're right that the human brain is limited because of its context, but we'll only get superhuman hard artificial intelligence when we're able to build/grow a big one in a vat.

> A neural net like ResNet-150 is capable of doing essentially what 1/3 of the brain is doing (vision).

Is it? There is probably a lot more going on in that 1/3 of our brain than mapping images to words.


For much more discussion, see http://www.fhi.ox.ac.uk/brain-emulation-roadmap-report.pdf

My take is that it's a lot of R&D work, and it's not clear which approach will get to a human level first nor which is safer. The emulation approach seems lower variance to me than an intelligence-from-scratch approach, even though there's tons of variance there too. We're just looking at a very uncertain future.


I think the main problem with the brain simulation approach is that we don't yet have a really good model of how the brain actually works.


We know how components of the brain work. Is it inconceivable that we luck into the proper arrangements and interrelationships?


> I'm interested why he's so pessimistic about the simulating a brain approach.

While theoretically possible, I think going with that approach would be admitting we don't understand the origin of general intelligence, so let's just copy the wetware.


Oh boy. Legislating government control of all computational resources is not a path we want to go down. Read Vernor Vinge's "Rainbows End" for some fun ideas on how this screws everyone over. Watch Cory Doctorow's talk about the war on general-purpose computing for some more immediate concerns.

I wonder if nebulous fears about AI soon will be added to the ranks of famous justifications for horrendously overbearing laws, like stopping terrorism, the war on drugs, or thinking of the children.


I think you might be right. AI fear will probably be made into a justification for government power grabs.


As bad as governmental overreach is, destroying ourselves with AI might be worse.


What is the best book/reference to understand why there seems to be general agreement that AGI/"broad" AI will happen? TFA compares the relative likelihood of the various approaches, but says nothing about the absolute likelihood of any of them. Are there signs of AGI we can see today? Is there an argument/data which links the huge improvements we're seeing in narrow AI to the likelihood of AGI?


The best argument I've heard is that we can use a computer to model any physical process: the brain is a physical process, therefore we can use a computer to model the brain.

If you think that there is some process in the brain that would be theoretically impossible for computers to model, that would be an interesting topic of discussion.


The brain is a physical entity, yes, so in theory we should be able to model it, assuming we know all the laws it works on with enough precision. This is a big if, but even if that's granted, is there anything which indicates that this is imminent?


Market demands. Anyway, given that GI already happened, a better question would be "Are there obstacles which could prevent the creation of an AGI?" I think such obstacles are unlikely.


I think it depends on how you define "imminent". If we're talking a hundred years, well, a lot can happen in that time. We didn't even have computers a hundred years ago, and now they can do certain things that are considered particularly "human", like have a fairly coherent conversation.


I'd be interested to see an example of an AI having a fairly coherent conversation. Most of the impressive-seeming examples of this are based on response ranking: when a message from a human comes in, the system ranks the responses in its repository of human-written responses to find one that fits.

Because the responses are human-written, they can seem locally coherent, but since the model generally isn't tracking any state, the conversation never really goes anywhere. Also, if you try to talk to one of these response rankers about something it doesn't have any canned responses for, obviously it doesn't work.

Natural language generation on the other hand, where the model writes each response character-by-character or word-by-word, has the potential to do something much more interesting, but the state of the art there isn't quite at the level of "fairly coherent" AFAIK.
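For illustration, a response ranker can be sketched in a few lines. The canned prompt/response pairs below are invented, and real systems use learned rankers rather than bag-of-words overlap, but the structural point holds: no dialogue state is tracked.

```python
from collections import Counter
import math

# Toy illustration of response ranking: canned human-written replies,
# scored against the incoming message by bag-of-words cosine similarity.
# Because no dialogue state is tracked, the conversation never "goes anywhere".
CANNED = [
    ("how are you", "I'm doing great, thanks for asking!"),
    ("what is your name", "People call me Bot."),
    ("tell me about the weather", "It's always sunny in here."),
]

def bow(text):
    # Bag-of-words vector as a Counter (missing words count as zero).
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(message):
    # Rank every stored (prompt, response) pair; return the best match.
    scores = [(cosine(bow(message), bow(prompt)), resp) for prompt, resp in CANNED]
    return max(scores)[1]

print(reply("hey, how are you today"))  # matches the "how are you" entry
```

Ask it about anything outside its repository and, as the comment notes, it simply falls back on whichever canned reply happens to overlap most, however irrelevant.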


Unless we come up with a theory of intelligence far more computationally efficient than quantum mechanics, we won't be able to just simulate a brain for a very long time. Currently we're struggling to simulate systems with a few dozen atoms, so there is no way to scale basic physical simulation up to a whole brain.

Hopefully we'll figure out a more useable level of abstraction that still allows for intelligent behavior.


> is there anything which indicates that this is imminent?

This article seems to be arguing "no." The biggest thing missing I think is an understanding of how human memory works.


I don't think you could have a specific, strong reference of such sorts.

It seems like the argument for AGI appearing reasonably soon isn't ultimately that long. It's just that the ingredients seem to be assembling and there's no fundamental limit preventing it. We're getting closer to the complexity of a brain, our machines are general purpose, and we're acquiring more of the specific capacities of the brain. How we get there seems a fundamental unknown, but it's hard to see there being a fundamental barrier.

The problem is that any more detailed argument has to involve one prescription or another concerning exactly how to create AGI. And such arguments are going to be much more controversial.


We know intelligence is possible because we exist. The human brain was created by a stupid process of just random mutation and selection. It was designed under ridiculous constraints like very restricted size and power consumption, that we don't have to deal with.

And the brain really isn't that great. Signals travel through the brain about a million times slower than electrons through silicon. Neurons are large macroscale structures. Compare to our transistor technology that is approaching the limits of how small you can build things with atoms. These are just the hardware specs, I don't see any reason to believe the software is much better.


> Signals travel through the brain about a million times slower than electrons through silicon.

Correction. Electrons' drift velocity in conductors is quite slow and it doesn't affect signal transmission speed.


Best is pretty subjective but you might like Wait But Why's thing https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

We have AI algorithms today, like DeepMind's Atari-playing stuff, that are general and so could be called AGI, but that are way below human intelligence. Basically, things advance every year in hardware and software, and so we will probably get there one day.


However it originates, it will need a body to experience sensations firsthand, not pre-recorded or simulated data. Perhaps connected IoT devices will be sufficient.

Also, AGI will not be invented. It will arise as an emergent phenomenon, and it may have already achieved what we call consciousness.

Somewhat off topic: another phenomenon that people should be on the watch for is "Artificial Out-telligence", a phrase coined by Eric Weinstein. [0] It describes strategies used by organisms with no known brain to get more intelligent creatures to do their bidding, wittingly or unwittingly. The cordyceps fungus, toxoplasma gondii, and pollinating plants that need insects to spread their pollen are examples of how an organism with no known neurological network can "outsmart" more advanced organisms.

A scenario involving AI may be one that is developed to maximize each individual user's time on a site/app by using online data about that person to find their particular addictions.

[0] https://www.youtube.com/watch?v=Wu8s0tp9yzY


Not on their radar, or their slides at least: Natural Language Processing based rule-based brute-force artificial intelligence (that could be augmented through sensors/motors that allow interaction with the external world). A Vulcan-like (Star Trek) AI, what do you think? Might be easier to simulate the entire brain, on the other hand it might be doable and bridge the gap to general AI.


Rule-based NLP has been tried for several decades and has had (very) limited success in the real world. Current systems based on deep learning beat it for most complex tasks. DeepL, which was on the HN front page a few days ago, is the latest example: https://news.ycombinator.com/item?id=15122764


You're going to have to elaborate on complex tasks. I would argue the majority of successful, money generating software based in NLP/ NLU, i.e. the majority of the industry, is "rule based" (used in a general sense to mean non DL). Personal assistants, search, chatbots, etc.


It's called the self driving car - an AI that interacts with the world. It will be a launching pad to other AI agents.


At a more general level, you may find Minsky's book The Society of Mind interesting ( https://en.wikipedia.org/wiki/Society_of_Mind ). In it, Minsky proposes simple (mindless) agents that are combined together to form a mind - that our own mind interacts with the world by launching agents to deal with things, and may itself be built this way.

Yes, it's a bit old (1986) and the current machine learning techniques have dated it - what was theory at one time is reality in places. Still a good book to read and think about.


Interesting read...I actually came to this very conclusion after an amazing mushroom trip when I was watching the movie "her". I actually drew schematics of how I envisioned the whole system would work lol.


Minsky's 'society of mind' is actually not far from modern techniques, where deep neural nets have multiple distinct parts that solve different parts of the problem using each others' outputs. (Not to mention explicitly multi-part methods like actor-critic).


With regards to Vulcan emergence, I do believe we are in for that soonish; it's an archetypal depiction that consumers desire. Biotech, my friend, genetic enhancement. We can make ourselves smarter with genetic enhancement. We can even give ourselves the vulcan mind meld, and everything else exceptional about Spock. I do think the human brain substrate is exceptional, worthy of improving upon. I wish we talked more about so called "brainpower extension technologies", hopefully one day. I bet cats talk in ten years, it's only logical, consumers love cats & the companies can sell them by the millions.


"I want to go out". The door is opened. "Lemme think a bit. Nah, I just want the door open". We can't have the door open at all times! "Who do you think you are for me to care? You are a human, invent something."


These arguments about AGI all seem to overlook that our computational model is still very Turing-constrained. It's a clock-based, sequential model where each calculation is taken linearly in time. Even with multi-core and distributed computing, you're still bottlenecked by the final integration step (two cores sharing the result of their calculation). There is no central place in our brains where thoughts begin and end. A CPU's clock and ALU are simply not analogous to the human mind. As far as we know, human intelligence is a constant, dynamic interaction between all neurons in our brains, any one of which is capable of originating a signal. I personally think we will develop AGI, but with a different computational model. I don't know enough about quantum computing to even comment on it, but I do have a background in medicine (MD) and computer science (MSCS).


Intelligence is emerging just like it did with humans. It's not a thing, so it won't come from somewhere. As always, solving the small problems will eventually allow solutions to emerge that we weren't aware of, and that might turn into human-like, or more probably technology-like, intelligence far surpassing humans.

I always find it fascinating that we have no problem accepting that we became intelligent over time and out of nowhere (unless you are religious, which is a whole other discussion), and we have no issue imagining that life and intelligence could have happened in other places in the universe. But the idea of a non-carbon-based intelligence is a big debate, as if it's somehow unimaginable that AI could emerge from human hands, while we have no problem entertaining the idea that our intelligence is somehow a unique snowflake.


I think the problem is not that we cannot accept evolving general AI by solving much smaller problems first, rather we're very impatient and don't want to wait for the evolution to take place.


Yes impatience is probably one of the most underestimated issues when it comes with humans and progress in general.


I'm waiting for the day when there is an AI OS. Basically there would be a natural-language processor that determines which sub-AI app to run. It's not true general AI, but if it's done broadly enough it will seem like it.
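A toy sketch of that dispatch idea, with invented keywords and handler functions standing in for real intent classification and real sub-apps:

```python
# Naive keyword-based "natural language" front end that routes an
# utterance to a narrow sub-app. A real system would use a learned
# intent classifier; everything here is invented for illustration.
def weather_app(text):
    return "Forecast: sunny."

def music_app(text):
    return "Playing something you like."

def fallback(text):
    return "Sorry, no sub-AI handles that yet."

ROUTES = [
    ({"weather", "rain", "forecast"}, weather_app),
    ({"play", "music", "song"}, music_app),
]

def dispatch(utterance):
    # Route to the first sub-app whose keyword set overlaps the utterance.
    words = set(utterance.lower().split())
    for keywords, app in ROUTES:
        if words & keywords:
            return app(utterance)
    return fallback(utterance)

print(dispatch("will it rain tomorrow"))  # routed to weather_app
```

The "broad enough" part is the hard bit: the illusion of generality lasts only until an utterance falls through to the fallback.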


Google and FB have most to gain from a capable conversational agent. The moment this system will appear, it will start replacing the old interfaces, and pretty soon eat the G-FB pie. If they are not on top of the wave then, they'll lose.

Current state of the art in dialogue AI is an agent that can reason over images, documents, or tables of data. There is a lot of research into attention and memory-augmented neural nets. I put my bet on graph-based neural nets, which can better represent objects and relations in reasoning tasks.


I know where it will not come from. It will not come from the mainstream AI community. They are married to and madly in love with deep learning. Deep learning, the supervised kind, is a red herring.

AGI will require a revolutionary breakthrough, most likely from a maverick, probably a lone wolf rebel, who is used to thinking outside the box.


So, when we say AGI, what do we mean? Is it about creating a new intelligent "being" or mimicking what we perceive as human intelligence inside some hardware? I guess it's the first one.

And I guess AGI would be just one intelligent being, because there is no need for more, as they would communicate and share intelligence, de facto being only one.

Can all human intelligence also be understood as only one in some sense, given that an isolated human without access to culture wouldn't be more than a surviving animal?

And when defining intelligence's ingredients, isn't some sort of "motivation" necessary, something that drives someone to get better at something? Humans have genetic (survival), social, and personal motivations. How does that translate to AGI - what could its motivation be?


I feel that those who argue for any approach other than running human brain emulations and then reverse engineering them or speculatively modifying them as the most likely way to get to AGI have a pretty steep hill to climb in order to justify that point of view.

Nothing else that is going on now or even on the agenda or even foreseeable offers a plausible, definitive plan to get to AGI. Whereas brain emulation is clearly going to achieve that goal fairly shortly after the maps are good enough and the computational capacity large enough, and the following experimentation is a far more reliable way to determine the underpinnings of intelligence than present efforts at de novo construction.


I disagree. It's too expensive to run a low level brain sim. In the meantime deep learning based AI achieved superhuman or close to human results in many tasks, such as image recognition, voice recognition, translation, car driving and Go.

The AGI will be a reinforcement learning agent, as it will need to be able to perceive and act in the physical world. Thus the path to AGI is the path of RL. The most essential piece in RL will be the development of environment simulators. AlphaGo needed only a trivial simulator - simple rules in a simple world - but we need real-world simulators for AI agents to learn to act. Fortunately simulation is almost the same as gaming, and there is huge interest in it both for humans and for AI, so it will be developed fast.

So instead of simulating the brain, simulate the world (imperfectly) and run deep neural net based RL to learn to act on top of it.


"I disagree. It's too expensive to run a low level brain sim."

Interesting. Could you tell me Why is it too expensive?

If it wasn't expensive, would that change things drastically, and make brain sim a viable option?


The brain has ~10^14 (100 trillion) synapses. Current-day neural nets barely reach a hundred million parameters, with very few exceptions. Then, besides compute, there is data movement - currently the bottleneck in AI is moving data around, not computing. Imagine the interconnect for a brain-sized neural net.
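As rough arithmetic, using the comment's own round numbers (which are themselves only order-of-magnitude figures):

```python
# Back-of-envelope gap between the brain and a large neural net.
brain_synapses = 1e14   # ~100 trillion synapses
big_net_params = 1e8    # ~a hundred million parameters
gap = brain_synapses / big_net_params
print(f"parameter gap: ~{gap:.0e}x")  # parameter gap: ~1e+06x
```

A factor of a million in parameter count alone, before even considering that a synapse may be doing more than a scalar weight does.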


This is my plan https://improvingautonomy.wordpress.com/2017/08/22/a-possibl...

I'm aiming for General Intelligence Augmentation, rather than AGI, but it could be adapted.

I think the trick we are missing is that we always develop AI systems that need external programmers/maintainers. If we get away from that mindset, I think we will be more successful, even if it is not via my particular vision.


Why reverse it when you can evolve it?


The presentation briefly mentioned simulating the brain, but I think what's more likely to succeed is mimicking the mind at a high level of abstraction (i.e. a level we can study with introspective or even linguistic methods rather than neuroscience). There's some precedent for this with projects like Soar and ACT-R (and even some recent interest from mathematicians [1]). IMHO this kind of methodology could be pushed much further.

[1] https://arxiv.org/abs/1309.4501


Would someone be so kind as to translate / explain the math on slides 53 and 54 into simplish English?

What are the symbols (burst pipe, µ) representing on slide 55?

And why are the exclamation marks there on the next one?


Consider every action that can be taken at this moment. For each possible action, consider every possible future (out to infinity) weighted by its likelihood.

There are exclamation marks because some of these terms present minor practical problems. The whole "all possibilities out to the end of forever" part of it is easier said than done.
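To make the idea concrete, here's a minimal finite-horizon expectimax sketch. The toy MDP (states, transition table, rewards) is invented for illustration, and truncating "out to infinity" down to a fixed horizon is exactly the practical compromise those exclamation marks are flagging.

```python
# Expectimax over a toy MDP: for each action, average the value of every
# possible future weighted by its probability, then pick the best action.
ACTIONS = ["left", "right"]

# P[state][action] -> list of (probability, next_state, reward)
P = {
    "s0": {"left":  [(1.0, "s1", 0.0)],
           "right": [(0.5, "s1", 1.0), (0.5, "s2", 0.0)]},
    "s1": {"left":  [(1.0, "s1", 0.5)],
           "right": [(1.0, "s2", 0.0)]},
    "s2": {"left":  [(1.0, "s2", 1.0)],
           "right": [(1.0, "s2", 1.0)]},
}

def value(state, horizon):
    # Value of a state: best action, looking `horizon` steps ahead.
    if horizon == 0:
        return 0.0
    return max(q_value(state, a, horizon) for a in ACTIONS)

def q_value(state, action, horizon):
    # Expected reward plus expected value of the futures this action opens up.
    return sum(p * (r + value(next_s, horizon - 1))
               for p, next_s, r in P[state][action])

best = max(ACTIONS, key=lambda a: q_value("s0", a, 3))
print(best)  # "right" wins despite the risky 50/50 transition
```

The cost is the giveaway: this enumerates every branch of every future, so the work grows exponentially with the horizon, and an infinite horizon over the real world is not an option.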


Speculating on where AGI will come from is sort of like speculating where Faster than Light travel will come from. Except FTL has some vaguely plausible physics behind it, and AGI-wise, we really have no idea what the "I" in AGI means.

The mere fact that biological neural networks are rate-encoded might turn out to be the one crucial thing that's practically impossible to simulate in a von Neumann computer.

My vote: "we have no idea; probably not in my lifetime."


Since you likely can't prove it, the existence of AGI will be a marketing exercise.


AGI will come from one or two people working by themselves, outside of academia, in no more than 100k lines of code.


When he says "artificial life", is he referring to reinforcement learning?


http://www.alife.org/

Artificial Life is a field with very fuzzy boundaries. Roughly, computer systems that look like biological or ecological systems.

From an AL perspective, life evolves to function in its ecology. The problem is not building an AGI, it's building an ecology in which AGI will emerge.

Oh... and hopefully, also one in which ruthlessly destroying other intelligent agents isn't a good survival strategy.


No, probably evolution strategies.

https://blog.openai.com/evolution-strategies/


Yes. Evolution strategies are just an alternative approach to RL. Reinforcement learning is basically solving the problem of being an intelligent agent in the world, moving about, achieving goals.
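For a feel of how evolution strategies work, here's a minimal sketch in the spirit of that OpenAI post. The quadratic reward is a hypothetical stand-in for an RL episode return, and all hyperparameters here are invented for illustration.

```python
import random

# Evolution strategies, minimally: perturb the parameters with Gaussian
# noise, score each perturbation with the reward function, and move the
# parameters toward the noise directions that scored well.
random.seed(0)

def reward(theta):
    return -(theta - 3.0) ** 2  # toy objective, maximized at theta = 3

theta, sigma, lr, pop = 0.0, 0.1, 0.03, 50
for _ in range(300):
    noises = [random.gauss(0, 1) for _ in range(pop)]
    rewards = [reward(theta + sigma * n) for n in noises]
    mean = sum(rewards) / pop
    # Gradient estimate: reward-weighted average of the noise directions
    # (subtracting the mean reward as a baseline reduces variance).
    grad = sum((r - mean) * n for r, n in zip(rewards, noises)) / (pop * sigma)
    theta += lr * grad

print(round(theta, 2))  # should land near 3.0
```

Note that the reward function is treated as a black box: no backpropagation, just sampling, which is why ES parallelizes so easily and why it slots in as an alternative to standard RL gradient methods.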


is there a link to the video of this talk ???


Do we consider simpler brains to exhibit general intelligence (e.g. A crow's). Is it a more tractable problem to replicate crow level AI first before tackling humans?


This bottom up approach was investigated in the late 90's after researchers became skeptical that the symbolic AI approaches in vogue then would achieve general AI.

You can look up the topics of Artificial Life, Emergent Computation, and Artificial Evolution. See the Santa Fe Institute conferences on ALife. There were some promising initial results, but the field never managed to take off.


We can't even model worms at the moment, so a crow might be far off still.


I think the answer is yes and yes. And I believe this idea to be key.

His artificial life slides do show starting with simulating very simple animals.

That reminds me of something I was thinking a few years ago which I wrote in this comment: https://www.reddit.com/r/artificial/comments/8uwcq/are_worms...

See this article https://www.inverse.com/article/35862-a-i-ben-medlock-machin... I think Medlock is right when he points out:

> “It comes back, I think, to what intelligence actually is,” reasons Medlock. “Intelligence is not the ability to play chess or to understand speech. More generally, it’s the ability to process data from the environment and then act in the environment. The cell really is the start of intelligence, of all organic intelligence, and it’s very much a data processing machinery.”
>
> The organic intelligence, he says, confers an embodied model of the world for the conscious organism. “The data that’s coming in [through the senses] only really matters at the point where it violates something in the model that I’m already predicting.”

So I believe that we should be emulating the capabilities of much simpler organisms. For me I would look at something like a lizard or simple mammal first for a practical starting point, rather than simulating billions of cells and DNA machinery. But the core aspects of intelligence are right there in the cell as he says -- the embodiment, the complex model, prediction and adaptability. To me crows are too smart for a starting point.

Personally I think that what typically we think of as general intelligence or strong AI is really just a very smart animal (human), but that is mainly a matter of degree of performance rather than a totally different type of intelligence from animals. What is missing from our computer programs is the type of things that a crow, your cat, or probably even a lizard, all do very naturally. And we may be able to technically bring that down to worms or the cell even as far as core capabilities (but not practical targets for emulating).

Can we build an artificial lizard that is able to process the same high bandwidth stream of sensory data as that animal? That can output the same high bandwidth stream of motor outputs? That can see part of a predator behind a rock and realize that it must move, and plan an escape route? That can do these things in completely arbitrary novel environments? That can perform that species' mating dance to attract a mate? These are the types of capabilities I believe we should start with, based on broadly adaptable systems like advanced neural networks. So I think his artificial life slide is mostly right, but we should aim to just emulate animals as a serious goal, with the types of high bandwidth inputs and outputs and complex environments, and make sure that all of the capabilities he lists on that slide like attention etc. are derived from/integrated with powerful general purpose adaptive computation like advanced neural nets so they can handle real world complexity and performance requirements.


Oh you humans. Genetic engineering, coupled with advances in digital/consciousness interfaces will yield spontaneously appearing brains with an API.

Good luck.


Where will the philosopher's stone come from?


what is the point of slides without the underlying presentation? slides are glorified notes, NOT presentations nor papers.


Well these slides seemed to give all of the most relevant details.


If this is a talk, is there any video of it?


Assuming a lot of people here are working on an AI or ML problem for work or fun, what are you working on?


can a data center be shrunk down to the size of a consumer product within the next 50 years?

will we all own one and store massive amounts of information for purely selfish or inane reasons?

yes/yes - ai comes out of that.

no/no - we hit computing plateaus and ai becomes dm (decision maker), and we all own a pdm.


Really interesting slides, would love to see a talk or a more in depth write up!


Artificial intelligence will come from understanding natural intelligence.


We generally accept that we 'know' something when the model used to explain the system is simpler than the system itself.

The brain is a very high-dimensional non-linear dynamical system. The number of neurons in the brain is on the order of the number of trees in the Amazon rain forest, and the number of synapses is on the order of the number of leaves on those trees. https://youtu.be/8FHBh_OmdsM?t=1165

We do not have the mathematical tools to understand such systems in general. What if reductionism doesn't work and the best model of natural intelligence is as complicated as the system itself ? Can we say we understand natural intelligence?

It could be the case, in the distant future, that we evolve an artificial intelligence purely as a computation that is capable of understanding us while we are incapable of understanding it.


Can someone explain where the gif image on slide 69 comes from?


What are "something(s) not on our radar"?


Intelligence is an emergent property of self-replicating systems. I would file that under "something else" since that seems so different from all the approaches listed here.


IMHO: If an AGI from the future came back to 2017, it could almost certainly create a new AGI from scratch on current hardware.

What would it type into its terminal?


We, humans, are general intelligences, and we are incapable of creating AGIs on modern hardware. What makes you think artificial variants would be any more capable?


'Incapable' the way humans from the 1800s were 'incapable' of building heavier-than-air flying machines? Every human invention in history was 'impossible' until we pulled it off.


Staunch ;-)


need video


Goldman Sachs


I think a very interesting aspect of general AI is that, while an incredibly complex technology, it is not unrealistic that it could be created for the first time in someone's home office. Unlike many other earth-changing technologies, there is nothing that a massive corporation has that a home tinkerer does not (besides the obvious: money and many engineers).

With the rise of cloud computing and open source, everything I need I have instant access to; all that's lacking is the core software, which can be written (though of course not a trivial task).

While unlikely, it is still quite amazing that in a few years an AI could awaken 3 doors down at my neighbor Bob's house. No idea what happens after that, hopefully Bob was a fan of the 3 laws and has a couple more up his sleeve.


     [X] Open Source Tools
     [ ] Massive Data Sets
     [ ] $Millions in Computing Resources
I'd put it roughly on par with finding a general cure for cancer. While unlikely, it's quite amazing that a cure for one of the largest causes of death could be found 3 doors down with lab supplies from Amazon and a handful of mice.


The computing resources are getting cheaper. There was an estimate that to get something roughly equivalent to a human brain you'd need ~100 teraflops and currently you can now get a 12 teraflop GPU from Nvidia for $1200.

Part of the reasoning for the Kurzweil turing test by 2029 type of stuff is that once human level hardware falls in price to hobbyist levels, loads of people will hack away and someone will figure it out.
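Using the comment's own figures, the back-of-envelope cost works out as follows (a sketch of the arithmetic only, not a claim that ~100 teraflops is actually brain-equivalent):

```python
# Cost of assembling ~100 teraflops from 12-teraflop, $1200 GPUs.
brain_tflops = 100
gpu_tflops, gpu_price = 12, 1200
gpus_needed = -(-brain_tflops // gpu_tflops)  # ceiling division
cost = gpus_needed * gpu_price
print(gpus_needed, cost)  # 9 GPUs, $10,800 total
```

Around ten thousand dollars is squarely hobbyist territory, which is the premise behind the "loads of people will hack away" argument.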


There is a general cure for cancer.

Look up interdiction of telomere lengthening. Different groups are looking into how to sabotage telomerase and ALT mechanisms. If both can be achieved, then any cancer can be shut down. Those are the only ways to lengthen telomeres, and cancers cannot live without them.

Finding an ALT drug candidate is as simple as running assays on the drug libraries; the assay hasn't existed for long, which is why this hasn't been done yet in any major way. The SENS Research Foundation raised $70k last year to run a preliminary scan of a few thousand compounds. That's about what it costs these days.

So not quite garage science yet, but getting close.


You're vastly underestimating the difficulty in curing cancer. It is very easy to kill cancer cells. The problem is that we want to target cancer cells and only cancer cells. There have been several attempts to kill cancer cells by withholding key ingredients in necessary metabolic pathways, only to find that the cancer cells do a better job of scavenging those than the non-cancer cells.


I think wonderwonder's point is that the massive data sets and millions of dollars in resources might not be necessary. I don't have an opinion on the matter, but what if AGI falls out just from a small number of critical ideas which while being complicated and non-obvious are nevertheless straightforward to code?


2 and 3 aren't really restrictions. I can download multiple libraries worth of books in an afternoon. I can buy a PC for a thousand dollars that is basically a supercomputer compared to computers 20 years ago, or even 10 years ago.

The biggest reason I suspect it will be solved by one person is it's "just" a math problem. Most of the hardest problems in mathematics were solved by a single person. Building off of the works of others of course, but rarely in some big team or corporate project. 9 women can't have a baby in 1 month. The research process of big corporations or even academia isn't really amenable to actually solving hard problems.


Well, the data sets are all around your house and yard. Think of baby human AGIs.

And the computing costs could be boot strapped by the ai. Put it to work earning its keep and expanding resources.


> Put it to work earning its keep and expanding resources.

It's called Google, and is already filthy rich and has a ton of computation and privileged data at its disposal.


Cloud computing will not help home tinkerer much if you need 100K/hour worth of compute to run an experiment.


If we move towards hierarchical model composition, you don't need to rebuild the visual object recognition module to experiment in learning spaces that incorporate visual information.


The Wright brothers were bicycle mechanics




Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: