Hacker News

I'd be curious to see just how large the commercial benefits of standard ML actually are. The only reason it is hyped right now is that the media is leading people to believe something close to AGI is right around the corner, because we can brute-force Go and index a million-image dataset...

Anyway, all the AI/ML hype is generated not by actual commercial value but by implied AGI. So it would behoove us to question the underlying assumption that AGI is actually possible. After all, that is the scientific thing to do.



Machine learning is used widely across a huge number of industries and fields, from internet search and pharmaceuticals to mining/energy, digital security, and entertainment. So the commercial benefits are definitely tangible and not just "media hype".

On the computability of intelligence: I'm not an expert on this, but many people study the dynamics of biological neural networks and can represent those dynamics as PDEs, which can then be mapped to electrical circuits. Granted, approximations happen along the way, and it has been difficult to scale these methods to large populations of neurons. It still points to a solid argument that biological neural networks can be represented on a silicon substrate. This is basically what the entire field of neuromorphic engineering is focused on.
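To make the "dynamics as equations" point concrete: simplified single-neuron models such as the leaky integrate-and-fire neuron reduce membrane dynamics to an ODE that can be stepped through numerically (and, equivalently, realized as an RC circuit). A minimal sketch — the model is a standard textbook simplification, and the parameter values and function name here are illustrative, not taken from the thread:

```python
import numpy as np

def simulate_lif(i_input, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e7):
    """Euler integration of a leaky integrate-and-fire neuron:
    tau * dV/dt = -(V - v_rest) + R_m * I(t), with a spike-and-reset
    rule when V crosses threshold."""
    v = v_rest
    trace, spike_times = [], []
    for step, i in enumerate(i_input):
        v += (-(v - v_rest) + r_m * i) * (dt / tau)
        if v >= v_thresh:                 # threshold crossed: emit a spike
            spike_times.append(step * dt)
            v = v_reset                   # membrane potential resets
        trace.append(v)
    return np.array(trace), spike_times

# 200 ms of constant 2 nA input current drives the neuron above threshold
current = np.full(2000, 2e-9)
trace, spikes = simulate_lif(current)
```

With these (made-up) parameters the steady-state voltage would sit above threshold, so the neuron fires periodically — the same charge-and-fire behavior neuromorphic hardware implements directly in analog circuitry.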


My understanding is that the practical benefit of machine learning and control systems comes mostly from simple models, not the fancy "deep" models currently in vogue. An added problem is that high-dimensional models are essentially black boxes, and are probably significantly overfitting the data; hence all the adversarial-examples research.

Why does the mapping of biological neural networks to silicon substrate imply the human mind is a computer?


> Why does the mapping of biological neural networks to silicon substrate imply the human mind is a computer?

Is this a rhetorical question or something, because it seems to me you've answered yourself there. I mean, if the mapping works, what else should it imply besides the consequent?


The implication requires a further premise that the mind is reducible to the brain, which we do not know to be true.


Not just the brain, but the whole body and, even more generally, to phenomena that can be described by physics. Unless you are trying to argue for a non-physical (i.e. magical) soul, the argument is sound.


Right, why assume the mind reduces to physics? This is usually how people argue for AGI being inevitable, but assuming the mind reduces to physics is a big assumption. Perhaps we have a physical soul.


Why not? Everything so far has been reduced to physics; sometimes to at-the-time undiscovered physics.


Deep models on GPUs have made machine learning on images tractable. Manufacturing is one industry that is expected to benefit a lot from this, using computer vision heavily for automation, quality control, and robotics.
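For context, the workhorse operation in those deep vision models is convolution: sliding a small filter over an image to highlight local patterns such as edges or surface defects. A hand-rolled sketch with a fixed (not learned) Laplacian kernel — the "part image" and defect are purely illustrative:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation -- the core operation
    stacked and learned inside convolutional networks."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy "part image": uniform surface with one bright defect pixel
part = np.zeros((8, 8))
part[4, 4] = 1.0

# Laplacian-style kernel: zero on flat regions, strong response at the defect
laplacian = np.array([[0.,  1., 0.],
                      [1., -4., 1.],
                      [0.,  1., 0.]])
response = conv2d(part, laplacian)
```

A quality-control network learns thousands of such filters from labeled examples instead of hand-picking them, but the sliding-window arithmetic (and why GPUs help so much) is exactly this.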


I'd be curious to see the actual ROI on this claim.



