Hacker News

Despite the general positive spin around it ("we did it as a learning project"), most people would agree that Tay was both a technical and a PR failure.

But the pattern repeats: Microsoft releases an AI and it fails. Tesla's autopilot cannot "see" a white object against a white background. Google also had a crash, which was recently attributed to human error. My guess is that this list is not going to stop here.

Suppose I ask you to build me a teleporting machine. You try, and, as in the movie Spaceballs, everything from my torso up comes out misaligned. This is then declared part of the iterative learning process, except that the cost borne by the corporation for the failure is minuscule compared to the cost borne by the affected party (risk asymmetry).

So while people talk about the huge advancements in AI, shouldn't we be quite skeptical, especially at this point? None of us has seen the alternate parallel universes, and considering

a) the resources being thrown at the problem

b) the risk asymmetry involved

c) the privacy intrusion involved in the data collection (you knew I would bring it up, didn't you?), and

d) the inability of anyone to demand any kind of transparency from these AI pioneers

I might as well ask: are we, as a society, paying too high a cost for this progress? Could we really not do better than this?



Certainly interesting ideas, but we only need an AI that is better than the current paradigm. If a Tesla fails to detect a white object, that failure does not weigh as heavily as a human falling asleep at the wheel. So if Teslas kill 30k people a year, but humans die at the wheel at a rate of 120k a year, that is an improvement worth having, as it avoids 90k deaths a year. Likewise, if we were already using teleporting machines (and accepted the trade-offs of using them, as we do with cars) and one in a million trips failed, while the AI version failed one in ten million, clearly we should use that technology, right?
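The back-of-the-envelope comparison above can be sketched out. Note that the numbers here are the hypothetical figures from the comment, not real crash statistics:

```python
# Hypothetical fatality figures from the comment above (not real data).
human_deaths_per_year = 120_000   # assumed deaths with human drivers
ai_deaths_per_year = 30_000       # assumed deaths with self-driving cars

# The utilitarian argument: fewer total deaths is an improvement.
deaths_avoided = human_deaths_per_year - ai_deaths_per_year
print(deaths_avoided)  # 90000

# The same logic for the teleporter analogy, as failure rates per trip.
human_failure_rate = 1 / 1_000_000
ai_failure_rate = 1 / 10_000_000
print(ai_failure_rate < human_failure_rate)  # True -> prefer the AI, by this argument
```

The counter-argument in the replies below is that this aggregate comparison ignores who bears the risk and what side effects come bundled with the lower failure rate.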

So, as Elon Musk said recently: "Whatever this thing is you are trying to create... What would be the utility delta compared to the current state of the art times how many people it would affect?"

The wonderful thing about these AI algorithms is that we can rate them on their efficacy: they may be black boxes, but the inputs and outputs are always known. If we see that Google's cars crash 10x more often, we won't use their AI.


> only need an AI that is better than the current paradigm

This is fundamentally what is being debated here. While the current paradigm can seem fairly poor, let us consider a few things which are true for the human driver.

1. He/she also puts himself/herself at risk, as opposed to the self-driving system (remember, it is theoretically possible for a self-driving car to have no occupants at all; it is potentially only a matter of time before it happily wades through stand-still traffic to go and buy groceries for you).

2. He/she is not, in the process of being/becoming a good driver, also taking away the personal freedoms of other people, which is effectively what happens when the megacorps collect every piece of data they encounter. A recent article in The Economist describes a system that augments autonomous cars by mapping roads in extremely high resolution. [1] Remembering all the work Google does to occlude sensitive information from its maps, imagine how much more effort would have to go into such a system to occlude personal details completely. Now imagine this data (currently collected by a third-party company) landing in the hands of Google/Tesla/Uber etc., who would combine it with other human-oriented information (e.g. "Bob always leaves his office at 5:00 PM and always swerves sharply to avoid the pothole at such-and-such corner street; let us add that to our system and improve it").

3. If you think the above scenario is ridiculous, then the next thing you would probably ask for is accountability. In other words, at some point you are going to ask these companies to open up their data collection processes and algorithms to the world. That is exactly what would happen if the entire thing were a completely OSS-based process. There isn't an equivalent problem for the human driver, because you have sufficient faith in a human's instinct for self-preservation that you will not demand a real-time thought-reading machine to warn oncoming traffic when the driver is having an onset of road rage.

> So as Elon Musk said recently: "Whatever this thing is you are trying to create... What would be the utility delta compared to the current state of the art times how many people it would affect?"

This is also being debated. There are side effects, and some of them are invisible. The current state of the art (i.e. the inefficiency, or rather the inadequacy, of humans at these tasks) does not, as a side effect, also rob society of its peace of mind. Imagine if, for every piece of information collected, a tiny pebble were also placed somewhere in your neighborhood. By the time these systems have reached the utility delta you are happy with, we might have a mountain the size of Everest. Will we? I don't really know, because it is invisible. Some people would still be OK with it. But most people, hopefully, would want to see the size of the hill: is it a molehill or is it really a mountain? The lack of accountability surrounding these questions is actually quite shocking to me. [2]

[1] http://www.economist.com/news/science-and-technology/2169692...

[2] Not to mention the other cascading side effects of the data collection process itself, such as personal data, collected in ways you don't even know about, being collated, made sense of, and sold to the highest bidder.



