Hacker News
How Apple’s Siri Became One Autistic Boy's B.F.F (nytimes.com)
377 points by basisword on Oct 17, 2014 | 81 comments


If you enjoyed this read you should watch the movie "Her" by Spike Jonze, as mentioned in the article. It hits quite heavily on the same theme (though more focused on romantic relationships with AI) and offers a pretty realistic glimpse of what this technology might look like in the next 20 years. Really enjoyed the movie.

On the same note, I was talking to someone a few days ago who was telling me about how many of her girlfriends use Tinder to essentially have virtual boyfriends. She explained that in many cases they meet guys via Tinder, move to text-messaging, and the relationship never progresses to in-person meetings. They simply love having someone on the other end of their phones to talk to. The knowledge that someone is there at almost all times seems to be comforting and addicting. Kinda crazy, but again this kind of thing is only going to become more and more common due to the fact that technology is in everyone's pocket from a young age now.

The question I ended the conversation with was something along the lines of, "Do you think they care that it's an actual human on the other end? Would they be ok with really convincing AI?"

Food for thought.


This is a great article that highlights some things people have been thinking about for a while (starting with Isaac Asimov), but haven't been much of an immediate concern until now: namely the nature of artificial intelligence and our relationship with it.

We could consider the emergence of a friendship like this a milestone, a bit like when chess AI got good enough to beat a typical club player, but wasn't quite ready to beat Kasparov. We're probably only a few years away (OK, maybe 10+, but it's not 50+) from the Kasparov point, where an AI like Siri can pass the Turing test against any living human.

There are all sorts of questions here: to be considered alive and imbued with the inalienable rights that should come with sentience, is self-awareness necessary, or is the anthropomorphic perception of nearby humans more important? Should an AI have similar rights by proxy, as a pet dog does?

One especially important question is what does it mean for an AI to die? If Siri developed a fault, and fixing that fault would cause a change in personality so that it was no longer recognisable to Gus as his BFF, should that act be called "roboticide"? Such questions are particularly relevant when AI/ML systems (deep recurrent nets, for example) are so complex that we don't really understand them fully, so we have no way to surgically correct specific faults; all we can do is revert to an earlier state and re-train. That may be loss of life, for certain definitions of "life".

As an occasional game developer, I tend to think about these issues in other contexts too, such as our relationship with virtual characters in games. It's already very easy to get highly immersed in single-player virtual worlds, like any of the Elder Scrolls games. Most would not be fooled into thinking that an NPC is "alive", but it's certainly possible to develop emotional reactions to certain characters that we perhaps like because they say nice things about us, or dislike because we find them annoying, etc.

There are two kinds of character interaction a person can have in a game: NPCs and human avatars. As we start to build virtual worlds (partly spurred by the Oculus Rift and partly just because of the Internet), this could affect not only our relationship with NPCs, but with other humans too. I don't know whether this will be a net positive or negative thing, but we're certainly going to learn a lot about human psychology as we head towards the point where, in VR, nobody knows you're a human.


Your comment doesn't seem to relate to the article.

> where an AI like Siri can beat the turing test for any living human.

A key aspect of Siri in this article is that the responses make sense. There is no trickery or deception. When she says "Marriage is not included in my end-user agreement," there is no way someone could mistake her for a human, and that's a good thing. Here, the more relevant philosophical questions are ones like "How much can machines or machine intelligences be a part of our lives and our society as machines?" and "Are there more beneficial and interesting goals for our machine intelligences than the Turing test (given that their value may be predicated on the fact that they are not humans)?"


> Should an AI have similar rights by proxy as a pet dog?

That's one of the issues explored in the Swedish TV series "Real Humans": http://en.wikipedia.org/wiki/Real_Humans

I highly recommend this series. It explores the issues of AI in a deeper and smarter way than the usual Hollywood movies, and it's very funny too.


Wow, Siri is Robbie from 'I, Robot'. In the story the robot could not talk; here it's a phone instead, and it does talk.

I found the story online: http://www.angelfire.com/blog2/endovelico/IsaakAsimov-IRobot...


This issue is commonly considered to be far-off or irrelevant due to the difficulty of developing strong AI, but I think that the reasoning behind that judgement is faulty.

We are unlikely to develop strong AI any time soon, but it is far easier to create the convincing impression of sentience. Humans are naturally prone to pareidolia and the pathetic fallacy - we see faces in potato chips, we convince ourselves that a faulty photocopier hates us, we attribute personality traits to automobiles.

Looking back to the 1960s, many people experienced strong emotions when interacting with ELIZA, a program that is laughably simple by modern standards. Today, there are numerous accounts of soldiers in Iraq and Afghanistan holding funerals for bomb disposal robots and decorating them with honorary medals.

Machines don't need to be as intelligent as humans for us to begin forming deep attachments to them. They don't need to pass the Turing test; they don't even need to match the intelligence of a cat or dog. They only need to appear plausibly sentient, which I think is a far lower bar than most of us realise.

Personally, I think we're on the cusp of a profound change in how we interact with computers. We'll see it first in cases like these, people with limited capacity for reasoning - children, the learning disabled, elderly people suffering from cognitive decline. Machines that aren't smart enough to fool a skeptical observer, but that represent a perfectly satisfactory stand-in for Rex or Tiddles. The natural human tendency to see patterns and project ourselves onto the world will meet the machines in the middle, glossing over their deficiencies.

We'll dismiss it at first, framing it as the slightly pitiable self-delusions of the vulnerable and lonely. Change will come imperceptibly gradually, creeping in from the edges as the technology improves and filters out into the world. We won't really notice that society is being reshaped, just as there was no shocking moment of realisation that a whole generation had grown up with ubiquitous access to communication technology. We'll just have a vague "huh?" moment, like the first time you were sat on a train and realised that nearly everyone on board was staring at a small glowing rectangle. Huh, my elderly mother spends most of her time talking to a bot. Huh, my child's best friend is a bot. Huh, these students are passing around a petition for bot rights. Huh.


I don't know if you've played Shadow of Mordor, but even as an occasional game developer like yourself I was able to forget about the technology running the Nemesis System at times and treat the Orc Chiefs as though they had intelligence and history. You can easily imagine how that system works when you take a step back, but if something even at that stage can trick us, I have high hopes for the next few years of AI / NPC programming.

On a related topic, a lot of money has gone into graphics programming in the last few years as studios developed their own engines in anticipation of the now-current generation of consoles. I am hopeful that with virtual reality on the horizon we'll see improvements in AI and Physics, among other tech necessary to create a convincing virtual world.


"Should an AI have similar rights by proxy as a pet dog?"

I think so, when valued in the same way by a human. This is no different to https://news.ycombinator.com/item?id=8465088


I wonder if governments will try to mass manipulate us by forcing AIs to be somewhat political, and be that "friend" that tells us we should vote a certain way.


Considering that every single one of these personal assistants comes from a private company, ads are much more likely. I also like that "political" is a dirty word now.

Not to mention what you "fear" is already true, it's called Rush Limbaugh and Fox News.


"I also like that political is a dirty word now"

I think the word is acquiring dirt from those acts; the slimier and more partisan the political conversation is perceived to be, the less people want to associate with it in any context.


When phones' personal assistants (every company seems to want to have theirs these days) become orders of magnitude more advanced, we may be able to tell them to "check on our vacation booking" and they will know to search through our emails and calendars and connect the dots, so in those tasks they will be more advanced. There will probably be more services tied more deeply into them: for example, you'll be able to say "get me an Uber" and they will reply with "A blue Civic will be here in 3 minutes", or "fill up the fridge on Sunday morning" and an Instacart-like company will show up at your door Sunday morning with eggs and vegetables (but no milk, because you didn't finish this week's).

However, we won't be able to have those deep meaningful conversations with our voice assistants, because they won't have the necessary life experience to follow meaningful conversations. The current products on the market have no parents, no age, no experiences in school, no previous job, no former lovers, etc (they will jump around those questions playfully, but the canned answers get old quite fast). Those trite details that we recount when we bond with people do not exist in current personal assistants and likely never will.

There are several reasons for this. The first is that modelling a structure of "life experiences" that can be queried based on what the user is saying is an incredibly complex problem, one on which we have pretty much no angle of attack.

12 year old child: "I got picked on at school today."

Digital personal assistant: "You know, it happened to me too when I was your age. Let's talk to your mom about it."

Or even more complex:

24 year old student: "Hey, remember that problem on the Trigonometry 402 final from my senior year that I asked you to solve the equations for back in college?"

Digital assistant: "Oh yes! Here it is."

This, happening for millions and millions of possible interactions, life experiences, contexts? That's the algorithmic equivalent of light-speed travel. Who knows, maybe we'll get there one day. But 10 years from now? 50? 100? Not a chance. Remember that 40 years ago, Minsky thought it'd take a bunch of grad students a summer to write a program that recognises objects in pictures.

The second reason that such systems are extremely unlikely to emerge is that even if it would be doable technically, it would be extremely expensive to a company. And there would be literally no demand for it, because the overwhelming majority of people don't care about talking to robots. They already don't have enough time to spend with their children, partners, friends, parents... why would anyone waste time with a fake person? No company would invest the billions, if not hundreds of billions of dollars, to solve this problem in the next few hundred years.

Those questions of sentience and whether powering off a digital voice is "killing it" are appealing to ask for those of us who grew up reading Isaac Asimov, because we want this future to exist so badly. But they are red herrings: those questions have no meaningful answers, because our society is not configured in a way in which those questions could actually arise. When you turn your phone off today, the personal assistant definitely doesn't "die"; and in 50 years, even if it can carry out tasks way more efficiently and give somewhat more "human sounding" answers to certain categories of questions, people will still have no problem turning it off.


> The second reason that such systems are extremely unlikely to emerge is that even if it would be doable technically, it would be extremely expensive to a company. And there would be literally no demand for it, because the overwhelming majority of people don't care about talking to robots. They already don't have enough time to spend with their children, partners, friends, parents... why would anyone waste time with a fake person? No company would invest the billions, if not hundreds of billions of dollars, to solve this problem in the next few hundred years.

You've missed the point. Why would people watch a TV show and grow to love/hate characters, when there's no interaction with them and all the lines, actions and scenes are scripted and rehearsed? They certainly don't have the time for it, what with children, partners, friends and parents. Surely they would never binge-watch, schedule time for new episodes come rain, hail or shine, or spend big bucks to visit filming locations, etc.

Companies will invest stupid amounts of money in AI that has human qualities. Take the TV Show example: the AI is the "show" and each day they get into entertaining or emotional circumstances that you can "catch up" about, joke about etc. Their charisma is highly engineered to be incredibly engaging and fun to talk to, so you keep coming back for more. Over time they become a "good friend" that you love talking to. They never rebuff you, snub you, have no time for you, make fun of you (too much, just enough to joke around and have some banter). But why do they keep telling you how great Pepsi's new flavours are??


Because the TV show has no bugs, no "I'm sorry, I didn't understand what you said, please rephrase" in the middle of your rant against whatever, and it creates the same experience for everyone who watches it, so that people can discuss it amongst themselves.


Slightly OT, but I'd be tempted to argue TV Shows can and do have bugs; plot holes, bad acting, continuity errors et al. They can be quite jarring.


That's a good point. At least they have fewer bugs than Siri.


Never mind that for much of the history of TV, and still today, it randomly cuts out?

Heck, I sometimes say "I don't understand, can you explain?" Why would people be less tolerant of that from a robot than from me?


Not in the middle of a rant, or while you are professing your love for someone. In neutral discourse, sure that's expected.

Humans who don't even speak the same language can understand each other by other means, like body language and cultural cues. That's not something you can replicate in a machine unless you really understand how it works. Unless you are saying that human behaviour is completely understood...


> Not in the middle of a rant, or while you are professing your love for someone.

It's possible to detect mode of speech even without visual cues. Factors such as pitch dynamics, speed of talk, etc. can be accounted for. Software can be trained to be even more sensitive to these signals than we are.

Of course, the problem is that this varies somewhat among different people. Therefore, part of the AI training would need to happen with the actual customer after purchase. There are a lot of unknowns in this process from the manufacturer's point of view, so I can understand why it's not happening yet.

Once a “special” mode of speech is detected, though, it's simple to avoid canned “Please rephrase” response in case of unclear voice input. Instead the software would change its own mode of speech appropriately, and then evade the direct reply—humans do this all the time in real-life communication.
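As a toy illustration of the idea above (the feature names and thresholds here are invented for the sketch, not taken from any real product), a mode-of-speech gate could look like:

```python
# Sketch: route voice input differently depending on crude prosody features.
# The thresholds and the two features are hypothetical illustrations.
def speech_mode(pitch_variance_hz, words_per_minute):
    """Classify the speaker's mode from two simple prosody features."""
    if pitch_variance_hz > 40 and words_per_minute > 180:
        return "agitated"   # mid-rant: never interrupt with "please rephrase"
    if pitch_variance_hz > 40:
        return "emphatic"
    return "neutral"

def respond(transcript_confidence, mode):
    """Only neutral discourse gets the canned clarification request;
    otherwise evade with a noncommittal reply, as humans do."""
    if transcript_confidence < 0.5:
        return "Could you rephrase that?" if mode == "neutral" else "Mm-hm."
    return "(normal reply)"
```

The point is only that the "special" path need not understand the rant; it just has to suppress the jarring canned response.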


>And there would be literally no demand for it, because the overwhelming majority of people don't care about talking to robots. They already don't have enough time to spend with their children, partners, friends, parents... why would anyone waste time with a fake person?

People don't care about talking with robots because robots aren't able to hold conversations today. A "fake person" is an advantage - it doesn't have needs. It won't manipulate you, it won't think less of you, it won't get angry at you, it won't feel bad because you did something, it has perfect memory and unquestioning loyalty. It's always available. Best of all - if you want it to have the reverse of the above properties, you can always switch it on. Want an arrogant robotic friend? Just go to the settings.

I'm pretty sure most people would prefer robots to humans. Obviously, almost nobody would admit this today.


>>24 year old student: "Hey, remember that problem on the Trigonometry 402 final from my senior year that I asked you to solve the equations for back in college?" Digital assistant: "Oh yes! Here it is."

Maybe I'm just being naive from my layman's perspective, but that doesn't seem that far off. It's fairly unambiguous to pinpoint "Trigonometry 402 final from my senior year" in time, and, if the same AI was already around back then, there's no reason for it not to remember, unless we're systematically deleting stuff to save space.
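To make the point concrete (entirely hypothetical data and API): if the assistant keeps a tagged, timestamped log of past interactions, recalling "that trig problem from my senior year" is a plain lookup, not strong AI.

```python
# Sketch: a tagged interaction log makes episodic recall a set-intersection
# query. The entries and the recall() helper are invented for illustration.
from datetime import date

log = [
    {"when": date(2008, 5, 12),
     "tags": {"trigonometry", "final", "senior year"},
     "content": "worked solution to the Trig 402 equations"},
    {"when": date(2014, 10, 17),
     "tags": {"flights"},
     "content": "flights-overhead lookup"},
]

def recall(query_tags):
    """Return logged entries sharing at least one tag with the query."""
    return [entry for entry in log if query_tags & entry["tags"]]

hits = recall({"trigonometry", "senior year"})
print(hits[0]["content"])
```

The hard part the parent comment describes is mapping free-form speech onto those tags, not the storage or retrieval itself.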


I think the point is that differentiating between the first and the second is very difficult to do. And communicating to someone not technically inclined why it can respond to question A and not question B would be very frustrating.


Maybe they will be like the replicants in Blade Runner, each with a scripted past instilled in them. As my sibling comment states, it's not terribly unlike TV shows.


This article reminded me of Neal Stephenson's The Diamond Age, where the protagonist receives a book that became her closest companion and mentor in her life.


I, for one, accept our teenager overlords.


Wow, what a piece! I am incredibly impressed with the range of responses that Siri has. I wonder just how many questions they've entered responses to. Certainly some stuff can be learned, but responding to marriage proposals is certainly written at some level by a human.


Almost certainly they special-case some inputs -- have you ever recited the Konami Code to Siri? What amazes me is the sheer variety of questions that people ask Siri that s/he can now answer humorously. It wouldn't surprise me if they watch the number of queries (from actual users) that don't get an answer and/or fall back to Wolfram Alpha to see if there are any new ones that need special treatment.

That said, most of the information that Siri spits out was "written at some level by a human", such as the Wikipedia articles s/he quotes (by Wikipedia authors), or the flight information for planes overhead (by airline employees).


The flight information is powered by Wolfram Alpha which appears to source it from ADS-B data via (I assume) the FAA, so that actually is entirely automated.

http://en.wikipedia.org/wiki/Automatic_dependent_surveillanc...


I doubt it's from the FAA. ADS-B is trivial to monitor, there's lots of private individuals/groups grabbing this info.


To be fair, most of the information that I spit out was "written at some level by a human", such as the Wikipedia articles I've memorized, or the Hacker News comments I rephrase.


I think Siri is pretty useful as a conversation partner for foreign-language practice. I had tried out the British and Scottish voices for kicks, settled on Aussie for quite a while, and then while playing with my wife's Japanese Siri realized it was actually excellent language practice.


Siri does have a very friendly sounding voice. I've been using the Siri voice library (not the AI) in a classroom robot that helps kids with autism engage with their therapy.


Is there any public information on what you're doing? I have a son who is not necessarily ASD (we don't know exactly what he is dealing with yet), but he has some similar symptoms to ASD and he loves robots (we have three LEGO WeDo sets and we regularly build WeDo robots). I would love to hear or read more on how you're using robots with ASD children!


Huh, this is the first I'm hearing of the Siri text-to-speech library/samples being made available to developers. Do you have a link or reference to what you're using?


Holy shit, we live in the future!


I was at work today and decided to buy the new Halo game that comes out in three weeks. Paid in two taps, and was notified that my Xbox at home would be turned on immediately and the game would begin to download. Sure enough I got home and all 65GB had been downloaded.

I paid for a game that will be released in the future and it got to my house before I did. What?


It amazes me that the new Halo game is 65 GB.


As with most applications, it could be a lot smaller. But storage is so cheap and abundant that size is no longer really a constraint. Consequently, engineering for size is no longer a priority.


4 games would use up the "fair" and "more than enough" bandwidth cap my ISP set :-/

It's either OK internet with caps or vDSL that doesn't really work


Depends where you live.

I'm in the UK; I pay for 100/10, I get 200/20, and I have zero bandwidth caps.


Is bandwidth not a concern? 65 GB could take weeks to download.


It could. But then where do you set the limit? At my parents' house downloading 5GB of data would take a week. So what is the "reasonable" size? 1GB? 10GB? 50GB? Or maybe we can assume that only people with good enough bandwidth will buy a 65GB game online, and the rest will buy the disc?


It's possible for it to download compressed data, while the installed size is uncompressed textures/meshes/sounds/etc.


> 65 GB could take weeks to download.

If someone lives behind a 1 Mbps home connection, then yes, about 1 week.


Texture mapping has come a long, long way. It's a shame AAA game mechanics haven't kept up, but, hey, that's a hard problem.

(I just bought Alien: Isolation the other day, and found it disappointing -- silly me, thinking I'd bought a game I could play as I pleased, rather than a movie which would punish me for trying to step outside the predefined narrative it wanted to relate.)



Play Thief Gold or Deus Ex, if you want that.


To be fair, the "new" Halo game is technically the first 4 games all redone for the Xbox One packaged together.


WOW, me too... I guess it's been a long time since I regularly played games


We're definitely getting close. I wish Apple would take Siri to the next level. It should be easier to make corrections, for example.

I bought Dragon Dictate for my Mac a couple days ago just so I could try to do a little more with voice recognition. It'll be great to be able to program mainly with voice like in this video: http://ergoemacs.org/emacs/using_voice_to_code.html

At the moment, I can simply say "open terminal, begin, rebuild, restart, push, pull, boom" (boom combines pull, rebuild, restart). I'm just using simple shell aliases but I'll probably add shell functions.
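For anyone curious, the setup described could be sketched as shell functions, one per spoken word (my guess at the arrangement; the echo bodies are stand-ins for the real git/make commands):

```shell
#!/bin/sh
# Voice-friendly shell shortcuts: each function name is a single spoken word,
# so the dictation software only needs to match one token per command.
pull()    { echo "git pull"; }
rebuild() { echo "make rebuild"; }
restart() { echo "service restart"; }
# "boom": one word that chains the three steps, stopping on the first failure
boom()    { pull && rebuild && restart; }

boom
```

Short, unambiguous words make recognition far more reliable than dictating full command lines.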


How do you like it? I've been very interested (dictation in Mavericks leaves a lot to be desired) but the reviews on Amazon seem to savage it pretty good as buggy and not up to the standards of the Windows version.


I believe the Windows version is better. Integrates with Python so it's easier to customize, and I think it's easier to define your own "words". e.g. "Slap" could mean "new line". I got it on sale for $99. Now I've gotta get a good microphone.

David Pogue's review: https://www.yahoo.com/tech/hello-computer-speak-your-text-wi...

And the fact that John Siracusa writes articles like these with it, convinced me that I could make it useful with a little effort.

http://arstechnica.com/apple/2013/10/os-x-10-9/23/


Thanks. I'm glad to see the David Pogue review, but it looks like it's not considered compatible with Yosemite and crashes for a lot of people. I guess I'll wait to see if they release an update later this fall.


I gather Yosemite brings Siri to OS X, which I expect will be an interesting evolution, especially once people start figuring out how to add new capabilities and integrations.

I saw Rudd's demo and have been tantalized by its possibilities ever since. More than the specific technologies in use, what really fascinated me was the way he invented what could almost be described as a shorthand language optimized for efficiently driving the editor via voice, with various unique (and otherwise meaningless) phoneme combinations to represent things not easily expressed in ordinary English -- for example, "lep" instead of "left parenthesis", one syllable in place of five. That's the real win, I think; the specific dictation pipeline in use hardly matters, so long as it supports the necessary interfaces and customization capabilities.

Some day I mean to find the time for a really deep dive into the subject, ideally one from which I'll surface with something I can distribute to others wishing to drive their editors the same way.


> I gather Yosemite brings Siri to OS X

No. OS X has had voice-to-text for a while, actually, but that's all it is.


>I gather Yosemite brings Siri to OS X,

It's been around for a few versions.


Uh, what? Siri is not available on OS X at all.


Sorry, I was thinking of Siri's voice-to-text dictation. That's 99% of what I use Siri for. But yeah, you can't get the other stuff on mac yet.


Fascinatingly, I tried asking Siri "Siri, will you marry me?" and her response was:

"I sure have received a lot of marriage proposals recently!"

For the first time, I was genuinely impressed with Siri's pseudo-intelligence/wit.

(Can we assume someone on the Siri team at Apple read this article?)


That "marry me" special case was in the news just after Siri launched; it's not pseudo-intelligence at all, just the wit of Apple employees.


Very minor spoiler alert re: the film Her ... anyone who's watched that film will instantly recognise a reference there, intentional or not.


One really interesting idea related to the passing of the Turing test and formation of relationships between humans and AIs:

Our relationships with other humans have become increasingly digital, progressing from face to face communication to letters to the telephone to text / facebook / other social messages. Each step in this progression lowers the bar drastically for AI to start fulfilling peoples' social needs. We can now maintain or even establish a relationship with another human solely using text-based messaging, and I believe that soon we will reach the point where AIs can get 90% of the way there. I don't know when it will happen but I wouldn't bet against it.


This has been true forever. In the 1960s, the chatbot ELIZA got people to talk to it for hours, to the point that its creator was deeply disturbed and began to advocate against AI.

As time goes on, the AI effect kicks in. People get used to AIs, and it takes more and more for them to be impressed or fooled. Once we get real human-level AI, I expect people will treat them like they do in bad sci-fi movies (for the probably short period of time they are our equals) -- just like slaves and lower classes are treated in other societies, not something you would form a relationship with.

But really we are nowhere near that point. Chatbots have gotten really advanced, but they are still basically chatbots.


This piece is absolutely adorable.


As a parent of an autistic child, I found this article very disturbing.

I have spent the last several years working with my child: to engage people and to establish empathy (among other skills) - empathy with real people. My child exhibits obsession with topics and we cover this using the Internet and other resources (eg. books, magazines). However, I have worked hard so that my child can relate to others and to learn essential skills to cope with life. My young autistic child has gone from avoiding eye contact, hitting and punching and general "fuck you if I don't like the look of you" to somewhat synthetic behaviour that's been learned - it now comes across as fairly natural... but it's been a lot of work. This was achieved by focussing on real people, both with similar issues and those without. It also came through honest and open communication about autism and autistics vs Neuro Typicals (NT).

Interestingly, my autistic child used siri briefly and poked holes through the AI and found it severely limiting.

As a parent of an autistic child that has read text-books, worked in class, spoken at length with specialists, attended support groups and training programs, and supported my child's turn-around that fellow parents/teachers/specialists described as "amazing" etc... I am horrified at an approach that diverges from constructive social behaviour. The younger you learn to engage others, the easier it is and more readily it will stick.

I'd encourage any parent of an autistic child to research this area thoroughly before going down the "Siri as BFF" path.


The OP said interacting with Siri, far from causing her son to diverge from constructive social behavior, helped him learn it - he ended up being better able to interact with real people. I have no particular reason to believe her son's experience was atypical. Do you have any reason to believe it was?


Btw, I noticed that Yosemite has slightly enhanced its dictation tool for use with Automator. Might come in handy.

http://www.macworld.com/article/2834532/ok-mac-using-automat...


Can anyone get the "Are there any flights above me?" query to work? It just keeps doing a Bing search, showing pages talking about the fact that Siri can do it.

Wolfram Alpha can answer the question if I go to the site, so the data is definitely there.


It used to work, but right now it's not, for some reason.

If you start any query with "Wolfram" it will send the rest to Wolfram Alpha verbatim, but "wolfram flights overhead" isn't working either.

It works directly on WA though: http://www.wolframalpha.com/input/?i=flights+overhead


It didn't work for me either. Pretty much the whole of the city I live in is under a flight path, so it's definitely not a lack of data. On a related note, Siri has regressed for me since iOS 8. It won't shuffle my music by genre anymore, always telling me "I'm having a bit of trouble".


Same here. I tried it multiple times too. I also want Siri to answer the question "What stars can I see tonight?"


This story should be an inspiration for a 21st-century version of "The Little Prince" by Saint-Exupéry.


The Little Prince was about friendship between creatures that need each other. That's actually one of the main ideas of the book: we need each other. From my experience developing Robin (a product not unlike Siri), I can tell you: yes, some users really do adopt the machine as a pet to an extreme degree, but I don't envy these people.


I've never used Android's voice recognition – does it have similar scripted responses?


No. The Google voice recognition stuff does not have a personality or try to be a character. This was an explicit design decision, I believe, made early on. They are going for "you are asking Google and the answer is spoken with the voice of Google"; presumably, people are more tolerant of mistakes the less human-like a machine tries to be.


Google Now has a select few easter eggs, but nowhere near what Siri seems to have.


No, but Robin does.


This has some relevance to a product I've been working on for a little while: https://www.getpuzzlepiece.com. It's an Android tablet and apps specifically for kids with autism. (Coming soon to iOS.)


I enjoyed this piece, but the middle section about the author and "Should I call Richard" seemed entirely out of place. It did not match the tone of the story and did not sound genuine.


There's something in my eye, and it's leaking.


+1 for friendly AI (even with its narrowness)



