> Graham said that, after thousands of founder presentations and pitches, he and the YC partners are now able to tell “within minutes” whether a startup will pass muster or not.
This I find slightly worrying. It's amazing if it still works (and I've been wondering how YC was still able to scale), but is the minimum amount of information that needs to be exchanged between startup and YC in order to get a good assessment really that small? Or is the quality of selection (due to scaling issues / less time per applicant) nowadays compensated by the effect of YC's reputation?
I didn't say we can always tell within minutes, just that we sometimes can. Also, the interview is not the only information we get. We've also read the application.
Having interviewed many people over the years (not for an accelerator though), you can easily tell within ten minutes if someone cannot pass muster. I have never had anyone who stumbled over the first three questions I asked recover on subsequent questions. Nowadays, the only reason I keep interviewing people who stumble over the first three questions is out of politeness.
As I am usually interviewing people for one position, which they need to perform well in, it would take more than ten minutes to get to a yes. But since YC gives sixty or more yeses a batch, and they still win if only a few of them pan out to be a Dropbox or a Reddit, then they can probably afford to give a yes in the first ten minutes as well. Because who is a superstar is also obvious in the first ten minutes. The only question for a superstar beyond the first ten minutes is whether they are a fit or not (for example, would the obviously motivated and skilled author of Temple OS, who is marked dead here on HN, be a fit).
I have to disagree with the "can easily tell within ten minutes if someone cannot pass muster" part. Interviewing is hard. We tend to make such snap judgments but those judgments reflect our own biases.
I have come to believe over the years that interviewing measures interviewing skills. Test scores measure test taking skills. Success on the job requires success-on-the-job (to coin a phrase) skills. All those things correlate, but the correlation coefficient is not super high. In fact, ignoring those correlations can be an effective strategy to find great people.
Aren't "interviewing skills" essential to founders? Isn't a YC interview basically a chance to pitch your company and answer probing questions about your plan? Seems like a pretty fair qualification to me...
Fundraising is not a CEO's endgame, but it's very similar to the day-to-day of CEO work: promoting the company to others; being the public face; communicating clearly, concisely, and effectively; making sure that the company 'works' in all the ways that matter.
All of these are 'soft skill' competencies that hackers tend to downplay. But you need someone who can do this and do it well.
I don't downplay the importance of those skills, but are they essential to all CEOs? Does every company need a public face? Absolutely not, at least not for startups. Depending on your core competencies and what makes you a good CEO, the public-facing work, to the extent it is necessary in any given company, can be delegated.
In other words, by failing to distinguish a scaling phenomenon (1->N) from a gating criterion (0,1), you are making an attribution error concerning the "essentialness" of your explanatory variable.
For example: co-founders are typically not "interviewed" for the job. But (a) their selection is essential; and (b) if you can find a co-founder, you can find any lesser employee.
It could be argued that having the ability to "recruit" without a formal interview is actually the more essential skill.
Except you completely disregard cultural backgrounds, but meh, to each their own. Interviewing, just like any other skill, can be practiced.
edit: Not saying I disagree per se. I can tell really quickly if someone knows their stuff by working with them for a day... or so I'd like to think. The truth is I've been completely misjudged based on interviews before. I've seen people at the top of their fields make the same mistake with other people because they "seemed to know" based on talk. In the end, talk is just that: talk.
You know, I know that PG has been beaten to death over the speaking-English-with-an-accent thing, which is unfortunate, because it got away from what he was trying to say. But this is an interesting point. Culture, language, and other similar factors that could lead to poor interview performance are something that I hope we can somehow figure a way around.
Now given that, let me say that I completely understand what PG was trying to say there, and I hope that I haven't hijacked the conversation.
Let's not leave the "speaking with an accent" thing alone just yet.
PG's point was that if your accent is sufficiently strong that people have trouble understanding you, then you have a big problem. And that's something that is obvious within a sentence. So the person opens their mouth, and you know it is a no.
If the person merely has an accent, that's OK.
And so we have confirmation of what he's saying now. There are times when it is obvious within the first 10 minutes, in fact in the first 2, that you need to say no.
PG points out regularly that there is no way of predicting whether a startup will succeed. So, I'd guess that much of what they do is try to filter out companies that are likely to not succeed.
If YC has nailed down the process, then only formidable teams would make it to the interview, and then the interview would serve to simply make sure the dynamic and attitude of the team is not such that they'd be likely to not succeed (sorry for the double negative, but I think the inverse would have a different meaning). It seems reasonable that after hundreds of these interviews, YC partners would be able to spot those dynamics in minutes.
Well, they're really trying to filter out companies which won't sell for millions. I don't think that everything below that is failure. Perhaps he should have gone with "won't wildly succeed"?
If you are the Richard Feynman of your field, don't you think you could pick up whether someone is a good physicist or not in 10 minutes of talking about physics? Not just the physics per se, but the mental models you have, the experiments you are going to perform, what kind of theories you have, what you've worked on before, etc. I certainly do, especially if you get to ask the questions.
I can sort of see this for physics, but what if it's programming or software in general? I wouldn't be able to tell if someone is good if they were describing something to me in F#, as an example. They could be going about it the wrong way.
What if someone was really onto something and they were describing it to you? They might be trying to explain it as simply as they possibly can, and you might think you understand, but a lot could be lost, and your impression might be that that person is not good at what they are doing or that it's a toy.
It all comes down to good communication, but I think overall we learn more over more time. I get that there is a time constraint with the current approach, so maybe, to scale things, there should be some sort of engine that processes people/teams from a watch list over a period of time. For example, mine data from a private YC journal that applicants can start writing in a year or six months in advance. I don't think that should be a problem, since a lot is submitted in the applications (my basic understanding of the process).
That's true. What about people like da Vinci or Michelangelo?
I am not comparing pg to any of these three people, at least not directly. I'm just claiming that it's not unlikely there's some kind of taste, knowledge, or what have you which he (and the other partners) possesses to a much higher degree than Random Joe Investor.
This may well be an example of our ability to find patterns based only on "thin slices," or narrow windows, of experience. [0]
Called "thin-slicing", first read about it in "Blink" by Gladwell, which is based entirely on this concept, and has plenty of great examples of this phenomenon (The Wikipedia link below has some of them).
I think the human brain is the ultimate pattern-matching and learning machine, and when trained long enough develops an ability to discern patterns from the faintest and narrowest of signals.
As a founder of a prospective startup, I know that so much can change in a matter of days or weeks — not just about the people/market/concept, but also with the founders themselves as we "grow up". I wonder how many founders came into their pitches throwing red flags, but who would have outgrown them during, or even before, the YC program began.
In my honest opinion, they should just re-apply afterwards. It's impossible for investors to see whether those red flags will or will not disappear during development. They are quite explicit that the pitch is paramount and why it is so.
Probably quite a few, especially those like Drew Houston who didn't get in on their first try. Presumably on that first application Drew had some of the same good credentials (MIT grad, perfect SAT score).
It's a shame YC doesn't provide feedback on applications…I'm sure they make notes as they go through the apps (at least for the ones they're not sure about) — would be great for them to have a system to include those notes with their decision notifications!
I would imagine that there is always the chance of 'false negatives' (people you pass on as failures who eventually succeed), but they are probably pretty bang on at identifying positives.
As an investment firm, that's pretty much the only statistic that matters.
One could make the argument that amount of information to be exchanged in order to get a good assessment really is that small. If they mostly look at how you behave under pressure and whether or not you can still tell a consistent story (pitch your start-up), then you only need a couple of minutes. From what I've read, they judge team dynamic and character for the most part, only checking if you're able to think of a good idea because it would most likely change anyway.