
Please don't propagate the myth that Google asks prospective programmers questions like, "How many golf balls could you fit on a bus?" Google interviewers list the questions they asked when they write up their conclusions, and anyone who asked a question like that in an engineering interview would be immediately contacted by the hiring committee and told not to do it again.


What's actually in the post: "If you are a Google recruiter, and you want me to interview for SWE or SRE or any role that has an algorithm pop quiz as part of the interview, [...]"

I was asked about algorithms for my internal reinterview to transfer from SRE (O ladder) to SWE (T ladder) in 2010. It was the usual sorting algorithm complexity stuff. That's never been my strong suit, and I'm sure I disappointed the interviewer. I sure felt like crap afterwards.

On the other hand, the second interviewer engaged me in practical matters like designing a class which would do some things, and would be thread-safe, and how I'd rig it. Also there was the question of what you could do without a mutex for whatever reason, and when you needed to suck it up and burn the CPU time on it. Then we got into the actual design of a class like Mutex and the helper MutexLock wrapper normally used with it in the depot, and so on, and so forth. I imagine the responses from that individual helped balance out the algorithm drilling I got the day before.
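(Google's actual internal Mutex and MutexLock classes aren't shown here; as a rough guessed sketch of the RAII pattern the comment describes, a scoped-lock wrapper plus a thread-safe class built on it might look like this. Today you'd normally reach for std::lock_guard instead.)

```cpp
#include <mutex>

// Sketch of the MutexLock idiom: an RAII wrapper that acquires a mutex in
// its constructor and releases it in its destructor, so the lock can't be
// leaked on an early return or an exception.
class MutexLock {
 public:
  explicit MutexLock(std::mutex* mu) : mu_(mu) { mu_->lock(); }
  ~MutexLock() { mu_->unlock(); }

  // Non-copyable: copying a scoped lock would cause a double unlock.
  MutexLock(const MutexLock&) = delete;
  MutexLock& operator=(const MutexLock&) = delete;

 private:
  std::mutex* mu_;
};

// Example thread-safe class of the kind described above.
class Counter {
 public:
  void Increment() {
    MutexLock lock(&mu_);  // released automatically at end of scope
    ++count_;
  }
  int Get() {
    MutexLock lock(&mu_);
    return count_;
  }

 private:
  std::mutex mu_;
  int count_ = 0;
};
```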

Where are you seeing the "golf balls" question in this post? What you said is true, that it's a crap question, and asking it would probably draw attention to you, but why did you even bring it up? It's like you're blaming the writer for propagating something when it hasn't even been mentioned.


Personally, I think that this whole "oral exam" style of interview is completely wack. If you have a lot of experience with it, then you are likely to do great, and if you don't, then you are likely to suck at it.

When I was in high school, for instance, I was on the math team, and in my first year I got four questions right the entire year! By the time I was a senior, I was getting four answers right every meet, on average.

Did I get smarter between being a sophomore and being a senior? Not at all! I just had a lot more practice of that style of thinking in that particular kind of situation.

This being said, the golf ball question is no more ridiculous than any of the other questions that Google might ask you. That sort of question is designed to see whether you can do a "back of the envelope calculation" that will get you within an order of magnitude of the right answer. Being able to do this sort of calculation is actually an important skill for any kind of engineer. I don't think, however, that it's an important skill to be able to exercise while in one of the most stressful situations you will ever face in life.

Also, I have to take issue with the claim that Google doesn't ask you brain teasers. I interviewed there about three years ago, and I was definitely asked a brain teaser. It was couched as an algorithms question, but it wasn't the sort that you'd see in a typical algorithms class. It was the sort of question where you only come to the answer by having a leap of insight and a light bulb goes on over your head. I.e., this is how all "brain teasers" work. And most "Mathlete" questions, for that matter.

The problem with this sort of question is that if the light bulb doesn't go off in your allotted 20 minutes, then you look like an idiot. And if it does go off, you look like a genius. What if it goes off after 25 minutes when you're in the elevator? Too bad!

You might argue that you can talk it through, but this doesn't usually work for me. To solve this sort of problem, I usually just have to stare at the wall in silence until it comes to me. During the interview, I drew geometric shapes on a piece of paper. The interviewer must have thought that I was stupid. Or as stupid as you can be while wearing a Brass Rat. Until I came up with the right answer at minute 19.5, and then he must have written down, "Very smart indeed!" Or at least that's what I imagine, since they did ask me back for another round.


The assumption is that, if you were a genius, you'd be able to come up with that eureka spark in the time allotted in the interview.

I won't discuss whether this assumption is correct or not. But I will say that even if you hired such a genius, it would not ultimately make the company any more money, unless you could put that genius to great use rather than grunt work (which a genius would do no better than the grunt - that's why it's called grunt work).


> Did I get smarter between being a sophomore and being a senior? Not at all!

You learned nothing in 2 years of high school? Huh.


> You learned nothing in 2 years of high school? Huh

The type of questions that they asked at a math meet never exceeded the knowledge contained in Algebra and Geometry, which I had learned by the end of 9th Grade. Mathletic events were designed this way so all high school students could participate on an equal footing.

So, no, I learned nothing during those additional three years that increased my abilities on the math team. The only thing that increased my abilities on the math team was practicing solving the sort of algebra and geometry math brain teasers that they asked at math meets within very limited time constraints.

There's not a problem that I ever saw at one of those math meets that I couldn't have solved on my own given enough time.


She links to an article that propagates said myth. Click on "arbitrary technical puzzles".


Okay, there it is. That makes a little more sense. Thank you!


What do you mean by "O Ladder" and "T Ladder"?


Those were (are?) the designations for certain types of jobs over there. You might think that if you're hired into the company, your possibilities are wide open, but they aren't. System administrator flavored SREs (site reliability engineers) could only get other SA-flavored jobs, of which there are relatively few. Meanwhile, software engineer (SWE) type SREs could go into any other SWE job, of which there are many.

If you were hired on as a SA-SRE as I was, then you have to do an internal interview to get to be a SWE-SRE. If you can't make it through that, you're stuck. I made it, and a friend did too, but I know people who didn't. I'm sure that makes them feel great, especially if they're already doing SWE type work in their daily jobs.

I was told repeatedly there was no difference between the types, but found out the hard way when it was time to transfer from a toxic situation and there were few alternatives. It took over a year to finally get it all sorted out.


Google was well known for logic brain teasers like that in years past - there was a time when Google recruitment was defined by such questions.

They've since stopped, and sadly it looks like Microsoft has taken up the mantle for stupid irrelevant logic questions.


I had an MS interview a few weeks ago and it contained no irrelevant logic questions. Sure, it involved moronic things like regurgitating the Boyer-Moore majority vote algorithm and atoi, but at least it involved code.


retaken, perhaps.


True. Microsoft has been famous for those for decades.


Many others have read the published books and incorporated the same in their processes.

I had a series of interviews that was all logic questions with a trick that you had to decode to successfully answer it.

It's cute, and everyone gets to feel really proud of themselves if they can solve an applied prisoner's dilemma for the answer everyone in the room is looking for, but it's not relevant to the performance of any position that I was being considered for.

I'm involved in a lot of interviews and I think the problem is getting worse. Taking all of your questions from a certification exam, constructing a bubble sort, or talking about why manhole covers are circles might be satisfying to a bad interviewer, but does no one a service IMHO.

I'd suggest diving deep into past work experience and asking candidates to show you something that they have done well, so that you can understand the utility that they provide. Very few people that I encounter do this; most instead try to pound round pegs into square holes.


I've spent some time at MSFT (including a transfer) and know a few recent hires across the lake and NONE of them (including myself) have had any of these sort of questions. A few data structures and algorithms type questions, but never the 'how many pennies fit in this room' type stuff that they're supposedly so famous for.


Were they well known for it because they did it, or because people assumed they did it? An awful lot of people immediately jump to "stupid brain teasers" when hearing "they ask technical questions". I have people write code and solve actual problems during interviews, and have gotten several "these kinds of riddles are a waste of time" responses from people who are offended by the notion that I want them to be competent.


From my limited anecdotes of the time, I believe Google actually did it. We're not talking about programming-related brain teasers here, we're talking about non-programming ones.

e.g., "3 light bulbs in a room" and "family crossing the bridge" being the classic examples.


Never heard the family crossing the bridge one. Which one is that?


You have a bridge that can only take a certain amount of weight at once without collapsing. It's dark at night and you have only one flashlight. A flashlight is required to cross the bridge, which is traversable in both directions.

You have a family of people of varying weights (the exact numbers you'd have to look up) - determine the optimal way for the family to make it across the bridge.


Thanks. I don't think that's a bad one. In fact, like the Towers of Hanoi, I think it has enough parallels with computer science and engineering to be a good interview question.


Indeed, it is a specific instance of http://en.wikipedia.org/wiki/Knapsack_problem


I've heard the same thing, always from unreliable "I heard" sources. I've never heard anyone who actually did interviews or was interviewed say that.


I was interviewed by Google and had to answer these "brain teaser" kinds of questions. So, no, it is not just folklore.


Can you be specific with some examples? As demonstrated elsewhere in the thread, some people perceive the same questions as appropriate and others perceive them as "brain teasers".


Funny, I was asked that exact question at Google. Granted, it was ping pong balls, and it was an airplane, but same thing. It was also 2005, for an APM position, coming straight out of college.

Truthfully, for a kid coming straight out of college, that's not such a terrible question to ask. (No, I didn't get the job. :P) It allows you to start from a totally neutral knowledge position, and lets you see how a person thinks critically about an unorthodox problem - one they almost certainly haven't seen before.

The problem comes when kids start training for such problems, instead of just tackling the sorts of interesting challenges that would inadvertently cause them to approach this question well...


Why would that even be a bad question to ask?


Because they're not real logic questions and they don't predict anything. It's about the same as judging someone's programming/design/writing/etc. ability by making them do crossword puzzles or Sudoku.

The LSAT is filled with logic games, and they are nothing like a silly question like, "how many golf balls could you fit on a bus." They are actual logic questions with definitive answers that test your reasoning abilities.

Being good at brain teasers does not make you intelligent; it makes you good at brain teasers. And they don't help prevent Alzheimer's, so they're really just a way for bored people to pass time.


> a silly question like, "how many golf balls could you fit on a bus."

Why is it a silly question?


Because it makes all sorts of assumptions that are outside the scope of what you really want to test.

First, it assumes familiarity with golf balls. I have friends who grew up in Manhattan who have never seen a golf ball. They may have seen one on TV, but that doesn't let you really gauge the size of something.

Second, it assumes familiarity with buses, and assumes the interviewer and the interviewee are talking about the same kind of bus. Are we talking about a school bus or a big-city articulated bus?

Third, it tests skills you're not trying to test. I live in the city and take the bus all the time, and I don't know whether a bus is 20 feet long or 70 feet long. I'm not good at visually estimating measurements. That might be a skill relevant to a carpenter, but not so much to a programmer. Even if you assume you have the measurements, you also have to visualize the packing structure of the golf balls. This tests spatial skills, which are arguably not highly relevant to a programmer either.

Brain teasers are generally stupid because they are under-formalized. I agree with the poster above. If you want to test logical skills, give someone a section of the logical reasoning portion of the LSAT. Those questions were carefully designed to test logical reasoning and to avoid testing other things.


Generally speaking, when questions like these are asked (I used to work somewhere that did them, and was often responsible for evaluating the responses), nobody cares that you come up with the right answer.

Rather, I should specify, the right answer looks like this: Let's say a golf ball is approximately 2 inches in diameter, and let's say that the bus is approximately 20 feet long on the interior, with seat backs that are approximately 6 inches thick, seats that are approximately 8 inches thick, sit 18 inches off the floor, and are supported with posts of approximately 1.5 inches diameter.

The key is in being able to break down a problem, identify the challenges (seats are wonky shaped, for example), identify all the components of the problem (e.g., seats take up space, seat posts take up space, buses aren't perfect rectangles, etc.) and all that.

For what it's worth, I've evaluated that question more than a few times during interviews, and I have absolutely zero idea how many golf balls you can fit into a bus. If you don't know what kind of bus, ask. If you don't know the dimensions, ask.

If you're going to throw up your hands and claim that the problem is unsolvable (plenty of people do, some even got hired), then perhaps a job in solving problems with possibly unforeseeable parameters isn't the job you want.

There are plenty of other problems with the question, sure, but the key is in being able to figure out what the parameters are, at least loosely define them, and come up with a strategy for working the problem out. In the best answer I ever got, the guy asked to borrow the whiteboard and started charting out equations to calculate it (with variables such that if he was off on the diameter of the golf ball, you could replace it with the correct value and re-run the calculation). I stopped him well before he got anywhere close to actually solving the problem and recommended he be hired.
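That whiteboard approach can be sketched as a small function; all of the inputs below are guessed, replaceable values (bus interior dimensions, ball diameter, and a packing fraction of roughly 0.64 for randomly packed spheres), not anyone's official numbers:

```cpp
#include <cmath>

// Back-of-the-envelope estimate: bus interior volume divided by golf ball
// volume, scaled by a sphere-packing fraction. Every parameter is a guess
// that can be swapped out and the estimate re-run.
double EstimateGolfBalls(double bus_len_ft, double bus_width_ft,
                         double bus_height_ft, double ball_diam_in,
                         double packing_fraction) {
  const double kPi = 3.14159265358979;
  const double kCubicInchesPerCubicFoot = 12.0 * 12.0 * 12.0;
  double bus_volume_in3 =
      bus_len_ft * bus_width_ft * bus_height_ft * kCubicInchesPerCubicFoot;
  double ball_volume_in3 =
      (4.0 / 3.0) * kPi * std::pow(ball_diam_in / 2.0, 3.0);
  return packing_fraction * bus_volume_in3 / ball_volume_in3;
}
```

With guesses of a 30 ft x 8 ft x 6.5 ft interior and a 1.7 in ball, this lands at several hundred thousand balls; the point, as above, is the structure of the calculation, not the number.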

Of note, I do not now nor have I ever worked for Google, so I can't say how they perform those, if it is even true that they did, but that's how I've always approached them.


The problem is that people who aren't familiar with golf balls, etc, might be so put off by the question it impairs their ability to think through the problem. You're biasing the question against specific groups of people, on criteria that are completely irrelevant to the job.

I used to ask "can you break down the problem" questions all the time. I'd base them on programming tasks. Because, you know, I was interviewing programmers.


That's a fair critique, and pretty much what I said the first time I was asked to join the interviews to evaluate the answers.

Regardless, I do believe that being able to ask "how big is a golf ball" when you don't know is a crucial skill to possess, and I've found (anecdotally) that the people who threw their hands up, but were hired anyway, generally had poorer work performance because they either didn't know how to ask for help, or weren't willing to admit that they didn't know something.

Humility is a very good trait for any programmer, I think.


The desired result is not "X golf balls will fit in a bus" but the process of how the interviewee approaches the problem.

Are they entirely baffled by the question? Are they stuck, with no idea about how to proceed? Or do they ask for more information from the interviewer? Interviews are not about quizzes; they're a discussion. This question is a great way to start a discussion with someone.

"Well, I've never played golf, but I do play squash. So, I'll use a squash ball as a start."

"I have no idea how big a bus is. Let's assume a cuboid of let's say 10 ft by 10 ft by 30 ft."

"Really this question has some sphere packing stuff in there. Being honest, visualising that kind of thing is not my strong point. I'm much better at things like $TOP_THREE_HERE. So, I'll use a weak version first to get a ballpark figure. Let's just line the balls up in a grid (as if each ball is a cube), where each ball touches 6 other balls, or some combination of other balls and the floor, sides, or roof of the bus."
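That "weak version" is easy to sketch: treat each ball as a cube with side equal to its diameter and count cubes along each axis. The cuboid dimensions here are just the guessed 10 ft x 10 ft x 30 ft bus from the quote above:

```cpp
#include <cmath>

// Grid-packing ballpark: pretend each ball occupies a cube of side equal
// to its diameter, and count how many such cubes fit along each axis of a
// (guessed) cuboid bus.
long GridPackedBalls(double len_ft, double width_ft, double height_ft,
                     double ball_diam_in) {
  double d_ft = ball_diam_in / 12.0;  // ball diameter in feet
  long along_length = static_cast<long>(std::floor(len_ft / d_ft));
  long along_width = static_cast<long>(std::floor(width_ft / d_ft));
  long along_height = static_cast<long>(std::floor(height_ft / d_ft));
  return along_length * along_width * along_height;
}
```

For a 30 x 10 x 10 ft cuboid and a 1.7 in ball this gives a figure on the order of a million, which is exactly the kind of ballpark the quoted answer is after.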

If anyone is using the question as you've suggested then yes, it's a bad question and they've failed. But you've missed the point of the question.


I wouldn't say those are the reasons the questions suck. The idea of asking it is not to get a good answer but to discover the process by which someone would approach an unsolvable or half-solvable problem. In other words, they use it to see how you think through the problem.

The issue I have is, if you don't know that when you're asked that question, it's easy for a good candidate to freeze up because they don't know what is really being asked of them. In other words, it's like taking someone off the street and giving them the SATs. Their score is going to suck compared to if they prepared for it. So what you're really doing is testing their ability to take an arbitrary test, or jump through hoops. Since Google prefers advanced degrees, they probably are already pretty good at jumping through hoops, so it's a bit of a pointless exercise that can throw off great candidates completely if they aren't prepared for it.


It's not a silly question at all. You can completely parameterize the answer in terms of bus volume and golf ball size, or define a "best guess" by explicitly defining your assumptions on the size of a golf ball or a bus in the question. Finally, you should at least be able to provide a magnitude.

Or, you know, you could always just ask what size a golf ball and a bus are.

It seems to me the only assumption the question makes is that a person could make some logical assumptions about the question, ask some logical questions, or ignore all assumptions and parameterize the answer.


It doesn't make assumptions; YOU make assumptions.

You can calculate a "valid" response as a function of the bus volume and the ball radius, f(v,r), without assuming anything and without having any real data.

Of course, you can also focus on the "stupidity" of the test, and that also gives the company an idea of your thinking process and problem solving capabilities.

In any case the test serves its purpose well.


Because it could lead to an algorithmic discussion of numerical integration by shells vs numerical integration by disks from integral calculus, a discussion about 1-3 sig fig engineering estimation and error estimation and error propagation, maybe a discussion about circle packing efficiency.

It's an analysis of four things: how an applicant socially responds to ridiculous requests outside their previous experience, how they handle initiating and architecting a "large" engineering project from a vague request, how mathematically well educated the applicant is, and how wide the applicant's knowledge extends (basically a pseudo IQ test, with all the legal risks that implies).

This is vitally important for a handful of positions, but a complete waste of time for most positions, thus silly.


I'd have to disagree with "they don't predict anything".

Do you think P(intelligent|good at brainteasers) > P(not intelligent|good at brainteasers)?

I'm not saying it's the best test, the most accurate, etc., but if someone is really good at brainteasers I'd posit it's more likely than not that they're intelligent. There are probably false negatives with people who are intelligent but bad at brainteasers. But for a filtering test (when Google has too many applicants anyhow) I can see why they're used.


Because they've been shown time and time again to tell you if someone is good at answering brain teasers rather than if someone is good at being a programmer/engineer/whatever.


*citation needed


Because it was popular in the past, and nothing is more out of fashion now than what was in fashion last year.



