At the start of my career in computing, in one two week period I sent some resume copies, went on 7 interviews and got 5 offers. The reason was the high interest in computing around DC for US national security.
I was making in annual salary 6 times what a new, high end Camaro cost. My 2 BR apartment cost me 8.5% of my gross salary.
Then some suits saw the problem, got the NSF to have a team of economists estimate computing labor supply and demand curves, and then had the NSF write into research grants that so many students had to be supported -- and, hint, hint, grantees could get students from Taiwan, South Korea, India, Greece, etc.
So, net, the main reason for the nonsense in hiring in computing is that there's no real shortage. When there was a real shortage, organizations would hire based on indications of talent and expect the employees to learn on the job enough to do the job. Now it appears that employers have a list of the 10 most important software tools in their stack and want to hire people with exactly that list of 10 tools, that particular stack -- not promising for either the organization or the employee.
For another, managers want their subordinates to be no threat, just secondary, submissive, subordinate, heads down coding, no bright ideas, no risk of the suit being shown to lack technical competence, etc. And, the more people the manager hires, maybe the more the manager gets paid; so hire a lot of inexperienced people instead of a few deeply experienced people -- empire building, goal subordination.
The non-technical suits need to start to catch on and get the IT departments to be more productive per dollar.
There’s definitely a shortage of skilled professionals - we have a hard time hiring without lowering the bar as well, and I am at a FAANG. We don’t necessarily care if someone has experience in most of our stack - myself, I hardly had any experience in almost any of it when I joined, but I had the combination of soft & technical skills desired.
Since I joined, I have said no to all but 2 candidates I phone screened, and one of those I rejected in the onsite. We didn’t even ask a difficult question for people to implement, but people demonstrated their lack of thoughtfulness in how they approached problems.
I don’t care how old people are - one of my teammates has been at my workplace for a little under 30 years and is valued for his well-honed perspective, tireless QAing, and reliable execution. Regardless of age, I expect people to intelligently talk about tradeoffs, and keep an eye on the problem that is being solved & its effects on the end user.
The interesting thing about bigger companies is that compensation stops matching up with the value created. Jumping up $100k-150k in total compensation usually correlates with an exponential increase in productivity from the person’s presence, and usually that increase is not from code.
We also struggle with hiring and not "lowering the bar", but I've come to exactly the opposite conclusion about there being a shortage. We're simply not offering enough. We've said yes to a few, and they've mostly responded with variations of "better offer elsewhere".
Likewise, most of the offers I get from recruiters are absolutely terrible. 50+% are not a geographic match (and these come from LinkedIn, which knows this preference!), and many fail to specify one or all of: a. what company? b. where, or remote? c. what does the job entail? d. what is the compensation? But plenty of ramblings about "hyper growth unicorn startups".
Like the guy you're responding to gets at, I get matched by recruiters based on tools in my stack, not based on what, perhaps, I might want to learn. And I spend 2x on CoL and get 1/2 as much.
Watching the bidding war for a candidate who has offers from both Google and Facebook makes me think there's still a shortage of something. And those companies mostly don't hire for a particular stack.
There's a certain category of engineer that is still scarce and in-demand, it's just a matter of figuring out how to convince people you are in that category.
Sounds like Paris fashion -- "scarce and in demand" and wildly expensive. But, believe me, there is no shortage of dress designers or dress makers.
If the people who see a shortage will list what they want, then it will likely boil down to (i) narrow technical topics that, with good documentation, people should be able to pick up quickly, (ii) technical work beyond narrow topics, e.g., new, maybe original, work for patents, publications, intellectual property, trade secrets, etc., where talent is needed, (iii) good language skills, e.g., reading, writing, speaking English, especially to help others understand, on technical and business topics -- not the same as belles lettres, poetry, Shakespeare -- and (iv) various soft skills where some girls in high school are better than some senior technical people.
To evaluate: For (i), if they have already learned lots of skills, then they should be a good bet for learning more. For (ii), if they have done original work, e.g., published, maybe a STEM Ph.D., then that should be grade A. For (iii), look at some writing samples, e.g., publications, documentation. For (iv), have an in-house coach good at teaching the soft skills.
Presto, bingo, "Look, Ma, no more shortages!"
E.g., for skills, lots of job ads want experience with the software package R, relatively good for much of traditional applied statistics. Okay, I've done a LOT in both mathematical and applied statistics, never had any trouble getting the computing for the data manipulations, but never had occasion to use R. So, last week I took two hours and looked at R. For the statistics packages that are standard, there was nothing new for me. Yes, they have a programming language with its own key words, syntax, and object model and what look like nice ideas for packages and name spaces. Okay. Semi-whoopie. For some of the advanced statistics I have done and published, I saw nothing in their standard packages or the many others I found via Google searching. Okay -- the current packages are not comprehensive, just what we would expect.
BUT the biggie point is, R is for doing data manipulations, commonly for statistics, but R is not a good place to learn statistics. To learn statistics there are lots of books, university courses, and papers. E.g., at one time I needed to learn about estimating the power spectra of second order stationary stochastic processes. Sooo, I found Blackman and Tukey, The Measurement of Power Spectra, dug in at dinners, in a few days wrote some illustrative code, showed the code to our target customer, and, presto, bingo, won the competitive bid with "sole source". So, I know some statistics and did well with a need for things new to me, and later did well with some statistics new to everyone and published the results, but I did not learn statistics via R, and R does not appear on my resume.
It appears that for nearly all recruiters, the necessary and sufficient condition for knowledge of statistics is R on a resume. Gee, before R was ready, that leaves out C. F. Gauss, R. Fisher, J. Neyman, A. Kolmogorov, J. Tukey, L. Breiman, A. Wald, D. Brillinger, E. Dynkin, E. Cinlar, and more. It also leaves out people one step beyond in, say, stochastic optimal control, R. Bellman, R. Rockafellar, R. Wets, D. Bertsekas, and, from my dissertation, me.
It looks like what is wanted is people who needed to draw some graphs and did that with R and then moved on to doing some curve fitting and drawing graphs. So, that is their knowledge of statistics, with no knowledge of minimum variance, the Gauss-Markov theorem, t-tests, F-ratios, etc.
Gee, once I did some statistics: The company wanted some revenue projections, really package projections since they were shipping packages overnight. Yes, I went to college in Memphis and met some people. Well, we knew the current packages per day shipped. And we knew our planned capacity. So, essentially we needed to interpolate between these two.
So, how might the growth go? People had hopes, wishes, and intentions, various ideas in marketing, etc., but nothing with credible numbers. Well, target customers hear about the service from current customers, so the rate of growth should be proportional to both the number of current customers talking and the number of target customers listening. So, with time t (days), packages per day at time t y(t), and planned size of the business b, we should have for the rate of growth the calculus derivative d/dt y(t) = y'(t) as in
y'(t) = k y(t) (b - y(t))
for some constant k. So, an SVP and I tried various values of k, picked something reasonable, and drew a graph. Didn't use SPSS, SAS, Mathematica, MATLAB, Excel, or R. The graph kept a crucial BoD member and a crucial investor from walking out and saved the company. Later I discovered that my rationalization of the two cases of proportionality was an axiomatic derivation of the logistic curve, long known as a good first cut for viral growth. So, that was some statistics, but without R. Besides, I needed to solve the differential equation, and R would not be much good at that; there is a closed form solution, and I found it. Gee, guys, all without R!
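That closed form solution takes only a few lines to sketch; the constants here (k, b, and the day-zero volume y0) are purely illustrative, not the company's actual figures:

```python
import math

def logistic(t, k, b, y0):
    # Closed form solution of y'(t) = k * y(t) * (b - y(t))
    # with y(0) = y0: the logistic (S-shaped) curve.
    return b / (1.0 + ((b - y0) / y0) * math.exp(-k * b * t))

# Illustrative numbers only: capacity b, starting volume y0.
k, b, y0 = 1e-6, 1_000_000.0, 10_000.0
for day in (0, 5, 10, 20, 40):
    print(day, round(logistic(day, k, b, y0)))
```

The curve starts at y0, grows fastest near b/2, and flattens out at the planned capacity b -- the interpolation between current volume and planned size that the graph showed.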
But if a guy these days wants to use some statistical software, he can pick from SPSS, SAS, Excel, MATLAB, Mathematica, R, and more. But somehow, among too many people, R remains the necessary and sufficient indication of knowledge of statistics. So, then, some people conclude that there is a shortage of good people in statistics.
Uh, HR recruiters, for "good people" in statistics, look at education, e.g., relatively good university courses, e.g., from Mood, Graybill, and Boes, good experience and accomplishments, e.g., power spectral estimation, and publications, e.g., distribution-free, multi-dimensional hypothesis testing. For R, f'get about R. Anyone good at statistics will use R if that is the best tool under the circumstances. Statistics is serious and as challenging as we please. R is secondary, relatively trivial, and NOT the only good way to get the arithmetic done.
And, really, Mood, et al., is not as good as desired. In particular, the book is short on use of the crucial Radon-Nikodym theorem. E.g., (i) the book makes a mess out of the super surprising topic of sufficient statistics. (ii) The book gives a sloppy proof of the crucial Neyman-Pearson result. For (ii) I got impatient and worked out a proof from non-linear duality theory and the Hahn decomposition from the Radon-Nikodym theorem (R-N). The R-N result is really important, e.g., has a nice proof from von Neumann, but too few people in statistics know that result.
Better: Multivariate statistics, e.g., regression, is a perpendicular projection. So, the Pythagorean theorem applies. So, we get
total sum of squares =
regression sum of squares + error sum of squares
Right, HR people, that IS the Pythagorean theorem. All without R! How 'bout that! Knowing that result is MUCH more important than knowing R.
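A quick numerical check of that decomposition, here with an ordinary least-squares line fit on made-up data -- plain Python, no statistics package needed:

```python
# Verify: total sum of squares = regression SS + error SS
# for an ordinary least-squares line fit. Data are made up.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
xbar = sum(xs) / n
ybar = sum(ys) / n

# Least-squares slope and intercept: the perpendicular
# projection of y onto the span of {1, x}.
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) \
        / sum((x - xbar) ** 2 for x in xs)
intercept = ybar - slope * xbar
fitted = [intercept + slope * x for x in xs]

tss = sum((y - ybar) ** 2 for y in ys)        # total SS
reg_ss = sum((f - ybar) ** 2 for f in fitted)  # regression SS
err_ss = sum((y - f) ** 2 for y, f in zip(ys, fitted))  # error SS

print(tss, reg_ss + err_ss)  # equal up to rounding
```

The equality holds exactly (up to floating point) because the residual vector is orthogonal to the fitted one -- the Pythagorean theorem at work.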
One more: What is the single, all-powerful approach to estimating a number from several other numbers? That is, suppose we want to estimate the value of a real random variable X from the several variables in a random vector Y, with just meager assumptions. So, we want a function f so that our estimate of X is f(Y). Well, there is a function g so that g(Y) = E[X|Y], and that is the unbiased, minimum variance estimate of X; only a short derivation is needed. And in practice, the discrete version of that is just old -- and may I have the envelope, please? Drum roll, please -- cross tabulation. Yes, if the dimension of Y is large, in practice we encounter the curse of dimensionality, but in principle we want g(Y) = E[X|Y], or its discrete version, cross tabulation, and in practice we likely want that when we do have enough data. "Big data" anyone? We use statistical models, say, when we don't have that much data. Knowing that is more important than having used R instead of SPSS, SAS, ..., for doing the arithmetic.
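A minimal sketch of that discrete version: cross tabulation is just the mean of X within each observed cell of Y, and among all functions of Y those cell means minimize the sum of squared errors. The data and cell labels here are made up:

```python
from collections import defaultdict

# Discrete E[X|Y]: the mean of X within each cell of Y,
# i.e., old cross tabulation. Made-up (Y, X) observations.
data = [("a", 1.0), ("a", 3.0), ("b", 10.0), ("b", 14.0), ("b", 12.0)]

sums, counts = defaultdict(float), defaultdict(int)
for y, x in data:
    sums[y] += x
    counts[y] += 1
cell_mean = {y: sums[y] / counts[y] for y in sums}  # g(Y) = E[X|Y]

print(cell_mean)  # {'a': 2.0, 'b': 12.0}

# The cell means beat any other function of Y on squared error:
def sq_err(g):
    return sum((x - g[y]) ** 2 for y, x in data)

assert sq_err(cell_mean) <= sq_err({"a": 2.5, "b": 11.0})
```

With enough data per cell, this is the whole estimator; statistical models earn their keep when the cells are too thin.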
Did you read the comment you were responding to? If they were looking to pay someone “incredibly low” wages they wouldn’t be getting into a bidding war. I don’t know what your definition of “low” is, but there are lots of people at FB and GOOG easily clearing 400k+.
I think the bidding war is actually a sign of a pretty efficient market in that tier of jobs. The only thing that would really increase engineer pay significantly would be killing the H-1B program, but that would probably be devastating to the industry. Especially smaller companies.
EDIT: also devastating of course to the tens of thousands of people who would have to abandon the lives they've built in America.