But before I went to college I was self-taught and after college I consider myself to be self-teaching all the time. Learning should never end.
And while in college you were probably self-taught too. As an ex-academic I believe formal education is about credentialing: being able to tell someone that you've acquired a degree of mastery we believe is appropriate to some level. And that's fine, but I fundamentally believe that all learning is really self-taught.
Someone lecturing to you is just a different medium from reading a book or watching an online video. In all these cases someone who knows the topic better than you has manifested some information in some form for you to learn. (It's not like anyone using the term "self-taught" actually means they learned how to do AI from first principles. They learned it by reading a book written by Russell & Norvig.)
And with that said, let me say that CS isn't programming. No more than a degree in biology is the same as being a doctor. In fact the profession of doctor is so specialized they created a special degree just for it, with residencies in specialties -- and you don't need a bio undergrad degree to get into med school.
And as a general rule, if you're passionate about an intellectual pursuit you'll probably be pretty good at it (not asserting how the causality works). Those who learn via books outside of a formal setting are self-selected, as it requires a special type of discipline. Although as more people race to become founders and teach themselves, this pool probably gets diluted.
As a self-taught developer, college-graduate, and continual learner, I propose that exposure is the most important benefit of the college experience. We have to know what questions to ask before we can ask them. We can accidentally walk into all kinds of concepts on our own, but the college experience is a kind of jumpstart, a dive into the large pond. Now we know some questions to ask and have an idea where to get answers. We have more patterns at our disposal, more tools in our belt.
Also, there is that credential thing and the higher paycheck. And the networking.
If only my own college experience had provided that. I get far more "exposure," new questions, and ideas from reading HN than I ever did in my CS classes.
Credentialing is useful, especially in cases where other forms of information are difficult to obtain.
Medicine is a good example. As a patient you don't really have the expertise (and maybe not the time) to determine if your physician is of good quality. You rely, partially, on the credential received by getting a medical degree and passing the boards. While not perfect, it gives you some information.
The same goes for computer science. If I were looking for an expert witness for some computer science related legal case I'd take their credentials into account -- in part because a jury will.
But hiring a programmer is different. Like I said in my original post a computer science degree isn't a programming credential. There's some overlap, but the degree of overlap is variable and generally unrelated to program quality -- Stanford tends to have a lot more pragmatic "programming" courses than MIT, at least historically.
Additionally, when hiring a programmer, you, the person doing the hiring, are usually also an expert in the field. You can more easily ascertain their degree of knowledge and aptitude in areas you care about. Plus you have more time to do so (you'll usually get a full day if not more to question them).
Now with that said, do I consider CS degrees when hiring? Honestly it depends. If I was hiring someone to do some signal processing work I'm more inclined to look for someone with graduate degree background in the field, as a starting point. It's a domain that tracks closely to foundational work. OTOH, for a web dev I'm just as likely to hire someone who dropped out of high school as a PhD -- since frankly the PhD probably has spent less time doing web development than anyone else. But in all cases, if you come to me with some asset that shows great skill in what I'm looking for -- maybe a portfolio, or a paper you wrote, or a program, or a recommendation from someone I respect -- that will always get you in the door, period.
Unfortunately that isn't an efficient solution when you have hundreds of applications to process. Perhaps this is why degrees are often required for entry-level software positions, as someone with a degree is seen as a better employee (compared to the high school dropouts).
A lot of what pg says is accurate, but this line left me scratching my head, "No one likes the transmission of power between generations—not the left or the right."
This is one of the core tenets of the right, which is to maintain the status quo with respect to power. The easiest way to do this is to end entitlements and get rid of taxes, most importantly the estate tax.
I'm self taught, the last math class I took was 9th grade Algebra.
I wrote one of the first vector-based, anti-aliased, alpha-channeled UI frameworks for Windows 95 (16 bit!). I wrote a DSP processing engine for a telemedical application to monitor heart rate and blood pressure in 1997. I've written control software for elevators. I've written contract After Effects plugins for post-FX shops. I've written audio and MIDI sequencers. My video editing app, Shave, is the only other useful editor in the Mac App Store other than iMovie.
I've never cracked a math book.
Am I the exception that proves the rule? I'm not saying I could write Maya, but I think you are overvaluing your opinion a little.
Similar story here - I taught myself 68000 assembly by hex-dumping the opcodes out of the only compiler I could afford (i.e. free), and then taught myself calculus to find the tangents along a Bézier curve for my flagship product at the time (a 3D editor/converter, which eventually sold quite well)
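For anyone curious what that calculus buys you: the tangent of a Bézier curve falls straight out of its derivative. A minimal sketch for the cubic case (the function name and the assumption of 2D tuple points are mine, purely for illustration):

```python
# Tangent of a cubic Bezier with control points p0..p3:
# B'(t) = 3(1-t)^2 (p1-p0) + 6(1-t)t (p2-p1) + 3t^2 (p3-p2)

def bezier_tangent(p0, p1, p2, p3, t):
    """Return the (unnormalized) tangent vector of a cubic Bezier at parameter t."""
    u = 1.0 - t
    return tuple(
        3 * u * u * (b - a) + 6 * u * t * (c - b) + 3 * t * t * (d - c)
        for a, b, c, d in zip(p0, p1, p2, p3)
    )
```

At t = 0 the tangent is just three times the vector from the first control point to the second, which is why control handles in vector editors behave the way they do.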
I've also had the severe displeasure of working with CS grads who couldn't figure out why a 54MB file wouldn't fit on a floppy disk, or asked "where is the .EXE" when working on a Windows web site.
I'd take a motivated "self-learner" over a paper-backed "scientist" any day.
This article combined with your comment tangentially brings up an interesting point in my mind. What does it mean to be a computer scientist? Are you just a programmer with more background? When/where does deep math and CS knowledge really become important?
I honestly think "programmer" and "computer scientist" are slowly being conflated, and the distinction is important to understanding this sort of issue. Is this Granny Smith vs. Fuji, or Fuji vs. Florida Navel?
All of the mathematics listed above is on the short list of what anybody following the Khan academy or the MIT open courseware would teach themselves. It's already been shared and organized into a curriculum, for free, by a number of different sources. In fact, I'd say the process of learning the software would be the bigger challenge for an independent learner.
But even before the availability of those sorts of really convenient tools, a lot of people taught themselves past that level. In fact, I remember my abstract algebra prof telling us an anecdote to illustrate the importance of working through things on our own instead of waiting to see what was presented to us in our courses. He talked about a grad level course that he had been teaching for the past 10 years. Clearly, he knew the material inside and out, but the interesting thing is that he had never taken it himself as a student. How did he do it? He read a couple of books about the topic! It's a time-tested method.
I'd say if people aren't learning the math it's because they underestimate its importance. Either that, or people like me (and maybe you) overestimate its importance.
I'm a university dropout, but the level of linear algebra, multivariable calculus, and discrete math that I did take before leaving school is so far above and beyond what Khan academy currently offers. With the exception of Calculus, Khan academy is up to about where my public high school curriculum left me. Its calculus offerings are impressive and definitely cover my first year calculus courses. I've never been able to make much use of OpenCourseware because it doesn't fit my learning style. Don't get me wrong, I'm not holding anything against Khan academy. But it's quite impressive that it will eventually cover all the material in university that I had to pay to learn.
There is a huge collision that's about to happen: increasing costs for post-secondary education; a middle class that will no longer be able to afford said post-secondary education for their kids; and free, quality, ubiquitous educational content like Khan academy. I'm not quite sure how things will play out. But things like Khan academy could pop the educational industry bubble, at best forcing low-cost, quality formal education, and at worst, resulting in ubiquitous access to informal education for those who put in the effort.
Great points. There is definitely a shake-up in Education on the way. The high-cost, middle-tier, universities are already feeling this and it will only expand.
The key to self-education is effort. It's much easier to be forced to take classes, do assignments, sit through lectures and take exams. It's much harder to have the self-discipline to do this on your own.
Self-education, discipline, etc. require passion. This could apply to your coursework at MIT or downloading copyrighted technical textbooks off USENET for free to teach yourself mathematics and programming, etc. The internal drive is the differentiator...not so much where you came from.
shrugs I don't have a degree and I'm fairly comfortable with all those topics (and quite a few more - including abstract algebra, real analysis, numerical methods, tiny bits and pieces of category theory, etc).
I like to read journal articles, and when I stumble onto something I don't know, I go learn about it. Just in the last year or so, I've taught myself stochastic calculus (I was doing HFT and wanted to learn more about the greeks) and the basic typed lambda calculus (was into haskell and wanted to learn more type theory). These days I'm doing distributed systems, so I'm brushing up on the process calculi.
He never claimed to refute pkaler's argument, he just added his relevant viewpoint. I personally have taught myself plenty of math, although I agree that my college classes have provided a solid context for me to explore many math concepts.
This is a point that seems obvious but I'm not sure it is actually true. I majored in math and took a number of computer science courses and have a hard time figuring out where to begin when it comes to signal processing and videogame code. In contrast, I know a number of guys who make video games and audio software and they are almost all college dropouts or have completely unrelated degrees. In fact, in my limited experience it seems like games and audio software are two parts of the software industry where not having a CS degree is the norm.
I can somewhat agree with this. I used to do a lot of computer graphics work professionally. My degree was in electrical engineering, and it was the controls and robotics classes that taught me the necessary math background to work with computer graphics.
Related rant: Many university educated computer scientists think that all the math that they learned was 'invented' specifically for computer science, when in fact it existed long before computers were around. I once got into a heated argument at a Siggraph party with a guy who actually claimed to be the 'inventor' of quaternions.
I'd even wager that most audio engineers who program learned something like, you know, Audio Engineering and just kind of slipped into programming along the way. This maybe makes them a worse programmer than a CS grad, but a heck of a lot better audio algorithm designer.
Many algorithmic problems can be solved in code, but they require specific knowledge about the problem space, which CS grads lack. This is not to say that CS grads are bad at maths; it's just that audio engineers spent years learning about hearing, acoustics, and audio analysis, and audio algorithm design is more than just some linear algebra.
I think the maths-in-programming topic is a bit of a distraction. To be blunt, general programming doesn't need any university-level mathematics at all, assuming a decent pre-university maths education. On the other hand, specialist programming in mathematical fields typically needs specialist mathematics of at least masters level, which surely no CS undergrad course is going to cover because not even good maths undergrad courses do.
Any sort of maths, really. When my cow-orkers want to look for correlations in graphs they quite literally print them all, lay them out on a meeting room table and stare at them. I just run Spearman and Pearson coefficients on every possible combination, then I look. I look for regular patterns using Fourier. I optimize systems using Linear Programming. My self-taught colleagues are good at their jobs day-to-day but seriously, I am being driven up the wall by alerts and reports that trigger on some dumb threshold (e.g. loadavg 10) and not on some change in the behavior or response of the system...
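The brute-force pass described above is only a few lines of code. A plain-Python sketch (the function names are mine, and the rank step assumes no ties; scipy.stats handles ties properly):

```python
from itertools import combinations
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation: Pearson applied to ranks (assumes no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=v.__getitem__)
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))

def all_pairs(series):
    """Correlate every pair of named series: {(a, b): (pearson, spearman)}."""
    return {
        (a, b): (pearson(series[a], series[b]), spearman(series[a], series[b]))
        for a, b in combinations(series, 2)
    }
```

Run it over every metric you collect, sort by absolute coefficient, and you know which scatter plots are worth printing.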
I'm not sure that I would brag about running statistical methods on data before I looked at a plot of the data; especially not multiple ones.
Aside from the tree-murdering, your cow-orkers (nice reference) are doing the right thing, at least initially. You should always eyeball things before going to statistical tests.
Thirty or forty sheets of correlations sounds like too much work however you slice it. If they were grouped in some logical way I agree you could make them palatable with 'plot' or with the magnificent ggplot; the latter is one of the few things that I can happily play with all day...
Not sure of the poster's original context. Just thought I'd air my statistical opinion out of sheer orneriness.
Yeah... but for 9 variables, there are n choose 2 = n(n-1)/2 = 36 pairs. My spidey sense tells me there is one scatter plot per page.
ggplot vs base & lattice in your experience? I have used lattice before but not in a few years, and I use base all the time. Don't have time to muck around with
Plausible. Or it could be 30-40 arbitrary pairings with no variable used twice.
ggplot: came for the Tufte-type aesthetic, stayed for the graph grammars. I have a lot more experience with ggplot than base & lattice since I switched so early, but wiser heads tell me that they are similar in capability. I just find ggplot easier to work with and look at.
All that, and yet you refer to the people you work with as "cow-orkers". At least in that one narrow respect you could stand to be more clever, if not more polite.
As a current college student I can tell you that what you are saying is an exaggeration. A small subset of Linear Algebra is required for graphics. I took linear algebra in my first year of college and had forgotten almost all of it by the time I took a second-year graphics course; needless to say, I had no problem picking up the concepts in a weekend from scratch. The amount of LA required is a very, very small subset compared to all the crap you do in a full-blown LA course.
Most students in the CS program hate math and CS theory courses. Subjects like Algorithm Analysis, Discrete Math, and Automata Theory, which often get cited as examples of stuff that self-taught programmers might not know, are boring and annoying. Almost every student in the department dreads these courses but has to hold his/her nose and take them to fulfill the degree requirement, and most people promptly forget them after passing the exam unless they have real interest in the topic and are taking fourth-year electives in the subject. To be honest I don't think the self-taught programmers are missing anything.
And congratulations for digging up a use case for Calculus in CS. I can cite examples of CS fields where, say, Cognitive Psychology has huge influence. The fact still remains that it is completely irrelevant to most of CS unless you are interested in that one obscure area (in which case more power to you).
Your reply comes across as extremely bitter. I could definitely understand being pissed about taking an American History or Anthropology prereq or something, but are you seriously saying that algorithms are "boring and annoying"?
As a self-taught programmer who is just wrapping up pre-program requisites for the CS program at a decent school, I am drooling to get some formal instruction in algorithms and FSAs. And I'm sorry but I've even gleaned some good understanding of time complexity notation from my pre-calc/trig class.
I just don't get this, I guess. I'm dying to get some formal education on the more scientific and heavier math subjects. These are things I wouldn't ever teach myself because I am so task-oriented when I program. Why "waste time" mastering the concepts of FSAs when I've got a web application to build?
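For what it's worth, FSAs do turn up in task-oriented web work, usually hidden inside input validation and lexing. A toy sketch (the token format and all names here are invented for illustration): a DFA that accepts optionally signed integers, the kind of thing a regex engine builds under the hood.

```python
# Toy DFA accepting optionally signed integers like "42" or "-7".
# States: "start" -> optional sign -> one or more digits; only "digits" accepts.
TRANSITIONS = {
    ("start", "sign"): "signed",
    ("start", "digit"): "digits",
    ("signed", "digit"): "digits",
    ("digits", "digit"): "digits",
}
ACCEPTING = {"digits"}

def classify(ch):
    """Map a character to its input class for the transition table."""
    if ch in "+-":
        return "sign"
    if ch.isdigit():
        return "digit"
    return "other"

def accepts(s):
    """Run the DFA over s; True iff it ends in an accepting state."""
    state = "start"
    for ch in s:
        state = TRANSITIONS.get((state, classify(ch)))
        if state is None:  # no transition defined: reject immediately
            return False
    return state in ACCEPTING
```

Once you see form validation as a DFA, the formal course material stops looking like an abstract detour.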
I hope you learn to appreciate the potential for greater understanding that these boring and annoying classes give you. You have made the choice to discard their value on your own faulty assumption that your limited understanding of the world is a complete understanding of the world. It makes me kind of sad that all the liberal arts majors in the world are happy as pigs in shit to be studying Proust and how to make coffee, while here you are shitting all over a high quality education in a hard science/engineering that will undoubtedly provide you a lucrative, comfortable career if you choose to pursue such a thing.
I can assure you, as a 31-year-old man who was self taught until last fall, I have definitely missed those topics.
Algorithms would be interesting if we actually studied algorithms. Algorithms classes are nothing but thinly veiled math courses and an exercise in writing proofs. I have absolutely zero interest in the mathematical aspects of CS, and at this point in my education these courses, which are being forced on me, are just preventing me from taking the more interesting classes that I want to take, like Compilers and Networks. There is nothing more annoying than being an upper-level student forced by the university to take courses you have zero interest in, and as a result not getting a chance to be in the interesting courses you actually want to take.
I'm not trying to sound bitter, but as I mentioned, a vast majority of students barely tolerate these courses and only take them because they are being forced to; once they pass, they don't think about such dry material and end up at the same level as "self taught programmers". In that sense I don't think self taught programmers are missing anything. If you are interested in this stuff then more power to you. It's just that from my experience I highly doubt that many self taught programmers would want to or enjoy learning these topics anyway.
You might argue that they don't fit your definition of "good schools," but plenty of schools let you skim by with only the lightest introduction into things like set theory, probability, Turing machines, and automata. The math requirements are also pretty shallow, usually only requiring you to take some level of calculus, but not requiring linear algebra or analysis. Theory courses that scratch the surface a bit deeper are often electives. Some schools force you on tracks, but others let you choose and you can basically glue javascript and ruby together for the rest of your degree.
The tracks allow a similar siloing. If you take an applications track, it's likely you won't be learning the Fourier transforms that you'd get in the graphics and vision track. In either of those cases, you may not get the functional programming experience that you would get in the AI or theory tracks. Those two wouldn't require you to take the database courses you'd need for info science.
There's only so much that classes can cover in the course of an undergraduate education. Teaching yourself advanced topics is likely essential in taking on a project of a different nature than what you learned in school.
Sadly in the real world 90% of IT jobs can be winged by googling snippets and reworking them. Even worse, people judge what's on the outside. I worked with a web dev who didn't even know JS and didn't have a high school diploma; he got the job because he "looked nerdy".
Think about how many times you've worked on a unique and perfect piece of code, only to realize someone's already made it. Usually with clever optimisations you never thought of. So it might be in a different language or framework, but porting it would have been easier than understanding the issue and reworking it.
A good code monkey understands the problem, a good businessman doesn't bother and repackages other people's stuff at a price.
I myself have a compsci degree and upvoted you, but the people in charge often don't get this.
The difference between mediocrity and brilliance --in ANY field-- lies in something that has nothing whatsoever to do with a formal education. I have met so many people with impressive pedigrees who are clueless that I need no further proof. I have also met a good number of people with what I would call "interrupted educations" who focused so intensely on a particular field of study that they truly became masters within that field.
In engineering or cs there's also what happens before university.
In my particular case, I built my first computer way before I saw my first college textbook on the subject. And, by "build my first computer" I mean that I wired the CPU and ALU from discrete components (Who knows how a diode matrix can be used as an instruction decoder?), created my own machine language and coded instructions bit-by-bit into memory.
After that came wire-wrapping 6502, 8080, 8085 and 68000 computers, again, before seeing my first college textbook. And so, it only stands to reason that the focus and drive once in college and out of college was very, very different from that of someone who never had any such experiences and came to college with a blank stare waiting to be taught something they knew nothing about.
The physics, linear algebra and calculus portions of my degree were less than 9 months of study, and I went to a very good school for computer science.
I think you're overvaluing those if you think someone cannot learn those topics if it's apropos to the thing they're doing.
Signal processing is a considerably bigger chunk of schooling though.
I don't disagree with your opinion, I prefer school credentialed programmers, however I don't think your line of reasoning works.
I don't think he's claiming that someone can't learn those topics, but rather, that most self-taught programmers don't learn them, whereas those in university are more likely to end up in math classes.
There seem to be a fair number of programmers who are BOTH college/uni-educated AND learnt it all on their own. What I mean is one of their previous employers/clients (perhaps their first) have them on file as having some CS/SE credential, while others (including their current one) have them listed as self-educated. Perhaps they told one employer their major was computing, but their current client thinks their major was business.
One way to recognize these frauds is whether they list their credentials online. They may write online that you don't need qualifications to be a good programmer, which is true, but they won't actually say they don't have any credentials because that's not what they've told their previous employers and they don't want to be exposed.
When there's 2 people giving presentations, one lists their PhD from a certain university and the other only lists some IT groups they run, I know which one I'll pay more attention to.
With sites like khan academy the expectations for self taught programmers can be much higher now than before. If someone isn't a mathematician I think they can still do better than gluing Javascript and Ruby together.
>"There are some things that can't be self taught."
Do books not matter anymore? The most brilliant ideas ever conceived and implemented in society are written down. All information that goes into your head comes in as a concept in some form or another. If it can be conceptualized, it can be communicated (whether in a class, a book, or experienced through trial and error). What we're really saying is there are some concepts that self-taught programmers aren't discovering. Point them out and they'll find them (they're in books).
School is awesome. I went. But don't believe the hype.
Some people actually believe it isn't possible to learn sufficiently complex things without instruction. I suspect this is because they themselves are unable, unwilling, or have not tried to learn without instruction. I had a professor that insisted I was cheating because I was able to get A's on projects and tests in a C course, but did not attend any classes or discussions. He insisted that it was absolutely impossible to learn C without taking a course and attending the lectures.
I'd say the only exception to this is when it is illegal (or possibly immoral) to carry out the lessons outside of a sanctioned learning environment.
Dissecting cadavers comes to mind, or operating a small fission reactor, etc.
Of course there are ways to learn about these things without actually doing them, but there are probably some edge cases where experience is important and difficult to obtain outside of the university.
There're plenty of other edge cases where the knowledge is proprietary and only available outside of the academic system, though. You're not going to learn how Google Search works without working at Google. You're not going to learn how MS Windows works without working at Microsoft. A bunch of applied chemistry or applied biology is only available in industry.
It being illegal or immoral doesn't make it impossible. It's a minor semantic quibble, but a bit of an important one.
For it to be impossible, you'd have to have some sort of situation where the people or materials needed to gain or apply the requisite knowledge for the whatever-you're-learning were only available through some sanctioned environment, regardless of legality or morality.
I'm sure things like this exist, but they're few and far between (and we probably don't have any clue about them).
I taught myself trig and calculus in high school. I wasn't told about AP classes until it was too late to sign up, and then I was told it was a waste of time to take the test without the class. I signed up for the AP test anyway, spent a semester self-teaching, and got a perfect 5 on the test. It took some dedication but really it wasn't that hard to learn. Later I found out that none of the students in the class got a 5, mostly 3s and 4s.
I got a 5 as well, but in my case it was less a function of calculus mastery and more a function of knowing how to program everything they asked for into a TI-89.
(I've since gone through a few calculus textbooks to actually learn everything that gizmo let me fake an understanding of.)
I believe you can learn whatever you wish, but depending on your field of study it's pretty hard procuring multimillion dollar machines used in some (chemistry, nuclear sciences, etc) sciences.
AWS kicks ass, but it doesn't (yet) offer scanning tunneling microscopes or particle accelerators to the best of my knowledge, which might be what grandparent was referring to?
I thought the same thing actually. Then I thought about the wealth of open scientific databases, some of which are already loaded into S3 (ftp://ftp.ncbi.nih.gov/blast/db/nr*), and figured that the criteria for learning, in the context of programming, was satisfied.
I took a data structures class in which one of the students, due to work schedule or something, was unable to come to the lectures. She (the only woman in the class, by the way) got a better grade than the rest of us who sat through the lectures.
I had a similar case in a programming class, too. We worked in teams of two; I wrote the programs for the weekly tests (which were required in order to sit the exam). She had no idea about data types, she couldn't compile, she had no practical understanding of programming. When we took the exam, she passed well and I failed. :)
Agreed. I've learned that the only times I can't learn on my own is when the text itself is incompetent at teaching. Sometimes though there are some amazing lecturers that deserve your full attention.
One of the most important things that sadly isn't taught at many colleges/universities, is that often the required textbook sucks.
I had a hell of a time learning digital controls theory (Z transforms, etc) from whatever book we were assigned. On a hunch I started looking through the library and found a few other books that explained the concepts in a much more transparent fashion.
I have always approached learning in exactly the same fashion. Seek out multiple sources of information on the same topic, and browse them all. You'll quickly find the source (or sources) that explain it best to you. It's a total waste of time to bang your head against the wall and blame a poorly written text.
To be fair, you should actually quote what I wrote which was:
"Personally, I think the differences come at the top end. I think computer science students who've spent a lot of time doing research (particularly Ph.D.s) gain something that can't be self-taught."
So, in this context, I think it makes sense. Yes, there will always be edge cases like Ramanujan (0.00001% of the population) but I took a little liberty not to qualify every single statement I made.
If anything, this was more about giving respect to the <1% of people who have Ph.D's. But I guess you were offended.
But wouldn't a Ph.D., as in reading research papers and arriving at some unique new insight, be closer to "self-taught" than "taught existing knowledge from someone's lecture"?
As a PhD student myself, I've realized that working through the program is only half "reading research papers and arriving at some unique insight," which you could say is self-teaching. The other half is presenting your ideas to your advisor, peers, faculty, etc. and handling significant criticism. This occurs in higher-level classes, as well, in which the feedback from the prof. is a very important part of the experience. That, of course, is not "self-teaching," far from it.
On a more general note, I think people should not make claims about what happens in situations (e.g. PhD programs) that they have not experienced themselves. Claims like "whatever that person learned in college could have been learned outside college" are too often false, simply because no one person can experience both sides of the debate.
To be fair, the life of a PhD student is pretty well documented.
I don't think the first hand experience has much value except to the individual experiencing it.
As to the value of a PhD: it's a social construct, afaik. Its value is in the reputation you build, which is vetted by other PhDs and organizations, by your dissertations and debates, by the body of work you publish, and so on. Take all of that away and you're left with just the knowledge, which is pretty much available to anyone with half a brain and a spark of curiosity.
It's ludicrous to think that I should invest years of my life and more money than most people make in a year in order to learn how to develop an operating system. That knowledge is freely available and easily accessible. Computer vision? Check. Machine learning? Check. Statistics? Check. You don't go to university for a CS degree in order to learn how to program operating systems or figure out data structures. That would be an incredible waste of time and money.
Someone who is self-taught is simply self-taught. They may have decided that they don't need the rigors of a formal education. Perhaps they don't care about publishing papers and defending them against critics. However, there's no way to say quantitatively that a self-taught programmer is better or worse than a formally educated one. It's purely a value statement, and one I find is loaded with a lot of FUD.
I think this is similar to why Silicon Valley is so attractive as a start-up hub. You have seasoned VCs (instructors) and bright peers like yourself (other students).
You can do it all by yourself; it's just going to take more time to find funding and partners, something your competitors in SV would have an upper hand with.
The second half of your experience can be had on github, launchpad, bitbucket, or project pages like http://www.chromium.org/ or http://source.android.com/, where one can find advisors, peers, and faculty to rip your work a new one. This also translates nicely to the experience one might expect from a future workplace. Many of these projects have everything one would need to self-teach process, humility, and success.
If you insist on classifying into those two categories, sure, but when you bring up grad school, there's a lot more to formal education than sitting in lectures.
It wasn't fair, but I wasn't intentionally taking you out of context. I was kinda heated and paraphrasing from memory (not an accurate way of quoting).
It's not specifically against you, but against an attitude many other people have: the belief that knowledge must be handed down to you by authorities. Sometimes those authorities are handing you someone else's logic and experience even though you could understand it on your own. I don't believe every idea/concept needs an interpreter.
On the whole, I thought my post was pretty balanced. At the end I talk about how I'm looking for people to work with (cofounders / core team) and I care more about their ability to hack than which school they came from.
I also said that people with degrees from good schools can become complacent and lazy because they always fall back on it whereas someone without a degree could be a better producer of good work.
@oniTony: For sure. I was thinking more along the lines of the time, effort and resources used in the pursuit of a Ph.D that make it unattainable for most people.
Generally speaking, it's not that it's impossible someone who doesn't go to school will learn things, it is just less likely.
I generally expect that people who went to school (and therefore could benefit from instruction) have a higher chance of possessing the foundational knowledge helpful in a computer-based career.
At the same time, I met dozens of nitwits while at school who can't code their way out of a paper bag and misunderstood half their classes.
School increases their likelihood of being good, it doesn't guarantee it.
The most difficult thing about self-teaching is where to start and what to read.
If there were recipes for learning all major concepts in every discipline (currently working on this) self-teaching would be much faster and easier for everyone.
Yeah, I alluded to this but didn't want to get into the thick of it.
Degrees are like filters. I've done a lot of resume reading in my time, filtering through candidates for jobs. It's mindboggling how many responses one gets. Literally hundreds. And that wasn't my full time job (which was software development).
At the point that you're looking at hundreds of responses, you start applying certain filters so that you can increase your probability of finding what you want. Typically the initial filter is something that doesn't require a lot of time. For many, this tends to be if they have a CS degree.
School is awesome in that it helps show you things you should know about, but otherwise wouldn't be aware of. Insofar as ignorance of a subject's existence prevents you from self-teaching it, there are subjects that cannot be self-taught.
It's also how formally educated programmers learn. The ones that deny this just haven't realized where their professors build their lectures from; a lecture is just a book in spoken form.
When you are self-learning from books, you only learn as fast as you personally can.
With an instructor (if you're lucky, a good instructor) you learn the same material, but much, much quicker due to how the course program is laid out, the experience of the instructor, etc.
In the end, it's up to you and how you want to spend the time on this earth.
Also, there are many kinds of self-taught programmers. Some are task-oriented learners, and will learn CS concepts they encounter on the way to achieving their underlying goal. (This seems to be the kind the article has in mind.) This has its problems. For example, rather than realizing that their problem is easily solved with parsing tools, they may just lean harder on the regular expressions they already know. Learning within a classroom setting would direct them to ideas they may not otherwise encounter, nudging them out of local maxima.
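The regex-versus-parser point is concrete enough to sketch. A minimal recursive-descent evaluator (everything here is illustrative, not from any comment in this thread) shows what the parsing-tools route buys: plain regular expressions cannot match arbitrarily nested parentheses, while a few mutually recursive functions handle them directly.

```python
import re

# A regex can pick out flat patterns like "(2+3)", but it cannot track
# nesting depth: balanced parentheses are beyond regular languages.
FLAT = re.compile(r"\((\d+\+\d+)\)")

def evaluate(expr: str) -> int:
    """Evaluate expressions like '1+(2*(3+4))' using only + and *."""
    pos = 0

    def parse_expr():          # expr := term ('+' term)*
        nonlocal pos
        value = parse_term()
        while pos < len(expr) and expr[pos] == "+":
            pos += 1
            value += parse_term()
        return value

    def parse_term():          # term := factor ('*' factor)*
        nonlocal pos
        value = parse_factor()
        while pos < len(expr) and expr[pos] == "*":
            pos += 1
            value *= parse_factor()
        return value

    def parse_factor():        # factor := number | '(' expr ')'
        nonlocal pos
        if expr[pos] == "(":
            pos += 1
            value = parse_expr()
            pos += 1           # skip the closing ')'
            return value
        start = pos
        while pos < len(expr) and expr[pos].isdigit():
            pos += 1
        return int(expr[start:pos])

    return parse_expr()
```

The recursion in `parse_factor` calling back into `parse_expr` is exactly the capability a regex lacks, and it is the kind of thing a formal languages course points out before you ever need it.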
Other self-learners try to get a full overview of whatever niche they're working in. Perhaps they've been burned by missing concepts from purely task-oriented learning, or perhaps they find having some big picture understanding makes finding practical applications easier. (I fall into the latter.) One problem with this group is that learning everything by breadth-first search can take a while - I wrote a web server from scratch to figure out how web programming works, while some people would be content just going through a Rails tutorial or something. That kind of perpetual, open-ended research really adds up, though.
> Perhaps they've been burned by missing concepts from purely task-oriented learning, or perhaps they find having some big picture understanding makes finding practical applications easier.
Or we just like to learn, or we've been told by people we respect that we ought to be better grounded in theory. (Both are true in my case.)
In response to a post about raising smart kids on Slashdot a long time ago, someone posted this:
"Uh-oh, the ground is trembling, (Score:4, Funny)
by Anonymous Coward on Thursday November 29, @08:06AM
Small mammals are scurrying for cover,
All the birds have taken wing.
The hordes of self-proclaimed geniuses who wander the halls of Slashdot approach."
This is possibly the Hacker News equivalent of that topic. It appears impossible to have these discussions without the whole thing degenerating into thinly-concealed self-praise.
'Self-taught' vs 'CS-educated' covers such a wide range as to make the question almost meaningless, in any case. Even among CS-educated people, most of us have progressively forgotten and relearned so much as to have way more in common with 'self-taught' than we'd care to admit. For example, I took computer architecture courses in both undergrad and grad school, but pretty much forgot it all and relearned almost everything that I know about it now in a very different context (high-end x86 that didn't exist back when I took comp arch). Much of what I 'knew' then isn't even true anymore...
And 'self-taught' could mean "ploughed through Cormen, Leiserson and Rivest + TAOCP + SICP + Hennessy & Patterson" while CS-educated can equally mean "took a bunch of Java courses and dodged all the hard stuff I could".
I'm a mostly self-taught programmer (I took a class on QBasic and a class on Visual Basic in community college).
At the risk of generalizing, self-taught programmers tend to learn things as we need them, since our primary focus seems to be on making features or making products.
I will say that I greatly appreciate the academic programmers though, as without them I wouldn't have any tools to work with. I have no drive to create a new database storage engine, a more efficient bloom filter, or an experimental programming language, but I'm grateful that someone gave me those things to play with.
My experience is that CS-educated programmers will often be more humble and ready to admit that they might not know all there is about a subject, and self-taught programmers more certain that they know the one way to solve a certain problem.
Of course, this might just be the people I've met, but based on personal experience, I think university teaches you that you're _not_ the smartest one in the world, that there are lots of things you don't know, and that things are usually not as simple as they seem.
That said, a lot of the self-taught programmers I know get more humble as the years go by. And of course, all the great CS-educated programmers I know learnt programming in their spare time, and got the theoretical and low-level background at uni.
I think this statement is funnily self-negating: "I already understand grammatical structure, the relationships between subject/object/verb and the uses of prepositions." This guy must be learning Romance languages (e.g. surprise! he's learning Portuguese) since languages outside of this family differ in a lot of ways. E.g. all the things here can and do differ: http://en.wikipedia.org/wiki/Romance_language#Linguistic_fea...
This sort of euro-centric natural language egalitarianism seems to bleed over into his thoughts about programming languages as well, e.g. those expressed in this statement:
"Learning a new language simply teaches me to communicate the same thoughts and feelings but in a different way."
This is wholly false. Languages differ with respect to their requirements for self-description. These requirements, when propagated into code through the actions of a large community, shape the types of programs that are written and how those programs can be expected to interact.
For instance, part of the reason that Haskell programs can be very short, is that the type system on which Haskell is based is extremely deeply thought out and based on a rich mathematical framework. This framework allows the programs to be self-describing in a consistent and meaningful way and for that self-description to be used to express ideas more clearly.
Ruby is self-reflecting through a sort of imperative set of meta-programming techniques. This allows "conventions" to exist in ruby code which enables a framework like rails.
Etc. etc.
When will people realize that not all languages are created the same? Language shapes culture and culture shapes language. Once you put a stick in the ground and decide where features will be located and how they will be inter-related you are planting the seed for all sorts of complicated emergent behavior.
It's true for natural languages and for programming languages. This guy just seems to have missed the ticket.
The problem here is that you can't group all of one type together. There will always be brilliant self-taught programmers. From my experience in the field, however, self-taught programmers will often take the first solution over the best solution, or will pick the best solution to their particular problem without thinking enough about coding standards or clean integration into the code base. There are plenty of tools out there to ensure this kind of thing in other languages, but I don't often work with those.
Of course, there are also many schooled programmers which may be able to think critically but lack the ability to go from a program to a product.
But not all of course... you can't make these stereotypes without starting a flamewar.
"self-taught will often ... pick the best solution to their particular problem without thinking enough about coding standards or clean integration into the code base."
I disagree that getting a CS degree has a positive effect on your ability to write clean code or adhere to coding standards. Those things come from work experience. I would argue from my own experience that self-taught programmers are better in this respect. The majority of code I see written by CS academics and students in my department is hideous.
I tend to agree with Mark Twain on this point: "I have never let my schooling interfere with my education."
For me, I have learned more outside of school than I have in it. Really I'm in it for the diploma. I wish companies would look at your software rather than your certificates; I think they would tend to find better developers. I know some companies are good about this, but the reality is that programmers do need to go to college if they want to compete for jobs.
I will say this for school: it fills in gaps in education so companies can expect some standard knowledge from everyone. Also, it can be helpful for people that don't have the motivation to learn on their own.
Agreed on the math point. I took CS50 (Introduction to Comp Sci) at school my Senior Year and being that my focus was directed mostly to kegonomics, I didn't quite get as much out of the course as I should have. I also took "Bits" which is watered down Comp Sci though the professor Harry Lewis said Zuckerberg got a C in it (?!?!?).
What I did learn was the basics of putting together algorithms and "thinking like a computer." For a guy who studied Social Anthropology and whose last math course was in High School, you can imagine the nitty gritty of building a C search algorithm was difficult to get at first glance (Linear search, bubble search my a$*).
To cut the BS:
1) CS I took was broken into discrete testable units. For each problem set there was a core CS theory that you needed to understand to be able to finish it. Some were less functional (ie arcane data structures), others were basics of programming that anyone should know like the back of their hand (pointers, data manipulation etc).
2) When self-teaching, you tend to pull together bite-size chunks of important info necessary to solve a particular problem rather than building a solid foundation. If you're completely self-taught, you'll miss 101-level theory points that may help you later when tackling a known problem that has a solution or is nearly impossible. That said, in teaching myself, I've cobbled together quite a bag of tricks as well as the resourcefulness to pull up any info I need to solve a problem and absorb it quickly. I've yet to meet a problem I couldn't tackle just because I wasn't a "CS Educated Programmer".
Through the course though, the biggest takeaway for a non quant type like me was breaking problems down into discrete parts solved and tested by logic loops (read: thinking like a computer). The basic structures and solutions to problems involved I still use today when I program for the web.
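The 101-level search material the commenter mentions comes down to a contrast like the following (a sketch in Python rather than the C the course used; the function names are mine). Linear search is the brute-force baseline; binary search is the classic payoff of "breaking problems down into discrete parts," halving the problem at each step, but only once the data is sorted:

```python
def linear_search(items, target):
    """O(n): check each element in turn. Works on any list, sorted or not."""
    for i, item in enumerate(items):
        if item == target:
            return i
    return -1

def binary_search(items, target):
    """O(log n): repeatedly halve the search range. Requires a sorted list."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1       # target can only be in the upper half
        else:
            hi = mid - 1       # target can only be in the lower half
    return -1
```

The trade-off between the two (no preconditions vs. a sort requirement bought with logarithmic lookups) is exactly the sort of foundational point a course forces you to confront and a purely task-driven learner can skate past.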
I see too many programmers that get in it for the money and not the passion. Having passion removes the need to be formally instructed. Passion gives people the drive to excel and be the best. However, you do get a big picture view of what's going on having earned a degree. The slackers tend to be the non-self-teaching among us.
I once read something to this effect from Oded Goldreich's book (his book on Computational Complexity): Knowledge is the result of hard computation on publicly available information. I think getting a formal education definitely aids the process of learning. Otherwise how do you get around Meno's paradox (i.e. how do you enquire about something if you don't know of its existence?) ? And self-taught programmers definitely would benefit from some structure in their education (again - the structure is decided by a formal CS curriculum). I have been trying to learn signal processing over the last couple of years and I found that OCW + Berkeley webcasts took me a lot farther than random googling about how to accomplish something.
"Learning a new language simply teaches me to communicate the same thoughts and feelings but in a different way. This is the same with programming languages: you accomplish the same tasks in a different language that has different syntax."
I liked that. A computer language and CS are both, for me, means to turn an idea into reality. I don't think it necessarily matters which way you were trained, so long as the idea you're trying to execute gets done. Success either way imo.
I would have very much liked to have a CS education (I have an MS in Neuroscience), but did not really know I had aptitude and a passion for it until later in my life. I took the long self-taught route to programming, being very task driven, i.e. the first business I started needed certain things so I just did it. Was it ugly? Oh heck ya! To some degree it still is, but for many years it has more than paid the rent.
I very much miss a mathematics background but these days there are many sources of information that can and do aid in the solving of programming problems.
I have railed against "credentialism" for years because ultimately there have been many people I have worked with who were capable of many tasks but not allowed the opportunity because they lacked a degree, not the skills. And don't get me started on people doing the identical job but one getting paid more due only to a degree.
A credential tells me what you might be able to do, a body of work shows me what you can do.
A bit of background about me before I add my 2 cents: because of my rank in an entrance examination (JEE), I ended up majoring in Civil Engineering and managed to work towards a minor in CS (which was, and is my passion). I'll be graduating in a few months.
I've experienced a bit of both worlds—I picked up books on Data Structures (including Red Black trees), attempted to implement Rijndael in C++ (without understanding any of the underlying math) in high school, learnt PHP/JS/CSS/C++ and started building from there.
What I've observed is—as a self taught programmer the biggest disadvantage is that you don't know what all is available/standard—as someone else mentioned in the thread, I did not know about parsers until I managed to take a course in Programming Languages.
Since I realized that there was so much I did not know, I've tried to make it a point to keep asking my batch-mates in CS what they are reading, picking up textbooks recommended by professors in class; reading other books as recommended on HN, and elsewhere (SICP, Learn you a Haskell..., started reading TAOCP, etc.) and have attempted to at least match my batch-mates who are CS undergrads. At the very least—try to get a fairly broad idea about known solved and unsolved problems.
The biggest advantages an average CS undergrad has over me are that s/he has the credentials (which I've tried to match by having my own open source project, doing internships in well-known companies, etc.), and that s/he is exposed to much more theory than I would ever be without actually having to do much beyond attending classes (for which I have to go the extra mile).
The advantage _I_ have over the other, standard CS guys is that I have the freedom to find out more about what I really want to do, and explore/follow certain areas which I find more interesting without being bound by academia (tests, reports, stupid & pointless assignments), and in general, perhaps—have a lot more fun while learning.
There is no divide. Like anything in life, if a person simply uses it as a means to an end, then the product is going to reflect this. This is true with anything.
I think this is the point -- that in general someone who is not in a PhD program won't spend such a huge amount of time focused purely on a very narrow but deep problem.
There are obviously exceptions to every rule, but I think this is a good point. Most folks who are NOT in an academic program don't take out loans and stop work on all practical projects for 3-5 years to focus like this.
I'm not very familiar with the CS PhD world -- I'm very much in the self-taught category, personally -- but I'm married to an author who got a (fully-funded) MFA in creative writing. It was great -- not because the profs were teaching anyone how to write (if you didn't already know, you didn't get into the program), but because it gave her a couple of years out of the real world, focused entirely on her writing, surrounded by very supportive & similarly focused people.
Am I missing something, or does this article basically say: within the domain of self-taught and CS-educated programmers, there are groupings with various levels of theoretical and practical skill?
It seems it wouldn't be a stretch to say that, "cs-educated programmers" aren't even really identifiable by trait, since course selections made by both the university department and the student can change the curriculum unpredictably. The same could be said for "self-taught;" in my experience, half the people in cs-related math or theory classes just read the book and don't bother to attend class anyway.
It's a slightly different slant on the topic, not taking up any one side, but perhaps self-taught people (in any field, not just programming) are inherently more passionate about learning?
If a person can teach himself/herself something non-trivial like programming, I am sure (s)he can easily learn what is academically taught. The learning for self-taught programmers is mostly driven by need, unlike in a college setting. The toughest skill of all is teaching yourself, and not many people learn it at college/university.
The best programmer I met in college quit after 2 semesters because he already knew the concepts.
However, aren't we really talking about IQ? From what I can remember, companies really started using degrees when testing for IQ became illegal. If a possible employee has an IQ of 120, then you can be pretty certain he's going to pick up most of the concepts you throw at him. The college degree (beginning with the SAT, ACT, and high school diploma) filters a lot of low-IQ people out of the process (at least in theory) and if not, it at least filters out those individuals who don't work hard improving their mental capacity (for instance, cognitive abilities may be improved through the study of music).
What an employer really needs is the ability to quickly filter out bad candidates. In reality, it is HR that does the culling. They don't know how to filter technical candidates, so the first thing they use is the college degree.
I'll be honest and say that I'm a CS dropout who's never felt like any kind of maths wizard. I didn't enjoy the CS course I took back when I was 17 but that never stopped me from pursuing a career in the field that I'm passionate about.
Since that time I've taught myself everything I needed to know and never really felt the poorer for it. It's not easy and a degree is a great foundation but it's exactly that, a foundation. What comes after graduation day is a lifetime of learning.
If you really want to pursue programming, to any level, it's your passion and willingness to keep learning and pushing yourself that will ultimately determine how successful you are. Degree or no degree.
Uni makes you learn really boring shit that you really don't want to learn and don't think is relevant at the time, and then gives you a credential. Both of those aspects should not be underestimated.
Is anyone else who's been following this discussion disturbed by the absolute paucity of any hard evidence to support any of the arguments given?
It appears to me that nobody (including me) actually knows anything, statistically speaking, about what the differences are between self-taught and CS-educated programmers. Everyone has an opinion, and some anecdotal evidence (I worked with ... or I've met ...), but that's a far cry from hard data.
Is anyone aware of any studies that might shed some light on the topic?
In the tech world I think the only time it matters is when you're a noob. A college degree gives some credibility to landing a job, etc. But if you've got a proven track record developing kick-ass software, that's what will keep your career moving. As far as interacting with other college students and networking goes, that can pretty easily be done by hanging out with the right crowd online (like here!) or in other professional settings in the real world.
One thing to consider is that some people are not of the age to earn a degree yet. I am a high school student and people assume I am not capable or competent way too often.
There's no contest between a person who spent their childhood messing around with computers and somebody who's trying to learn it all in school.
I think CS degree programs are actually most valuable for people who already know how to program. You get to spend the time refining your art, whereas somebody starting from scratch barely has time to become competent.
I've seen that self-taught programmers can reach the equivalent of programmers educated at top universities. However, the self-taught programmers rarely specialize like those with Masters or PhDs in say, machine learning, data mining, information retrieval, and so on.
I learned programming in college in spite of my classes, not because of them. I was doing 3x the coding outside of class and learned way more practical stuff that I'm actually using today. College taught me way more about dealing with people than with code.
This is an either/or fallacy. To be excellent, you need a mix of both. Formal schooling isn't required, but you generally need to have been exposed to other smart people, many more experienced, because otherwise it's impossible to tell what self-study will be fruitful and what not. So some form of instruction, whether it comes from school or from a lucky landing in the work world, is required. That said, if you're not autodidactic to some extent, you're going to be unable to grow.
By the way, I've learned a lot of those "skills that self-taught people lack" on my own. Lambda calculus and type theory I hadn't even studied till I was 24. I'm not an expert on either, but I know as much as a well-educated non-expert (i.e. I know what the typed lambda calculus is and why it's interesting, I know about System F as the basis for ML, et cetera).
As an outsider (I studied physics), I always thought computer science was NOT about programming. I find it fascinating that you CS people can work on the foundations of computation, like P=NP (see, I only know the basics). Programming to me seemed a very pragmatic thing: you learn as you go. Need to build a parser? Read parsing theory. Need to do text mining? Learn it. In that sense, I don't think there are "CS-educated programmers". While everyone can benefit from taking an introductory course, in the end everyone becomes self-taught.
While it is true that computing requires repetitive, continual retraining, I disagree about the parsing/logic/etc. stuff.
In class you're taught principles of X, Y, or Z and how they relate to Q, V, and R, or at least you should be.
When haphazardly picking up skills, you often do not come across the greater, abstract ideals, and instead see craftsman-level heuristics to solve a problem. For your example, the parser thing: recognizing whether you need a parser, a regular expression engine, or a simulator is a pretty heady, abstract thing for some cases. Very few people would ever educate themselves on the difference between the cases just to do so. But that difference is one a college-trained CS grad should have.