What should a self-taught programmer study?
50 points by yla92 on April 19, 2014 | hide | past | favorite | 61 comments
I found this interesting question on Quora ( http://www.quora.com/Computer-Programmers/What-skills-do-self-taught-programmers-commonly-lack ).

I, personally, am a self-taught programmer. I have no background in formal CS studies at university, and no knowledge of algorithms, data structures, design patterns, compilers, etc.

I always feel there's a need to know such topics to become a better programmer. If so, how can I pursue them? Do I have to go back to university and attend a CS degree? Or take online courses (and which ones)? Where do I start?



Do a Google search on each of those topics, asking which book is most important for each, especially for a newcomer. Usually you'll find the answers on StackOverflow.

Read those. After that? Stop worrying about it. You're not inferior to a CS major, nor are you less capable. Whatever it is you want to work on, start working on it and don't stop. Reading about algorithms and data structures and design patterns is all great, and can help you a lot, but remember: it's pointless to try to prove you're the equal of someone who has a piece of paper. Instead, make things. Make the best things. Work for a company that inspires you. Work on problems that interest you.

You'll learn what you need to along the way. You're feeling impostor syndrome, and let me tell you: you're not doing yourself any favors by pushing yourself to feel that way. Learning is great, but do it for a specific purpose, not to attain some status.


Thanks for this. I'm in a similar position to the OP, and needed the encouragement.


Depends on your goals.

Generally, if you want to work in product development, please learn the importance of good abstraction, readable code, clarity, doing the simplest thing, refactoring, and so on.

Most of the time, you can get away with just hacking away until it works. But code bases developed like that tend to develop tons of incidental complexity, weird logic, unintelligible ownership issues, etc.

Clever algorithms are awesome, but don't let cleverness and complexity grow like vines. Code is like a bonsai tree or something: you have to be meticulous and responsible in pruning it.

Check out SICP. It's a beautiful book that teaches very good design.

Also, take every opportunity to really learn about the languages you code in. Like, if you code JavaScript, you should understand its scoping rules, how its closures work, its peculiar object semantics, its various crazy quirks, etc.
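The same advice applies to whatever language you actually use. As an illustration (in Python rather than JavaScript, so the thread's examples stay in one language), here's the classic mutable-default-argument quirk that trips up people who never dug into their language's semantics:

```python
# A classic Python quirk: default argument values are evaluated once,
# at function definition time, and shared between all calls.
def append_bad(item, bucket=[]):
    bucket.append(item)
    return bucket

def append_good(item, bucket=None):
    if bucket is None:
        bucket = []  # a fresh list on every call
    bucket.append(item)
    return bucket

print(append_bad(1))   # [1]
print(append_bad(2))   # [1, 2]  -- surprise: same list as the first call
print(append_good(1))  # [1]
print(append_good(2))  # [2]
```

Every language has a handful of these; knowing them cold is what "really learning the language" means.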

Don't underestimate the value of time. Learn when to take a shortcut, and how to isolate "shortcut code" into a module that can be replaced or improved if needed.

If you're interested in design patterns, read Gabriel's "Patterns of Software" -- http://dreamsongs.net/Files/PatternsOfSoftware.pdf -- for a more nuanced view than most courses and articles.

If you're into OO design, read Eric Evans's "Domain-Driven Design," and take heed when he emphasizes the importance of domain language and concepts.

Learn how to use some structured to-do system efficiently. Don't try to juggle dozens of random things in your memory. If you realize you will need to do something after you're done with debugging or whatever, you should be able to quickly jot it down.

Learn methodical debugging. Learn to narrow down your problems.

Read books like "The Pragmatic Programmer," "The Passionate Programmer," "Working Effectively With Legacy Code," and so on.


Rich Hickey's presentation "Simple Made Easy" changed how I communicate about software complexity.

I got so frustrated with only having a 1-hour presentation to link people to that I ended up writing some notes I could refer to later.

http://daemon.co.za/2014/03/simple-and-easy-vocabulary-to-de...


I'm self taught :)

Algorithms in a Nutshell does a great job of introducing data structures and algorithms. It's also very short. Data structures and particularly basic graph algorithms (DFS, BFS, DSP) are very powerful and can solve a lot of common problems.
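To give a feel for how far a basic graph algorithm goes, here is a minimal BFS shortest-path sketch (the graph and names are made up for the example):

```python
from collections import deque

def bfs_path(graph, start, goal):
    """Shortest path (fewest edges) in an unweighted graph via BFS."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in graph.get(node, []):
            if neighbor not in seen:
                seen.add(neighbor)
                queue.append(path + [neighbor])
    return None  # no path exists

graph = {"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": ["e"]}
print(bfs_path(graph, "a", "e"))  # ['a', 'b', 'd', 'e']
```

The same dozen lines, with the graph swapped out, solve dependency ordering, "friends of friends," maze solving, and a surprising number of everyday problems.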

After that, study Math. Algebra, Set Theory, Automata and Calculus were real eye openers. Schaum's Outline of Discrete Mathematics is a good start in discrete math.

Automata: if you can, try and get hold of the Coursera course on the subject. It's excellent. It is hard work, but it got it through my thick skull :)

Then, compilers! They're awesome. They changed the way I think about development, and also in one fell swoop destroyed a lot of respect I had for the industry. I read "Engineering a Compiler" and followed the excellent Coursera course.

Did learning that make me a better developer? In some ways, yes. I'm capable of solving problems that I thought were impossible a few years ago. On the other hand, it is now clear to me that our industry is building castles on sand too often, and that makes me sad.

I feel like Algernon.


How exactly did learning about compilers help you? What applications do you build?


I'm working at triggre.com, we're building an environment where normal users can build information applications.

Now, this is a crowded field full of the corpses of previous attempts - there are even DailyWTFs that were inspired by some failed attempts - but Triggre's strategy is different, to the point that they have a great chance of finally nailing it.

Knowledge of compilers is a prereq to working on these kinds of systems.


You should definitely study parsers and compilers. The biggest gains for me have come from studying parsers, compilers, VMs, and functional languages. The data structures and algorithms you sort of pick up along the way as you study the previously mentioned topics. For a really nice way to get started with all that stuff I highly recommend the PL101 course at http://nathansuniversity.com/. It is a set of guided exercises for designing and implementing various programming languages, all in your browser. That's actually how I started learning all this stuff.

If after going through the lessons from PL101 you decide that stuff is interesting to you then another book that I highly recommend is "Compiler Design: Virtual Machines": http://www.amazon.com/Compiler-Design-Machines-Reinhard-Wilh.... That one goes through various virtual machine implementations and by the end of it you'll understand precisely how pointers, stacks, arrays, etc. all work in various programming languages. Another good book is "Essentials of Programming Languages": http://www.amazon.com/Essentials-Programming-Languages-Danie... but I haven't put enough time into that one to say whether it will benefit you or not.

If the previously mentioned online resources and books are not your cup of tea then another good way to exercise your programming muscles while learning about data structures and various program organization principles is to implement a simple project like a ray tracer or a recursive descent parser in different languages. Start with a language you are fluent in and then diversify from there. This is actually how I learn programming languages and having a clear goal in sight is an excellent motivator for exploring all the new concepts in whatever language I'm learning.

Paying to learn at this point is a waste of money in my opinion. If you are comfortable learning stuff on your own then there is no shortage of resources available on the internet. All it takes is some discipline to follow through.


I'm in my last year of Comp-Sci. I started the program as a self taught adult. I can say that very little I've learned in school has been useful for the internships and jobs I've since had. That being said, I would never trade the CS experience for anything. It gave me context and a greater understanding of the field. I can now tackle harder problems because the program gave me the tools to do so.

But this is all stuff you can learn on your own. Assuming you know how to program already, my suggestion is find out what a local university teaches 2nd year students. Find their curriculum and see the classes they offer.

Typically, 2nd year is when you go beyond the basics of programming and start scratching the surface of really cool stuff like fundamental algorithms and data structures. If you can learn this stuff, you'll have enough context to determine what else you need to learn.

FYI, here's an amazing open source textbook on data structures: http://opendatastructures.org/


Another excellent book on Algorithms and Data Structures: http://interactivepython.org/runestone/static/pythonds/index... . Seriously, there have been a lot of very high quality books released for free over the past couple of years, especially by teachers.


"What skills do self-taught programmers commonly lack?" is a very different question from "What skills do self-taught programmers need to learn?"

What you need to learn depends a lot on what you plan to do. Programming is a large field with lots of different specializations. If all you want to do is make web apps and mobile apps, there is probably no need to learn how to write an OS, you only need to understand how the various OSs differ from one another. On the other hand, if your passion is in systems programming, there is probably no need to learn about the latest CSS preprocessor. All of these things are good to know, but one person cannot, and does not need to, know everything.

Pick one or more specializations that you're particularly interested in, and learn a lot about them. Don't completely ignore all the other fields, but satisfy yourself with a high-level understanding of (1) what those fields are generally about, (2) why they're important, and (3) when you should defer to the judgment of someone who is an expert in one of those fields.

Anyway, here's one specific suggestion that I don't think anyone has made yet: if you do anything web-related, learn how the various protocols that make the internet tick (HTTP, SMTP, DNS, etc.) actually work under the hood. You should be able to create a text file, pipe its contents to a telnet session, have the machine on the other side return useful information, and be able to parse the result. Not because such a skill will get you a job, but because the insight you gain during the process will put you ahead of the majority of "web developers" who have no idea how much complexity Rails hides behind its ass. When you're done with text-based protocols, move on to binary protocols. (Memcached is good for practice because it uses both.) Incidentally, this will also teach you a lot about commonly used data structures.
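The telnet exercise above can also be done in a few lines of Python, which makes the "HTTP is just text over a socket" point concrete. A minimal sketch (HTTP/1.0 chosen deliberately because it avoids chunked encoding and keep-alive):

```python
import socket

def build_request(host, path="/"):
    # HTTP/1.0 keeps things simple: no chunked bodies, and the server
    # closes the connection when it's done sending the response.
    return f"GET {path} HTTP/1.0\r\nHost: {host}\r\n\r\n".encode("ascii")

def split_response(raw):
    # Headers and body are separated by a blank line (\r\n\r\n).
    header, _, body = raw.partition(b"\r\n\r\n")
    status_line = header.split(b"\r\n", 1)[0].decode()
    return status_line, body

def raw_http_get(host, path="/", port=80):
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(build_request(host, path))
        chunks = []
        while (data := sock.recv(4096)):
            chunks.append(data)
    return split_response(b"".join(chunks))

print(build_request("example.com"))
# Try it live with: raw_http_get("example.com")
```

Once you've written this by hand, what a web framework is actually doing for you stops being magic.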


As a self-taught developer myself, I too am lacking in these areas and to be quite honest, I think the best way is to continue to self-learn the areas you believe you are lacking in. I've been doing courses over at Udemy myself to learn about algorithms, structuring data and other things. You should go browse the courses over there, you might find something that'll help you out like it did for me.

The problem with studying for a CS degree is that you're made to learn a lot of what I deem to be useless information. I have friends who have undertaken CS degrees, and the general consensus is that they learned a lot more after they left university; they are quite envious of the fact that I took a shortcut on the path to employment without any kind of student loan debt. I mostly work with people who have degrees, and I feel as though I am on the same playing field as everyone else.


It's true that CS is more theoretical and not that useful for the majority of paid programming work out there. However, you can get a job anywhere building CRUD apps and acquire the knowledge to build a Rails app or whatever the flavor of the day is, but finding a job where you can, for example, build a compiler is rare. In a good CS program you get to do dozens of such projects, from implementing a ray tracer to a network stack to hand-optimizing decompiled assembly.

Aside from feeding your curiosity, the practical advantage of a CS degree is that it gives you the lay of the land of computation in general, so that if you are lucky enough to work on something novel and challenging that requires more advanced data structures or algorithms then you'll at least know where to start researching. Without CS you won't know what you don't know and so you are likely to miss entire avenues of exploration.

The idea that you will learn more in industry than in a CS degree only applies if A) you are in a shitty CS program or B) you aren't trying hard enough to learn anything. If you are so eager to get out into the real world, do a startup in your spare time—you'll never have as much spare time available for projects as in college.

Granted the ever increasingly extreme expense of college throws a major wet towel over the whole value proposition, but let's not conflate that with the unique learning opportunity that it represents.


"The idea that you will learn more in industry than in a CS degree only applies if A) you are in a shitty CS program or B) you aren't trying hard enough to learn anything."

Disagree. If you learn more in 3 years of college than you do 3 years on the job, you are a shitty employee.


You get out of college what you put in. Businesses have no interest in your intellectual growth, your professors (at least some of them) definitely do if you do.


This comment cannot be repeated enough. You can graduate with a high GPA from a respected CS program and still be able to do next to nothing. You have to choose to learn and being in college does not do that for you.

Note: my hindsight is 20/20


With all due respect, you are not going to get a valid answer here on HN. What you will get is a lot of answers trying to help, but resulting in a lot of noise.

Programming is a very wide field. You can choose a lot of different things to learn and specialize. You don't need to learn everything now. You have all your life.

Now my quota of noise :D

Focus on getting a solid background, forget about the languages. Languages come and go. Operating systems too. With a good background you can learn any programming language in "almost" no time. From higher to lower level I would learn:

- Algebra (vectors, boolean, sets, etc.).

- Algorithms (recursion, tree/graph handling, etc.)

- Operating systems theory (resources, I/O, memory and process management).

OK, too generic. So, you will need to learn some language to practice your theoretical knowledge. This is what I'd like to suggest:

- At least one systems language (C, etc.) even if you never have to use it.

- At least one OO language (C++, Java, Ruby, ...).

- An interpreted scripting language is always good to know (Perl, Python, shell, ...).

- Bonus: A functional language (Lisp, Haskell, ...). It will make you a better programmer but it will alienate you to the point of seeing other non-functional programmers as low skilled blue-collar workers :P

There ain't no such thing as a "best language". A programming language suits some tasks well and others not so well. Don't be a language racist.

I will add some non-technical subject to the pack:

- Learn how to deal with people. Not everybody is skilled in this field and it is always useful. People are very different and you will have coworkers. Try to create a smooth relationship with them. It will make your life and your coworkers' lives better. A lot of techie guys underestimate this.


What should a self-taught programmer study? The simple answer is: whatever you want. I have always found that I learn best when I'm motivated to learn about the subject. So I would suggest studying the things you find interesting. Where others' suggestions can help is in describing and motivating fields you might not have heard of, or might have dismissed. There are plenty of suggestions in this thread, or just look at the curriculum of a top CS program.

As for how you study -- just choose some and use it. There are many great texts, online courses, and so on. However it is far far more important to just start with something -- anything! -- than to waste time deliberating about the best method. The difference between a book and online course, for instance, is not so big that it's worth spending time deciding between them.


I would recommend very simple things really -

1. For Data Structures and Algorithms.

- Join this site where active job seekers hang around and share interview questions - http://www.careercup.com

- Solve a question a day and post your solutions, read through that of others and comment.

- This is an active community so you would get up and running with the basics quite fast.

2. For Design Patterns.

- Do a "light reading" of this book - http://www.uml.org.cn/c%2B%2B/pdf/DesignPatterns.pdf

- Explore opensource projects and see if you can identify patterns.


I think a grasp of the different data structures is necessary, along with algorithmic complexity; it'll help you identify and avoid slow, resource-demanding algorithms. A combination of online courses (Coursera, Stanford CS courses ...) and a community that's eager to help (StackOverflow or even HN) is enough to get you going.


I just started a series of blog posts that handle some of the more complex subjects in a clear and relatable manner.

http://daemon.co.za/2014/04/introduction-fullstack-fundament...

This is a roadmap, not a recipe book. I want to give people a bit of surface area to grab onto when learning these things, so they can figure out how all the pieces relate to each other.

Here's a Trello board of the planned posts so far (several outlines/drafts already): http://i.imgur.com/txu4yd3.png

Any ideas about what I should write about would also be useful.


Reading a lot of books about obscure areas of CS is great and all, but real skill (applied programming) comes down to practice. Pick challenging projects that force you to learn about new topics (e.g. low-level networking, graphics, UI, REST APIs, etc.). Make sure you take the time to write (relatively) large programs in different types of languages (object-oriented, functional, etc.).

I always found the best way to learn and RETAIN knowledge is to use it while reading about a new topic. For me, I purchase books on Amazon, watch Coursera and iTunesU courses, and then create a few applications or open source libraries based on that information.


Nice list here: http://matt.might.net/articles/what-cs-majors-should-know/

I've so far done a 6 month part time course in computer programming, which has made self-learning a lot less intimidating.

For the last few days I've been learning C and Lisp with www.buildyourownlisp.com. After that I plan to learn all about computers from the logic gate up with www.nand2tetris.org, then algorithms on Coursera. There is so much to learn! But it's easy to be motivated in such a huge and interesting field.


I think most CS programs will just waste your time -- unless you attend a very good university. Given that the best hackers I've met are self-taught programmers, you can definitely work it out yourself. I think these are amazing times; there are many pretty good online courses & materials, places to get help, etc.

I recommend Programming Languages course on Coursera. But if you are impatient, you can start from learning Python (it has a very low learning curve): learnpythonthehardway.org/book/


Given that the best hackers I've met are self-taught programmers,

And the best hackers that I have met were among the top N% of their program, finishing their degrees with honors.

To repeat a cliche: the plural of anecdote is not data.


Before choosing a paid degree, you need to evaluate your skills as a programmer. Go to coursera.org, choose a few CS classes, and see if you are able to follow them. You'll know soon enough what your skills really are.

What should you study? Other people's code. Take a library you use every day and see what's under the hood; trust me, that's the best way to learn practical programming.

If you're looking for theory, then there are a lot of resources available online.


You should study "The Algorithm Design Manual" by Steven S. Skiena. It's really good and it will teach you what CS students learn in the algorithms and data structures class, including evaluating whether the runtime of a program is going to be quadratic, linear, n log n, etc. in the worst case, best case, and average case.
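A cheap way to make that runtime analysis tangible is to count the steps an algorithm takes rather than just timing it. A small sketch contrasting a linear scan with binary search (the step-counting instrumentation is mine, for illustration):

```python
def linear_search_steps(items, target):
    # O(n): in the worst case we look at every element once.
    steps = 0
    for value in items:
        steps += 1
        if value == target:
            return steps
    return steps

def binary_search_steps(items, target):
    # O(log n): each probe halves the remaining search range.
    lo, hi, steps = 0, len(items), 0
    while lo < hi:
        steps += 1
        mid = (lo + hi) // 2
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid
    return steps

data = list(range(1_000_000))
print(linear_search_steps(data, 999_999))  # 1000000 comparisons
print(binary_search_steps(data, 999_999))  # about 20 probes
```

A million comparisons versus about twenty: that's the difference between linear and logarithmic, and it's exactly the kind of judgment Skiena's book trains.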

You should read a bit about patterns, at least so far that you know what the most common names refer to. One thing is it's good for your design and programming, the other is it's a tool of communication. You may also like a book about antipatterns.

Other than that, it depends on what your current programming experience is. E.g. some people develop great non-object-oriented code for years and never really get the concept of OO; if you are one of them, try to find a website that teaches OO concepts in terms of one of the programming languages you are fluent in (unless you are, and almost certainly will be, in an area where OO is completely irrelevant ... stuff close to hardware or something).

Or if you, for example, only ever used the little bit of SQL that is enough for many programs (select etc.), you might want to find an SQL tutorial with exercises, skip the parts you already know, and get into, let's say, joins, if you never used those before. So in general: go a little bit further than the point you reached by always studying exactly what was needed to get the job done.

If, for your previous work, you never used any kind of modeling, e.g. UML, you may want to look into that a little bit - you could miss out on jobs where this is used extensively in the requirements or design documents, e.g. on demand from the customer.

Depending on what you have been doing professionally and where you want to go you may want to take a little bit of business/economics classes, like accounting. I hated it like hell when I was studying CS and it was mandatory, but it turned out to be one of the very few things from college that are somewhat relevant for my job.

If you don't know what people mean by "agile development" or have only a very vague idea, you may want to read up to find out, also on some of the various methods, so that you are not stumped if the topic comes up in a job discussion.

I don't recommend going out and studying computer science. It will cost you a hell of a lot of money (at least if you're in the US) and not teach you a whole lot. So unless you have money lying around and also no problem that you will earn very little for 4 years, in which case you could do it for the fun and experience, it's not really worth it for someone who can already program well and gets properly-paid jobs without a degree.


Question for university-trained hackers: do you find your theoretical knowledge of algorithms, data structures, etc. has made you more enlightened? E.g. do you look at a problem and, because of your education, think to yourself, "wow, this problem could clearly be solved by algorithm x" or "that would be so much more elegantly represented as data structure y?"


Yes, that's certainly the case. I often encounter problems that can be solved elegantly using algorithms and datastructures that I learnt in university. E.g. if I were self-taught, I'd probably not recognize when to use, say viterbi decoding or Levenshtein automata. And these are things that I encounter in my day to day work.
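Levenshtein distance is a nice example because the whole algorithm fits on a screen; a standard dynamic-programming sketch:

```python
def levenshtein(a, b):
    # Classic DP edit distance, O(len(a) * len(b)) time,
    # keeping only one previous row so memory stays O(len(b)).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

print(levenshtein("kitten", "sitting"))  # 3
```

Fuzzy matching, spell-check suggestions, and diff tools all sit on top of this one idea, which is why it keeps turning up in day-to-day work.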

I don't think university training is the most effective method to 'enrich' yourself. If you have discipline, reading good books on CS theory is far more effective. First, because courses are relatively slow, since they tend to be paced for the lowest common denominator. Second, because there is some chaff in every program, especially when you assume that most people specialize to their interests fairly quickly.

The advantages of following a university program are on a different level: many employers won't hire someone without a relevant degree, since it is (for good CS programs) an indicator of knowledge and skill. Secondly, university is a good place to get to meet people, both friends and significant others.


Without having attended a university, I would assume that one of the many advantages is the possibility of mentorship during self-study. Which is a scenario one could replicate in a daily routine.

I think mentors are extremely important, to inspire and to help you, when you hit a dead end. So you avoid many of the negative experiences, and keep a good flow when learning. It's definitely the case for me.


Without having attended a university, I would assume that one of the many advantages is the possibility of mentorship during self-study. Which is a scenario one could replicate in a daily routine.

Certainly, as Frank Zappa aptly said: “If you want to get laid, go to college. If you want an education, go to the library.” But that is all under the assumption that you have enough discipline.


Well, for me the university knowledge which I'm (still) gaining is sometimes a good starting point to learn more on my own. The specialised subfield of CS I'm currently into is not that widespread, and it helps to have some of the experts around to ask for advice.

As other comments said, most of the techniques and technical solutions you can find around the internet or in books, and the best way to learn things is to actually look at other people's code, work with them, and discuss problems.

Well-structured lectures can be a shortcut into a certain topic, but personally the most important skills I acquired from university were the abilities to focus on my tasks and to structure and plan my work realistically.


Yes. There's no denying that. I would not have studied finite automata, nor subjected myself to the horrors of compiler design, or, even still, hardware description languages such as VHDL or Verilog, had it not been for spending 100k+ on some formal education.

Why does this matter? Well: All my code is as efficient as possible, to the best of my working knowledge, all while meeting deadlines.

How did I do it? Well: Get it to work. Make optimizations. Repeat. Potentially refactor. You'll always have a fallback that works, because you had to in order to get a non-failing grade.


Yes. But more importantly, I've worked on code, written by people without formal education, that could clearly have benefited from a different algorithm or data structure. I'm talking really simple stuff, like using a dict instead of a list for a lookup.
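The dict-instead-of-list point is easy to demonstrate empirically (a set is used here; it has the same hash-based membership test as a dict's keys):

```python
import timeit

items = list(range(100_000))
lookup_set = set(items)

# Membership test: O(n) scan of the list vs O(1) hash lookup in the set.
# Searching for the last element forces the list's worst case.
list_time = timeit.timeit(lambda: 99_999 in items, number=100)
set_time = timeit.timeit(lambda: 99_999 in lookup_set, number=100)
print(f"list: {list_time:.4f}s  set: {set_time:.6f}s")
```

On any machine the set lookup wins by several orders of magnitude, and the gap grows linearly with the size of the data. That's the "really simple stuff" that formal study makes second nature.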


I have a formal education and I don't remember any of it after 10 years. So don't bother. Just learn as much as you can about the stuff you use or you wish you could use and then some more about stuff that's cool to you. You will forget all other stuff anyways pretty fast if you are not going to use it.


Programming != Computer Science

Programming is a tool used to solve Computer Science problems, but Computer Science is, in my opinion, not a course in programming. If you eventually want to do something other than programming but like the field, I would advise you to pursue a degree.


Distributed systems is often something self-taught programmers lack. Reading something like the course notes for http://pdos.csail.mit.edu/6.824-2013/ is useful.


Funny how that was one of the first few things I had to teach myself, as distributed systems were something that only the biggest businesses had when I started school, and so the curriculum didn't exist yet.

Hadoop is not exactly the best example of a distributed system, but it does contain all the core components. If you want a highly efficient distributed system, I suggest you try to write your own. This ground is still being tested.

Sockets and asynchrony are tricky things, and I'm sure there exist better ways of achieving distributed computing.

1) Compute-intensive, job-centric? 2) Compute-intensive, parallel reductions? 3) Database-intensive, map-reduce? 4) Database-intensive, sharded, non-normalized? ...

The various forms of distributed systems are something that many people don't fully grasp. It's rather easy to build your own #1 or #3 (Hadoop). Facebook has done an alright job at #4. Parallel reductions on distributed systems... I'm thinking million-factor by billion-row matrices. That is something that we have yet to explore. Sure, we've done thousand-factor by billion-row no problem. That's essentially a map-reduce. But doing the matrix reductions on 1e6 by 1e9+ is not something typically done. ... at all. One would need to find alternate ways of representing those 1e6 features as separate matrices... perhaps some form of Bernoulli/Bayes combination, and increase the number of operations by 1000-fold.

// Forgive me for the rant. This is something I like to think about in my spare time. You're right that self-taught programmers often don't have this skill. My value-add is that a lot of school-educated developers don't possess it either.


Self taught programmer here.

This thread will eventually have 500 replies all telling you to learn linked lists and various sorting algorithms. Maybe you can do that, but only after you've learned about operating system fundamentals such as processes, threads and, in my opinion, pipes and the socket APIs.

You'll likely never have to implement a linked list in C in the real world, but I promise you there will be many times that your web server is erroring on connect(2).

Any company that puts a bigger emphasis on linked lists and sorting algorithms in the hiring process than on OS and networking fundamentals is a good sign to start running away.


You'll likely never have to implement a linked list in C in the real world…Any company that puts a bigger emphasis on linked lists and sorting algorithms in the hiring process than on OS and networking fundamentals is a good sign to start running away.

Not true.

Networking and operating system fundamentals are important, no doubt. But there's also ample documentation, and it's relatively easy to find practical examples about how to work with (say) multithreaded code.

But you'll be better at exploiting your knowledge in this area if you're familiar with the underlying tools and algorithms. For example, how are you going to write good multithreaded code unless you understand semaphores and mutexes? How are you going to parse and search data from your network service unless you know when it's appropriate to store it in a doubly-linked list, or maybe a skip list…
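The mutex point in a nutshell: an unsynchronized read-modify-write on shared state can lose updates under concurrency. A minimal sketch using Python's `threading.Lock` (a mutex):

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        # The lock makes the read-modify-write of `counter` atomic.
        # Without it, two threads can read the same value and one
        # increment gets silently lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment, args=(100_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 400000, every time; drop the lock and it can come up short
```

Understanding why the unlocked version fails (and why it fails nondeterministically) is exactly the kind of fundamentals knowledge the parent comment is arguing for.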

I've known programmers to do preposterously inefficient things because they don't understand the underlying constraints and rules of the system they are working with. In particular, developers I've worked with who don't have a formal CompSci background tend to be more likely to implement complex ad-hoc solutions to problems where there are well-understood general classes of implementation (sometimes included in whatever stdlib or framework they are using!) which will solve the problem for them.

Understanding the principles underneath it all is something that all great programmers I've known share in common. That doesn't mean that it's impossible to be a good programmer without a CompSci degree, and it doesn't mean that software engineering is any less important to an actual deliverable product. But writing it off in the way you do isn't really justified.


The interesting question I have for you is what is the most complicated code you've written? There are a lot of cases where I imagine more advanced knowledge isn't necessary. But let's be honest, that advanced knowledge didn't come about because the people who created it wanted to just twiddle their thumbs. A lot of scenarios don't call for this knowledge. Various scenarios do. Perhaps we should qualify the advice with "it's important to do this if you want to do A, but unimportant if you want to do B, and maybe useful if you want to do C."

This is an excellent read too for another reason why this stuff is interesting/useful. http://www.joelonsoftware.com/articles/ThePerilsofJavaSchool...


The interesting question I have for you is what do you think is complicated?

1. The algorithm you spent 1/2 a day writing and designing that took me a day using google and common sense

2. The 20k-LOC program we then spent a few months writing, which contains that single algo but keeps falling over, because you spent all that time learning algos instead of learning how to program

Swings both ways!

Managing complexity, practical experience and the 'art' of programming are things that are not taught in a degree, yet they are far more fundamental and used far more frequently.


So you should run away from Google?

Sorry, I couldn't help myself. I actually think you have a valid point.


It has been said repeatedly that those white board algo questions are pretty pointless.


Perhaps, but as an employer I would run away from a candidate that couldn't code a linked list.


I don't like whiteboard exercises either, but I definitely have to agree with this. Three interview questions that I've found do a great job of determining how effective an engineer will be:

1. Describe the principles of a linked list. What properties does it have that are useful, and in what situations will using one cause a problem?

2. What's the difference between this and a doubly-linked list? What are the benefits and downsides of using the latter?

3. How would you represent a tree in a SQL database? What are the properties of your representation?

(I especially like the last one, because it's totally open-ended. Engineers come up with everything from naïve adjacency lists through to nested intervals and closure tables, and their ability to describe the tradeoffs is a really strong indicator of their programming ability. Of course, it's not a suitable question for all positions.)
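For anyone curious what the naive adjacency-list answer to question 3 looks like, here's a sketch against an in-memory SQLite database (the table and column names are made up for the example). Each row stores its parent's id, which makes inserts and moves cheap, but fetching a whole subtree needs a recursive query.

```python
import sqlite3

# Naive adjacency list: each row points at its parent.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE node (id INTEGER PRIMARY KEY, parent INTEGER, name TEXT)")
db.executemany("INSERT INTO node VALUES (?, ?, ?)", [
    (1, None, "root"),
    (2, 1, "a"),
    (3, 1, "b"),
    (4, 2, "a1"),
])

# SQLite (3.8.3+) supports recursive CTEs, which make subtree
# queries practical with this representation.
rows = db.execute("""
    WITH RECURSIVE subtree(id, name) AS (
        SELECT id, name FROM node WHERE id = 2
        UNION ALL
        SELECT n.id, n.name FROM node n JOIN subtree s ON n.parent = s.id
    )
    SELECT name FROM subtree
""").fetchall()
print([r[0] for r in rows])  # ['a', 'a1']
```

The tradeoff a good candidate will name: without recursive CTE support, reading a subtree under this schema costs one round-trip per level, which is exactly what nested sets and closure tables are designed to avoid.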


Implement stuff like linked-lists, sorted binary trees, ... in C. That's 99% of all the "Data Structures" you'll ever need.
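The parent says C, which is the right place to feel the pointers; purely to show the shape of the structure, here's the same idea sketched in Python, where the `next` field plays the role of the struct pointer.

```python
class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next  # in C this would be a pointer to the next struct

def push(head, value):
    """O(1) insert at the front; returns the new head."""
    return Node(value, head)

def to_list(head):
    """O(n) traversal - the list's fundamental access pattern."""
    out = []
    while head is not None:
        out.append(head.value)
        head = head.next
    return out

head = None
for v in (3, 2, 1):
    head = push(head, v)
print(to_list(head))  # [1, 2, 3]
```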


If you only take one computer science concept with you into industry, let it be the finite state machine.

More than one would obviously be preferable :)
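For anyone who hasn't met one: a finite state machine is just a transition table plus a current state. A toy Python sketch (the a's-then-b's language is my own example) that accepts strings matching a+b+:

```python
# Table-driven FSM accepting one or more 'a's followed by one or more 'b's.
TRANSITIONS = {
    ("start", "a"): "as",
    ("as", "a"): "as",
    ("as", "b"): "bs",
    ("bs", "b"): "bs",
}
ACCEPTING = {"bs"}

def accepts(s):
    state = "start"
    for ch in s:
        state = TRANSITIONS.get((state, ch))
        if state is None:  # no transition for this input: reject
            return False
    return state in ACCEPTING

print(accepts("aab"), accepts("ba"))  # True False
```

The same table-plus-state pattern shows up everywhere in industry code: protocol parsers, UI flows, order lifecycles, regex engines.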


Business decisions. A broadly supported and internally consistent scripting language. A compiled language that's dominant in the broader fields that you happen to share an interest in.


If you are serious about programming and have the resources to go to a university for a CS degree program, by all means do so; there is hardly a better way to get, at the same time:

A) broad overview of the existing knowledge related to computers

B) evaluation of your own skills and knowledge as compared to others and as compared to what knowledge is out there in the first place

C) high-intellectual-quality contact with other people interested in the same things

I have a CS degree via evening classes, and have additionally been self-studying mathematics and topics in computer science. Based on that experience, I would estimate that learning a given subject to a decent basic level on your own is at least 2-3 times slower than at university, and will only work at all if you have an existing basis to build from and the motivation to keep going, which is also much harder to sustain when you are working on your own. I probably wouldn't be able to meaningfully self-study math if I hadn't taken a really good discrete mathematics class at the university (in parallel with "self studying" the discrete math class at http://aduni.org/, which I think is a really great way to get started with theory).

Working this way through 20-30 subjects, as you would in the course of a degree, is for mostly psychological reasons close to impossible, so I think the broad basis is best gained at the university. I regularly reach in my work for things I learnt in at least 15 university courses: one week it will come in handy to know what a hash join is, the next week it will be helpful to understand virtual memory, and another week I will use some statistics or even things I learnt in supplementary business classes. With my growing interest in mathematics in recent years, I really wish I had the time and resources to now get at least the first year of a mathematics degree; I know it would take my independent study to a new level, and it will be hard to reach that level of competence otherwise, especially as I don't really have anyone to discuss things with.

In general I think people without degree tend to suffer much more often from something akin to:

http://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect

It's well elaborated here:

http://notch.tumblr.com/post/15782716917/coding-skill-and-th...

Those people will often claim that they are doing great without having a degree, but more often than not the gaps show up quickly if their skills are scrutinized. As we are talking of thousands of people here, there are of course exceptions, but in general it is harder to get an honest self-picture if you are not learning among other people.

Then of course you still have to do a lot of self-study during your studies and ever afterwards...


This comment is partially correct, and partially bullshit.

I've been to university and studied computer science. What computer science gives you is a foundation in understanding computing and what's going on at various levels, which simply toying around with languages does not.

What a computer science degree generally does not do, is teach you how to be a particularly good programmer. There might be brief parts about patterns and whatnot, but I found there was bugger all about reusable, maintainable, elegant code. There is so much about "programming" that is very much removed from "computer science".

This is where the bullshit part comes in - what stiff is saying about the Dunning Kruger effect is quite frankly obnoxious drivel. You can be an excellent programmer by learning to program via toying around, and a degree suits a very specific learning style. I've known many programmers who do not have a degree who are far better than those with a degree.

I found a degree very difficult as I have ADHD and am terrible at sitting in a lecture sucking up information, then vomiting it into an exam - it is also not particularly representative of how the knowledge is actually applied.

tl;dr A CS degree gives you a good foundation and a bunch of in-depth insight into how computers work (and is a good tool to make you realise the breadth of the subject), but it is not for everyone, as people have different learning styles, and it certainly does not make you a great programmer - nor is it necessary in order to be one.

We just need to remember that the people who invented Computer Science did not have a degree in Computer Science.


This might be a bit of a tangent, but "learning styles" have very little supporting evidence (if any): http://www.danielwillingham.com/learning-styles-faq.html

Just to be clear: I have absolutely no doubt that having ADHD will impact your ability to sit through lectures -- I'm just commenting on the generalisation to "learning styles".

And, yes: CS has very little to do with actual programming unless you actively work towards that in terms of your chosen niche of programming.


If by 'learning styles' you mean the formalized definitions of the term by educational theorists, sure those lack evidence...

I think the parent comment was using the literal definition though; that is to say he meant that people learn in different ways.

Are you saying that you think there's little supporting evidence that people have different styles in which they learn? I would assume not as that would be a pretty ludicrous world-view; especially considering the article you linked even concedes that kids learn differently. The only point of that article really was to point out the flaws in forming groups of children based on learning style; not because people don't learn differently, but because any 'learning style' categorizations thus far haven't been proven sufficiently meaningful.

In summary, you have no reason to call him out on something he isn't even promoting.


The people who invented computer science did so precisely so that others would not have to reinvent it. People who have not studied it are more prone to reinventing the wheel. Bruce Schneier has a good take on this; it is the same as in crypto:

"Regularly, someone from outside cryptography -- who has no idea how crypto works -- pops up and says "hey, I can solve their problems." Invariably, they make some trivial encryption scheme because they don't know better."


The people who invented computer science were certainly driven by theoretical and mathematical concepts. Lambda calculus, Church numerals, etc. all played a role in the formative days of CS.


Interesting viewpoint. Overall, though, I disagree with your tone; particularly the importance you're implying a degree has over your ability to learn.

I've found that, for me personally, learning with a group of peers interested in whatever aspect of computer science I'm learning has been far more beneficial than an academic setting or formal curriculum has. That is to say, I learn faster and receive more accurate/timely feedback sitting in IRC than I do listening to lectures.

I don't think college is necessary or even well suited for everyone, in fact I wish I didn't personally feel obligated to check the box myself; it's certainly one of the best places to meet people in your field though.

In the end I'd say I learn fastest when I a) don't understand something and want to, or b) want to make something. In both of those cases, effectively using the resources available (docs/irc/mailing-lists/google) helps enormously.

That said, I think people learn in different ways so I'm hesitant to offer any concrete advice based on anecdotal experience.

I'd also be hesitant to imply that people without a degree are more susceptible to the Dunning-Kruger effect (especially when some of the most incompetent people I've faced are recent graduates with a lot of confidence).

Also that 'well elaborating' post is under 500 words and consequently elaborates very little.


OP didn't say that a "degree" increases the ability to learn! He said that the process of studying in an academic setting has some benefits, especially since the curriculum is often designed to give broad coverage of many important subjects. That does not in any way imply that some could not learn well on their own, as you have experienced.

Disclaimer: MSc in Computer Science (never really liked lectures myself, but I highly respect the process of forcing oneself to learn by completing a degree)


>"Working this way through 20-30 subjects, as what you would get in the course of a degree, is for mostly psychological reasons close to impossible"

You're truly of the opinion that the parent comment, particularly the above portion, doesn't strongly imply that the process of obtaining a degree increases your ability to learn? I'm honestly curious, because that's the tone I personally perceived from the comment.

>He said that the process of studying in an academic setting has some benefits especially since the curriculum is often designed to give a broad coverage on many important subjects.

Totally agree with this; the diversity in curriculum, human interaction, industry connections, and most importantly the ability to interface with peers in your field are all hugely important. I think University is great for most people, but I don't think people choosing a different route should be considered prone to being underskilled and overconfident unless proven otherwise.

>That does not in any way imply that some could not learn well on their own, as you have experienced.

As I stated previously, I think the fastest way to learn/improve is through interacting with peers, especially those more educated in the subject matter than myself. It's mainly for this reason that I find university extremely useful, and irc/mailing-lists/docs even more so. I wouldn't attribute my learning via the internet to myself any more (or less) than I would attribute my learning via lectures to myself.

I wouldn't have even made the comment if the parent comment hadn't ended by insinuating that not obtaining a degree makes you more susceptible to Dunning-Kruger and then proceeded to call a 500-word post 'elaborate'.


>You're truly of the opinion that the parent comment, particularly the above portion, doesn't strongly imply that the process of obtaining a degree increases your ability to learn?

I don't think that is what the parent comment is saying. There are a lot more psychological issues you need to deal with when going it alone. You can spend a lot of time and energy overcoming these issues, or you can go to college and not have to deal with them, or you can ignore them and end up with serious deficiencies.

For example there are going to be areas of software development that you dislike (for me anything networking or db related) but are very important to your overall development. Will you invest the time required in the areas you dislike?

If you really don't like algorithms will you spend the 300+ hours on it that you should? Quite a lot of people who study via college can and do. A rare few people going it alone can.


SICP is my recommendation.


It is like literature - one has to read the best books by the best people (roughly, Nobel prize winners and nominees).

Similarly, one has to study physics with the Feynman lectures, etc.

One has to study concepts and ideas, not "tricks" or "classes": Smalltalk and Alan Kay's ideas behind it (message passing, not the cryptic syntax); Erlang, and the Armstrong thesis it is based upon; all the great ideas behind Scheme - SICP, Brian Harvey's course - and CL - pg's books, especially On Lisp; the big ideas behind Haskell (syntax and semantics); the philosophy behind Python[3] and how it was followed; the dynamic, Lisp-like nature of Ruby (methods can be added and shadowed at run-time; "true OO" is of least importance); Go, and the design decisions it was built upon.

One has to read parts of the standard libraries of [great] languages - at least Haskell's (the list functions) and CL's (the usage of macros).

There is also Arc as a great example of bootstrapping a language (and Clojure, but it is cluttered and messy compared to Arc). So at least try to understand arc.arc and core.clj.

Algorithms and data structures are a must - at least an introduction to these topics, as they were taught in Brian Harvey's CS61A. Then one can use a thick book (from MIT Press) as a reference.

This is the so-called high level. There is also one in the middle.

The "basic goodness" of C (with asm snippets when needed), how executables (binaries) and shared libraries were created from "object files", how dynamic linking works, and how FFIs are implemented.

One has to understand the process and thread abstractions (in terms of an OS), what a syscall and an interrupt are, etc.

There is also the machine level. One has to understand what a machine word is, what an address (and a pointer) is, what the encodings for different [machine] data types (such as integers and floats) are, what registers are, and how different machine instructions are designed for different [machine] data types.
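You can poke at these representations even from a high-level language; for instance, Python's struct module will show you the raw bytes of a machine word or an IEEE-754 double.

```python
import struct

# A 32-bit little-endian integer: the low byte comes first.
print(struct.pack("<i", 1).hex())   # 01000000

# Two's complement: -1 is all ones in a fixed-width word.
print(struct.pack("<i", -1).hex())  # ffffffff

# IEEE-754 double: 0.1 has no exact binary representation,
# so its bit pattern is a rounded repeating fraction.
print(struct.pack("<d", 0.1).hex()) # 9a9999999999b93f
```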

One must be able to follow and understand how high-level code, such as Lisp or Haskell, is transformed into machine code, and how C code is transformed into an executable file or a shared library for a particular OS and CPU architecture (EABI).

Btw, these are just the very basics. :)



