
I think these types of stories are misleading for startups.

Many startups would do better to add server capacity in the short term than to spend lots of time optimizing to cut costs, since that work is typically invisible to the user.

For example, a 4GB Linode VPS is $160/month, so you can have 34 of those ($5,440/month) for the cost of one developer (salary of $67k, based on: http://www.simplyhired.com/a/salary/search/q-php+developer/l...). Many startups also struggle to recruit good developers, so does it make sense for them to spend all their time optimizing code to run on cheap hardware, rather than improving the product in ways visible to the user?
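The arithmetic above can be sanity-checked in a few lines (the list price and the $67k salary figure are the comment's, not current numbers):

```python
# Back-of-the-envelope comparison from the comment: 34 Linode VPSs
# versus one developer's monthly cost.
linode_4gb = 160                       # $/month per 4GB Linode VPS
server_bill = 34 * linode_4gb          # $5,440/month for 34 of them
developer_salary = 67_000 / 12         # cited $67k/yr, ~$5,583/month

print(server_bill, round(developer_salary))
```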



Possibly true, but your accounting ignores recurring vs. one-time costs. If you can pay some external person $10k to tune your setup and save $4000/month, and aren't already swimming in money, obviously that's a smart thing to do.


It also ignores all the very real other side effects of inefficient design (the most common cause of poor performance). For example, bad user experience (if you need to spread out your traffic onto many servers that usually implies a sizeable latency on each page view), engineering drag due to technical debt, and engineering drag due to cumbersome infrastructure and deployment overhead. All of these things matter.

In general a company with a more efficient solution will have an easier time with just about every aspect of development and deployment, which pays huge dividends. However, if you find that engineering such efficiency is too difficult due to fundamental design choices or legacy systems then sometimes it's not worth killing yourself to fix.


Definitely, nobody should take us as an example for their startup or business. We're literally a small forum that inherited huge success and had to scale up rapidly. We're not a business and money is not our goal, so basing a business on what we've done might not turn out too well.

The current servers we operate were paid for with donations from our users because our ad revenue has yet to arrive heh.


Distributing your app across 34 servers will require a non-trivial development effort itself.

Not to mention, there was a story here just this week about how communication between EC2 nodes may be a lot of reddit's performance trouble. Scaling horizontally is inevitable at a certain scale, but it's no panacea.


For $200 per month you can get a quad core X3220 with 8 GB RAM and 2x 500GB disk with a large amount of bandwidth included: http://www.100tb.com/ .

I don't fully understand the love for large VPSes (which aren't even all that large) compared to dedicated hardware, which has a better chance of higher memory bandwidth, more RAM, and faster disk access; though I do understand that many are very happy with Linode as a business.


You can get a six-core 980x with 24GB of RAM and 4x1.5TB of HDD for $200 per month at http://www.hetzner.de/en/hosting/produkte_rootserver/eq10/

Not 100TB bandwidth though.


Growing with a VPS is much easier than growing with dedicated hardware.

Also, on a semi-related note: I (like you) suggested 100tb.com, but we tried them out (just to test speeds) and they're pretty poor...


May I ask what part of the speeds were poor for you? I am curious.


I'll talk to the guy who actually tested them when he wakes up, but from what I understand network speeds were terrible. I'll get back to you when I know :-)


You really think for $200/month they're going to let you saturate the equivalent of a 300 megabit connection 24/7? You'll get shut off if you approach anything close to that, I'd bet, just like all the "unlimited bandwidth" hosts. That, or the transfer speeds your server gets will be nothing near the 300 megabits that'd be required to use 100 terabytes in a month.
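A quick check on the "300 megabit" figure: the sustained rate needed to move 100 TB in a month, assuming decimal terabytes and a 30-day month:

```python
# Sustained throughput required to use 100 TB of transfer in 30 days.
terabytes = 100
bits = terabytes * 10**12 * 8          # decimal TB assumed
seconds = 30 * 24 * 3600               # 30-day month
mbps = bits / seconds / 10**6          # megabits per second

print(round(mbps))                     # ~309 Mbit/s, matching the claim
```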


More to the point, for $5K you get two 8-core servers with 32GiB of RAM and 4 disks each, capacity for 16 of your 4GiB VPSs. So with two months' worth of Linode spend up front, then $200/month thereafter, you get the same capacity as those 32 Linodes. Even if you have to pay $100/hr for your hardware guy (which is above market), you're saving a ridiculous amount of money.
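Taking the commenter's figures at face value (hardware paid for with roughly two months of the Linode bill, then $200/month for hosting), the break-even point comes quickly; a rough sketch:

```python
# Break-even sketch using the commenter's figures: ~two months of the
# Linode bill buys the hardware outright, then hosting runs $200/month
# versus $5,440/month for the VPSs.
linode_monthly = 34 * 160              # $5,440/month
hardware_upfront = 2 * linode_monthly  # ~$10,880 one-time
colo_monthly = 200

month = 1
while hardware_upfront + colo_monthly * month >= linode_monthly * month:
    month += 1
print(month)   # dedicated hardware is cheaper from the 3rd month on
```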


And then add in the bandwidth and the savings explode several times over...


It depends on the type of optimization. For page caching I definitely agree, not because it's a waste of effort, but for two other distinct reasons:

First, because you might be papering over more fundamental performance issues that will still hurt you in the long term, and that will indeed be harder to spot once caching is in place.

Second, because cache invalidation is quite often non-trivial, and doing it right may require a somewhat thick layer of code, and a certain discipline moving forward. This will slow down development if you are in rapid pivot mode.

However, if your performance fundamentals are already resolved, the business model is in place, and you expect the code to be around for a while, then the effort put into caching will be amortized over many years. It pays dividends not only in server costs, but also in user experience (fast page loads) and in correctness, because you'll have time to get the caching right rather than scrambling to add it at scale and potentially serving stale data to millions of people instead of thousands.


A decent PHP dev can figure out how to hook up a memcached server and install APC. Most of the work is already done; it's worth the couple of hours it takes to learn about caching. A little caching can go very far.
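The cache-aside pattern the comment has in mind can be sketched like this; a plain dict stands in for a real memcached client (e.g. pymemcache) so the example runs standalone, and the function and key names are illustrative, not from the thread:

```python
# Cache-aside sketch: read through the cache, invalidate on writes.
import time

cache = {}   # stand-in for memcached; values are (expires_at, data)
TTL = 60     # cache entries live for 60 seconds

def slow_query(post_id):
    """Pretend this is an expensive database hit."""
    return {"post_id": post_id, "body": "hello"}

def get_post(post_id):
    key = f"post:{post_id}"
    hit = cache.get(key)
    if hit is not None and hit[0] > time.time():
        return hit[1]                       # cache hit
    data = slow_query(post_id)              # miss: do the real work
    cache[key] = (time.time() + TTL, data)  # store with expiry
    return data

def update_post(post_id, body):
    """On writes, invalidate so readers never see stale data."""
    # ...write to the database here...
    cache.pop(f"post:{post_id}", None)
```

With a real memcached behind it the structure is the same: `get`, recompute on miss, `set` with a TTL, and `delete` on writes.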



