
> Almost all the cases where I've heard of a system that was built as a microservice system from scratch, it has ended up in serious trouble.

I wholeheartedly disagree with this point.

I've found that if I build a monolith first, it becomes harder to draw the line between endpoints, services, and code within the system.

If I design "microservice first," or just with a service-oriented design, I find there is much more clarity in the system design. In terms of exposing parts of a system, I find that the microservice-first approach makes me consider future optimizations, such as caching policies, whereas in a monolith I would proceed with a naive "I'll figure it out later" approach.

Each school of thought has its downsides. Monoliths move fast, and abstracting out the parts of the system that later arise as bottlenecks is a tried-and-true pattern; however, there aren't too many product / business folks who want to hear:

"Hey, we just built this great MVP for you. It probably won't handle significant load, so we're going to go off in a corner and make it do that now. Oh yeah, we won't have time to develop new features because we'll be too busy migrating tests and writing the ones we didn't write in the beginning."

The flip side is that microservice-first has a lot of overhead, and (as things evolve in one system) refactoring can be extremely painful. That's an okay trade-off where I'm at... for others, maybe not so much.



I don't mean to pick on you, but your post is a litany of "doctor, it hurts when I do this!". There's nothing that should be preventing your monolithic application from handling significant load. You add more instances of your monolithic application. If you're encountering problems where horizontally scaling your monolithic application falls down, you have much more severe problems that introducing network boundaries won't solve.

Monolithic applications don't encourage naive approaches; your conscious and unconscious choices about development rigor do. If those choices are causing you problems, there are much easier ways to stop yourself than introducing network boundaries.


> There's nothing that should be preventing your monolithic application from handling significant load.

I don't disagree at all. As with most things in life, throwing money at the problem will solve this. In a lot of positions I've been in, this cost becomes "the cost of doing business" that everyone assumes rather than, "we can do better, but for now this is our reality."

> Monolithic applications don't encourage naive approaches;

Perhaps "naive" is a poor choice of words. Maybe an example is better:

You develop a large system that's going to support an entire state's children's meal plans. You can easily do this in a monolithic architecture, but is it worth investing time now to peel apart the obvious points of high use?

Children and parents only have to sign up once, but they need access to their meal plans every day. I see two services in this simple example, and if I'm strapped for time, I defer user management to some sort of SaaS/PaaS so I don't have to deal with that overhead.

I see a great deal of value for everyone in investing in the optimizations that have high return, while minding Knuth's warning that "premature optimization is the root of all evil."


Dude...you're misunderstanding me. "Throwing money" at a monolithic application doesn't make it faster. Writing code that isn't kinda shit does. And it is just as easy to write that code when multiple modules live in a single process, intermediated by method calls, as when each module lives in a separate process, intermediated by HTTP. All you're doing is adding a network boundary. If you're gonna write bad code, you're gonna write bad code in both, and then you have the added problem of wrestling with that bad code while also having to debug/flow-monitor/etc. everything across a network.

Your example is an emoji shrug, because I use Passport, a Postgres table, and a user token in a cookie and I'm done. It's not a big enough problem to break out across a network boundary because it's just code that may or may not be hit on any given server and I totally don't care if it is or not because a given node not running a subroutine costs nothing. It's trivial; don't complicate it.
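The "it's just code" point can be sketched concretely. This is only an illustration, in Python rather than the Passport/Node stack the comment mentions, and the secret and token format are invented for the example: issuing and verifying a signed session cookie is a handful of lines, not a service.

```python
import base64
import hashlib
import hmac
import json

SECRET = b"hypothetical-app-secret"  # assumption: one signing key shared by all app instances

def issue_token(user_id: str) -> str:
    """Sign a user id into an opaque cookie value: payload.signature."""
    payload = base64.urlsafe_b64encode(json.dumps({"uid": user_id}).encode()).decode()
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}.{sig}"

def verify_token(token: str):
    """Return the user id if the signature checks out, else None."""
    payload, _, sig = token.rpartition(".")
    if not payload:
        return None
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return None
    return json.loads(base64.urlsafe_b64decode(payload))["uid"]
```

Because every instance of the monolith shares the signing key, any node can verify any user's cookie, which is exactly why the code "may or may not be hit on any given server" without anyone caring.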


> If I design in a "microservice first," or just a service oriented design -- I find that there is much more clarity in system design.

Why is this though? There is nothing stopping you from thinking about architecture in a monolith, or deploying a monolithic code base to different server classes to optimize workloads.

Where microservices really come into their own is when you want to scale and decouple your engineering teams. At that point, the effort of defining and maintaining "public" interfaces between services pays dividends by providing a de facto specification that serves as a talking point between teams who literally do not have to know the inside of each other's black box. If everyone has to know the internals of multiple microservices anyway, why are you paying for that overhead instead of using an internal method call, with all the benefits and assurances a language can give you in a single process, rather than the pale imitation you get through RPC?
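The "public interface as a talking point" idea can be sketched with a typed contract. All names here are hypothetical; the point is that two teams agree on the interface, while the implementation behind it can be an in-process method call today and an RPC client later, without the caller changing.

```python
from typing import Protocol

class BillingService(Protocol):
    """Hypothetical contract two teams agree on -- the 'public' interface."""
    def invoice_total(self, account_id: str) -> int: ...

class InProcessBilling:
    """Monolith version: a plain method call, type-checked by the language."""
    def __init__(self, ledgers: dict[str, list[int]]):
        self.ledgers = ledgers

    def invoice_total(self, account_id: str) -> int:
        return sum(self.ledgers.get(account_id, []))

def monthly_report(billing: BillingService, account_id: str) -> str:
    # The caller depends only on the contract, not on the transport behind it.
    return f"{account_id}: {billing.invoice_total(account_id)}"
```

An HTTP-backed implementation of the same Protocol could be swapped in later; until the team boundary actually exists, the in-process version gives the same interface with none of the network overhead.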

I'll concede that it depends a lot on the problem domain. Perhaps the service boundaries are obvious, the interfaces stable, and so you can easily reap the benefits without a lot of refactoring. Okay, that's a possibility. But in most cases I have to agree with Martin Fowler that when you embark on a new project you just don't know enough about the requirements to make that call. Unless you've already built the thing you're about to build, I think you very rarely will have the prescience to design the service boundaries correctly on the first go.


> I find that the microservice first approach makes me consider future optimizations, such as caching policies, whereas, in a monolith, I would proceed in a naive, "I'll figure it out later" approach.

This goes to his point: the time not spent considering these points is time gained in delivering an MVP to market.

> "Hey, we just built this great MVP for you. It probably won't handle significant load, so we're going to go off in a corner and make it do that now. Oh yeah, we won't have time to develop new features because we'll be too busy migrating tests and writing the ones we didn't write in the beginning."

You don't need that level of scaling potential in an MVP, and you get to have something out -- in time. Whereas if you had started with a microservices approach, you would still be off in that corner trying to get something usable out. Now you have a product in the hands of users, and your refactoring can take that feedback into account.


> This goes to his point: the time not spent considering these points is time gained in delivering an MVP to market.

In undiluted markets where you're entering "new territory" -- sure. Get a product into users' hands and let it evolve how it should. There are other scenarios where a company already has a user base and you have to be able to handle that load within your MVP.

Referring to what I said in my original comment, there are always trade-offs, and monoliths have their place. If you have the luxury of time and money, I would say that a monolith is less than ideal.

> You don't need that level of scaling potential in an MVP, and you get to have something out -- in time. While if you had started with a microservices approach you would still be off in that corner trying to get something usable out. Now you have a product in the hands of users, and your refactoring can consider that feedback.

Don't disagree with this at all. Context of your market probably dictates whether you'll be in a corner or still figuring out service boundaries.


> significant load

Please define significant load.


Sometimes it's not about load, but speed of innovation. A huge, complex monolithic codebase might not have a lot of load, but it can still limit a team's ability to experiment with new features because it has become a big ball of mud. Decomposing areas into services might enable faster innovation than refactoring the whole monolith.


That seems orthogonal to me: is adding a network boundary really the only way to enforce basic software engineering practices? It seems just as likely that the same organizational issues would lead to e.g. learning that your data model is wrong and part of the app now needs to dispatch thousands of queries, and fixing this is harder than refactoring a couple parts of the same codebase.

(Note: I'm not saying microservices are bad – I just think that the process which led to that ball of mud will unfold similarly with a different methodology)
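The "dispatch thousands of queries" failure mode is the classic N+1 pattern, and it survives any deployment topology: a wrong data model costs you one round trip per item whether the caller is a method or an HTTP handler. A small sketch with an invented fake database that counts round trips:

```python
class FakeDB:
    """Stand-in database (illustration only) that counts round trips."""
    def __init__(self, rows: dict):
        self.rows = rows
        self.round_trips = 0

    def query_one(self, key):
        self.round_trips += 1
        return self.rows[key]

    def query_many(self, keys):
        self.round_trips += 1
        return {k: self.rows[k] for k in keys}

def fetch_plans_naive(child_ids, db):
    # N+1 pattern: one round trip per child. Putting a network
    # boundary in front of this loop makes it worse, not better.
    return {cid: db.query_one(cid) for cid in child_ids}

def fetch_plans_batched(child_ids, db):
    # One batched round trip for the whole set.
    return db.query_many(child_ids)
```

Fixing this means changing the query shape (or the data model), which is a refactor inside one codebase -- the kind of fix that gets harder, not easier, once the loop and the data live on opposite sides of a service boundary.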


30+ requests/sec

Some would consider it light, but I work alongside a company that is currently struggling to get beyond that mark.


So it would probably surprise you to hear that companies I've worked for in the past have built monolithic applications serving pages that were per-user dynamic that can handle upwards of 20,000 requests per second?

This is why I hate this subject. People use terms and don't define them. If you think microservices is the only way to scale past 30 requests per second you're extremely wrong.


I need to give you an internet fistbump for this. I mean...you know what you can even do instead of The Holy Microservice? You can take parts of your API, facade them behind a different load balancer, and call into different instances of your monolith that only handle user management or billing or whatever. Un-run code's cost is, in the general case, basically zero--act like it. You don't need to do something like this, of course, unless your system has very spiky/expensive calls that have to be intelligently routed to systems that have capacity, but heck, you've just insulated yourself from load spikes and can granularly scale.
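The facade idea can be sketched as path-prefix routing, with pool names and ports invented for the example: every pool runs the same monolith binary, but a prefix such as /billing is pinned to instances reserved for expensive calls, so spikes there can't starve the rest.

```python
# Hypothetical pools; in practice these would be load-balancer upstreams,
# each running an identical copy of the monolith.
POOLS = {
    "/billing": ["billing-1:8080", "billing-2:8080"],
    "/users":   ["users-1:8080"],
}
DEFAULT_POOL = ["general-1:8080", "general-2:8080"]

def pick_backend(path: str, request_id: int) -> str:
    """Route by path prefix, round-robin within the chosen pool."""
    for prefix, pool in POOLS.items():
        if path.startswith(prefix):
            return pool[request_id % len(pool)]
    return DEFAULT_POOL[request_id % len(DEFAULT_POOL)]
```

Scaling the /billing pool independently is then just adding entries to one list -- granular capacity without splitting the codebase.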

If you need to. You probably don't, and you definitely don't if you're struggling with 30 reqs/second. This isn't just YAGNI. This is YAHYBDI. You Are Hurting Yourself By Doing It. You need to write better code and examine the assumptions that have created the mess you're dealing with.


That sounds like SOA -- the forgotten intermediate step between a monolith and a microservice architecture.


Not sure why the contempt toward an opinion.

I get that microservices are trite, and most people think they need them long before necessary; however, they have uses beyond scale.

I'm sure that there are ways to mitigate every point I can make within monoliths.

My points are just opinions.


It's not contempt, it's frustration because you're making assertions that are not backed up by reality.

This is what I do for a living, and I am regularly but-but-microserviced by people who are equally ignorant of competent application design and who think that breaking it into HTTP-intermediated chunks will solve that they are choosing to write bad code. That segmentation doesn't--it does nothing. It isn't YAGNI, but YAHYBDI, and I'll get hot under the T-shirt occasionally 'cause people who read discussions like this will get the wrong idea and stick their hands into the saw, too.

I could get paid more by letting people mangle themselves, but it'd be mean.


Their main use is scale, but not tech scale. Scale of engineering teams.

There's the occasional case where a single language won't fit the bill, but there's a big difference between 2 services and 200.



