gwbas1c
> In practice microservices can be just as tough to wrangle as monoliths.

What's worse: Premature scalability.

I joined one project that failed because the developers spent so much time on scalability, without realizing that some basic optimization of their ORM would have been enough for a single instance to handle any predictable load.
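As a concrete example of the kind of ORM fix I mean (the table and function names below are invented for illustration), the classic N+1 pattern turns one logical fetch into a round trip per row:

```python
# Simulated query log so the round-trip counts are visible.
QUERY_LOG = []

def run_query(sql):
    QUERY_LOG.append(sql)
    return []  # results elided; we only care about query counts

def load_report_naive(order_ids):
    # One query per order: N round trips, on top of the initial fetch.
    for oid in order_ids:
        run_query(f"SELECT * FROM line_items WHERE order_id = {oid}")

def load_report_batched(order_ids):
    # A single query with an IN clause: constant round trips.
    ids = ", ".join(str(i) for i in order_ids)
    run_query(f"SELECT * FROM line_items WHERE order_id IN ({ids})")

load_report_naive(range(100))
naive_count = len(QUERY_LOG)   # 100 queries
QUERY_LOG.clear()
load_report_batched(range(100))
batched_count = len(QUERY_LOG)  # 1 query
```

Most ORMs have a switch for exactly this (e.g. eager-loading related rows), which is often the difference between "we need to shard" and "one instance is fine."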

Now I'm wrangling a product that has premature scalability. It was designed with a lot of loosely coupled services and high degrees of flexibility, but it's impossible to understand and maintain with a small team. Most of the "cleanup" ends up merging modules or cutting out abstraction.

mattnewton
As an ex-FAANG engineer myself, I have never advocated for more services; I've usually pushed for unifying repos and building multiple targets from the same codebase. I am forever chasing the zen of google3 the way I remember it.

If anything my sin has been forgetting how much engineering went into supporting the monorepo at Google and duo-repo at Facebook when advocating for it.

andy_ppp
Elixir + Phoenix is so great at this with contexts and eventually umbrella apps. So easy to make things into apps that receive messages and services with a structure. I’m amazed it isn’t more popular really given it’s great at everything from runtime analysis to RPC/message passing to things like sockets/channels/presence and Live View.
mushufasa
Would Django's concept of an 'app' fit your definition of modular monoliths?

https://docs.djangoproject.com/en/5.1/ref/applications/

In a nutshell, a Django project is composed of 'apps', and you can 'install' multiple apps together. They can come with their own database tables + migrations, but all live under the same gunicorn and on the same infra, within the same codebase. Many Django plugins are set up as an 'app'.
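Roughly, the shape is something like this (app names invented for illustration; this is a settings fragment, not a complete project):

```python
# Sketch of one Django project containing two in-house apps:
#
#   myproject/
#     billing/    <- app: models.py, migrations/, views.py
#     catalog/    <- app: models.py, migrations/, views.py
#     settings.py
#
# settings.py wires them together. Each app owns its own models and
# migrations, but everything deploys as one process under gunicorn.
INSTALLED_APPS = [
    "django.contrib.contenttypes",
    "django.contrib.auth",
    "billing",
    "catalog",
]
```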

ljm
I can't help but feel like the author has taken some fairly specific experiences with microservice architecture and drawn a set of conclusions that still results in microservices, but in a monorepo. There's nothing about microservices that suggests you have to go to the trouble of setting up K8s, service meshes, individual databases per service, RPC frameworks, and so on. It's all cargo culting and all this...infra... simply lines the pockets of your cloud provider of choice.

The end result, in the context of a monolith, reads more like domain-driven design with a service-oriented approach, and for most people working in a monolithic service, the amount of abstraction you have to layer in to make that make sense is liable to cause more trouble than it's worth. For a small, two-pizza team it's probably going to be overkill, with more time spent managing the abstraction than shipping functionality that is easy to remove.

If you're going to pull in something like Bazel or even an epic Makefile, and the end result is that you are publishing multiple build artifacts as part of your deploy, it's not really a monolith any more, it's just a monorepo. Nothing wrong with that either; certainly a lot easier to work with compared to bouncing around multiple separate repos.

Fundamentally I think that you're just choosing if you want a wide codebase or a deep one. If somehow you end up with both at the same time then you end up with experiences similar to OP.

bluGill
The thing microservices give you is an enforced API boundary. OOP classes tried to do that with public/private but fail, because something that is public within this module should still be private outside it. I've written many classes thinking they were for my module only, and then someone discovered and abused them elsewhere. Now their code is tightly coupled to mine in a place I never intended to be coupled.

I don't know the answer to this; it's just a problem I'm fighting.
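One partial mitigation (a sketch only, with invented module names — real projects might reach for a tool like import-linter instead): lint the codebase for imports that reach across a boundary into another package's underscore-prefixed internals.

```python
import ast

def find_boundary_violations(source, own_package):
    """Flag imports of _internal submodules from outside their package."""
    violations = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            # For `from x.y import z` check the module path; for
            # `import x.y` check each imported name.
            if isinstance(node, ast.ImportFrom) and node.module:
                names = [node.module]
            else:
                names = [alias.name for alias in node.names]
            for name in names:
                parts = name.split(".")
                crosses_boundary = not name.startswith(own_package)
                touches_internal = any(p.startswith("_") for p in parts)
                if crosses_boundary and touches_internal:
                    violations.append(name)
    return violations

# A file in `billing` quietly importing another module's private helper:
bad = "from orders._pricing import discount"
print(find_boundary_violations(bad, own_package="billing"))
```

It doesn't give you the hard network boundary of a service, but it does turn "someone found my private class" into a CI failure instead of a surprise.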

jillesvangurp
Modules are almost as old as compiler technology. A good module structure is a time proven way to deal with growing code bases. If you know your SOLID principles, they apply to most module systems at any granularity. It doesn't matter if they are C header files, functions, Java classes or packages, libraries, python modules, micro services, etc.

I like to think of this in terms of cohesion and coupling rather than the SOLID principles. It's much easier to reason about, and it boils down to the same kind of outcomes.

You don't want a module to have a lot of dependencies on other modules (tight coupling), and you don't want one module doing too many things (low cohesion). And circular dependencies between modules are generally a bad idea (and sadly quite common in a lot of code bases).

You can trivially break dependency cycles by introducing new modules. This is both good and bad. As soon as you have two modules, you will soon find reasons to have three, four, etc. This seems to be true with any kind of module technology. Modules lead to more modules.
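For instance (module and type names invented), a cycle where an orders module imports pricing and pricing imports the Order type back from orders can be broken by extracting the shared type into a third module that both depend on:

```python
# model.py -- the shared piece extracted out; both modules depend on it
from dataclasses import dataclass

@dataclass
class Order:
    quantity: int
    unit_price: float

# pricing.py -- depends only on model.py, no longer on orders.py
def total(order: Order) -> float:
    return order.quantity * order.unit_price

# orders.py -- depends on model.py and pricing.py; the graph is now a DAG
def checkout(order: Order) -> float:
    return total(order)

assert checkout(Order(quantity=3, unit_price=2.5)) == 7.5
```

Two modules became three, which is exactly the "modules lead to more modules" dynamic — harmless here, costly when each module is a deployed service.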

That's good when modules are cheap and easy. E.g. most compilers can inline, so small things like functions don't carry a high cost. Small functions, classes, etc. are easy to test and easy to reason about. Being able to isolate modules from everything else is a nice property, and if you stick to the SOLID principles, you get to have it.

But lots of modules becomes a problem with microservices, because a microservice is an expensive kind of module relative to the alternatives. Having a lot of them isn't necessarily a great idea: you get overhead in the form of build scripts, separate deployments, network traffic, etc. That means increased cost, performance issues, increased complexity, and long build times.

Add circular dependencies to the mix and you get extra headaches from that as well (which one do you deploy first?). Things like GraphQL (a.k.a. doing database joins outside the database) make this worse (coupling). And of course many companies confuse their org chart with their internal architecture and run into all sorts of issues when those no longer align; that's Conway's law. If you have one team per service, that's probably going to be an issue. If you have more services than teams, you are over-engineering. If you struggle to have teams collaborate on a large code base, you definitely have modularization issues, and microservices aren't the solution.

paperplatter
"To get similar characteristics from a monolith, developers need:

    Incremental build systems

    Incremental testing frameworks

    Branch management tooling

    Code isolation enforcement

    Database isolation enforcement"
This sounds a lot like microservices, most of all the last point. Is the only difference that you don't use RPCs?
alganet
A good analogy is lacking though. "Modular Monolith" sounds like a contradiction. It doesn't help the idea.

It inherits culture from OOP: that abstraction leaked into repositories, then leaked into packages, and now it's all being roughly patched together into meaningless buzzwords.

It's no surprise no one understands all of this. I see the react folks trying to come up with a chemical analogy (atoms, molecules and so on), and the functional guys borrowed from a pretty solid mathematical frame of mind.

What is the OOP point of view missing here? Maybe it was a doomed analogy from the beginning. Let's not go into biology though, that can't do any good.

Spare parts, connectors, moving parts versus passive mechanisms, subsystems. Hard separation and soft separation. It's all about that when doing component stuff. And it has all been figured out; we just keep messing up how we frame it for no reason.

IshKebab
To me it seems like the main advantages of microservices are

a) you can use different languages

b) you can run different parts of your system on different servers

I feel like you can solve both without giving up the niceties of a monolith just with a good RPC framework. A really good one would even give you the flexibility to run "microservices" as separate local threads for easy development.

I've never seen anyone actually do that though.
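A sketch of what that could look like (all class and method names here are invented, and the remote transport is hand-waved): callers code against a service interface, and wiring decides whether the implementation is an in-process object/thread or a stub over a real RPC client.

```python
from abc import ABC, abstractmethod

class InventoryService(ABC):
    @abstractmethod
    def stock(self, sku: str) -> int: ...

class LocalInventory(InventoryService):
    """In-process implementation: ideal for local dev and tests."""
    def __init__(self, levels):
        self._levels = levels
    def stock(self, sku):
        return self._levels.get(sku, 0)

class RemoteInventory(InventoryService):
    """Would wrap a real RPC client (gRPC, Thrift, ...) in production."""
    def __init__(self, channel):
        self._channel = channel
    def stock(self, sku):
        return self._channel.call("Inventory.stock", sku)

def can_ship(svc: InventoryService, sku: str, qty: int) -> bool:
    # Caller code is identical whether svc is local or remote.
    return svc.stock(sku) >= qty

assert can_ship(LocalInventory({"widget": 5}), "widget", 3)
```

The hard parts a real framework would have to handle — serialization, partial failure, timeouts — are exactly what the in-process version lets you ignore during development.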

devit
Well, that's just the normal way to write software, no?

Aside from some websites and small scripts, all software is written like that.

You simply create a hierarchical directory structure where the directories correspond to modules and submodules and try to make sure that the code is well split and public interfaces are minimal.
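Concretely, something like this layout (directory and function names invented for illustration):

```python
# payments/
#   __init__.py   # the public interface, kept deliberately small
#   _gateway.py   # internal: talks to the card processor
#   _retry.py     # internal: backoff logic
# catalog/
#   __init__.py   # public interface of the catalog module
#   ...
#
# payments/__init__.py re-exports only what other modules may use:
#   from ._gateway import charge
#   __all__ = ["charge"]
```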

eichi
None of it matters unless the whole team welcomes the idea, works toward better productivity, and enhances the architecture iteratively. Culture and talent matter.
stephen
I mean, of course they are a good idea, what we need is more examples of actually doing them in practice. :-)

I.e. quoting from the post:

- monolithic databases need to be broken up
- Tables must be grouped by module and isolated from other modules
- Tables must then be migrated to separate schemas
- I am not aware of any tools that help detect such boundaries

Exactly.

For as much press as "modular monoliths" have gotten, breaking up a large codebase is cool/fine/whatever--breaking up a large domain model is imo the "killer app" of modular monoliths, and what we're missing (basically the Rails of modular monoliths).

zem
modular monolith + monorepo, so you get the benefit of continuous integration and automated code maintenance across the codebase.
throwaway984393
I'm nearing greybeard status, so I have to chime in on the "get off my lawn" aspect.

There is no one general "good engineering". Everything is different. Labels suck because even if you called one thing "microservices", or even "monolith of microservices", I can show you 10 different ways that can end up. So "modular monolith" is just as useless a descriptor; it's too vague.

Outside of the HN echo chamber, good engineering practice has been happening for decades. Take open source for example. Many different projects exist with many different designs. The common thread is that if a project creates some valuable functionality, they tend to expose it both at the application layer and library layer. They know some external app will want to integrate with it, but also they know somebody might want to extend the core functionality.

I personally haven't seen that method used at corporations. If there are libraries, they're almost always completely independent from any application. And because of that, they then become shared across many applications. And then people suddenly discover the thing open source has been dealing with for decades: dependency management.

If you aren't aware, there is an entire universe out there of people working solely on managing dependencies so that you, a developer or user, can "just" install software into your computer and have it magically work. It is fucking hard and complicated and necessary. If you've never done packaging for a distro or a language (and I mean 250+ hours of it), you won't understand how much work it is or how it will affect your own projects.

So yes, there are modular monoliths, and unmodular monoliths, and microservices, and libraries, and a whole lot of varied designs and use cases. Don't just learn about these by reading trendy blog posts on HN. Go find some open source code and examine it. Package some annoying-ass complex software. Patch a bug and release an update. These are practical lessons you can take with you when you design for a corporation.
