If anything, my sin has been forgetting, when advocating for it, how much engineering went into supporting the monorepo at Google and the duo-repo at Facebook.
https://docs.djangoproject.com/en/5.1/ref/applications/
In a nutshell, a Django project is composed of 'apps', and you can 'install' multiple apps together. Each app can come with its own database tables and migrations, but they all live under the same gunicorn process, on the same infra, within the same codebase. Many Django plugins are set up as an 'app'.
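For concreteness, a minimal sketch of that layout (the app names "billing" and "inventory" are made up, and the two snippets live in separate files):

    # settings.py: each module is an installed "app", all served by the
    # same gunicorn process. "billing" and "inventory" are hypothetical.
    INSTALLED_APPS = [
        "django.contrib.admin",
        "django.contrib.auth",
        "django.contrib.contenttypes",
        "billing",    # owns its own models.py and migrations/
        "inventory",  # installed right next to billing, same codebase
    ]

    # billing/apps.py: each app declares itself with an AppConfig.
    from django.apps import AppConfig

    class BillingConfig(AppConfig):
        default_auto_field = "django.db.models.BigAutoField"
        name = "billing"

Third-party plugins slot into the same INSTALLED_APPS list, which is why so many of them are packaged as apps.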
The end result, in the context of a monolith, reads more like domain-driven design with a service-oriented approach, and for most people working in a monolithic service, the amount of abstraction you have to layer in to make that make sense is liable to cause more trouble than it's worth. For a small, two-pizza team it's probably going to be overkill, with more time spent managing the abstraction than shipping functionality that is easy to remove.
If you're going to pull in something like Bazel or even an epic Makefile, and the end result is that you are publishing multiple build artifacts as part of your deploy, it's not really a monolith any more, it's just a monorepo. Nothing wrong with that either; certainly a lot easier to work with compared to bouncing around multiple separate repos.
Fundamentally I think that you're just choosing if you want a wide codebase or a deep one. If somehow you end up with both at the same time then you end up with experiences similar to OP.
I don't know the answer to this; it's just a problem I'm fighting.
I like to think of this in terms of cohesion and coupling rather than the SOLID principles. It's much easier to reason about, and it boils down to the same kinds of outcomes.
You don't want a module to have a lot of dependencies on other modules (tight coupling), and you don't want one module to do too many things (lack of cohesion). And circular dependencies between modules are generally a bad idea (and sadly quite common in a lot of code bases).
You can trivially break dependency cycles by introducing new modules. This is both good and bad. As soon as you have two modules, you will soon find reasons to have three, four, etc. This seems to be true with any kind of module technology. Modules lead to more modules.
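A tiny Python sketch of that dynamic, with made-up module names (orders, customers, models):

    # Before: orders.py imports customers.py and customers.py imports
    # orders.py, i.e. a dependency cycle between the two modules.
    #
    # After: extract the shared types into a third module both depend on.

    # models.py: the new module, which depends on nothing else.
    from dataclasses import dataclass

    @dataclass
    class Customer:
        id: int
        name: str

    @dataclass
    class Order:
        id: int
        customer_id: int

    # orders.py and customers.py now both import models.py; the cycle is
    # gone, but we went from two modules to three.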
That's good when modules are cheap and easy. E.g. most compilers can handle inlining, so things like functions don't have a high cost. Small functions, classes, etc. are easy to test and easy to reason about. Being able to isolate modules from everything else is a nice property, and if you stick to the SOLID principles, you get to have that.
But lots of modules is a problem with microservices, because a microservice is an expensive kind of module relative to the alternatives. Having a lot of them isn't necessarily a great idea: you get overhead in the form of build scripts, separate deployments, network traffic, etc. That means increased cost, performance issues, increased complexity, long build times, and so on.
Add circular dependencies to the mix and you get extra headaches from that as well (which one do you deploy first?). Things like GraphQL (a.k.a. doing database joins outside the database) make this worse (coupling). And of course many companies confuse their org chart with their internal architecture and run into all sorts of issues when those no longer align. If you have one team per service, that's probably going to be an issue; it's called Conway's law. If you have more services than teams, you are over-engineering. If you struggle to have teams collaborate on a large code base, you definitely have modularization issues. Microservices aren't the solution.
"Incremental build systems
Incremental testing frameworks
Branch management tooling
Code isolation enforcement
Database isolation enforcement"
This sounds a lot like microservices, most of all the last point. Is the only difference that you don't use RPCs?
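One difference in practice is that the enforcement can live in CI rather than behind a network boundary. A hypothetical sketch (the layout and the allowed-dependency map are made up; a real setup might reach for a tool like import-linter instead):

    # check_imports.py: a minimal sketch of "code isolation enforcement" in
    # a single codebase: fail CI when one top-level module imports another
    # module it isn't allowed to depend on.
    import ast
    import pathlib
    import sys

    ALLOWED = {
        "billing": {"shared"},    # billing may only depend on shared
        "inventory": {"shared"},
        "shared": set(),
    }

    violations = []
    for path in pathlib.Path(".").rglob("*.py"):
        owner = path.parts[0]
        if owner not in ALLOWED:
            continue
        for node in ast.walk(ast.parse(path.read_text(), filename=str(path))):
            if isinstance(node, ast.Import):
                imported = [alias.name for alias in node.names]
            elif isinstance(node, ast.ImportFrom):
                imported = [node.module or ""]
            else:
                continue
            for name in imported:
                top = name.split(".")[0]
                if top in ALLOWED and top != owner and top not in ALLOWED[owner]:
                    violations.append(f"{path}: {owner} must not import {top}")

    if violations:
        print("\n".join(violations))
        sys.exit(1)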
It inherits culture from OOP: that abstraction leaked into repositories, then into packages, and it's all being roughly patched together into meaningless buzzwords.
It's no surprise no one understands all of this. I see the React folks trying to come up with a chemical analogy (atoms, molecules and so on), and the functional guys borrowed from a pretty solid mathematical frame of mind.
What is the OOP point of view missing here? Maybe it was a doomed analogy from the beginning. Let's not go into biology though; that can't do any good.
Spare parts, connectors, moving parts versus passive mechanisms, subsystems. Hard separation and soft separation. It's all about that when doing component stuff. And it has all been figured out; we just keep messing up how we frame it for no reason.
a) you can use different languages
b) you can run different parts of your system on different servers
I feel like you can solve both without giving up the niceties of a monolith just with a good RPC framework. A really good one would even give you the flexibility to run "microservices" as separate local threads for easy development.
I've never seen anyone actually do that though.
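For what it's worth, the shape of that idea is easy to sketch; whether any RPC framework makes it ergonomic is another question. Everything below (InventoryService, LocalInventory, place_order) is a made-up illustration:

    # Callers depend on an interface; wiring decides whether calls stay
    # in-process (or on a local thread) or go over the network.
    from abc import ABC, abstractmethod
    from concurrent.futures import ThreadPoolExecutor

    class InventoryService(ABC):
        @abstractmethod
        def reserve(self, sku: str, qty: int) -> bool: ...

    class LocalInventory(InventoryService):
        """In-process implementation, handy for development and tests."""
        def __init__(self):
            self.stock = {"widget": 5}
        def reserve(self, sku: str, qty: int) -> bool:
            if self.stock.get(sku, 0) >= qty:
                self.stock[sku] -= qty
                return True
            return False

    # class RpcInventory(InventoryService):
    #     ...same interface, but each method issues a network call...

    def place_order(inventory: InventoryService, sku: str, qty: int) -> str:
        return "ok" if inventory.reserve(sku, qty) else "out of stock"

    if __name__ == "__main__":
        svc = LocalInventory()
        # The "separate local thread" flavor: same object, dispatched off-thread.
        with ThreadPoolExecutor(max_workers=1) as pool:
            print(pool.submit(place_order, svc, "widget", 2).result())  # ok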
Aside from some websites and small scripts, all software is written like that.
You simply create a hierarchical directory structure where the directories correspond to modules and submodules and try to make sure that the code is well split and public interfaces are minimal.
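As a sketch, with made-up module names, that tends to look like this, where each __init__.py is the module's deliberately narrow public interface:

    # Layout (hypothetical):
    #
    #   app/
    #     billing/
    #       __init__.py     <- the module's public interface
    #       invoices.py     <- internal
    #       tax/            <- submodule
    #         __init__.py
    #         rates.py
    #     inventory/
    #       __init__.py
    #       ...
    #
    # billing/__init__.py re-exports only what other modules may use:
    from .invoices import create_invoice

    __all__ = ["create_invoice"]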
I.e. quoting from the post:
- Monolithic databases need to be broken up
- Tables must be grouped by module and isolated from other modules
- Tables must then be migrated to separate schemas
- I am not aware of any tools that help detect such boundaries
Exactly.
For as much press as "modular monoliths" have gotten, breaking up a large codebase is cool/fine/whatever; breaking up a large domain model is, imo, the "killer app" of modular monoliths, and what we're missing (basically the Rails of modular monoliths).
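To make the table-grouping point above concrete, here is one hedged sketch of what it can look like in Django terms: a database router that pins each app's tables to its own database alias, so reads, writes, relations, and migrations stay inside the module boundary. The app names and aliases are made up, and this does nothing to help discover the boundaries in the first place:

    # settings.py would point DATABASE_ROUTERS at this class.
    APP_TO_DB = {
        "billing": "billing_db",
        "inventory": "inventory_db",
    }

    class ModuleIsolationRouter:
        def db_for_read(self, model, **hints):
            return APP_TO_DB.get(model._meta.app_label, "default")

        def db_for_write(self, model, **hints):
            return APP_TO_DB.get(model._meta.app_label, "default")

        def allow_relation(self, obj1, obj2, **hints):
            # Strict: no foreign keys or joins across module boundaries.
            return obj1._meta.app_label == obj2._meta.app_label

        def allow_migrate(self, db, app_label, model_name=None, **hints):
            return APP_TO_DB.get(app_label, "default") == db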
There is no one general "good engineering". Everything is different. Labels suck because even if you called one thing "microservices", or even a "monolith of microservices", I can show you 10 different ways it can end up. So "modular monolith" is just as useless a descriptor; it's too vague.
Outside of the HN echo chamber, good engineering practice has been happening for decades. Take open source for example. Many different projects exist with many different designs. The common thread is that if a project creates some valuable functionality, they tend to expose it both at the application layer and library layer. They know some external app will want to integrate with it, but also they know somebody might want to extend the core functionality.
I personally haven't seen that method used at corporations. If there are libraries, they're almost always completely independent from an application. And because of that, they then become shared across many applications. And then they suddenly discover the thing open source has been dealing with for decades: dependencies.
If you aren't aware, there is an entire universe out there of people working solely on managing dependencies so that you, a developer or user, can "just" install software into your computer and have it magically work. It is fucking hard and complicated and necessary. If you've never done packaging for a distro or a language (and I mean 250+ hours of it), you won't understand how much work it is or how it will affect your own projects.
So yes, there are modular monoliths, and unmodular monoliths, and microservices, and libraries, and a whole lot of varied designs and use cases. Don't just learn about these by reading trendy blog posts on HN. Go find some open source code and examine it. Package some annoying-ass complex software. Patch a bug and release an update. These are practical lessons you can take with you when you design for a corporation.
What's worse: Premature scalability.
I joined one project that failed because the developers spent so much time on scalability, without realizing that some basic optimization of their ORM usage would have let a single instance handle any predictable load.
Now I'm wrangling a product that has premature scalability. It was designed with a lot of loosely coupled services and high degrees of flexibility, but it's impossible to understand and maintain with a small team. A lot of "cleanup" often results in merging modules or cutting out abstraction.