hn_throwaway_99
I like a lot of the answers, but something else I'd add: lots of "popular" architectures from the late 00s and early 2010s have fallen by the wayside because people realized "You're not Google. Your company will never be Google."

That is, there was a big desire around that time period to "build it how the big successful companies built it." But since then, a lot of us have realized that complexity isn't necessary for 99% of companies. When you couple that with hardware and standard databases getting much better, there are just fewer and fewer companies who need all of these "scalability tricks".

My bar for "Is there a reason we can't just do this all in Postgres?" is much, much higher than it was a decade ago.

bigiain
My perhaps overly cynical view is that Message Queue architecture and blogging was all about "Resume Driven Development" - where almost everybody doing it was unlikely to ever need to scale past what a simple monolith could support running on a single laptop. These were the same people who were building nightmare microservice disasters requiring tens of thousands of dollars a month of AWS services.

These days all those people who prioritise career building technical feats over solving actual business problems in pragmatic ways - they're all hyping and blogging about AI, with similar results for the companies they (allegedly) are working for: https://www.theregister.com/2024/06/12/survey_ai_projects/

tuckerconnelly
I can offer one data point. This is from purely startup-based experience (seed to Series A).

A while ago I moved from microservices to monolith because they were too complicated and had a lot of duplicated code. Without microservices there's less need for a message queue.

For async stuff, I used RabbitMQ for one project, but it just felt...old and over-architected? And a lot of the tooling around it (celery) just wasn't as good as the modern stuff built around redis (bullmq).

For multi-step, DAG-style processes, I prefer to KISS and just do that all in a single, large job if I can, or break it into a small number of jobs.

If I REALLY needed a DAG thing, there are tools out there that are specifically built for that (Airflow). But I hear they're difficult to debug issues in, so I would avoid them if at all possible.

I have run into scaling issues with redis, because their multi-node architectures are just ridiculously over-complicated, and so I stick with single-node. But sharding by hand is fine for me, and works well.
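
For what it's worth, hand-sharding can stay this small. A minimal sketch (node hostnames and the key scheme are made up; assumes the redis-py client): route each queue name to one single-node instance by a stable hash.

    import hashlib
    import redis  # assumes the redis-py client

    # Hypothetical pool of independent single-node Redis instances.
    NODES = [
        redis.Redis(host="redis-0.internal"),
        redis.Redis(host="redis-1.internal"),
        redis.Redis(host="redis-2.internal"),
    ]

    def node_for(key: str) -> redis.Redis:
        # Stable hash -> node index; adding or removing a node means re-sharding.
        digest = hashlib.sha1(key.encode()).digest()
        return NODES[int.from_bytes(digest[:4], "big") % len(NODES)]

    def enqueue(queue: str, payload: str) -> None:
        node_for(queue).rpush(queue, payload)

    def dequeue(queue: str, timeout: int = 5):
        # BLPOP blocks until a job arrives or the timeout expires.
        item = node_for(queue).blpop(queue, timeout=timeout)
        return item[1] if item else None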

democracy
I think this: "* The technology just got mature enough that it's not exciting to write about, but it's still really widely used."

Messaging-based architecture is very popular

burutthrow1234
I think "message queues" have become pretty commoditized. You can buy Confluent or RedPanda or MSK as a service and never have to administer Kafka yourself.

Change Data Capture (CDC) has also gotten really good and mainstream. It's relatively easy to write your data to a RDBMS and then capture the change data and propagate it to other systems. This pattern means people aren't writing about Kafka, for instance, because the message queue is just the backbone that the CDC system uses to relay messages.

These architectures definitely still exist and they mostly satisfy organizational constraints - if you have a write-once, read-many queue like Kafka you're exposing an API to other parts of the organization. A lot of companies use this pattern to shuffle data between different teams.

A small team owning a lot of microservices feels like resume-driven development. But in companies with 100+ engineers it makes sense.

busterarm
Going to give the unpopular answer. Queues, Streams and Pub/Sub are poorly understood concepts by most engineers. They don't know when they need them, don't know how to use them properly and choose to use them for the wrong things. I still work with all of the above (SQS/SNS/RabbitMQ/Kafka/Google Pub/Sub).

I work at a company that only hires the best and brightest engineers from the top 3-4 schools in North America and for almost every engineer here this is their first job.

My engineers have done crazy things like:

- Try to queue up tens of thousands of 100mb messages in RabbitMQ instantaneously and wonder why it blows up.

- Send significantly oversized messages in RabbitMQ in general despite all of the warnings saying not to do this

- Start new projects in 2024 on the latest RabbitMQ version and try to use classic queues

- Create quorum queues without replication policies or doing literally anything else to make them HA.

- Expose clusters on the internet with the admin user being guest/guest.

- The most senior architect in the org declared a new architecture pattern, held an organization-wide meeting and demo to extol the new virtues/pattern of ... sticking messages into a queue and then creating a backchannel so that a second consumer could process those queued messages on demand, out of order (and making it no longer a queue). And nobody except me said "why are you putting messages that you need to process out of order into a queue?"...and the 'pattern' caught on!

- Use Kafka as a basic message queue

- Send data from a central datacenter to globally distributed datacenters with a global lock on the object and all operations on it until each target DC confirms it has received the updated object. Insist that this process is asynchronous, because the data was sent with AJAX requests.

As it turns out, people don't really need to do all that great of a job and we still get by. So tools get misused, overused and underused.

In the places where it's being used well, you probably just don't hear about it.

Edit: I forgot to list something significant. There are over 30 microservices in our org for every engineer. Please kill me. I would literally rather Kurt Cobain myself than work at another organization that has thousands of microservices in a gigantic monorepo.

angarg12
Queues are a tool in your distributed system toolbox. When it's suitable it works wonderfully (typical caveats apply).

If your perception is indeed correct, I'd attribute it to your 3rd point. People usually write blogposts about new shiny stuff.

I personally use queues in my design all the time, particularly to transfer data between different systems with higher decoupling. The only pain I have ever experienced was when an upstream system backfilled 7 days of data, which clogged our queues with old requests. Running normally it would have taken over 100 hours to process all the data, while massively increasing the latency of fresh data. The solution was to manually purge the queue, and manually backfill the most recent missing data.

Even if you need to be careful around unbound queue sizes I still believe they are a great tool.

rossdavidh
Message queues have moved past the "Peak of inflated expectations" and the "trough of disillusionment" into the "slope of enlightenment", perhaps even the "plateau of productivity".

https://en.wikipedia.org/wiki/Gartner_hype_cycle

pm90
They have become boring, so there are fewer blog posts about them.

That's good. The documentation for e.g. RabbitMQ is much better and very helpful. People use it as a workhorse just like they use Postgres/MySQL. There's not much surprising behavior to architect around, etc.

I love boring software.

robertclaus
I find it super interesting that the comments calling out "obviously we all still use message queues and workers, we just don't write about them" are buried halfway down the comments section by arguments about Microservices and practical scalability. A junior engineer reading the responses could definitely get the false impression that they shouldn't offload heavy computation from their web servers to workers at all anymore.
vishnugupta
Speaking from my own experience, message queues haven't disappeared as much as they have been abstracted away. For example, "enqueue to SQS + poll" became "invoke a serverless process". There is a message queue in there somewhere, it's just not as exposed.

Or take AWS SNS which IMO is one level of abstraction higher than SQS. It became so feature rich that it can practically replace SQS.

What might have disappeared are those use cases which used queues to handle short bursts of peak traffic?

Also, streaming has become very reliable tech, so a class of use cases that used queues as a streaming pipe has migrated to streaming proper.

ilaksh
I think it's simple: async runtimes/modules in JavaScript/Node, Python (asyncio), and Rust. Those basically handle message queues for you transparently inside of a single application. You end up writing "async" and "await" all over the place, but that's all you need to do to get your MVP out. And it will work fine until you really become popular. And then that can actually still work without external queues etc. if you can scale horizontally such as giving each tenant their own container and subdomain or something.

There are places where you need a queue just for basic synchronization, but you can use modules that are more convenient than external queues. And you can start testing your program without even doing that.
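
To make that concrete, here's roughly what "the queue lives inside the process" looks like with Python's asyncio (a minimal sketch; the worker body is a stand-in for real I/O):

    import asyncio

    async def worker(name: str, jobs: asyncio.Queue) -> None:
        while True:
            job = await jobs.get()
            try:
                await asyncio.sleep(0.1)  # stand-in for an API call or DB write
                print(f"{name} handled job {job}")
            finally:
                jobs.task_done()

    async def main() -> None:
        jobs: asyncio.Queue = asyncio.Queue(maxsize=100)  # bounded: applies backpressure
        workers = [asyncio.create_task(worker(f"w{i}", jobs)) for i in range(3)]
        for i in range(10):
            await jobs.put(i)
        await jobs.join()  # wait until every queued job has been processed
        for w in workers:
            w.cancel()

    asyncio.run(main())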

Actually async is being used a lot with Rust also, which can stretch that out to scale even farther with an individual server.

Without an async runtime or similar, you have to invent an internal async runtime, or use something like queues, because otherwise you are blocked waiting for IO.

You may still eventually end up with queues down the line if you have some large number of users, but that complexity is completely unnecessary for getting a system deployed towards the beginning.

memset
It may be that lambdas (cloud functions, etc) have become more popular and supported on other platforms.

When you enqueue something, you eventually need to dequeue and process it. A lambda just does that in a single call. It also removes the need to run or scale a worker.
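
As a rough sketch of that collapse (the process function is hypothetical), an SQS-triggered Lambda folds the dequeue-and-process loop into one handler:

    import json

    def handler(event, context):
        # With an SQS trigger, Lambda hands the function a batch of messages;
        # there is no separate worker fleet to deploy, run, or scale.
        for record in event.get("Records", []):
            payload = json.loads(record["body"])
            process(payload)  # hypothetical business logic
        # Returning normally lets Lambda delete the batch; raising retries it.

    def process(payload):
        print("processing", payload)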

I think Kafka continues to be popular because it is used as a temporary data store, and there is a large ecosystem around ingesting from streams.

I personally use queues a lot and am building an open source SQS alternative. I wonder if an open source lambda replacement would be useful too. https://github.com/poundifdef/SmoothMQ

jackbauer24
Regarding this issue, I have some observations of my own. I've noticed that systems based on queues, such as Kafka, AMQP, etc., are still very widespread, for example in vehicle networking, transaction systems, and so on. I recently encountered a customer deploying Kafka on AWS, with monthly spend on Kafka-related compute and storage exceeding $1 million. The cluster scale is huge, containing various system events, logs, etc. I've also seen customers building IoT platforms based on Kafka. Kafka has become very central to the IoT platform, and any problems can cause the entire IoT platform to be unavailable.

I personally have written over 80% of the code for Apache RocketMQ, and today I have created a new project, AutoMQ (https://github.com/AutoMQ/automq). At the same time, we also see that competition in this field is very fierce. Redpanda, Confluent, WarpStream, StreamNative, etc., are all projects built on the Kafka ecosystem.

Therefore, the architecture based on message queues has not become obsolete. A large part of the business has transformed into a streaming form. I think streaming and MQ are highly related: streaming leans more towards data flow, while MQ leans more towards individual messages.
mrj
People got excited about it as a pattern, but usually apps don't have that many things that really have to go in the background. And once you do, it becomes really hard to ensure transactional safety across that boundary. Usually that's work you want to do in a request in order to return a timely error to the client. So most jobs these days tend to be background things, pre-caching and moving bits around on cdns. But every single one of those comes with a cost and most of us don't really want a mess of background jobs or distributed tasks.

I just added a RabbitMQ-based worker to replace some jobs that Temporal.io was bad at (previous devs threw everything at it, but it's not really suited to high throughput things like email). I'd bet that Temporal took a chunk of the new greenfield apps mindshare though.

liampulles
"The technology just got mature enough that it's not exciting to write about, but it's still really widely used."

My money is on this. I think the simple usecase of async communication, with simple pub/sub messaging, is hugely useful and not too hard to use.

We (as a Dev community) have just gotten over event sourcing, complex networks and building for unnecessary scale. I.e. we're past the hype cycle.

My team uses NATS for async pub/sub and synchronous request/response. It's a command-driven model and we have a huge log table with all the messages we have sent. Schemas and usage of these messages are internal to our team, and messages are discarded from NATS after consumption. We do at-least-once delivery and message handlers are expected to be idempotent.
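
The idempotency half of that is mostly bookkeeping. A minimal sketch (not our actual NATS code; the table and function names are made up): record each message id in the same transaction as its side effects, so a redelivery becomes a no-op instead of a double-apply.

    import sqlite3

    db = sqlite3.connect("handler_state.db")
    db.execute("CREATE TABLE IF NOT EXISTS processed (msg_id TEXT PRIMARY KEY)")

    def apply_command(conn, payload: dict) -> None:
        # stand-in for the real business effect, assumed to be a write to the same DB
        conn.execute("CREATE TABLE IF NOT EXISTS effects (data TEXT)")
        conn.execute("INSERT INTO effects (data) VALUES (?)", (str(payload),))

    def handle(msg_id: str, payload: dict) -> None:
        # At-least-once delivery means duplicates will arrive. Inserting the
        # message id and applying the effect in one transaction makes replays harmless.
        try:
            with db:
                db.execute("INSERT INTO processed (msg_id) VALUES (?)", (msg_id,))
                apply_command(db, payload)
        except sqlite3.IntegrityError:
            pass  # already processed; safe to ack again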

We have had one or two issues with misconfiguration in NATS resulting in message replay or missed messages, but largely it has been very successful. And we were a 3 person dev team.

It's the same thing as Kubernetes in my mind - it works well if you keep to the bare essentials and don't try to be clever.

m1keil
I think it's all of the above.

In large enterprises, there is usually some sort of global message bus on top of Kafka, AWS Kinesis or similar.

In smaller shops, the need for dedicated message bus is over engineering and can be avoided by using the db or something like redis. It is still a message queue, just without a dedicated platform.

ryapric
I think it's very much your last theory -- used everywhere but not as interesting to tell people about as it might have been a decade ago. Queues are now Boring Technology(tm), and that's a good thing.
ehnto
They aren't a general solution and don't really add much to your average application. But there are still instances where they make a lot of sense.

What I would need to see before bothering with a message queue architecture:

* High concurrency, atomic transactions

* Multiple stages of processing of a message required

* Traceability of process actions required

* Event triggers that will actually be used required

* Horizontal scaling actually the right choice

* Message queues can be the core architecture and not an add on to a Frankenstein API

Probably others, and yes you can achieve all of the above without message queues as the core architecture but the above is when I would think "I wonder if this system should be based on async message queues".

grenbys
My company heavily relies on Amazon SQS for background jobs. We use Redis as well, but it is hard to run at scale. Hence, anything critical goes to SQS by default. SQS usage is so ubiquitous I can't imagine anyone being interested in writing a blog post or presenting at a conference about it. Once you get used to SQS specifics (more-than-once delivery, message size limit, client/server tooling, expiration settings, DLQ), I doubt there's anything that can beat it in terms of performance/reliability, unless you have the resources to run Redis/Kafka/etc. yourself. I would recommend searching for talks by Shopify eng folks on their experience, in particular from Kir (e.g. https://kirshatrov.com/posts/state-of-background-jobs)
socketcluster
I never fully understood the need for back end message queues TBH. You can just poll the database or data store every few seconds and process tasks in batches... IMO, the 'real time' aspect was only ever useful for front end use cases for performance reasons since short polling every second with HTTP (with all its headers/overheads) is prohibitively expensive. Also, HTTP long polling introduces some architectural complexity which is not worth it (e.g. sticky sessions are required when you have multiple app servers).

Unfortunately, moving real-time messaging complexity entirely to the back end has been the norm for a very long time. My experience is that, in general, it makes the architecture way more difficult to manage. I've been promoting end-to-end pub/sub as an alternative for over a decade (see https://socketcluster.io/) but, although I've been getting great feedback, this approach has never managed to spread beyond a certain niche. I think it's partly because most devs just don't realize how much complexity is added by micromanaging message queues on the back end and figuring out which message belongs to what client socket instead of letting the clients themselves decide what channels to subscribe to directly from the front end (and the back end only focuses on access control).

I think part of the problem was the separation between front end and back end developer responsibilities. Back end developers like to ignore the front end as much as possible; when it comes to architecture, their thinking rarely extends beyond the API endpoint definition; gains which can be made from better integrating the back end with the front end are 'not their job'. From the perspective of front-end developers, anything performance-related or related to 'architectural simplicity' is 'not their job' either... There weren't enough full stack developers with the required insights to push for integration efficiency/simplicity.

mannyv
Our whole backend is queue-based. If it's asynchronous and you don't need a fast response time, use a queue. It's easy, reliable, and the queue can drive lambdas. Queues also make it easier to collect metrics and performance data.

During heavy load the queue bloats up to a few million messages, then drains off over time. Or it spawns a few hundred lambdas to chow all the messages down...depending on what we want.

karmakaze
I've gone through all of these at different scales. What I find these days is: 1. databases have gotten so good that a separate queue infrastructure isn't worth it, 2. databases provide much better observability (you can query the contents). If you really do want a queue, you're still better off using a stream (e.g. Kafka) for multiple producers and/or consumers. Using a database table as a 'transactional outbox' is a poor-man's Kafka which works well enough without the additional infrastructure for most scales.
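
For anyone who hasn't seen it, the transactional outbox mentioned above is small enough to sketch (table and column names are illustrative; uses psycopg2-style calls):

    import json
    import psycopg2

    conn = psycopg2.connect("dbname=app")  # assumed connection string

    def place_order(order: dict) -> None:
        # The business row and the outbox row commit in one transaction,
        # so an event exists if and only if the order does.
        with conn, conn.cursor() as cur:
            cur.execute("INSERT INTO orders (data) VALUES (%s)", (json.dumps(order),))
            cur.execute(
                "INSERT INTO outbox (topic, payload) VALUES (%s, %s)",
                ("orders.created", json.dumps(order)),
            )

    def relay_once(publish) -> None:
        # A poller (or a CDC tool tailing the WAL) drains the outbox and
        # forwards events to whoever consumes them.
        with conn, conn.cursor() as cur:
            cur.execute(
                "DELETE FROM outbox WHERE id IN ("
                " SELECT id FROM outbox ORDER BY id LIMIT 100 FOR UPDATE SKIP LOCKED"
                ") RETURNING topic, payload"
            )
            for topic, payload in cur.fetchall():
                publish(topic, payload)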

Redis taking some of the duty as you mentioned and microservices/distributed systems being less fashionable likely also factor into it.

SillyUsername
1) Distributed databases do the same job, putting data local. They are slower, however they have less overhead and work involved if you already have a db layer.

2) Serverless, e.g. AWS lambdas can be joined with step functions instead and scale without a queue.

3) People have been burned. Multiple configurations, multiple queues, multiple libraries and languages, multiple backing stores, multiple serialisation standards and bugs - it's just overly complex engineering a distributed system for small to medium business. YAGNI.

4) Simpler architectures, e.g. microservices for better or worse tend to be fat monoliths made more efficient, mostly in my experience because those in charge don't actually understand the pattern, but the side effect is fewer queues compared to a real microservices arch.

O/T I cringe whenever I hear our tech architects discuss our arch as microservices and datamesh. The former because it's not (as above, it's multiple small services), the latter also a problem because datamesh is an antiquated pattern that's better filled with segregated schemas on a distributed database, and scoped access per system to the data each needs, instead of adapter patterns, multiple dbs with slightly different schemas and facades/specialised endpoints all the fucking way down.

KaiserPro
I think it depends but to add some noise to the discussion:

People really abused kafka: https://www.confluent.io/en-gb/blog/publishing-apache-kafka-... like really abused it.

Kafka is hard to use, has lots of rough edges, doesn't scale all that easily, and isn't nice to use as a programmer. but you can make it do lots of stupid shit, like turn it into a database.

People tried to use message queues for synchronous stuff, or things that should be synchronous, and realised that queuing those requests is a really bad idea. I assume they went back to REST calls or something.

Databases are much much faster now, with SSDs, better design and fucktonnes of RAM. Postgres isn't really the bottleneck it once was.

SQS and NATS cover most of the design use cases for pure message queues (as in no half-arsed RPC or other features tacked on) and just work.

Message queues are brilliant, I use them a lot, but only for data processing pipelines. But I only use them to pass messages, not actual data. So I might generate a million messages, but each message is <2k.
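
A sketch of that "pointer, not payload" idea (bucket and queue names are hypothetical; assumes boto3): stash the heavy data once, enqueue only a small reference.

    import json
    import uuid
    import boto3  # assumed; any object store plus queue works the same way

    s3 = boto3.client("s3")
    sqs = boto3.client("sqs")
    BUCKET = "pipeline-scratch"                        # hypothetical
    QUEUE_URL = "https://sqs.example.com/123/frames"   # hypothetical

    def submit(large_blob: bytes, meta: dict) -> None:
        # Store the heavy payload once, then enqueue a <2k pointer to it.
        key = f"inputs/{uuid.uuid4()}"
        s3.put_object(Bucket=BUCKET, Key=key, Body=large_blob)
        sqs.send_message(
            QueueUrl=QUEUE_URL,
            MessageBody=json.dumps({"s3_key": key, "meta": meta}),
        )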

could I use a database? probably, but then I'd have to make an interface for that, and do loads of testing and junk.

steve_adams_86
The only place I find the complexity of message queues worth the trouble is in the embedded world. The limited resources make messaging a sensible way to communicate across devices with a fairly agnostic perspective on what runs on the devices apart from the message queue.

Most of us in the desktop computing world don't actually need the distribution, reliability, or implementation-agnostic benefits of a queue. We can integrate our code very directly if we choose to. It seems to me that many of us didn't for a while because it was an exciting paradigm, but it rarely made sense in the places I encountered it.

There are certainly cases where they're extremely useful and I wouldn't want anything else, but again, this is typically in settings where I'm very constrained and need to talk to a lot of devices rather than when writing software for the web or desktop computers.

As for your last point, the Internet of Things is driven by message queues (like MQTT), so depending on the type of work you're doing, message queues are all over the place but certainly not exciting to write about. It's day-to-day stuff that isn't rapidly evolving or requiring new exciting insights. It just works.

threecheese
A few things I've noticed, in a large "I pick things up and put them down" solution for high dollar-value remote transactions:

- A lot of services we need to communicate with have much higher resiliency than in the past, so we've seen a big decrease in operational tasks for the queues that are "guarding" those transactions; newer workloads might have less of a need to guard.

- Many services we use support asynchronous execution themselves, using patterns like events/callbacks etc., and while they may use message queues internally we don't necessarily have to do so.

- In what would have been called an "enterprise integration" environment, we are using internal event buses more and more, given they enable looser coupling and everyone speaks HTTP.

From a technology perspective, message queuing has been commodified, I can pull an SQS off the shelf and get right to work. And so maybe the ubiquity of cloud based solutions that can be wired together has just removed the need for choice. If I need mqtt, there’s an app for that. Fanout? App for that. Needs to be up 25/7? …

ingvar77
I am under the impression people are still actively using MQs, it's just become a commodity and not as exciting as it was. I think there are two major cases: you need to do something asynchronously, and in a specific order.

Simple example from a past project: in a workflow/process management app (task manager on steroids) there's a group of tasks (a branch) that can be completed by multiple people in any order. When all tasks are done we have to mark the whole branch as completed and move the workflow further. Many instances of the workflow are running at the same time. The logic is much simpler to implement when you process all task completions within the same workflow instance in order, but from different instances in parallel. It's also much easier to provide a close-to-realtime experience to users - when a user clicks a checkbox the task is shown completed instantly, along with the other effects: the next task becomes active, the branch is shown as completed, the whole workflow is shown as completed, etc.
mcqueenjordan
I think it's a mix of:

1. Queues are actually used a lot, esp. at high scale, and you just don't hear about it.

2. Hardware/compute advances are outpacing user growth (e.g. 1 billion users 10 years ago was a unicorn; 1 billion users today is still a unicorn), but serving (for the sake of argument) 100 million users on a single large box is much more plausible today than 10 years ago. (These numbers are made up; keep the proportions and adjust as you see fit.)

3. Given (2), if you can get away with stuffing your queue into e.g. Redis or a RDBMS, you probably should. It simplifies deployment, architecture, centralizes queries across systems, etc. However, depending on your requirements for scale, reliability, failure (in)dependence, it may not be advisable. I think this is also correlated with a broader understanding that (1) if you can get away with out-of-order task processing, you should, (2) architectural simplicity was underrated in the 2010s industry-wide, (3) YAGNI.

slowmovintarget
Most of those architectures were run on company data centers. The swap to cloud and making small stateless services (the rise of SPA) meant that a complex staged event-driven system was less needed.

On AWS for example, you use SQS and a sprinkling of SNS, or perhaps Kinesis for a few things and you're good. There isn't a lot to talk about there, so the queues no longer become the center of the design.

Message-queue based architectures are great for data processing, but not great for interactive web sites, and if most people are building interactive web sites, then the choices seem a little obvious. I still design event systems for data processing (especially with immutable business data where you have new facts but still need to know that you were "wrong" or had a different picture at some earlier time). But for most apps... you just don't need it.

IndrekR
For example MQTT finds plenty of use with IoT device communication.

https://en.wikipedia.org/wiki/MQTT

bdcravens
The web got faster, and it became easier to build and consume APIs, so we eliminated the need for an intermediary. More "native" event-driven architectures emerged.
jonahbenton
I see a fair amount of Kafka, while most other platforms have diminished. I think that is because people treat Kafka like a database/system of record. A queue is not a system of record.

A lot of the difficulty in modeling a complex system has to do with deciding what is durable state vs what is transient state.

Almost all state should be durable and/but durability is more expensive upfront. So people make tradeoffs to model a transition as transient and put in a queue. One or two or three years in, that is almost always a regretted decision.

Message queues that are not databases/systems of record wind up glossing over this durable/transient state problem, and when you also have this unique piece of infrastructure to support, it's a "now you have two problems" moment.

kaladin_1
Personal experience: I needed a message broker while working with multiple sensors constantly streaming data at high frequency.

I have seen a startup where RabbitMQ was being used to hand-off requests to APIs (services) that take long to respond. I argued for unifying queueing and data persistence technology using Postgres even though I know a simple webhook would suffice.

Given that AWS has to sell and complexity tends to make people look smart, another server was spun up for RabbitMQ :)

Many companies that have run a highly distributed system have figured out what works for them. If requests are within the read and write rates of what Redis or Postgres can handle, why introduce RabbitMQ or Kafka?

Always remember that the Engineer nudging you towards more complexity will not be there when the chips are down.

caleblloyd
Another theory: HTTP + Service Discovery gained popularity, alleviating the need to integrate with message brokers. Most popular languages have lightweight HTTP servers that can run in-process and don't need heavy Application servers to run them now. And Service Discovery routes the requests to the right places without the need for going through a central broker.

Message brokers need client libraries for every language and serialization support. HTTP clients and JSON Serialization have first-class support already, so many software distributors ship those APIs and clients first. Everyone got used to working with it and started writing their own APIs with it too.

perlgeek
I maintain and develop a message queue-based architecture at work (started around 2014), so here's my take:

* message queues solve some problems that are nowadays easily solved by cloud or k8s or other "smart" infrastructure, like service discovery, load balancing, authentication

* the tooling for HTTPS has gotten much better, so using something else seems less appealing

* it's much easier to get others to write an HTTPS service than one that listens on a RabbitMQ queue, for example

* testing components is easier without the message queue

* I agree with your point about databases

* the need for actual asynchronous and 1:n communication is much lower than we thought.

throwaway38375
Because you can set up a rudimentary queueing system in MySQL/PostgreSQL very quickly these days. And it scales really well for small to medium sized applications!

I maintain a web application with a few hundred daily users and with the following table I have never had any problems:

CREATE TABLE `jobs` ( `id` BIGINT NOT NULL AUTO_INCREMENT, `queue` VARCHAR(255) NOT NULL, `payload` JSON NOT NULL, `created_at` DATETIME NOT NULL, PRIMARY KEY (`id`), INDEX (`queue`) );

Using MySQL's LOCK and UNLOCK I can ensure the same job doesn't get picked up twice.
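
On MySQL 8+ (or Postgres) there's also a row-level alternative to table LOCK/UNLOCK worth knowing about: SELECT ... FOR UPDATE SKIP LOCKED lets several workers poll the same table without ever grabbing the same job. A sketch, assuming a DB-API driver like pymysql (connection details are hypothetical):

    import json
    import pymysql  # assumed driver; any DB-API client works the same way

    # hypothetical connection details
    conn = pymysql.connect(host="localhost", user="app", password="...", db="app")

    def claim_one(queue_name: str):
        # Claim and delete a single job atomically; SKIP LOCKED makes
        # concurrent workers skip rows another worker is already holding.
        with conn.cursor() as cur:
            cur.execute(
                "SELECT id, payload FROM jobs WHERE queue = %s "
                "ORDER BY id LIMIT 1 FOR UPDATE SKIP LOCKED",
                (queue_name,),
            )
            row = cur.fetchone()
            if row is None:
                conn.commit()
                return None
            job_id, payload = row
            cur.execute("DELETE FROM jobs WHERE id = %s", (job_id,))
            conn.commit()
            return json.loads(payload)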

All in all, it's a very simple solution. And simple is better!

fennecbutt
I think probably for garbage collection reasons.

At my [place of work] we have built a simple event system on top of lambda functions, sqs, s3, eventbridge etc to ingest and add metadata to events before sending them on to various consumers.

We replaced an older kafka system that did lots of transformations to the data making it impossible to source the origin of a field at the consumer level; the newer system uses an extremely KISS approach - collate related data without transformation, add metadata and tags for consumers to use as a heads up and then leave it at that.

I agree that most regular stuff should just be http (or whatever) microservices as the garbage collection is free; requests, sockets, etc time out and then there's no rubbish left over. In an event based system if you have issues then suddenly you might have dozens of queues filled with garbage that requires cleanup.

There are definitely pros to event based, the whole idea of "replaying" etc is cool but like...I've never felt the need to do that...ever.

The event volume that we do process is quite low though, maybe a couple hundred k messages a day.

berkes
I think the premise is wrong: "we" are still using it en masse and in ever-increasing amounts.

We just call it something different. Or use different underlying products. Nearly all web frameworks have some worker system built in. Many languages have async abilities using threads and messaging built in.

The only popular language and ecosystem I can think of that doesn't offer "worker queues" and messaging OOTB is JavaScript.

We are using it more than ever before. We just don't talk about it anymore, because it has become "boring" tech.

willturman
I implemented RabbitMQ based messaging queues as a mechanism to coordinate execution among discrete components of a handful of ambitious laboratory automation systems ~4-8 years ago.

Given a recent opportunity to rethink messaging based architectures, I chose the simplicity and flexibility of Redis to implement stack and queue based data-structures accessible across distributed nodes.

With even a handful of nodes, it was challenging to coordinate a messaging-based system. The overhead of configuring a messaging architecture, essentially developing an ad-hoc messaging schema with each project (typically simple JSON objects), and the relatively opaque infrastructure that often required advanced technical support led messaging systems to fall out of favor for me.

Kafka seems to be the current flavor of the day as far as messaging based systems, but I don't think I'll ever support a system that approaches the throughput required to even think about implementing something like Kafka in the laboratory automation space - maybe there's a use case for high-content imaging pipelines?

Right now, I'd probably choose Redis for intra-system communication if absolutely necessary, then something like hitting a Zapier webhook with content in a JSON object to route information to a different platform or software context, but I'm not in a space where I'm handling Terabytes of data or millions of requests a second.

taylodl
The technology just got mature enough that it's not exciting to write about, but it's still really widely used.

Message queue-based architectures are the backbone of distributed, event-driven systems. Think of systems where, when a particular event happens, several downstream systems need to be aware and take action. A message queue allows these systems to be loosely coupled and supports multiple end-system integration patterns.

Notice, this is systems or enterprise development, not application development. If you're using a message queue as part of your application architecture, then you may be at risk of over-engineering your solution.

hnaccountme
My sense is the hype just died down. There are genuine cases where message queues are the correct solution. But since most of the developers are clueless they just jump on the latest trend and write blog posts and make Youtube videos.

This effect has also happened with microservices/monoliths, lambda/serverless, agile/scrum (still no concrete definition of these). Even cloud as a whole: there are so many articles about how companies managed to cut cloud costs to a fraction just by going bare metal.

MegaSpaceHamlet
I kind of stumbled into it. I have a server that's processing a lot of incoming data from field devices that expect quick 200 responses. The processing I was required to do on the data was pretty expensive time-wise, mainly because I have to make multiple calls to a third-party API that isn't highly performant. In order to keep everything stable, I had to delegate the data processing to a separate process via a message broker (Redis, with the Bull npm package as an abstraction layer to handle the message-passing), and I have no regrets. This pattern was suggested in the documentation for NestJS, the framework I am using.

After I realized the power of this pattern (especially because of my heavy leaning on the mentioned third-party API), I started using it in other areas of my application as well, and I find it to be a helpful pattern. As far as maintenance goes, I just have Heroku take care of my Redis instance. I can easily upgrade my specs with a simple CLI command. There was a slight learning curve in the beginning, but I got the hang of it pretty quickly, and it's been easy to reason about since.
Rury
It's the same reason why any old fad isn't in vogue anymore - it's just how popular things go. I mean, message queues weren't exactly new things in the late 2000s... your standard GUI and mouse uses message queues, pretty much since the 1980s. More and more people just caught on over time, popularity hit a peak, and then people eventually moved on. They're still used in many places, just no longer what's being crazed about.
Copenjin
I really hope that people are slowly starting to understand that using kafka and turning it into a single point of failure (yes, it fails) of your architecture is not a good idea.

This pattern has its uses, but if you are using it everywhere, every time you have some sort of notification, because "it's easy" or whatever, you are likely doing it wrong, and you will understand this at some point and it will not be pleasant.

Joel_Mckay
We often use RabbitMQ like middle-ware, and it is boring because it has proven very reliable.

Most people that deal with <40k users a day will have low server concurrency loads, and can get away with database abstracted state-machine structures.

Distributed systems are hard to get right, and there are a few key areas one needs to design right...

If I were to give some guidance, then these tips should help:

1. user UUID to allow separable concurrent transactions with credential caching

2. global UTC time backed by GPS/RTC/NTP

3. client side application-layer load-balancing through time-division multiplexing (can reduce peak loads by several orders of magnitude; see the sketch after this list)

4. store, filter, and forward _meaningful_ data

5. peer-to-peer AMQP with role enforcement reduces the producer->consumer design to a single library. i.e. if done right the entire infrastructure becomes ridiculously simple, but if people YOLO it... failure is certain.

6. automatic route permission and credential management from other languages can require a bit of effort to sync up reliably. Essentially you end up writing a distributed user account management system in whatever ecosystem you are trying to bolt on. The client user login x509 certs can make this less laborious.

7. redacted

8. batched payload 128kB AMQP messages, as depending how the consumer is implemented this can help reduce hits to the APIs (some user UUID owns those 400 insertions for example.)

9. One might be able to just use Erlang/Elixir channels instead, and that simplifies the design further.
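
Regarding tip 3, a minimal sketch of client-side time-division multiplexing (the period and the ID scheme are assumptions): each client hashes its UUID into a stable offset inside the reporting period, so the fleet's uploads spread out instead of all firing at the top of the period.

    import hashlib
    import time

    PERIOD = 300  # seconds; assume every client reports once per period

    def my_offset(client_uuid: str) -> int:
        # Hash the client UUID into a stable slot inside the period.
        digest = hashlib.sha256(client_uuid.encode()).digest()
        return int.from_bytes(digest[:8], "big") % PERIOD

    def seconds_until_my_slot(client_uuid: str) -> float:
        now = time.time()
        slot = (now // PERIOD) * PERIOD + my_offset(client_uuid)
        if slot <= now:
            slot += PERIOD
        return slot - now

    # usage sketch: time.sleep(seconds_until_my_slot(device_uuid)); then send the batch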

Have a great day, =3

langsoul-com
Message based tech is less popular to talk about, not that it's less used overall.

These days, with AI, vector dbs are all the rage, so everyone hops onto that train.

1290cc
I use web hooks now and they work really well for asynchronous situations. Spinning up a base app in node to do this is super simple and much easier to maintain than doing it in a kafka message bus.

Nice to see so many developers owning up to the "resume building" and being pragmatic about solving human/business problems versus technology for the sake of it.

foolfoolz
- message queues are still widely used and more cloud hosted than self hosted so you will see them less in arch diagrams

- event log (stateful multi-consumers) have taken a portion of the message queue workflow. this is more likely than moving them to redis/database

message queuing works incredibly well for many problems. it’s as critical to most companies architectures as an application database

avikalp
Nice observation. I still use Google Pub/Sub in my application - I recently also gave a talk on how we use Pub/Sub for our use case in a GCDG event (GCDG stands for Google Cloud Developer Group).

But now that I think about it, we don't use it in the traditional sense. Most of our regular operations work well enough by just using the "async" pattern in our programming (in JS and Rust).

The only place we use Pub/Sub is for communication between our NodeJS backend server and the Rust servers that we deploy on our client's VMs. We didn't want to expose a public endpoint on the Rust server (for security). And there was no need for a response from the Rust servers when the NodeJS server told it to do anything.

We don't fully utilize the features of a messaging queue (like GCP's Pub/Sub), but there just wasn't a better way for our specific kind of communication.

jasonlotito
* The technology just got mature enough that it's not exciting to write about, but it's still really widely used.

That’s it. Full stop.

aspyct
We have a new project (~ 6 years now) where we implemented a queue with RabbitMQ to temporarily store business events before they are stored in a database for reporting later.

It's awesome!

It absorbs the peaks, smoothes them out, acts as a buffer for when the database is down for upgrades, and I think over all these years we only had one small issue with it.

10/10 would recommend.

SatvikBeri
In our case, we managed to solve most of the use cases with less specialized tools.

I still think queues are great, but most of the time I can just run my queues using language constructs (like Channels) communicating between threads. If I need communication between machines, I can usually do that with Postgres or even s3. If you're writing constantly but only reading occasionally, you don't need to consume every message – you can select the last five minutes of rows from a table.
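
In Python terms (a rough equivalent of those Channels; the worker body is a stand-in for real work), that in-process queue between threads is just:

    import queue
    import threading

    jobs: queue.Queue = queue.Queue(maxsize=1000)  # bounded channel between threads

    def worker() -> None:
        while True:
            item = jobs.get()
            if item is None:          # sentinel: shut this worker down
                jobs.task_done()
                return
            print("processed", item)  # stand-in for real work
            jobs.task_done()

    threads = [threading.Thread(target=worker) for _ in range(4)]
    for t in threads:
        t.start()
    for i in range(20):
        jobs.put(i)
    for _ in threads:
        jobs.put(None)                # one sentinel per worker
    for t in threads:
        t.join()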

I've also seen a general trend in my line of work (data engineering) to try and write workloads that only interact with external services at the beginning or end, and do the rest within a node. That makes a lot of sense when you're basically doing a data -> data transformation, and is easier to test.

There are still cases where we need the full power of Kinesis, but it's just a lot less common than we thought it would be 10 years ago.

johnwatson11218
Last year I left a job that was about to give me the maintenance of a poorly implemented kafka system. I'm so glad I left before I really had to work with that system.

Since then I've been reading about async and await in the newer versions of javascript and it really threw me for a loop. I needed this to slow down some executing code but as I worked through my problems I realized "my god! this is exactly what we could have used for pub/sub at my last job".

We could have replaced a kafka system as well as an enterprise workflow system with javascript and the async/await paradigm. Some of these systems cost millions per year to license and administer.

n_ary
IMHO, message queues were all the hype at the time because cloud was picking up steam and people discovered how to decouple and handle massive loads.

For the blog posts, most were garbage (and still are) if my memory serves right. I recall reading a lot of blog posts and all of those were basically a different derivative of the same "quick-start tutorial" that you would find on any decently documented piece of software. Once you delve into the real trenches, the blog posts start showing their limits, and their shallowness, immediately.

That all being said, message queues are a very crucial part of most complex systems these days; same as your typical tools (containers, git, your language of choice etc.), they have moved on to mature and boring.

aristofun
Hype is a function of the number of discussions, not the number of applications.

There is no hype because not much news there.

That doesn’t mean it is less used.

mountainriver
It’s because they were billed as a way to reduce complexity but in reality just added a ton.

The fundamental issue with event driven architecture is getting out of sync with the source of truth.

Every single design doc I’ve seen in an organization pitching event driven architecture has overlooked the sync issue and ultimately been bitten by it.

chadrstewart
Others have mentioned in previous posts that 99% of companies building an app will not actually need this level of infrastructure, especially considering how much better computers have gotten since the 2000s.

But...

Isn't another reason we don't see hype around message queues and such in distributed systems that they are standard practice now? The discourse around this feels less like "message queues will make your architecture so much better, you should try it!" and more like "just use a message queue..." The hype isn't there anymore because the technology is just standard practice now.

I could be wrong but whenever I come across any articles on building Distributed Systems, message queues and their variants are one of the first things mentioned as a necessity.

wkyleg
Corollary Question: what do people with prior experience scaling Elixir/Phoenix think about scaling with it?

I've read very strong reports favorable to it. For instance, it can be scaled using progressively better hardware (like a better CPU or more RAM), or scaled horizontally too. Also, with a database on the same network, there won't be much need for a cache. Presumably, the ability to throw CPU and RAM at it would lessen some of the need for queues too.

At the same time, I don't notice much Elixir usage in practice and it has remained a small community.

ram_rar
IMO, many "microservices" just needed a way to do async processing without needing to hold on to connections, and ended up leveraging msg queues for such use cases. Now this is mostly getting replaced by new orchestration workflows like AWS Step Functions/Temporal/Orkes etc
junto
It's that it's well established and there's little need for hype on the topic. I work in a large enterprise with lots of autonomous teams where decoupling is a key tenet for loosely coupling those teams' systems.

We are now based firmly in the Azure landscape, and Event Grid provides us with an effective team service boundary for other teams to consume our events, all with the appropriate RBAC. Azure Service Bus is the underlying internal team driver for building decoupled and resilient services where we have to guarantee eventual consistency between our internal system landscape and the external SaaS services we actively leverage. At this scale it works very effectively, especially when pods can be dropped at any point within our k8s clusters.

greatpostman
I think that’s just standard software engineering now. Like no one is struggling to build these architectures.
perrygeo
Message queues are great for flowing information to external systems, one-way. But it's always worth asking why you have separate systems in the first place. And if your app has to also consume from a queue, forming any sort of cycle, look out.

Especially if they are services that are conceptually related, you will immediately hit consistency problems - message queues offer few guarantees about data relationships, and those relationships are key to the correctness of every system I've seen. Treating message queues as an ACID data store is a categorical mistake if you need referential integrity - ACID semantics are ultimately what most business processes need at the end of the day and MQ architectures end up accumulating cruft to compensate for this.

mindcrime
Simple: messaging systems / event-driven systems aren't the "fad of the day" anymore, so you don't have a gazillion vendors pumping out blog posts and Youtube videos to hawk their wares, or a million companies writing about them to look "hip" and "cool" to help recruit developers, or endless legions of consulting companies writing about them in order to attract customers, etc.

Basically every "cool, shiny, new" tech goes through this process. Arguably it's all related to the Gartner Hype Cycle[1], although I do think the process I'm describing exists somewhat independently of the GHC.

[1]: https://en.wikipedia.org/wiki/Gartner_hype_cycle

koliber
Message queues are often chosen as a communication protocol in microservice-based architectures. Microservices were a fad and people have sobered up. People have learned when microservices deliver a benefit, and when they are unnecessary. In many cases, they are unnecessary.

Queues are still very useful for queueing up asynchronous work. Most SaaS apps I've worked with use one. However, there is a difference between the kind of queue you need to queue a few thousand tasks per day and using the queue as the backbone of all of your inter-service communications. For the first use case, using a DB table or Redis as a queue backend is often enough.

tichiian
The other answers around here are mostly right, but I'd like to add another one, which is right in some situations:

Message queues are often the wrong tool. Often you rather want something like RPC, and message queues were wrongly used as poor man's async DIY RPC.

throw__away7391
They aren't, you just don't hear about it.

Basically everything in tech has gone through a hype cycle when everyone was talking about it, when it was the shiny new hammer that needed to be applied to every problem and appear in every resume.

A bit over 20 years ago I interviewed with a company who was hiring someone to help them "move everything to XML". Over the course of the two hour interview I tried unsuccessfully to figure out what they actually wanted to do. I don't think they actually understood what XML was, but I still wonder from time to time what they were actually trying to achieve and if they ever accomplished it.

Steve248
Maybe one of the reasons why it became unpopular is the additional code you have to implement for asynchronous processing in a separate system, and the tracking in case of errors during processing in the target system.

It's easier and faster to make a web service request where you get an instant result you can handle directly in the source system.

Mostly the queue is implemented in the source system, where you can monitor and see the processing status in realtime without delays.

ingvar77
A lot of great comments about overcomplicating architectures and using unnecessary tech but also you need to consider that almost any service you can use on aws will cost you less per month than a few hrs of development time.
guybedo
Queueing systems haven't disappeared, as they're an important part of distributed systems. If you need to distribute work/data asynchronously to multiple workers, you're gonna use a queuing system.

Although queuing systems can be implemented on top of a database, message queues like RabbitMQ / ZeroMQ are doing a fine job. I use RabbitMQ all the time, precisely because I need to transfer data between systems and I have multiple workers working asynchronously on that data.

I guess these architectures might be less popular, or less talked about, because monoliths and simple webapps are more talked about than complex systems ?

breckenedge
We all learned that a distributed monolith is worse than just having a monolith. Truly independent event-based systems are still very useful, but not when they have to communicate stuff back and forth to solve a single problem.
fxtentacle
We're using lots of RabbitMQ queues in production. It works well, is efficient, low-maintenance and scales well up to 10k msg/s. By all means, I'd say queues aren't unpopular. It's just that I'm not the kind of person to loudly shill whatever tech I'm using, so you probably won't hear from people like me without asking.

And for a consulting company, a solid message-based deployment is not a good business strategy. If things just work and temporary load spikes get buffered automatically, there's very little reason for clients to buy a maintenance retainer.

tamiral
We make heavy use of messaging queues to power millions of daily transactions and requests. I think it's because the patterns have been written about and it is no longer a fun hot topic. AWS SQS, Azure Event Hub etc. are all very standard in what I've seen in recent architectural diagrams from many companies.
wjossey
It’s important to keep in mind that 12 years ago, the patterns for high scale message queues (and often even low scale) were still in flux and getting established.

In the ruby world, delayed job was getting upended by sidekiq, but redis was still a relatively new tool in a lot of tool-belts, and organizations had to approach redis at that time with (appropriate) caution. Even Kafka by the mid 10s was still a bit scary to deploy and manage yourself, so it might have been the optimal solution but you potentially wanted to avoid it to save yourself headaches.

Today, there are so many robust solutions that you can choose from many options and not shoot yourself in the foot. You might end up with a slightly over complicated architecture or some quirky challenges, but it’s just far less complex to get it right.

That means fewer blog posts. Fewer people touting their strategy. Because, to be frank, it’s more of a “solved” problem with lots of pre existing art.

All that being said, I still personally find this stuff interesting. I love the stuff getting produced by Mike Perham. Kafka is a powerful platform that can sit next to redis. Tooling getting built on top of Postgres continues to impress and show how simple even high scale applications can be——

But, maybe not everyone cares quite the way we do.

speed_spread
Hypothesis: The performance of systems has increased so much that many things that required a queue before can now just be made into regular synchronous transactions. aka PostgreSQL is eating the world.
hughesjj
Message queues are still definitely in use, it's just behind the scenes in most frameworks you're using now. They're still great for the highest scale stuff when you can't pay the abstraction cost and don't need stuff like FIFO semantics.

Along with much more mentioned in this thread, I think a lot of companies realized that they indeed are not AWS/Google/Meta/Twitter scale, won't be in the next decade, and probably never will need to be to be successful or to support their product.

retrocryptid
All the devs have already put "event driven" on their resumes. They need something new to be seen to be at the forefront of technology. I think we're in the AI hype phase where everyone is competing for those tasty $500k per year AI jobs at google, so the ACS systems don't know what to do with resumes that have "event driven."

The last thing you want on your LinkedIn profile is a link to a video you made in 2015 about your cool Kafka solution. The ACS would spit you out so fast...

artdigital
Have they gone away? I still use message queues a lot, be it rabbitmq, through Google PubSub, Redis, etc. They are such a normal thing nowadays, just another tool in the toolbox really
ic_fly2
We’re in the process of shifting an entire business from db stored state driven by large (Java and .net) apps to an AMQP based system.

I can’t really say I’m enjoying it. But it does help with scale

erikerikson
Most MQs had architectural side effects: lost or reordered messages, some required duplication for multiple listeners, they lacked history (which complicated debugging), and they introduced/required messy reconciliation and remediation processes.

Distributed logs such as Kafka, bookkeeper, and more stepped in to take some market share and most of the hype.

MQs retain their benefits and are still useful in some cases but the more modern alternatives can offer fewer asterisks.

synthc
People discovered that using messaging can add tons of overhead, and async messaging architectures are harder to manage and monitor.

Serialization and going over the network are an order of magnitude slower and more error-prone than good ol' function calls.

I've seen too many systems that spent more time on marshalling from and to json and passing messages around than actual processing.

teyc
I have a couple of datapoints.

One project I know of started with message queues for request response pattern. It performed poorly because Windows Service Bus writes messages to a database. That increased latency for a UI heavy application.

Second project used message queues but the front end was a HTTP API. When overloaded the API timed out at 30 seconds but the job was still in the queue and wasn’t cancelled. It led to a lot of wastage.

darksaints
It's still around, and it's still going strong in the embedded space. Examples:

* PX4

* Ardupilot

* Betaflight

* DroneCAN

* Cyphal

* ROS/ROS2

* Klipper

* GNU Radio

Also would like to mention that all of the most popular RTOSes implement queues as a very common way of communicating between processes.

giantg2
At least what I have seen is that MQ was mostly used for batches in the areas I worked in.

The new thing is "event driven architecture" (or whatever they can pass off as that hype). In a lot of cases, it's a better architecture. For the remaining batches, we are running against S3 buckets, or looking at NoSQL entries in a specific status in a DB. And we still use a little SQS, but not that often.

jsumrall
I’m concerned that a lot of commenters don’t appreciate the difference between a queue and a log. Kafka is not a queue.

I think like most have said is that it’s just not a popular topic anymore to blog about but it’s still used. OTOH logs like Kafka have become more ubiquitous. Even new and exciting systems like Apache Pulsar (a log system that can emulate a queue) have implemented the Kafka API.

andrewstuart
There is a constant flow of HN posts about how people have built their own message queues.

I personally have built three. It's the latest thing to do.

znpy
> * Databases (broadly defined) got a lot better at handling high scale, so system designers moved more of the "transient" application state into the main data stores.

This but also: computers got incredibly more capable. You can now have several terabytes of ram and literally hundreds of cpu cores in a single box.

Chances are you can take queuing off your design.

petervandijck
We use queues all the time, they’re practical, effective and easy to use. Too mature to talk about is my explanation.
otabdeveloper4
> RabbitMQ, ZeroMQ

There is literally nothing in common between RabbitMQ and ZeroMQ except for the two symbols 'MQ' in the name.

slau
ZeroMQ has never been a managed queue. It was always more of a networking library on top of which you could implement some interesting paradigms, but it has never been on the same playing field as MQs (on purpose).

SQS is still very much alive. It is more than likely the first or second most deployed resource in AWS in my daily work.

captainbland
I think it's both. They're boring if you need them, because you probably started implementing these things over a decade ago. They're boring if you don't need it because you don't need it - e.g. maybe you're a small team able to work on a single monolithic codebase effectively.
zzyzxd
To process all your tasks, it costs the same to run 100 EC2 instances for an hour or 10 EC2 instances for 10 hours. It's much easier than before to design a stateless, scalable system, so people would rather scale out and get things done quickly than wait in a queue.
habosa
In some ways they’re still popular, but the abstraction has changed.

AWS Lambda, Google Cloud Functions, etc. often resemble message queue architectures when used with non-HTTP triggers. An event happens and a worker spawns to handle the event.
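
A minimal sketch of that shape, assuming an SQS-triggered Lambda on the Python runtime; the handler and helper names here are illustrative, not anything from the comment above:

    import json

    def handler(event, context):
        # For SQS triggers, Lambda delivers a batch of messages under "Records".
        for record in event["Records"]:
            payload = json.loads(record["body"])  # body as enqueued by the producer
            process(payload)

    def process(payload):
        # Domain logic would go here.
        print("handling", payload)

The queue, retries, and scaling of workers are all handled by the platform, which is why it ends up feeling like a message queue architecture without anyone running a broker.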

hipadev23
I continue to use redis as a multipurpose tool including acting as a message queue.
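
For anyone curious what that looks like in practice, a bare-bones sketch of the Redis-as-queue pattern, assuming the redis-py client and a local Redis; the "jobs" queue name and payload are made up for illustration:

    import json
    import redis

    r = redis.Redis()

    # Producer: push a job onto the left of a list.
    r.lpush("jobs", json.dumps({"task": "send_email", "to": "user@example.com"}))

    # Consumer: block until a job is available, popping from the right (FIFO overall).
    _, raw = r.brpop("jobs")
    job = json.loads(raw)
    print("got job:", job)
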
rokob
I think it is primarily your last bullet point: it is less exciting to write about unless you are specifically looking for that content. They are widely used, but they are normal parts of a lot of architectures now.
BenFranklin100
I miss the days of Microsoft’s Robotics Studio, a message passing architecture for controlling robotics and other hardware. If only they had continued development instead of stopping halfway before it reached maturity.
fintechie
Hype died down along with "microservices" where queues made more sense.
moltar
I believe it’s the last one. I use queues more than ever.

Almost every AWS architecture diagram will have a queue.

SQS is extremely stable, mature and cheap. Integrates with so many services and comes out of the box with good defaults.
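
As a rough illustration (not from the comment above), basic SQS usage via boto3 really is only a few calls; the queue URL below is a placeholder and credentials/region are assumed to come from the environment:

    import boto3

    sqs = boto3.client("sqs")
    queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"

    # Producer side: enqueue a message.
    sqs.send_message(QueueUrl=queue_url, MessageBody='{"order_id": 42}')

    # Consumer side: long-poll for up to 20 seconds, then delete what was processed.
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        print("received:", msg["Body"])
        sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
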

slt2021
Kafka is still used heavily, but the barrier is very high.

If you are running a truly global (planetary-scale) distributed service and have multiple teams developing independent components, then it makes sense.

abhiyerra
Every one of my customers uses message queues: Redis or SQS, and rarely Kafka, via Celery, Sidekiq, etc. If anything it is boring, it works, and it is just part of the default stack everyone uses.
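
For readers who haven't seen that stack, a minimal Celery sketch along those lines, assuming a Redis broker on localhost; the module and task names are illustrative:

    from celery import Celery

    app = Celery("tasks", broker="redis://localhost:6379/0")

    @app.task
    def resize_image(image_id):
        # Worker-side logic would go here.
        print("resizing", image_id)

    # Producer side: enqueue the task instead of calling it inline.
    # resize_image.delay("img_123")

Running `celery -A tasks worker` consumes from the broker; the web app only ever calls .delay(), which is most of why it feels like a boring default rather than an architecture decision.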
pyrolistical
Consensus in distributed systems is hard and MQs don’t help.

If possible, boil your system down to a traditional DB replica set. That is a local minimum in complexity that will serve you for a long time.

mamcx
Another:

* The Log is the superior solution

edgarvaldes
Tech cycles, hype, maturity. I remember the prevalence of javascript frameworks, big data bashing, and crypto currency debates for years. Then... nothing.
curious_cat_163
I have personally found them useful as “glue” layer in heterogeneous and large systems with moving and decoupled parts.

There are any number of ways to do the same thing — context matters.

UK-Al05
It's boring tech. They're quietly being used everywhere in big companies doing the heavy lifting.
1oooqooq
It's disk speeds.

Suddenly everyone could scale much, much more, but by then they were moving to the cloud, and execs don't understand two buzzwords at the same time.

adeptima
The hype cooled down once companies realized "they have more microservices than actual users".

I was never obsessed with "event-driven" distributed systems using message queues.

The major issue is to keep syncing state between services.

For quite a long time I got decent results with simple Golang and Postgres scripts to distribute work between workers on multiple bare-metal machines.

Took ideas from projects similar to MessageDB

https://redis.io/docs/latest/develop/data-types/streams/

    CREATE TABLE IF NOT EXISTS message_store.messages (
        global_position bigserial NOT NULL,
        position        bigint NOT NULL,
        time            TIMESTAMP WITHOUT TIME ZONE DEFAULT (now() AT TIME ZONE 'utc') NOT NULL,
        stream_name     text NOT NULL,
        type            text NOT NULL,
        data            jsonb,
        metadata        jsonb,
        id              UUID NOT NULL DEFAULT gen_random_uuid()
    );

along with Notify https://www.postgresql.org/docs/9.0/sql-notify.html

or polling techniques

Redis Stream here and there worked well too https://redis.io/docs/latest/develop/data-types/streams/
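
For illustration, a hedged sketch of the LISTEN/NOTIFY-plus-polling approach mentioned above, assuming psycopg2 and the messages table from the snippet; the "messages" channel name and connection string are assumptions:

    import select
    import psycopg2

    conn = psycopg2.connect("dbname=message_store")
    conn.autocommit = True
    cur = conn.cursor()
    cur.execute("LISTEN messages;")  # producers run NOTIFY messages after inserting

    while True:
        # Wait up to 5 seconds for a NOTIFY; if none arrives, fall back to polling the table.
        if select.select([conn], [], [], 5) == ([], [], []):
            cur.execute("SELECT global_position, type, data FROM message_store.messages "
                        "ORDER BY global_position DESC LIMIT 1")
            print("polled latest:", cur.fetchone())
        else:
            conn.poll()
            while conn.notifies:
                note = conn.notifies.pop(0)
                print("notified:", note.channel, note.payload)
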

Another possible alternative is "durable execution" platforms like Temporal, Hatchet, and Inngest, mentioned on HN many times:

- Temporal - https://temporal.io/ - https://github.com/temporalio - https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

- Hatchet https://news.ycombinator.com/item?id=39643136

- Inngest https://news.ycombinator.com/item?id=36403014

- Windmill https://news.ycombinator.com/item?id=35920082

The biggest issue I had with workflow platforms is that they require a huge cognitive investment in understanding the SDKs and best practices, and in deciding how to properly persist data. Especially if you opt to host them yourself.

lysecret
Using them all the time in an ETL/ML-Eng type context. So I would say it's the last point for me.
rexarex
On mobile the phones are so powerful now that you can just make them do most of the heavy lifting :)
airocker
I think DBs like etcd, Mongo, and Postgres all have some notion of events and notifications. The only drawback is that you still have to poll from the UI if you use these. This is sufficient for most use cases. If you want real-time updates, Kafka provides long-running queries, which keeps Kafka relevant.
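
As one concrete example of the "events from the database" idea (my illustration, not something the commenter described), MongoDB change streams via pymongo look roughly like this; the database/collection names are placeholders and a replica set is required:

    from pymongo import MongoClient

    client = MongoClient("mongodb://localhost:27017")
    orders = client.shop.orders

    # Blocks and yields a document for every insert/update/delete on the collection.
    for change in orders.watch():
        print(change["operationType"], change.get("fullDocument"))
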
nextworddev
Because it’s massive over engineering of dubious value, unless you are Uber etc
hi-v-rocknroll
These things are more-or-less proven, and so aren't sexy because they either work or there are better ways to do them.

Be careful not to conflate message transport with message storage, although message brokers usually do both.

SQS was slower than frozen molasses when I used it c. 2010. ZMQ, ZK, rabbit, and MQTT are oft mentioned. ESBs always come up and a large % of technical people hate them because they come with "software architect" historical baggage.

It's risky to have a SPoF one-grand-system-to-rule-them-all when you can have a standardized API for every department or m[ia]cro-service exposed as RESTful, gRPC, and/or GraphQL in a more isolated manner.

Redis isn't needed in some ecosystems like Elixir/Erlang/BEAM. Memcache is simpler if you just need a temporary cache without persistence.

DeathArrow
We use message queues for everything that doesn't have to be real time. It's just another tool in the toolbox, pretty old and well known. I don't think it needs extensive blogging, as pretty much anyone who needs to use message queues knows how to do it.

It has some downsides: if something goes wrong, it's harder to debug than good old REST API calls.

joshuanapoli
It is much easier now to scale services (microservices, FaaS, etc.) to meet high or fluctuating demand than it used to be. So there are fewer cases where there is much to gain by carefully controlling how messages are buffered up.
dsotirovski
Too boring (to write about) for engineers, too complex for AI.

Still, very much present/popular in the ecosystems I dabble in.

dogline
Like micro-services, it's not a bad idea, but it's not the hammer for every nail as people who write books and blog posts seem to push for (they've now moved on past blockchain to AI).

If you have a problem that logically has different async services, sure, use Redis or something. Databases also were able to handle this problem, but weren't as sexy, and they explicitly handle the problem better now. Just another tool in the toolbelt.

NoSQL was another solution in search of problems, but databases can handle this use-case better now too.

pyrale
Kafka is everywhere. But I guess there is a point where hyped new tech becomes boring and mainstream, and people stop writing about it.
bokohut
"If people have experience designing or implementing greenfield systems based on message queues, I'd be curious to hear about it."

Ok, you asked.

It was early Summer 1997 and I had just graduated college with a computer information systems degree. I was employed at the time as a truck driver doing deliveries to Red Lobster, and my father, not happy about that, offered me double my current pay and a guaranteed 40 hours per week to return to work for him as an electrician. I returned to work with my Dad for the Summer, but after 14+ years of electrical work with him I decided I needed to get a job with computers. Labor Day weekend of 1997 I blasted out over 40 resumes, and on Tuesday 9/9/97 I had my first interview at a small payments startup dotcom in Wilmington, Delaware, the credit card capital of the world at that time. I was hired on the spot and started that day as employee #5 and the first software hire for the company, yet I had NO IDEA what I was doing. I was tasked with creating a program that could take payments from a TCP/IP WAN interface and proxy them back out over serial modems to Visa, and batch ACH files to the U.S. FED. This started as a monolith design, and we were processing automated clearing house as well as credit cards by late 1997. I would continue in this architectural design, and as the sole software developer supporting this critical 100% uptime system, for many years.

Somewhere around mid 1998 volume started to increase and the monolith design experienced network socket congestion. The volume from the firehose continued to increase and I was the sole guy tasked with solving it. That volume increase came from a little-known company at the time: PayPal. Since the 'mafia' was very demanding they knew I was the guy, however management isolated me since the 'mafia' demanded extensive daily reporting that only the guy who built the platform could provide. This however took a backseat to the network connection issues, which were growing at an increasing rate. I was involved in a lot of technology firsts as a result, and herein starts the queue story.

After processing payments for Microsoft TechEd in 1998 as well, I was given an NT Option Pack CD. I was constantly seeking a solution to reduce the network congestion on the monolith, and within this option pack was something called "Microsoft Message Queue". I spent several months of nonstop nights and weekends redesigning the entire system from the ground up around an XML interface API, writing individual services that read from an ingress queue and output to an egress queue (this structure is now known as microservices), and this design solved all the load problems since it scaled extremely well. The redesigned system had many enhancements added from personal experience, such as globally unique identifiers and a fully extensible API, but the greatest unseen win was the ability to 100% recreate software bugs, since all message passing was recorded.

PayPal ended up leaving us in ?2003? for JPMorgan after the majority holder refused to sell to the 'mafia'. Some years later I was told by several management executives that PayPal had also offered to exclusively hire only me, but of course I was never informed of that at the time.

I have many a business horror story; however, in over a decade of using MSMQ at my first payments company (1998-2010), I had only one data corruption event in the queue broker, which required me to reverse engineer the MSMQ binary (and thus file) format to recover live payment records. That corruption was linked to bad ECC memory on one of those beige 1U Compaq servers that many here likely recall.

This story may reveal my age but while my ride has been exciting it isn't over yet as so many opportunities still exist. A rolling rock gathers no moss!

Stay Healthy!

masfoobar
I honestly cannot comment on RabbitMQ or Kafka. I have not used them. I have also not used Redis other than learning.

However, a few years ago, I did use MSMQ (Microsoft Message Queue) and it worked very well. At the time, though, I wanted something that didn't limit me to Windows.

In the end, I landed on ZeroMQ. Once I understood its "patterns", I created my own broker software.

Originally, all requests (messages) sent to the broker were stored in files. Eventually, I moved over to SQLite. As the broker is designed to process one thing at a time, I wasn't worried about multiple requests hitting SQLite concurrently. So now my broker has few dependencies... just ZeroMQ and SQLite.

(I did not need to worry about async processing, as messages get passed to the workers.)

So, the broker is communicated with by a client and a worker/consumer (there's a rough sketch of this protocol after the list below).

- client can ask for state (health of a queue, etc)

- client can send a message (to a queue, etc)

The worker communicates with the broker

- please connect me to this queue

- is there anything in this queue for me?

- here is the result of this message (success or failure)

etc.
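
A rough sketch of that client/worker protocol, assuming pyzmq and sqlite3 with JSON request bodies; the wire format and field names here are invented for illustration, not the actual broker described above:

    import json
    import sqlite3
    import zmq

    db = sqlite3.connect("broker.db")
    db.execute("""CREATE TABLE IF NOT EXISTS messages (
                     id    INTEGER PRIMARY KEY AUTOINCREMENT,
                     queue TEXT NOT NULL,
                     body  TEXT NOT NULL,
                     state TEXT NOT NULL DEFAULT 'pending')""")
    db.commit()

    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)      # REP socket: the broker handles one request at a time
    sock.bind("tcp://*:5555")

    while True:
        req = json.loads(sock.recv())
        if req["op"] == "send":          # client: send a message to a queue
            db.execute("INSERT INTO messages (queue, body) VALUES (?, ?)",
                       (req["queue"], req["body"]))
            db.commit()
            sock.send_json({"ok": True})
        elif req["op"] == "fetch":       # worker: is there anything in this queue for me?
            row = db.execute("SELECT id, body FROM messages "
                             "WHERE queue = ? AND state = 'pending' LIMIT 1",
                             (req["queue"],)).fetchone()
            sock.send_json({"msg": {"id": row[0], "body": row[1]} if row else None})
        elif req["op"] == "ack":         # worker: here is the result (success or failure)
            db.execute("UPDATE messages SET state = ? WHERE id = ?",
                       (req["result"], req["msg_id"]))
            db.commit()
            sock.send_json({"ok": True})
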

I also made use of the pub-sub pattern. I made a GUI app that subscribes to these queues and feeds updates. If there are problems (failures) you can see them here, and I leave it to staff to re-send the message. If it's already sent, maybe the subscriber missed that packet; pub-sub is not reliable, after all... but it works 99% of the time.

Overall it has been a great system for my needs -- again, it is lightweight, fast, and hardly costs anything. I do not need a beefy machine, either.

Honestly, I swear by this application (broker) I made. Now, am I comparing it to RabbitMQ or Kafka? No! I am sure those products are very good at what they do. However, especially for the smaller companies I work for, this software has saved them a few pennies.

In all I found "Distributed computing" to be rewarding, and ZeroMQ+Sqlite have been a nice combination for my broker.

I have been experimenting with nanomsg-NG as a replacement for ZeroMQ, but I just haven't spent proper time on it due to other commitments.

revskill
Because Kafka is using Java. And Java should die.
throwaway984393
The pendulum just swung the other way. Message queues were complicated and don't solve every problem, so rather than think about what the individual needs, the "community" took a 180 and adopted solutions that were overly simplistic and didn't solve every problem. It happens every 5 years or so with a different tech thing. HN is nothing if not constantly chasing any contrarian opinion that seems novel.