I'm really surprised at the price of Kafka clusters, to be honest. Hundreds of dollars per month extra at minimum.
It was a queue for a distributed worker pool. The simpler alternatives I was used to at the time (RabbitMQ) didn't support joining (i.e. run task Y once all of X1~X20 are complete), so every task was stored in the database anyway. I don't remember the exact numbers, but it was a light-to-moderate load--thousands, maybe tens of thousands, of rows per day. It ran smoothly with an external message queue; I'd vacuum maybe every 4 months.
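Roughly the shape of that join bookkeeping, as a sketch: a per-group counter decremented as subtasks finish, with the worker that finishes the last one enqueuing the follow-up task. Table names, column names, and the sqlite3 stand-in (in place of PostgreSQL) are all invented for illustration.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE task_groups (group_id TEXT PRIMARY KEY, remaining INTEGER)")
conn.execute("CREATE TABLE queue (task TEXT)")

def start_group(group_id, n_subtasks):
    # One row per fan-out: how many subtasks must finish before the join fires.
    conn.execute("INSERT INTO task_groups VALUES (?, ?)", (group_id, n_subtasks))

def complete_subtask(group_id, followup_task):
    # Decrement the counter; the completion that takes it to zero enqueues Y.
    # In PostgreSQL this could be a single UPDATE inside the worker's transaction.
    conn.execute(
        "UPDATE task_groups SET remaining = remaining - 1 WHERE group_id = ?",
        (group_id,),
    )
    (remaining,) = conn.execute(
        "SELECT remaining FROM task_groups WHERE group_id = ?", (group_id,)
    ).fetchone()
    if remaining == 0:
        conn.execute("INSERT INTO queue VALUES (?)", (followup_task,))

start_group("g1", 3)             # X1..X3 fan out
for _ in range(3):
    complete_subtask("g1", "Y")  # the third completion enqueues Y
```

Since the counter lives next to the task rows, the join fires in the same transaction as the last subtask's completion, which is exactly what an external queue without join support can't give you.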
For one iteration, I decided to try using PostgreSQL as the queue as well, to cut down on moving parts. It performed fine for a bit, then slowed to a crawl. I was sure I was doing something wrong--except every guide told me this was exactly how to use a table as a queue. If I missed anything, it must have been PG-as-a-queue-specific.
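For context, the pattern those guides usually describe is a single atomic claim per worker, in PostgreSQL typically `SELECT ... FOR UPDATE SKIP LOCKED` inside an `UPDATE`. A rough sketch of the shape, with sqlite3 standing in so it actually runs here (sqlite has no `SKIP LOCKED`, so the claim is approximated) and invented table/column names:

```python
import sqlite3

# Canonical PostgreSQL dequeue as the guides describe it: one atomic claim
# per worker; SKIP LOCKED lets concurrent workers pass over rows another
# transaction already holds instead of blocking on them.
PG_DEQUEUE = """
UPDATE jobs SET state = 'running'
WHERE id = (
    SELECT id FROM jobs
    WHERE state = 'queued'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id, payload;
"""

# sqlite3 stand-in; the select-then-update below is NOT atomic and only
# illustrates the shape, not the concurrency behavior.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE jobs (id INTEGER PRIMARY KEY, payload TEXT, state TEXT DEFAULT 'queued')"
)
conn.executemany("INSERT INTO jobs (payload) VALUES (?)", [("a",), ("b",)])

def claim_one():
    # Oldest queued job wins; returns None when the queue is drained.
    row = conn.execute(
        "SELECT id, payload FROM jobs WHERE state = 'queued' ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    conn.execute("UPDATE jobs SET state = 'running' WHERE id = ?", (row[0],))
    return row
```

Note the pattern itself churns a lot of row versions (every claim is an UPDATE, every completion another), which is the part that interacts badly with vacuum under sustained load.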
I'm working on something similar, and these are the options I've been mulling over. They each come with pretty significant drawbacks. My current plan is to use LISTEN/NOTIFY because it's low-hanging fruit and to see how that pans out, but I was wondering if anyone has been down this path before and has wisdom they're willing to share.
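Concretely, the listen/notify loop I have in mind looks roughly like this (psycopg2 assumed; the channel name, timeout, and helper are placeholders). One caveat I'm designing around: NOTIFY is a wakeup hint, not a delivery guarantee, so the loop also polls on a timeout rather than trusting notifications alone.

```python
import select

# Hypothetical channel name; NOTIFY payloads are size-limited anyway, so the
# usual design is "notify, then poll the table", not work-in-the-payload.
CHANNEL = "new_job"
POLL_TIMEOUT_S = 5.0  # fallback poll so a missed notification can't strand a job

def process_ready_jobs(cur):
    """Placeholder: claim and run any queued jobs (e.g. via SKIP LOCKED)."""

def worker_loop(dsn):
    import psycopg2  # deferred so the sketch loads without the driver installed
    conn = psycopg2.connect(dsn)
    conn.autocommit = True  # LISTEN should run outside a transaction block
    cur = conn.cursor()
    cur.execute(f"LISTEN {CHANNEL};")
    while True:
        # Wake on a notification or after the timeout, then check the table
        # either way -- the notification only tells us "look now".
        if select.select([conn], [], [], POLL_TIMEOUT_S)[0]:
            conn.poll()
            conn.notifies.clear()  # drain; the table is the source of truth
        process_ready_jobs(cur)
```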