Thanks for the long answer! So responding to some of the points one by one:
- using PostgreSQL w/ extensions for event sourcing: definitely, if you don’t need the sharding, scaling & replication features of Kafka, and the performance of Postgres is satisfactory, that’s the way to go! In fact, as I mentioned in the beginning, I’ve also explored implementing event sourcing on top of an SQL storage. However, if the volume of events that you are handling exceeds the capabilities of a single PostgreSQL instance, I think Kafka becomes a very good option (of course, you might also consider sharding/replicating PostgreSQL, but then the complexity of that system is at least as high as Kafka’s)
- I’m not sure I would say that “Kafka is trying to become everything”. At its core, it is still a replicated, high-performance, durable log. Very few new features are added to the core system (one such feature is transactions, but that’s about it). All other improvements are built on top of that — as new ways of using what’s already available. So I would say that this is precisely Kafka’s strength — the core system implements just the right features to enable a lot of applications
- compared to other messaging systems, Kafka doesn’t seem heavy. I think any system which supports replication and sharding will be complex to cluster properly. That’s because it’s simply a hard problem: establishing consensus, dealing with network partitions, etc. I’m sure some operational aspects can be simplified, but overall I don’t think it can get significantly simpler
- please correct me if I’m wrong, but isn’t Chronicle Queue implementing a different messaging topology? With Kafka, you have a central broker which provides data durability and replication, and using it you can implement messaging topics where a single message is consumed by potentially multiple clients. As far as I understand Chronicle Queue, it implements peer-to-peer messaging, so the sender of the message communicates directly with other nodes?
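To make the first point a bit more concrete: the core of event sourcing on top of an SQL database is just an append-only events table with a per-stream version for optimistic concurrency. Here’s a minimal sketch of that idea — using Python’s built-in sqlite3 as a stand-in for PostgreSQL (in Postgres you’d use `BIGSERIAL` and `JSONB`); the table and column names are mine, not from any particular library:

```python
import json
import sqlite3

# In-memory SQLite stands in for PostgreSQL here; the idea is the same:
# an append-only table of events, globally ordered by an increasing id.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE events (
        id         INTEGER PRIMARY KEY AUTOINCREMENT,  -- global order (BIGSERIAL in Postgres)
        stream_id  TEXT NOT NULL,                      -- the aggregate this event belongs to
        version    INTEGER NOT NULL,                   -- per-stream sequence number
        payload    TEXT NOT NULL,                      -- event data as JSON (JSONB in Postgres)
        UNIQUE (stream_id, version)                    -- rejects concurrent writers to a stream
    )
""")

def append_event(stream_id, expected_version, payload):
    # The UNIQUE constraint turns this into a conditional append: if another
    # writer already stored expected_version + 1, the insert fails and the
    # caller has to re-read the stream and retry (optimistic locking).
    conn.execute(
        "INSERT INTO events (stream_id, version, payload) VALUES (?, ?, ?)",
        (stream_id, expected_version + 1, json.dumps(payload)),
    )

def read_stream(stream_id):
    # Replaying a stream in version order rebuilds the aggregate's state.
    rows = conn.execute(
        "SELECT version, payload FROM events WHERE stream_id = ? ORDER BY version",
        (stream_id,),
    )
    return [(version, json.loads(payload)) for version, payload in rows]

append_event("user-1", 0, {"type": "UserRegistered", "name": "alice"})
append_event("user-1", 1, {"type": "EmailChanged", "email": "a@example.com"})
print(read_stream("user-1"))
```

What this sketch doesn’t give you is exactly what Kafka adds: partitioning of the log across brokers and replication for durability — which is why, past a single instance’s capacity, the comparison tilts towards Kafka.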