A quick message queue benchmark: ActiveMQ, RabbitMQ, HornetQ, QPID, Apollo (x-aeon.com)
64 points by liotier on April 11, 2013 | 44 comments



I suspect this benchmark is actually just testing the difference between STOMP and AMQP. RabbitMQ over STOMP is in the same regime as ActiveMQ / Apollo, while RabbitMQ over AMQP is consistently 3x-5x faster than both its own STOMP numbers and everyone else's.

QPID is the only other AMQP competitor; it seems to have performance issues with persistence, but its transient performance is always in the same regime as RabbitMQ over AMQP.

STOMP is a UTF-8 text protocol while AMQP is a binary protocol. This means that every message you send over STOMP has to be encoded in a text-safe format (e.g. base64 or similar; I'm not intimately familiar with STOMP).

Going on the hypothesis that a STOMP message parser is much slower than an AMQP parser, this would explain why RabbitMQ over AMQP does not perform well in the 200-message case while it trounces everything in the 20k and 200k message cases. In the 20k/200k cases the benchmark is mostly measuring the time to decode 200k text messages versus 200k binary messages.
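The author's test scripts aren't shown, but as a rough sketch of the two client paths being compared (assuming a local broker and the stomp.py and pika Python libraries, which may not be what he used):

    # STOMP: frames are text that the broker has to parse
    import stomp
    c = stomp.Connection([('localhost', 61613)])
    c.connect('guest', 'guest', wait=True)
    c.send(destination='/queue/bench', body='hello')

    # AMQP: binary framing, here via pika against RabbitMQ
    import pika
    ch = pika.BlockingConnection(pika.ConnectionParameters('localhost')).channel()
    ch.queue_declare(queue='bench')
    ch.basic_publish(exchange='', routing_key='bench', body=b'hello')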

It's so exceedingly hard to do a benchmark and measure the thing you think you are measuring.

Also, I agree with others that the minimal difference between persistent and transient setups is very non-intuitive. The author left out an important detail: whether he was using an SSD. Otherwise, I suspect an error in the measurement setup.


Please keep in mind:

- All these products have a lot of knobs to set, and performance depends on those knobs: whether you do batch sends (which some can do), whether you use transactions, whether you choose at-least-once or at-most-once delivery, queue prefetch sizes, I/O worker size, and numerous other settings.
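For instance, with RabbitMQ and the pika Python client (a hedged sketch, local broker assumed), a few of the knobs that move benchmark numbers:

    import pika

    conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
    ch = conn.channel()
    ch.confirm_delivery()                  # publisher confirms (at-least-once)
    ch.basic_qos(prefetch_count=500)       # consumer prefetch window
    ch.queue_declare(queue='bench', durable=True)
    ch.basic_publish(
        exchange='', routing_key='bench', body=b'payload',
        properties=pika.BasicProperties(delivery_mode=2))  # persistent message

A default-configuration benchmark leaves all of this at whatever the broker ships with.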

It'd also be interesting to see qpid-cpp in there as well, not just the Java version.


This is a test of the default configurations of various brokers, NOT the brokers themselves.

Apart from declaring the testing queues in some brokers’ configuration and the persistence settings, all brokers were running with their default configuration out of the box (no tuning made).

He also casually mentions that ZeroMQ is using an in-memory configuration.

A home-made ZeroMQ 2.2.0 broker, working in memory only (no persistence).
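His broker code isn't published, but a "home-made" in-memory ZeroMQ broker can be as small as a forwarding proxy along these lines (a guess at the shape using current pyzmq, not his actual code; the post used ZeroMQ 2.2.0):

    import zmq

    ctx = zmq.Context()
    frontend = ctx.socket(zmq.PULL)    # producers connect here
    frontend.bind("tcp://*:5559")
    backend = ctx.socket(zmq.PUSH)     # consumers connect here
    backend.bind("tcp://*:5560")
    zmq.proxy(frontend, backend)       # messages only ever sit in memory buffers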

The default configuration of AMQ persists messages to journal on disk. So apples to oranges?

Why do people keep writing "benchmark" blog posts like this?


My thoughts exactly; this is useless. Benchmarking is something everyone can do, so people do it, but it doesn't provide any real value unless you put an insane amount of time into it, and even then you'll get a lot of hate mail :)

For example, look at the "Yahoo! Cloud Serving Benchmark" (YCSB) paper. Those researchers spent a lot of time designing the benchmarks and properly configuring each database they were testing. They even dedicated a researcher per database to sit down with the developers and review the configurations and the test runs. I was part of that process as an Apache HBase dev. In the end, everyone was still critical of the results once they saw the graphs comparing the DBs.

But I'm still glad they did it, as I use YCSB as one of my tools to benchmark HBase.


Interesting results! In my testing, I came to a different conclusion.

Background: PHP and STOMP, with loads of several million messages per hour, ranging in size from 2-20 KB. Multiple processes enqueuing, and many more dequeuing. Tests performed about this time last year.

Out of ActiveMQ, RabbitMQ and Apollo:

ActiveMQ crashed constantly under load.

RabbitMQ could not enqueue/dequeue fast enough.

Apollo blew them all out of the water and, long-term, has proven extremely stable.

I must admit that I haven't spent hours on performance tuning. The results shown here, however, make me think I should go back and re-evaluate.


> ActiveMQ crashed constantly under load.

That has been my experience as well. To this day I have an ActiveMQ broker whose JVM just stops responding; it's not the broker that goes unresponsive, it's the entire JVM, with no error, no log, nothing. It just stops replying to the service wrapper and sits at 100% CPU usage. The wrapper kills that JVM and restarts it once every couple of days because this happens.

Oh, and this was fun: yesterday I went to check on the broker as it was running a bit slow, and the DLQ had ~50,000 messages in it that were not there last week, all timestamped to the same second in ~July 2012... nine months ago...

I'll likely never use ActiveMQ again.


AMQ is sensitive to proper memory configuration. Typically what happens is that the memory configuration in activemq.xml doesn't line up with the JVM heap, which causes garbage collection thrashing. It reaches a point where the CPU spikes and the monitor process zaps the JVM process, which causes a restart.
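Roughly, the limit you set in activemq.xml needs to sit comfortably below the heap you give the JVM. As a hedged example (exact numbers obviously depend on the setup):

    <!-- conf/activemq.xml: keep this well under the JVM's -Xmx (e.g. -Xmx1g) -->
    <systemUsage>
      <systemUsage>
        <memoryUsage>
          <memoryUsage limit="512 mb"/>
        </memoryUsage>
      </systemUsage>
    </systemUsage>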

If everything is tuned and lined up properly this won't happen. I've used AMQ in high-volume production systems for years without this problem.


I've played with this and I'm not getting anywhere near the -Xmx set for the JVM, and producer flow control isn't kicking in. Load on this broker isn't very high either: 5k to 10k messages/day, all messages << 1 KB.


"RabbitMQ could not enqueue/dequeue fast enough"? I think the STOMP plugin was still pre-production a year ago. You may want to try using RabbitMQ 3.x..


I think the STOMP plugin was still pre-production a year ago. You may want to try using RabbitMQ 3.x.

Cool, thanks, I'll take a look and see how we go :)


Sidebar --- anyone interested in running tests with RabbitMQ should check out these posts as a starting point.

http://www.rabbitmq.com/blog/2012/04/17/rabbitmq-performance...

http://www.rabbitmq.com/blog/2012/04/25/rabbitmq-performance...

Please note that:

1. the results are from last year, and so are not 100% representative of RabbitMQ 3.x, which was released after these posts were written

2. what actually matters with message queue performance is stability and scalability over time; testing this is extremely hard without making over-specific assumptions


I appreciate that some effort went into coding this benchmark, but I don't understand why no time was spent trying to optimise each MQ. You could even email the developers and ask for help, since you're going to be publishing this as a comparative benchmark.


I really dislike graphs that leave labels off. When presenting results it's nice to have a clear understanding of what's being displayed; the missing Y axis on these graphs is not helping.


"And now, the results (processing time measured in seconds: the lower the better)."


Anyone here use NSQ in production (https://github.com/bitly/nsq)? If so, what are your thoughts in comparison to the article?


we do; rather happy with it. do you have any specific questions about it? the article's approach is.. strange, so I'm not sure how I can meaningfully contrast from a high-level.

(disclosure: i've contributed to nsq, after we began using it in production)


Any gotchas moving from RabbitMQ to nsq?

Also, are you missing any features in nsq?


didn't run into any gotchas, but I tested things very thoroughly before going into production.

to the contrary, deploying static binaries (instead of the erlang environment) simplified things nicely, in my opinion. to be fair, i (personally) don't have the requisite experience tuning BEAM, which probably biases my preference.

while relatively high volume, our usage of RabbitMQ was straightforward and covered by the functionality offered in NSQ.

like most, we were using AMQP.. so the switch to NSQ's concise wire protocol (and the associated reduction in pkt/s) saved us a lot of pain given the highly-variable performance we see in the EC2 network.


there are a (growing) number of production installations of NSQ...

our (bitly's) cluster spans a few datacenters and hits peaks of 80k messages/second.

I can answer any questions you have (one of the authors)


Any advice on dealing with de-duping?

Also, how was performance using JSON data format compared with ProtoBuffers & MsgPack?


NSQ treats the message data as an opaque blob so the format wouldn't directly affect it (except on some lower level related to overall message size I suppose). It would impact your producers (encoding) and consumers (decoding), obviously.

re: de-duping - there are lots of things to consider, I highly recommend reading through http://cs.brown.edu/courses/csci2270/archives/2012/papers/we..., it's a fantastic paper. At a high level the answer is idempotency.
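As a hand-wavy, library-agnostic sketch of what that means on the consumer side (the names are made up; a real version keys off something durable like a DB unique constraint rather than an in-process set):

    # process each message's side effects at most once by keying on a message id
    processed = set()   # stand-in for a durable store

    def handle(msg_id, payload):
        if msg_id in processed:     # duplicate delivery: safe to ack and move on
            return True
        do_work(payload)            # hypothetical application-specific side effect
        processed.add(msg_id)
        return True                 # returning True acks the message in this sketch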

What sort of use case are you thinking of? (context would help answer your de-dupe question)


I'm interested to see ZMQ compared to the traditional brokers, as I've been considering switching from ActiveMQ, although I'm not surprised at its results. It very much reinforces the idea of keeping software simple and barebones, rather than bloating it with stuff most people never use.

Mainly because I'm finding the latency from ActiveMQ is starting to affect my overall system latency. RabbitMQ was the next broker on my list to test, but ZMQ makes more sense (if you don't mind writing the broker part).

Background: I'm currently running a system which at peak handles about 15 million messages per hour using ActiveMQ, with several producers and consumers on the same topic. Apart from speed, I've not had any issues with it.


Background: I'm currently running a system which at peak handles about 15 million messages per hour using ActiveMQ, with several producers and consumers on the same topic.

Another vote for RabbitMQ. We're currently using a small RabbitMQ cluster that is averaging 5000+ msgs/sec and it's not straining the system. At times we've experienced bursts approaching 10,000 msgs/sec without any issues.

We have around 30 producers and 45 consumers spread out over a wide range of queues & exchanges.

Whilst ZMQ is generally faster, it does require more effort to be useful, whereas RabbitMQ, I believe, provides the best of both worlds: blazing fast messaging combined with ease of use and setup.


I would not use ZeroMQ for anything but the most non-critical queues. ZeroMQ is not persistent: if your server crashes you lose all your queue contents.

My current favorite is RabbitMQ. It has improved steadily over the years, performs pretty well and is very easy to set up.


ZeroMQ is a messaging library only.

There is nothing stopping you from using ZeroMQ to build a daemon that implements a persistent queue; in fact, I have, several times. It's very simple.
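Not my production code, but the core idea fits in a few lines with pyzmq: journal each message to disk before forwarding it (a real daemon also needs replay on startup, acking, and so on):

    import zmq

    ctx = zmq.Context()
    inbound = ctx.socket(zmq.PULL)
    inbound.bind("tcp://*:5559")
    outbound = ctx.socket(zmq.PUSH)
    outbound.bind("tcp://*:5560")

    with open("queue.journal", "ab") as journal:
        while True:
            msg = inbound.recv()
            journal.write(len(msg).to_bytes(4, "big") + msg)  # length-prefixed record
            journal.flush()   # flush before forwarding; real durability needs fsync
            outbound.send(msg)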


And then you've made your own broker. At that point why not use an existing, more mature broker?


The zero in ZeroMQ is there because brokerlessness is a strength in some respects.

A broker can be an SPOF, for instance. 0MQ makes it easy to do shared-nothing.


I know what ZeroMQ does. But you argue that one can easily write a broker on top of ZeroMQ that provides persistence, and that invalidates your biggest reason for using ZeroMQ, namely brokerlessness.

Besides, SPOF is not an argument: all serious brokers have supported clustering and failover for quite a while, and they support persistence. I don't see a good reason why you would want to write a ZeroMQ-based broker instead of just using RabbitMQ.


ZeroMQ is not persistent: if your server crashes you lose all your queue contents.

With my requirements, if I haven't processed the message quickly enough then I don't care if it gets lost, as it would already be outdated data.


+1 for the carrot muncher here as well


Background: I'm currently running a system which at peak handles about 15 million messages per hour using ActiveMQ, with several producers and consumers on the same topic. Apart from speed, I've not had any issues with it.

ActiveMQ or ActiveMQ Apollo? Definitely give Apollo a try if you haven't; it's incredibly easy to drop in and requires very little configuration.

I'm processing similar peak levels with Apollo, and so far have been amazed at how well it handles things.


ActiveMQ or ActiveMQ Apollo?

ActiveMQ; I haven't looked much into Apollo since around its original announcement. How is it from a stability standpoint?


How is it from a stability standpoint?

The only issues I encountered were due to not assigning enough memory to the JVM. Other than that it's done the job admirably.


>>"Except for big messages, RabbitMQ seems to be the best bet as it outperforms others by a factor of 3."

RabbitMQ? Does the OP mean to say ZeroMQ?

ZeroMQ leads every benchmark in the blog.


True, but it's in-memory only (there was no persistence with ZeroMQ in the given setup).


Then why even test ZeroMQ?

If you plan to set up a benchmark, and then decide to throw out the results (winner: ZeroMQ) because they don't produce the outcome you wanted... then this benchmark is a joke to begin with.


He did say the ZeroMQ broker outperforms all the others. This means that unless you need complex broker features, ZeroMQ is a perfect message dispatcher among processes.


Because ZeroMQ is cool. Just look at all the buzz it gets on HN.


I don't understand the point of benchmarking these message brokers in their default configurations (usually tailored for ease of development, not performance). Who uses message brokers in the default configuration in real deployments?


I would like to see Kestrel (and Darner) in this benchmark. My personal experience with Darner is that it handles extreme loads. Granted, the MQ is not distributed and it is difficult to do "topics" in the JMS sense.


When is a message queue required? And when is a pub/sub setup with Redis good enough? I'm sure it's apples to oranges, but there must be a few places with ActiveMQ installed where it wasn't really needed.
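By a pub/sub setup with Redis I mean the fire-and-forget kind of thing below (a minimal redis-py sketch, local Redis assumed): messages go only to clients subscribed at publish time, with no persistence, acks, or redelivery, which is the main thing separating it from a proper queue.

    import redis

    r = redis.Redis()                              # local Redis assumed
    sub = r.pubsub(ignore_subscribe_messages=True)
    sub.subscribe('events')
    r.publish('events', 'hello')                   # lost if nobody is subscribed
    print(sub.get_message(timeout=1))              # None if nothing arrived in time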


I'm a bit sceptical of this, as the RabbitMQ performance is very similar between the transient and persistent setups. In my own tests they were up to an order of magnitude apart, even on an SSD.


I would rather see the number of processes varied.


What, no ZeroMQ? (Page won't load)



