Race condition in 2.10.1 release #119

Closed
errordaiwa opened this issue Jul 9, 2015 · 3 comments

@errordaiwa

I see this fix in the 2.10.2 Released (21-Aug-2012) release notes:

 Bug fix, potential race condition in BlockingWaitStrategy.

We are using the 2.10.1 release and I'm wondering in which situations this race condition can happen? The code seems rigorous.
Thanks!

mikeb01 commented Jul 10, 2015

The problem is to do with the optimisation in the BlockingWaitStrategy that tries to avoid waking up the consumers if none of them are waiting. However, due to a bug in that code it is possible for the consumers to miss a potential wake-up, so the ring buffer ends up with a message sitting unconsumed in the buffer and it won't be processed until the next message arrives. This actually happened in our production system. We had a latency spike of 6 seconds due to a missed wake-up.
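
Roughly, the shape of that optimisation and of the lost wake-up looks something like the sketch below. This is a hypothetical illustration with made-up names (LostWakeupSketch, numWaiters, cursor), not the actual BlockingWaitStrategy source: the publisher only signals when it can see a waiter, but it reads the waiter count without any synchronisation, so it can observe a stale zero while a consumer is already parked.

```java
// Hypothetical sketch of the "skip the signal when nobody is waiting"
// optimisation and how it can lose a wake-up. Names are illustrative only.
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

public class LostWakeupSketch {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private int numWaiters = 0;          // only written under the lock, but read without it below
    private volatile long cursor = -1L;  // last published sequence

    // Consumer: block until the publisher has reached 'sequence'.
    public long waitFor(long sequence) throws InterruptedException {
        long available;
        if ((available = cursor) < sequence) {
            lock.lock();
            try {
                ++numWaiters;
                while ((available = cursor) < sequence) {
                    notEmpty.await();    // consumer parks here
                }
            } finally {
                --numWaiters;
                lock.unlock();
            }
        }
        return available;
    }

    // Publisher: advance the cursor, then signal only if a waiter is visible.
    public void publish(long sequence) {
        cursor = sequence;
        // numWaiters is read WITHOUT the lock, so this thread can see a stale 0
        // even though a consumer is already parked in await() above. In that case
        // no signal is sent and the event sits unconsumed until the next publish.
        if (numWaiters != 0) {
            lock.lock();
            try {
                notEmpty.signalAll();
            } finally {
                lock.unlock();
            }
        }
    }
}
```

In the sketch, dropping the numWaiters check and signalling unconditionally closes the window, which is in spirit what removing the optimisation in 2.10.4 does, at the cost of taking the lock on every publish.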

Definitely upgrade, either to 2.10.4, which removes the optimisation altogether, or, even better, use the latest version 3 of the Disruptor, which has some significant performance improvements for multiple publishers.

@errordaiwa (Author)

@mikeb01, thank you sincerely for your help.

We are trying to move to v2.10.4 and to use waitFor instead of waitFor(timeout), so we want to write a unit test to make sure this bug is fixed. But I can't reproduce it using v2.10.1. Is there any way to reproduce this race condition?

mikeb01 commented Jul 13, 2015

I can't think of a test that will deterministically fail for v2.10.1. This code ran fine in production for over 7 months before we noticed the issue. To reproduce it, I think you need to have a producer sleep for a short (random) amount of time, then publish a bunch of messages, and then wait to see if they've all been consumed. Eventually you'll see a situation where the consumer is sitting idle and not all of the messages have been consumed.
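
Something along these lines is one way to set that up. It is a non-deterministic stress harness in the spirit of the recipe above, written against the hypothetical LostWakeupSketch class from the earlier comment rather than the real 2.10.1 RingBuffer/BatchEventProcessor API, so the names, burst sizes, and the one-second grace period are all assumptions; a run that reports published > consumed while the consumer sits parked is the missed wake-up.

```java
// Rough stress harness: producer sleeps a short random time, publishes a burst,
// then checks whether the consumer caught up within a grace period.
// Non-deterministic by nature; a clean run does not prove the bug is absent.
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicLong;

public class MissedWakeupStressTest {
    public static void main(String[] args) throws Exception {
        LostWakeupSketch buffer = new LostWakeupSketch();
        AtomicLong consumed = new AtomicLong(-1L);

        // Consumer: drain sequences one by one; may park forever on a lost wake-up.
        Thread consumer = new Thread(() -> {
            long next = 0;
            try {
                while (!Thread.currentThread().isInterrupted()) {
                    long available = buffer.waitFor(next);
                    consumed.set(available);
                    next = available + 1;
                }
            } catch (InterruptedException ignored) {
            }
        });
        consumer.setDaemon(true);
        consumer.start();

        long published = -1;
        for (int iteration = 0; iteration < 100_000; iteration++) {
            Thread.sleep(ThreadLocalRandom.current().nextInt(3)); // short random pause
            int burst = 1 + ThreadLocalRandom.current().nextInt(10);
            for (int i = 0; i < burst; i++) {
                buffer.publish(++published);                      // publish a bunch of messages
            }
            long deadline = System.currentTimeMillis() + 1_000;   // generous grace period
            while (consumed.get() < published && System.currentTimeMillis() < deadline) {
                Thread.sleep(1);
            }
            if (consumed.get() < published) {
                System.out.printf("Possible missed wake-up: published=%d consumed=%d iteration=%d%n",
                        published, consumed.get(), iteration);
                return;
            }
        }
        System.out.println("No missed wake-up observed in this run.");
    }
}
```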
