
Range overhead reduction. #3138

Merged
merged 1 commit into ReactiveX:1.x from RangePerf on Aug 12, 2015

Conversation

@akarnokd (Member) commented Aug 9, 2015

Applied some refactorings and local-variable usage to reduce the overhead; a sketch of the resulting fast-path loop follows the observations below.

A few observations:

  • Having too many local variables may cause register spilling, even on x64, which makes some size benchmarks faster and others slower.
  • The observeOn benchmarks are quite hectic because of receiving-thread migration caused by the round-robin worker assignment; this affects the benchmarks with 1 or 1000 elements in the stream.
  • Note that the previous OperatorRangePerf size = 1 benchmark measured the speed of just() due to the optimization in range() (sketched after this list). The updated perf now instantiates OnSubscribeRange directly.
  • Note that the observeOn benchmark with size = 1 runs just() as well.
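
As referenced above, a minimal sketch of the local-variable pattern on the unbounded fast path (class and member names are illustrative assumptions, not the actual OnSubscribeRange source):

```java
import rx.Subscriber;

// Illustrative sketch: fields needed by the hot loop are read into locals
// once, so the JIT can keep the loop body in registers instead of
// re-loading instance fields on every element.
final class RangeFastPathSketch {
    final int start;                         // first value, inclusive
    final int end;                           // last value, inclusive
    final Subscriber<? super Integer> child; // downstream subscriber

    RangeFastPathSketch(int start, int end, Subscriber<? super Integer> child) {
        this.start = start;
        this.end = end;
        this.child = child;
    }

    // Unbounded "fast path": no per-element backpressure accounting.
    void fastPath() {
        // Hoist the fields into locals before the loop.
        final Subscriber<? super Integer> c = child;
        final long last = end + 1L; // widened bound; see the follow-up comment below
        for (long i = start; i != last; i++) {
            if (c.isUnsubscribed()) {
                return; // downstream went away; stop emitting
            }
            c.onNext((int) i);
        }
        if (!c.isUnsubscribed()) {
            c.onCompleted();
        }
    }
}
```

As the first observation notes, hoisting can backfire: with too many locals the JIT may run out of registers and spill to the stack, which is why some benchmark sizes got faster and others slower.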

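For the third observation, a simplified assumption of the shortcut in the range() factory (not the verbatim RxJava source; the OnSubscribeRange constructor shape is assumed) that made the old size = 1 benchmark measure just():

```java
import rx.Observable;
import rx.internal.operators.OnSubscribeRange;

public final class RangeFactorySketch {
    // Simplified sketch: a count of 1 short-circuits to just(start), so a
    // size = 1 benchmark of range() never touched OnSubscribeRange at all.
    public static Observable<Integer> range(int start, int count) {
        if (count == 0) {
            return Observable.empty();
        }
        if (count == 1) {
            return Observable.just(start); // the path the old benchmark measured
        }
        // Assumed constructor arguments: inclusive start and end of the range.
        return Observable.create(new OnSubscribeRange(start, start + (count - 1)));
    }
}
```
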
Benchmark comparison (i7 4770K, Windows 7 x64, Java 8u51):
[benchmark comparison image]

@akarnokd (Member, Author) commented:

Updated the PR to fix the missing widening in the fast path when the end is Integer.MAX_VALUE, by adding 1L (the slow path already adds the long idx value, which performs this widening).
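
A self-contained illustration of that fix (values and names are hypothetical, not the operator's code):

```java
// Demonstrates why an int-based fast-path bound breaks at Integer.MAX_VALUE
// and how adding 1L (widening to long) repairs it.
public class WideningDemo {
    public static void main(String[] args) {
        final int start = Integer.MAX_VALUE - 2;
        final int end = Integer.MAX_VALUE;

        // Buggy shape: `i <= end` can never become false when end is
        // Integer.MAX_VALUE, because the int increment wraps to
        // Integer.MIN_VALUE -- the loop would never terminate.
        // for (int i = start; i <= end; i++) { ... }

        // The fix: `end + 1L` performs the addition in 64-bit arithmetic,
        // so the exclusive bound 2147483648 is representable and the loop ends.
        final long last = end + 1L;
        for (long i = start; i != last; i++) {
            System.out.println((int) i); // prints the last three int values
        }
    }
}
```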

@akarnokd (Member, Author) commented:

I'm merging this; the values are better for the thread-stable range perf, and observeOn is simply too hectic with the round-robin core usage.

akarnokd added a commit that referenced this pull request on Aug 12, 2015
@akarnokd merged commit 054ba58 into ReactiveX:1.x on Aug 12, 2015
@akarnokd deleted the RangePerf branch on Aug 12, 2015 at 20:14