
[#4615] Customizable estimation for messages written outside the EventLoop #4620

Closed
wants to merge 1 commit

Conversation

technocoreai
Contributor

Motivation:

The estimation algorithm currently used for WriteTasks is complicated and wrong. Additionally, some code relies on the outbound buffer size being incremented only on actual writes to the outbound buffer.

Modifications:

  • Throw away the old estimator and replace it with a simple algorithm that uses the client-provided estimator along with a statically configured WriteTask overhead (the io.netty.transport.writeTaskSizeOverhead system property, with a default value of 48 bytes)
  • Add an io.netty.transport.estimateSizeOnSubmit boolean system property allowing clients to disable message estimation outside the event loop

Result:

Task estimation is user controllable and produces better results by default
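
For illustration, here is a rough sketch of how the two properties could drive the size attributed to a write submitted from outside the event loop. The constant names mirror the ones visible in the diff below; the plain JDK property reads, the WriteTaskSizing class name, and the messageSize parameter (standing in for whatever the client-provided MessageSizeEstimator returns) are assumptions made for the sketch, not the actual Netty internals:

// Sketch only, not the Netty source: reading the two properties and combining them
// into a per-task size estimate.
final class WriteTaskSizing {

    // io.netty.transport.writeTaskSizeOverhead: fixed per-WriteTask overhead, default 48 bytes
    static final int WRITE_TASK_OVERHEAD =
            Integer.getInteger("io.netty.transport.writeTaskSizeOverhead", 48);

    // io.netty.transport.estimateSizeOnSubmit: the PR describes it as a way to disable
    // estimation outside the event loop, so the default is taken as true here
    static final boolean ESTIMATE_TASK_SIZE_ON_SUBMIT =
            Boolean.parseBoolean(System.getProperty("io.netty.transport.estimateSizeOnSubmit", "true"));

    // messageSize stands in for the value returned by the client-provided estimator
    static int taskSize(int messageSize) {
        if (!ESTIMATE_TASK_SIZE_ON_SUBMIT) {
            return 0; // estimation disabled: nothing is counted until the write actually runs
        }
        return messageSize + WRITE_TASK_OVERHEAD;
    }
}

Under this reading, starting the JVM with -Dio.netty.transport.writeTaskSizeOverhead=64 would charge 64 bytes of overhead per pending task instead of 48, and -Dio.netty.transport.estimateSizeOnSubmit=false would stop counting anything for a write until it reaches the event loop.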

@technocoreai
Contributor Author

This is for the 4.1 branch; I'll cross-port to 4.0/5.0 once you approve. The default task size is hardcoded. I could do something fancier like detecting the JVM bitness and CompressedOOPS, but that's way too complicated, and users can just change the value as necessary.

buffer.incrementPendingOutboundBytes(task.size);

if (ESTIMATE_TASK_SIZE_ON_SUBMIT) {
    task.size = ((AbstractChannel) ctx.channel()).estimatorHandle().size(msg) + WRITE_TASK_OVERHEAD;
Member

@technocoreai just one nit: can you move this into the if (buffer != null) { ... } block and, if buffer == null, set the size to 0?

@normanmaurer
Member

@technocoreai only one tiny comment. No need to port this to other branches, I will just cherry-pick it from 4.1 once in there.

@technocoreai
Contributor Author

@normanmaurer I've updated the patch; it now initializes the size to 0 and then updates it if estimation is enabled and the channel is still open. Is that fine?

// Check for null as it may be set to null if the channel is closed already
if (buffer != null) {
buffer.incrementPendingOutboundBytes(task.size);
task.size = 0;
Member

No need to set this to 0 if ESTIMATE_TASK_SIZE_ON_SUBMIT is true. Move this to an else block please.

Contributor Author

@normanmaurer Fixed, although I don't think that the size of a task written to a closed channel matters much.
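
For reference, a minimal sketch of the arrangement this exchange converges on, reusing the WriteTaskSizing constants from the sketch in the description above. OutboundBufferStub and initTaskSize are illustrative placeholders, not Netty API; the real patch works on ChannelOutboundBuffer inside the WriteTask setup:

// Illustrative shape: estimate and count pending bytes only while the outbound buffer
// still exists; otherwise fall into the else branch so the task carries a size of 0.
final class WriteTaskInitSketch {

    // Placeholder standing in for the real ChannelOutboundBuffer
    static final class OutboundBufferStub {
        long pendingBytes;

        void incrementPendingOutboundBytes(long size) {
            pendingBytes += size;
        }
    }

    static int initTaskSize(OutboundBufferStub buffer, int estimatedMessageSize) {
        final int size;
        // buffer may be null if the channel is closed already
        if (WriteTaskSizing.ESTIMATE_TASK_SIZE_ON_SUBMIT && buffer != null) {
            size = estimatedMessageSize + WriteTaskSizing.WRITE_TASK_OVERHEAD;
            buffer.incrementPendingOutboundBytes(size);
        } else {
            // estimation disabled or channel already closed: nothing was counted,
            // so there is nothing to decrement later
            size = 0;
        }
        return size;
    }
}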

@normanmaurer
Member

Cherry-picked into 4.0 (d2ddb52) and 4.1 (cfd6793)

@technocoreai thanks again!

@technocoreai
Contributor Author

@normanmaurer The commit looks the same for both branches and I don't see it in the log. Do you intend to push the 4.0 fix later?

@normanmaurer
Member

@technocoreai sorry I somehow thought I pushed to 4.0 as well. Just updated the comment to link to the correct commit sha and pushed it. d2ddb52

@technocoreai
Contributor Author

@normanmaurer thanks!
