
Seg fault in dispatchee_registration_t::mark_ready() #4875

Closed
danielmewes opened this issue Sep 22, 2015 · 3 comments
@danielmewes (Member)

Got this after about 5 minutes of running table_fuzzer.py --servers 8 --threads 64 --serve-flags "--cache-size 100" on next with a release mode binary.

2015-09-22T15:50:38.238761331 376.607789s error: Error in src/arch/runtime/thread_pool.cc at line 359:
2015-09-22T15:50:38.240173685 376.609201s error: Segmentation fault from reading the address 0x28.
2015-09-22T15:50:38.241654689 376.610682s error: Backtrace:
2015-09-22T15:50:42.376505458 380.745535s info: Table df720ae6-8978-47dc-92df-48a377e3bcdc: Starting a new Raft election for term 2.
2015-09-22T15:50:42.481147447 380.850175s error:

Tue Sep 22 15:50:38 2015

1: backtrace_t::backtrace_t() at backtrace.cc:203
2: lazy_backtrace_formatter_t::lazy_backtrace_formatter_t() at basic_string.h:269
3: format_backtrace(bool) at backtrace.cc:197
4: report_fatal_error(char const*, int, char const*, ...) at basic_string.h:287
5: linux_thread_pool_t::fatal_signal_handler(int, siginfo*, void*) at thread_pool.cc:359
6: /lib/x86_64-linux-gnu/libpthread.so.0(+0xfcb0) [0x7f6c4a847cb0] at 0x7f6c4a847cb0 (/lib/x86_64-linux-gnu/libpthread.so.0)
7: primary_dispatcher_t::dispatchee_registration_t::mark_ready() at primary_dispatcher.cc:51
8: mailbox_manager_t::mailbox_read_coroutine(connectivity_cluster_t::connection_t*, auto_drainer_t::lock_t, threadnum_t, unsigned long, std::vector<char, std::allocator<char> >*, long, mailbox_manager_t::force_yield_t) at mailbox.cc:277
9: /home/ssd3/daniel/rethinkdb/build/release_clang/rethinkdb() [0x777938] at 0x777938 ()
10: coro_t::run() at coroutines.cc:214
2015-09-22T15:50:42.484775028 380.853803s error: Exiting
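
For what it's worth, a segfault at a small address such as 0x28 usually indicates a member read through a null object pointer, where the faulting address equals the member's offset within the struct. The sketch below is a purely hypothetical illustration of how such an offset arises; it is not the real dispatchee_registration_t layout.

```cpp
#include <cstddef>
#include <cstdio>

// Hypothetical layout, not the actual dispatchee_registration_t: if a member
// happens to sit at offset 0x28 inside the object, reading it through a
// null pointer faults at exactly that small address.
struct example_t {
    char padding[0x28];
    long member_at_0x28;
};

int main() {
    std::printf("member offset = 0x%zx\n", offsetof(example_t, member_at_0x28));
    // example_t *p = nullptr;
    // long v = p->member_at_0x28;  // would segfault reading address 0x28
    return 0;
}
```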
danielmewes added this to the 2.1.x milestone on Sep 22, 2015
@danielmewes (Member, Author)

This might just be a destruction-order race in remote_replicator_server_t.
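
To make the suspected failure mode concrete, here is a minimal sketch assuming a raw pointer captured by deferred work outlives its owner. The types and member names below are hypothetical stand-ins, not RethinkDB's remote_replicator_server_t or dispatchee_registration_t.

```cpp
#include <functional>
#include <memory>
#include <vector>

// Purely illustrative sketch of a destruction-order race.
struct registration_t {
    bool ready = false;
    void mark_ready() { ready = true; }  // undefined behavior if `this` dangles
};

struct replicator_server_t {
    std::unique_ptr<registration_t> registration = std::make_unique<registration_t>();
    std::vector<std::function<void()>> pending;  // stands in for spawned coroutines

    void on_ready_message() {
        // BUG: captures a raw pointer; nothing keeps the registration alive
        // (or drains the pending work) before the registration can be destroyed.
        registration_t *reg = registration.get();
        pending.push_back([reg] { reg->mark_ready(); });
    }
};

int main() {
    replicator_server_t server;
    server.on_ready_message();
    auto deferred = std::move(server.pending);
    server.registration.reset();      // destruction runs first...
    for (auto &fn : deferred) fn();   // ...then the stale callback fires: use-after-free
}
```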

danielmewes self-assigned this on Sep 22, 2015
@danielmewes (Member, Author)

The likely fix is in CR 3239

@danielmewes (Member, Author)

Fixed in next 7ed3be2 and v2.1.x 8b4bba6

danielmewes modified the milestones: 2.1.x, 2.1.5 on Oct 1, 2015