2.1.0 beta
Pre-release: This is a beta release for RethinkDB 2.1. It is not for production use and has known bugs. Please do not use this version for production data.
We are looking forward to your bug reports on GitHub or on our mailing list.
Release highlights:
- Automatic failover using a Raft-based protocol
- More flexible administration for servers and tables
- Advanced recovery features
Read the blog post for more details.
Download
Update 07/27/2015: The server downloads have been updated to include additional bug fixes and improvements.
1. Download the server
- Source tarball
- OS X 64 bit dmg
- CentOS 6 and 7 64 bit | 32 bit
- Ubuntu 10.04 lucid 64 bit | 32 bit
- Ubuntu 12.04 precise 64 bit | 32 bit
- Ubuntu 13.10 saucy 64 bit | 32 bit
- Ubuntu 14.04 trusty 64 bit | 32 bit
- Ubuntu 14.10 utopic 64 bit | 32 bit
- Ubuntu 15.04 vivid 64 bit | 32 bit
- Debian wheezy 64 bit | 32 bit
- Debian jessie 64 bit | 32 bit
2. Download a driver
JavaScript
$ npm install http://download.rethinkdb.com/dev/2.1.0-0BETA1/rethinkdb-2.1.0-BETA1.nodejs.tgz
Python
$ pip install http://download.rethinkdb.com/dev/2.1.0-0BETA1/rethinkdb-2.1.0beta1.python.tar.gz
Ruby
$ wget http://download.rethinkdb.com/dev/2.1.0-0BETA1/rethinkdb-2.1.0.beta.1.gem
$ gem install rethinkdb-2.1.0.beta.1.gem
Compatibility
This beta release does not include automatic migration of data
directories from older versions of RethinkDB. The final release of RethinkDB 2.1 will
automatically migrate data from RethinkDB 1.14 and up.
If you're upgrading directly from RethinkDB 1.13 or earlier, you will need to manually upgrade using `rethinkdb dump`.
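A manual upgrade with `rethinkdb dump` might look like the following sketch. The connection address and archive name are placeholders; adjust them for your deployment.

```shell
# Export all data from the old server into a tar.gz archive.
rethinkdb dump -c localhost:28015 -f rethinkdb_dump.tar.gz

# Stop the old server, install RethinkDB 2.1 with a fresh data
# directory, start it, then import the archive into the new server.
rethinkdb restore rethinkdb_dump.tar.gz -c localhost:28015
```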
Changed handling of server failures
This release introduces a new system for dealing with server failures and network
partitions based on the Raft consensus algorithm.
Previously, unreachable servers had to be manually removed from the cluster in order to
restore availability. RethinkDB 2.1 can resolve many cases of availability loss
automatically, and keeps the cluster in an administrable state even while servers are
missing.
There are three important scenarios in RethinkDB 2.1 when it comes to restoring the
availability of a given table after a server failure:
- The table has three or more replicas, and a majority of the servers hosting these replicas are connected. RethinkDB 2.1 automatically elects new primary replicas to replace unavailable servers and restore availability. No manual intervention is required, and data consistency is maintained.
- A majority of the servers for the table are connected, regardless of the number of replicas. The table can be manually reconfigured using the usual commands, and data consistency is always maintained.
- A majority of the servers for the table are unavailable. The new `emergency_repair` option to `table.reconfigure` can be used to restore table availability in this case.
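The majority rule behind these three scenarios can be sketched in a few lines of Python. This is only an illustration of the quorum arithmetic, not RethinkDB's implementation; the function name is hypothetical.

```python
def recovery_action(connected_replicas, total_replicas):
    """Which recovery path applies to a table whose replicas live on
    `total_replicas` servers, of which `connected_replicas` are reachable.
    Illustrative sketch of the scenarios described above."""
    has_majority = connected_replicas > total_replicas // 2
    if has_majority and total_replicas >= 3:
        return "automatic failover"   # new primaries elected, no intervention
    if has_majority:
        return "manual reconfigure"   # usual commands, consistency preserved
    return "emergency repair"         # table.reconfigure with emergency_repair

print(recovery_action(2, 3))  # automatic failover
```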
System table changes
To reflect changes in the underlying cluster administration logic, some of the tables in the `rethinkdb` database changed.
Changes to `table_config`:
- Each shard subdocument now has a new field, `nonvoting_replicas`, which can be set to a subset of the servers in the `replicas` field.
- `write_acks` must now be either `"single"` or `"majority"`. Custom write ack specifications are no longer supported. Instead, non-voting replicas can be used to set up replicas that do not count towards the write ack requirements.
- Tables that have all of their replicas disconnected are now listed as special documents with an `"error"` field.
- Servers that are disconnected from the cluster are no longer included in the table.
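The two new `table_config` invariants can be checked with a small Python sketch. The shard document below is hypothetical, and `valid_shard_config` is an illustrative helper, not part of any driver.

```python
def valid_shard_config(shard, write_acks):
    """Check the two invariants described above: nonvoting_replicas must
    be a subset of replicas, and write_acks must be "single" or "majority"."""
    nonvoting_ok = set(shard.get("nonvoting_replicas", [])) <= set(shard["replicas"])
    acks_ok = write_acks in ("single", "majority")
    return nonvoting_ok and acks_ok

# A hypothetical shard subdocument from table_config:
shard = {"primary_replica": "server_a",
         "replicas": ["server_a", "server_b", "server_c"],
         "nonvoting_replicas": ["server_c"]}

print(valid_shard_config(shard, "majority"))  # True
```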
Changes to `table_status`:
- The `primary_replica` field is now called `primary_replicas` and has an array of current primary replicas as its value. While under normal circumstances only a single server will be serving as the primary replica for a given shard, there can temporarily be multiple primary replicas during handover or while data is being transferred between servers.
- The possible values of the `state` field are now `"ready"`, `"transitioning"`, `"backfilling"`, `"disconnected"`, `"waiting_for_primary"` and `"waiting_for_quorum"`.
- Servers that are disconnected from the cluster are no longer included in the table.
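The meaning of the new `primary_replicas` array can be summed up in a short sketch; `shard_phase` is a hypothetical helper over a shard entry from `table_status`, not driver API.

```python
def shard_phase(status_shard):
    """Interpret the primary_replicas array: one entry is the normal
    case; more than one means a handover or data transfer is in progress."""
    primaries = status_shard["primary_replicas"]
    if len(primaries) == 1:
        return "stable"
    if len(primaries) > 1:
        return "handover in progress"
    return "no primary"

print(shard_phase({"primary_replicas": ["server_a"]}))  # stable
```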
Changes to `current_issues`:
- The issue types `"table_needs_primary"`, `"data_lost"`, `"write_acks"`, `"server_ghost"` and `"server_disconnected"` can no longer occur.
- A new issue type, `"table_availability"`, was added and appears whenever a table is missing at least one server. Note that no issue is generated if a server that is not hosting any replicas disconnects.
Other API-breaking changes
- `.split('')` now treats the input as UTF-8 instead of an array of bytes.
- `null` values in compound indexes are no longer discarded.
- The new `read_mode="outdated"` optional argument replaces `use_outdated=True`.
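The effect of the `.split('')` change can be illustrated in plain Python: splitting a UTF-8 string into characters no longer yields one element per byte.

```python
s = "naïve"

# New behavior: one element per UTF-8 character.
chars = list(s)
print(chars)  # ['n', 'a', 'ï', 'v', 'e'] — 5 elements

# The old behavior treated the input as raw bytes; 'ï' is two bytes
# in UTF-8, so the byte-wise view has 6 elements.
raw_bytes = list(s.encode("utf-8"))
print(len(raw_bytes))  # 6
```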
New features
- Server
  - Added automatic failover and semi-lossless rebalance based on Raft (#223)
  - Backfills are now interruptible and reversible (#3886, #3885)
  - `table.reconfigure` now works even if some servers are disconnected (#3913)
  - Replicas can now be marked as voting or non-voting (#3891)
  - Added an emergency repair feature to restore table availability if consensus is lost (#3893)
  - Reads can now be made against a majority of replicas (#3895)
  - Added an emergency read mode that extracts data directly from a given replica for data recovery purposes (#4388)
  - Servers with no responsibilities can now be removed from clusters without raising an issue (#1790)
- ReQL
  - Added `ceil`, `floor` and `round` (#866)
- All drivers
- Python driver
Improvements
- Server
  - Improved the handling of cluster membership and removal of servers (#3262, #3897, #1790)
  - Changed the formatting of the `table_status` system table (#3882, #4196)
  - Added an `indexes` field to the `table_config` system table (#4525)
  - Improved efficiency by making `datum_t` movable (#4056)
  - ReQL backtraces are now faster and smaller (#2900)
  - Replaced cJSON with rapidjson (#3844)
  - Failed meta operations are now transparently retried (#4199)
  - Added more detailed logging of cluster events (#3878)
  - Improved unsaved data limit throttling (#4441)
- ReQL
- Web UI
- JavaScript driver
- Python driver
  - Added an `r.__version__` property (#3100)
Bug fixes
- `time_of_day` and `date` now respect timezones (#4149)
- Added code to work around a bug in some versions of GLIBC and EGLIBC (#4470)
- Python driver
  - Fixed a missing argument error (#4402)
- JavaScript driver
  - Made the handling of the `db` optional argument to `run` consistent with the Ruby and Python drivers (#4347)
Contributors
Many thanks to external contributors from the RethinkDB community for helping
us ship RethinkDB 2.1. In no particular order:
- Thomas Kluyver (@takluyver)
- Jonathan Phillips (@jipperinbham)
- Yohan Graterol (@yograterol)
- Adam Grandquist (@grandquista)
- Peter Hamilton (@hamiltop)
- Marshall Cottrell (@marshall007)
- Elias Levy (@eliaslevy)
- Ian Beringer (@ianberinger)
- Jason Dobry (@jmdobry)
- Wankai Zhang (@wankai)
- Elifarley Cruz (@elifarley)
- Brandon Mills (@btmills)
- Daniel Compton (@danielcompton)
- Ed Costello (@epc)
- Lowe Thiderman (@thiderman)
- Andy Wilson (@wilsaj)