Consistent hashing
The solution is consistent hashing. Introduced as a term in 1997, consistent hashing was originally used to route requests among a large and constantly changing pool of web servers. It is easy to see how the web benefits from a hashing mechanism that allows any node in the network to efficiently determine the location of an object, despite nodes constantly joining and leaving the network. This is the fundamental objective of consistent hashing.
How it works
With consistent hashing, the buckets are arranged in a ring that covers a predefined range of values; the exact range depends on the partitioner being used. Keys are hashed to produce a value that lies somewhere along the ring, and each node is assigned a portion of the ring, which is computed as follows:
Tip
The following examples assume the default Murmur3Partitioner is used. For more information on this partitioner, refer to the documentation at http://docs.datastax.com/en/cassandra/3.x/cassandra/architecture/archPartitionerM3P.html
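To make the arithmetic concrete, here is a minimal Python sketch (not Cassandra code, simply an illustration) that divides the Murmur3Partitioner's token space of -2^63 to 2^63 - 1 evenly among five nodes, which is essentially the calculation used to pick evenly spaced initial tokens:

    # Sketch: evenly spaced tokens for a five-node cluster over the
    # Murmur3Partitioner token space of -2**63 .. 2**63 - 1.
    NUM_NODES = 5
    MIN_TOKEN = -2**63
    RING_SIZE = 2**64

    def initial_tokens(num_nodes):
        """Return one evenly spaced token per node, lowest first."""
        return [MIN_TOKEN + (i * RING_SIZE) // num_nodes
                for i in range(num_nodes)]

    for node, token in enumerate(initial_tokens(NUM_NODES), start=1):
        print("node", node, ": token", token)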
For a five-node cluster, then, a ring with evenly distributed token ranges would look like this:
The primary replica for each key is assigned to a node based on its hashed value. Each node is responsible for the region of the ring between itself (inclusive) and its predecessor (exclusive).
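Expressed in code, this ownership rule looks roughly like the following sketch, assuming a plain sorted list of node tokens (Cassandra's actual replica placement logic is more involved):

    import bisect

    def find_owner(key_token, node_tokens):
        """Return the token of the node that owns key_token.

        A node owns the range from its predecessor's token (exclusive)
        up to its own token (inclusive), so the owner is the first node
        token >= key_token, wrapping around to the lowest token if no
        larger one exists.
        """
        tokens = sorted(node_tokens)
        idx = bisect.bisect_left(tokens, key_token)
        return tokens[idx % len(tokens)]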
This diagram represents the data ranges (the letters) and the nodes (the numbers) that own those ranges. It may also be helpful to visualize this in table form, which may be more familiar to those who have used the nodetool ring command to view Cassandra's topology:
When Cassandra receives a key for either a read or a write, the same hash function is applied to the key to determine where it lies in the range. Since all nodes in the cluster are aware of the other nodes' ranges, any node can handle a request for any other node's range. The node receiving the request is called the coordinator, and any node can act in this role. If a key does not belong to the coordinator's range, it forwards the request to replicas in the correct range.
Following our previous example, we can now examine how our names might map to a hash, using the Murmur3 hash algorithm. Once the values are computed, they can be matched to the range of one of the nodes in the cluster, as follows:
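As a rough illustration (the names here are hypothetical, and a generic MD5-derived 64-bit value stands in for Cassandra's Murmur3 hash, so the tokens shown will not match real Cassandra output), the same lookup can be applied to a few example keys using the sketches above:

    import hashlib

    def toy_token(key):
        """Stand-in 64-bit signed token; Cassandra itself uses Murmur3."""
        digest = hashlib.md5(key.encode("utf-8")).digest()
        unsigned = int.from_bytes(digest[:8], "big")
        return unsigned - 2**63  # shift into -2**63 .. 2**63 - 1

    node_tokens = initial_tokens(NUM_NODES)   # from the earlier sketch
    for name in ["alice", "bob", "carol"]:    # hypothetical keys
        token = toy_token(name)
        print(name, "->", token, "-> node token", find_owner(token, node_tokens))

Each key hashes to a single token, and that token falls into exactly one node's range, regardless of which node performs the calculation.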
The placement of these keys might be easier to understand by visualizing their position in the ring:
The hash value of the name keys determines their placement in the cluster
Now that you understand the basics of consistent hashing, let's turn our focus to the mechanism by which Cassandra assigns data ranges.