Clustering Redis to maximize uptime and scale
At Recurly we use Redis to power website caching, queuing, and some application data storage. Like all the systems we run internally, it has to be stable, fail over automatically in case of failure, and scale easily should we need to add more capacity.
What is Redis?
Redis is an open source, BSD licensed, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.
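As a quick illustration of those data types (the keys and values here are made up, run against a local instance with redis-cli):

    redis-cli SET greeting "hello"            # plain string
    redis-cli HSET user:1 name "Ada"          # field in a hash
    redis-cli LPUSH jobs "send-email"         # list, handy as a simple queue
    redis-cli SADD tags "billing"             # set of unique members
    redis-cli ZADD scores 42 "player:1"       # sorted set keyed by a numeric score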
On its own, Redis does not meet these requirements, but with some extra software we can get to where we want to be.
Core Solution
Initially we looked at Redis Cluster. This looks great but our testing and the general consensus showed that it's not ready for production use just yet. We will definitely be keeping an eye on this project.
The solution we decided to go with is a mix of different pieces integrated together: Redis, keepalived, nutcracker/twemproxy, sentinel and smitty give us the scalability, stability and automatic failover we require.
Before I go into further detail on implementation, I think it's important to describe what this setup looks like.
Application traffic first passes through the virtual IP address provided by keepalived, which gives us IP failover should we lose a server.
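A minimal keepalived sketch for that virtual IP might look like the following; the interface name, priorities and the 10.0.0.100 address are placeholders rather than our production values:

    vrrp_instance redis_proxy_vip {
        state BACKUP              # both proxy hosts start as BACKUP; priority elects the MASTER
        interface eth0            # assumed network interface
        virtual_router_id 51
        priority 100              # set higher on the preferred node, lower on the standby
        advert_int 1
        virtual_ipaddress {
            10.0.0.100/24         # the VIP that application traffic points at
        }
    }

If the host holding the VIP disappears, the standby wins the next VRRP election and the address moves without any application changes.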
Traffic then passes to the twemproxy application. We have three traditional Redis primary > replica setups behind twemproxy. The reason for this is that twemproxy knows about all the primary servers in the cluster and automatically shards data between them. The primaries then replicate data to their, well, replicas. This means that we have three copies of the sharded data, split across all Redis servers, at all times.
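A nutcracker pool definition along these lines captures the idea; the host addresses, pool name and listen port are assumptions for illustration, not our real configuration:

    redis_pool:
      listen: 0.0.0.0:22121            # the port the application reaches through the VIP
      hash: fnv1a_64
      distribution: ketama             # consistent hashing spreads keys across the shards
      redis: true
      auto_eject_hosts: false          # leave failover to sentinel/smitty rather than ejecting hosts
      server_retry_timeout: 30000
      servers:
        - 10.0.1.1:6379:1 shard1       # primary for shard1 (hypothetical address)
        - 10.0.1.2:6379:1 shard2
        - 10.0.1.3:6379:1 shard3

The replicas never appear here; each one simply points at its primary (the slaveof directive in redis.conf at the time this was written), so twemproxy only ever talks to the primaries.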
The rest of the setup is better explained in a failover scenario.
If we lose a machine in the cluster, sentinel will notice and promote one of the replicas in that shard to the primary role; a more in-depth explanation can be found in the sentinel spec. Smitty watches sentinel for primary DOWN events and updates twemproxy with the new primary, so we stay up and our data stays consistent should we lose a node.
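For each shard, sentinel gets a monitor block roughly like the sketch below (the shard name, address, quorum and timeouts are illustrative, not our real values); when enough sentinels agree the primary is down, one of them runs the failover and announces the new primary, which is what smitty picks up in order to rewrite the twemproxy configuration:

    # quorum of 2 sentinels must agree before the primary is marked down
    sentinel monitor shard1 10.0.1.1 6379 2
    sentinel down-after-milliseconds shard1 5000
    sentinel failover-timeout shard1 60000
    # re-sync replicas to the new primary one at a time
    sentinel parallel-syncs shard1 1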
Monitoring
Every solution we deploy at Recurly is heavily monitored. Statistics taken from the monitoring software are used to alert our operations team to potential problems. This could be based on checks or trends from time series data.
We use Nagios for active checks, detailed below:
Monitor running processes using check_procs
Monitor open ports using check_tcp (example check definitions are sketched after this list)
Monit keeps an eye on sentinel, smitty and twemproxy, and restarts them if necessary
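As a sketch only (plugin paths, thresholds and ports are placeholders, not our production values), the process and port checks can be defined along these lines:

    # expect exactly one redis-server process on the host
    define command {
        command_name  check_redis_procs
        command_line  $USER1$/check_procs -c 1:1 -C redis-server
    }

    # confirm twemproxy is accepting connections on its listen port
    define command {
        command_name  check_twemproxy_port
        command_line  $USER1$/check_tcp -H $HOSTADDRESS$ -p 22121
    }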
For time-series data we use Graphite, which allows us to spot trends and alert on various metrics; the data is pulled directly from the cluster and fed into Graphite.
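One possible sketch of that collection step (the host names, metric prefix and chosen INFO fields are assumptions) is a small script that scrapes redis-cli INFO and writes to Graphite's plaintext port:

    #!/bin/bash
    # Hypothetical collector: push a few fields from `redis-cli INFO` into Graphite.
    REDIS_HOST="redis01"               # assumed Redis host
    GRAPHITE_HOST="graphite.internal"  # assumed carbon-cache host (plaintext port 2003)
    NOW=$(date +%s)

    redis-cli -h "$REDIS_HOST" info \
      | tr -d '\r' \
      | awk -F: -v host="$REDIS_HOST" -v now="$NOW" '$1 ~ /^(connected_clients|used_memory|instantaneous_ops_per_sec)$/ { print "redis." host "." $1, $2, now }' \
      | nc -q0 "$GRAPHITE_HOST" 2003   # -q0 closes after stdin; this flag varies between netcat flavours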
Backup
Redis, by default, stores all data in memory, but it has two options for data persistence if required: AOF and RDB. I won't go into too much detail, as a good explanation can be found in the Redis persistence documentation. We use AOF for persistence and backups because of the way RDB can block client requests while a snapshot is taken.
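The redis.conf side of that choice is only a few lines; the everysec fsync policy shown here is the common trade-off between durability and latency, not necessarily the exact values we run in production:

    # enable the append-only file and fsync once a second (at most ~1s of writes at risk)
    appendonly yes
    appendfsync everysec
    # rewrite the AOF in the background once it doubles in size
    auto-aof-rewrite-percentage 100
    auto-aof-rewrite-min-size 64mb
    # disable RDB snapshotting entirely
    save ""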
Security
Redis provides the option of password authentication, but twemproxy does not currently support it and the password can be easily cracked anyway. Instead we opted for a network-based security approach: access to our Redis clusters is tightly controlled by firewall rules, and we deploy separate Redis clusters to different VLANs on our network.
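In iptables terms the idea reduces to something like the rules below; the application subnet and ports are placeholders, and a real rule set also needs the usual established-connection and management entries:

    # Only the application VLAN may reach twemproxy and Redis; everything else is dropped.
    iptables -A INPUT -p tcp --dport 22121 -s 10.0.2.0/24 -j ACCEPT   # twemproxy
    iptables -A INPUT -p tcp --dport 6379  -s 10.0.2.0/24 -j ACCEPT   # redis
    iptables -A INPUT -p tcp --dport 22121 -j DROP
    iptables -A INPUT -p tcp --dport 6379  -j DROP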
Conclusion
Redis is a very fast, in-memory data store that we love. Using the techniques described above will allow you to scale it across multiple machines and handle outages gracefully.