Implement Nginx Redis cluster caching for high availability

Level: Advanced | Estimated time: 45 min | Updated: Apr 05, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Set up a Redis cluster with Nginx caching for high availability and improved performance. This configuration provides distributed caching with automatic failover and enhanced scalability for production web applications.

Prerequisites

  • Root or sudo access
  • At least 4GB RAM
  • Basic understanding of Redis and Nginx
  • Network connectivity between cluster nodes

What this solves

Redis cluster caching with Nginx provides distributed, fault-tolerant caching that automatically handles node failures while maintaining high performance. This setup eliminates single points of failure in your caching layer and scales horizontally to handle increased traffic loads. You need this when your application requires high availability caching that can survive individual Redis node failures without losing service availability.

Step-by-step installation

Update system packages

Start by updating your package manager to ensure you have the latest security patches and package versions.

# Ubuntu / Debian
sudo apt update && sudo apt upgrade -y

# AlmaLinux / Rocky Linux
sudo dnf update -y

Install Redis server and development tools

Install Redis and the necessary development packages to build Nginx with Redis module support.

# Ubuntu / Debian
sudo apt install -y redis-server build-essential libpcre3-dev zlib1g-dev libssl-dev libgeoip-dev libhiredis-dev git

# AlmaLinux / Rocky Linux
sudo dnf install -y redis gcc gcc-c++ pcre-devel zlib-devel openssl-devel geoip-devel hiredis-devel git make

Download and compile Nginx with Redis module

Download Nginx source code and the Redis module, then compile them together for Redis caching support.

cd /tmp
wget https://nginx.org/download/nginx-1.24.0.tar.gz
tar -xzf nginx-1.24.0.tar.gz
git clone https://github.com/openresty/redis2-nginx-module.git
git clone https://github.com/openresty/set-misc-nginx-module.git
git clone https://github.com/openresty/echo-nginx-module.git

Configure and compile Nginx

Configure Nginx with the Redis module and other necessary modules, then compile and install it.

cd /tmp/nginx-1.24.0
./configure --prefix=/etc/nginx \
--sbin-path=/usr/sbin/nginx \
--modules-path=/usr/lib/nginx/modules \
--conf-path=/etc/nginx/nginx.conf \
--error-log-path=/var/log/nginx/error.log \
--http-log-path=/var/log/nginx/access.log \
--pid-path=/var/run/nginx.pid \
--lock-path=/var/run/nginx.lock \
--http-client-body-temp-path=/var/cache/nginx/client_temp \
--http-proxy-temp-path=/var/cache/nginx/proxy_temp \
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp \
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp \
--http-scgi-temp-path=/var/cache/nginx/scgi_temp \
--with-http_ssl_module \
--with-http_realip_module \
--with-http_addition_module \
--with-http_sub_module \
--with-http_dav_module \
--with-http_flv_module \
--with-http_mp4_module \
--with-http_gunzip_module \
--with-http_gzip_static_module \
--with-http_random_index_module \
--with-http_secure_link_module \
--with-http_stub_status_module \
--with-http_auth_request_module \
--with-threads \
--with-stream \
--with-stream_ssl_module \
--with-http_slice_module \
--with-http_v2_module \
--add-module=/tmp/redis2-nginx-module \
--add-module=/tmp/set-misc-nginx-module \
--add-module=/tmp/echo-nginx-module

make -j$(nproc)
sudo make install

Create Nginx user and directories

Create the nginx user account and necessary directories with proper permissions for security.

sudo useradd --system --home /var/cache/nginx --shell /sbin/nologin --comment "nginx user" --user-group nginx
sudo mkdir -p /var/cache/nginx/client_temp
sudo mkdir -p /var/cache/nginx/proxy_temp
sudo mkdir -p /var/cache/nginx/fastcgi_temp
sudo mkdir -p /var/cache/nginx/uwsgi_temp
sudo mkdir -p /var/cache/nginx/scgi_temp
sudo chown -R nginx:nginx /var/cache/nginx
sudo chmod 755 /var/cache/nginx

Create Nginx systemd service

Create /etc/systemd/system/nginx.service to manage Nginx with proper service controls and security settings.

[Unit]
Description=The nginx HTTP and reverse proxy server
Documentation=http://nginx.org/en/docs/
After=network.target remote-fs.target nss-lookup.target
Wants=network-online.target

[Service]
Type=forking
PIDFile=/var/run/nginx.pid
ExecStartPre=/usr/sbin/nginx -t
ExecStart=/usr/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
KillMode=mixed
PrivateTmp=true
# The master process must start as root to bind port 80; worker
# processes drop privileges via the "user nginx;" directive in nginx.conf.

[Install]
WantedBy=multi-user.target

Configure Redis cluster nodes

Create Redis configuration files for a 6-node cluster (3 masters, 3 replicas) with cluster mode enabled.

sudo mkdir -p /etc/redis/cluster
sudo mkdir -p /var/lib/redis/cluster
for port in 7000 7001 7002 7003 7004 7005; do
    sudo mkdir -p /var/lib/redis/cluster/$port
done

Create Redis cluster configuration

Generate configuration files for each Redis cluster node with appropriate settings for clustering and persistence.

for port in 7000 7001 7002 7003 7004 7005; do
sudo tee /etc/redis/cluster/redis-$port.conf > /dev/null <<EOF
# Minimal per-node cluster configuration; tune for your environment.
# bind/protected-mode are relaxed so the cluster bus works between
# nodes -- restrict access with firewall rules in production.
port $port
bind 0.0.0.0
protected-mode no
cluster-enabled yes
cluster-config-file nodes-$port.conf
cluster-node-timeout 5000
appendonly yes
dir /var/lib/redis/cluster/$port
logfile ""
daemonize no
EOF
done

Create Redis cluster systemd services

Create individual systemd service files for each Redis cluster node to manage them independently.

for port in 7000 7001 7002 7003 7004 7005; do
sudo tee /etc/systemd/system/redis-cluster-$port.service > /dev/null <<EOF
[Unit]
Description=Redis cluster node on port $port
After=network.target

[Service]
Type=simple
User=redis
Group=redis
ExecStart=/usr/bin/redis-server /etc/redis/cluster/redis-$port.conf
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
done

Set proper ownership and start Redis cluster

Configure file ownership and start all Redis cluster nodes with proper permissions.

sudo chown -R redis:redis /etc/redis/cluster
sudo chown -R redis:redis /var/lib/redis/cluster
sudo chmod 640 /etc/redis/cluster/*.conf
sudo chmod 755 /var/lib/redis/cluster/*

sudo systemctl daemon-reload
for port in 7000 7001 7002 7003 7004 7005; do
    sudo systemctl enable redis-cluster-$port
    sudo systemctl start redis-cluster-$port
done

Initialize Redis cluster

Create the Redis cluster by joining all nodes together and assigning slots for data distribution.

sleep 5
HOST_IP=$(hostname -I | awk '{print $1}')
redis-cli --cluster create \
$HOST_IP:7000 $HOST_IP:7001 $HOST_IP:7002 \
$HOST_IP:7003 $HOST_IP:7004 $HOST_IP:7005 \
--cluster-replicas 1 --cluster-yes
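The slot assignment performed here follows Redis Cluster's fixed scheme: every key maps to one of 16384 hash slots via CRC16 (the XMODEM variant), and each master owns a contiguous slot range. A small Python sketch of the mapping, including hash-tag handling, useful for reasoning about which master will hold a given cache key:

```python
def crc16_xmodem(data: bytes) -> int:
    """CRC16/XMODEM (poly 0x1021, init 0x0000), the variant Redis Cluster uses."""
    crc = 0
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

def keyslot(key: str) -> int:
    """Map a key to one of 16384 slots; if a non-empty {...} hash tag is
    present, only the tag is hashed, so related keys can share a slot."""
    start = key.find("{")
    if start != -1:
        end = key.find("}", start + 1)
        if end != -1 and end != start + 1:
            key = key[start + 1:end]
    return crc16_xmodem(key.encode()) % 16384

# "foo" lands in slot 12182 (the example used in the Redis cluster tutorial)
print(keyslot("foo"))
# keys sharing a hash tag always land in the same slot, hence on the same master
print(keyslot("{user:1}:profile") == keyslot("{user:1}:sessions"))
```

This is why adding or removing masters means migrating slot ranges rather than rehashing every key.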

Configure Nginx with Redis cluster caching

Create the main Nginx configuration with upstream Redis cluster nodes and caching directives.

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;
    
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                   '$status $body_bytes_sent "$http_referer" '
                   '"$http_user_agent" "$http_x_forwarded_for" '
                   'rt=$request_time uct="$upstream_connect_time" '
                   'uht="$upstream_header_time" urt="$upstream_response_time"';
    
    access_log /var/log/nginx/access.log main;
    
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    client_max_body_size 16M;
    
    # Gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_proxied any;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;
    
    # Redis upstream cluster
    upstream redis_cluster {
        server 127.0.0.1:7000 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:7001 max_fails=3 fail_timeout=30s;
        server 127.0.0.1:7002 max_fails=3 fail_timeout=30s;
        keepalive 32;
    }
    
    # Cache zones
    proxy_cache_path /var/cache/nginx/redis_cache levels=1:2 keys_zone=redis_cache:10m max_size=1g inactive=1h use_temp_path=off;
    
    include /etc/nginx/conf.d/*.conf;
}
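One caveat with pointing a plain TCP upstream at cluster nodes: when a queried node does not own a key's slot, Redis Cluster answers with a -MOVED redirection error instead of the data, and nginx's upstream machinery will not follow it. Cluster-aware clients parse that reply and retry against the named node. A sketch of that parsing step (the helper name is our own, not part of any library):

```python
def parse_moved(reply: str):
    """Parse a Redis Cluster '-MOVED <slot> <host>:<port>' error reply.

    Returns (slot, host, port) for a MOVED redirect, or None for any
    other reply (including -ASK, which has different retry semantics).
    """
    parts = reply.lstrip("-").split()
    if len(parts) != 3 or parts[0] != "MOVED":
        return None
    host, _, port = parts[2].rpartition(":")
    return int(parts[1]), host, int(port)

print(parse_moved("-MOVED 12182 127.0.0.1:7002"))  # (12182, '127.0.0.1', 7002)
print(parse_moved("+OK"))                          # None
```

In this tutorial's setup the upstream simply round-robins across the masters, so some requests will hit a node that does not own the slot; keep that in mind when interpreting cache-miss rates.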

Create Redis cache configuration

Create a specific configuration file for Redis caching functionality with cache management and failover logic.

server {
    listen 80;
    server_name example.com www.example.com;
    
    # Cache status endpoint
    location /nginx_status {
        stub_status on;
        access_log off;
        allow 127.0.0.1;
        deny all;
    }
    
    # Redis cache endpoint
    location /redis/ {
        internal;
        # Uncomment and define $redis_password (for example via a map
        # block) if your cluster requires AUTH:
        # redis2_query auth $redis_password;
        redis2_query $args;
        redis2_pass redis_cluster;
        redis2_connect_timeout 1s;
        redis2_send_timeout 1s;
        redis2_read_timeout 1s;
        redis2_buffer_size 4k;
    }
    
    # Main application with Redis caching
    location / {
        set $redis_key "cache:$scheme:$host:$request_uri";
        
        # Try Redis first; note that redis2 emits the raw Redis protocol
        # reply, and a cache miss yields "$-1" rather than an error status
        redis2_query get $redis_key;
        redis2_pass redis_cluster;
        
        # If the Redis connection fails, fall back to the backend
        error_page 502 504 = @fallback;
        
        # Add cache headers
        add_header X-Cache-Status "HIT from Redis";
    }
    
    # Fallback to backend when Redis fails
    location @fallback {
        proxy_pass http://backend_servers;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        
        # NOTE: Writing the proxied response back into Redis from nginx
        # requires ngx_http_lua_module (OpenResty), which this build does
        # not include, and the upstream body is not visible in the access
        # phase anyway. The established redis2-based approach is
        # openresty's srcache-nginx-module (srcache_fetch / srcache_store).
        
        add_header X-Cache-Status "MISS";
    }
    
    # Backend servers (replace with your actual backend)
    location @backend {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_connect_timeout 5s;
        proxy_send_timeout 5s;
        proxy_read_timeout 5s;
    }
}
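The server block above implements a cache-aside flow: try Redis, fall back to the backend on a miss or failure, then store the response with a TTL so the next request is a hit. The same logic, sketched in Python against an in-memory stand-in for the cluster (FakeRedis and fetch_backend are illustrative placeholders, not a real client or backend):

```python
import time

class FakeRedis:
    """In-memory stand-in for the Redis cluster, supporting GET/SETEX with TTLs."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        value, expires_at = self._data.get(key, (None, 0.0))
        return value if value is not None and time.monotonic() < expires_at else None
    def setex(self, key, ttl, value):
        self._data[key] = (value, time.monotonic() + ttl)

def fetch_backend(uri):
    """Placeholder for the proxy_pass to the real application backend."""
    return f"rendered page for {uri}"

def handle_request(cache, uri, ttl=3600):
    key = f"cache:http:example.com:{uri}"       # mirrors $redis_key in nginx
    cached = cache.get(key)
    if cached is not None:
        return cached, "HIT from Redis"
    body = fetch_backend(uri)                   # the @fallback path
    cache.setex(key, ttl, body)                 # store for the next hour
    return body, "MISS - Stored in Redis"

cache = FakeRedis()
print(handle_request(cache, "/index.html")[1])  # MISS - Stored in Redis
print(handle_request(cache, "/index.html")[1])  # HIT from Redis
```

The TTL (3600 seconds here, matching the tutorial's one-hour expiration) bounds how stale a cached page can get; shorten it for fast-changing content.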

Upstream backend servers

upstream backend_servers {
    server 127.0.0.1:8080 max_fails=3 fail_timeout=30s;
    # Add more backend servers as needed
    # server 127.0.0.1:8081 max_fails=3 fail_timeout=30s;
    keepalive 16;
}

Create cache directory and set permissions

Create the Nginx cache directory with proper ownership and permissions for secure cache storage.

sudo mkdir -p /var/cache/nginx/redis_cache
sudo chown -R nginx:nginx /var/cache/nginx
sudo chmod 755 /var/cache/nginx
sudo chmod 755 /var/cache/nginx/redis_cache

Never use chmod 777: it gives every user on the system full access to your files. Instead, fix ownership with chown and use minimal permissions such as 755 for directories that specific users need to access.
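To make the octal modes concrete, Python's stat module can render what each mode grants (this is standard library behavior, shown here purely as a reference):

```python
import stat

# 755 = rwx for the owner, r-x (read and traverse only) for group and others
print(stat.filemode(stat.S_IFDIR | 0o755))  # drwxr-xr-x

# 777 additionally grants write to group and others -- avoid this
print(stat.filemode(stat.S_IFDIR | 0o777))  # drwxrwxrwx
```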

Enable and start services

Enable both Nginx and all Redis cluster services to start automatically on system boot.

sudo systemctl daemon-reload
sudo systemctl enable nginx
sudo systemctl start nginx

Verify all services are running

sudo systemctl status nginx
for port in 7000 7001 7002 7003 7004 7005; do
    sudo systemctl status redis-cluster-$port
done

Configure Redis cluster monitoring script

Create /usr/local/bin/redis-cluster-monitor.sh to check cluster health and automatically restart unresponsive nodes.

#!/bin/bash
# Redis cluster monitoring script

LOG_FILE="/var/log/redis-cluster-monitor.log"
HOST_IP=$(hostname -I | awk '{print $1}')
PORTS=(7000 7001 7002 7003 7004 7005)

log_message() {
    echo "$(date '+%Y-%m-%d %H:%M:%S') - $1" >> "$LOG_FILE"
}

check_cluster_health() {
    for port in "${PORTS[@]}"; do
        if ! redis-cli -h "$HOST_IP" -p "$port" ping > /dev/null 2>&1; then
            log_message "WARNING: Redis node $HOST_IP:$port is not responding"
            # Attempt to restart the node
            sudo systemctl restart redis-cluster-$port
            log_message "INFO: Attempted to restart redis-cluster-$port"
        else
            log_message "INFO: Redis node $HOST_IP:$port is healthy"
        fi
    done

    # Check cluster state (tr strips the trailing \r from redis-cli output)
    cluster_state=$(redis-cli -h "$HOST_IP" -p 7000 cluster info | grep cluster_state | cut -d: -f2 | tr -d '\r')
    if [[ "$cluster_state" != "ok" ]]; then
        log_message "WARNING: Cluster state is not OK: $cluster_state"
    else
        log_message "INFO: Cluster state is healthy"
    fi
}

# Main execution
log_message "Starting Redis cluster health check"
check_cluster_health
log_message "Redis cluster health check completed"

Set up monitoring cron job

Configure automated cluster health monitoring to run every 5 minutes and handle failures automatically.

sudo chmod +x /usr/local/bin/redis-cluster-monitor.sh
sudo chown root:root /usr/local/bin/redis-cluster-monitor.sh

Add cron job for monitoring

( sudo crontab -l 2>/dev/null; echo "*/5 * * * * /usr/local/bin/redis-cluster-monitor.sh" ) | sudo crontab -
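The */5 in the minute field means "every minute evenly divisible by 5", so the script fires at :00, :05, :10 and so on. A minimal matcher for that field syntax (a simplified sketch covering only the forms used here, not full cron grammar):

```python
def matches_step(field: str, value: int) -> bool:
    """Match a single cron field against a value: '*', '*/N', or a literal."""
    if field == "*":
        return True
    if field.startswith("*/"):
        return value % int(field[2:]) == 0
    return value == int(field)

# minutes in the first quarter hour that match "*/5"
print([m for m in range(16) if matches_step("*/5", m)])  # [0, 5, 10, 15]
```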

Configure performance optimization

Tune Redis cluster memory settings

Optimize Redis memory usage and eviction policies for better cache performance across the cluster.

for port in 7000 7001 7002 7003 7004 7005; do
    # Size maxmemory to your workload; 256mb is a placeholder value
    redis-cli -h 127.0.0.1 -p $port CONFIG SET maxmemory 256mb
    redis-cli -h 127.0.0.1 -p $port CONFIG SET maxmemory-policy allkeys-lru
    redis-cli -h 127.0.0.1 -p $port CONFIG SET maxmemory-samples 5
    redis-cli -h 127.0.0.1 -p $port CONFIG SET tcp-keepalive 300
    redis-cli -h 127.0.0.1 -p $port CONFIG SET timeout 0
done
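For context on the maxmemory-samples setting: under an LRU policy such as allkeys-lru, Redis evicts approximately least-recently-used keys once maxmemory is reached, sampling maxmemory-samples candidates per eviction rather than tracking exact order. A sketch of the exact LRU behavior Redis is approximating:

```python
from collections import OrderedDict

class LRUCache:
    """Exact LRU cache; Redis approximates this by sampling candidate keys."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._data = OrderedDict()
    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)          # mark as most recently used
        return self._data[key]
    def set(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)   # evict the least recently used key

cache = LRUCache(capacity=2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")            # touch "a" so "b" becomes the LRU entry
cache.set("c", 3)         # over capacity: evicts "b"
print(cache.get("b"))     # None
print(cache.get("a"))     # 1
```

Raising maxmemory-samples brings the approximation closer to this exact behavior at slightly higher CPU cost per eviction.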

Optimize Nginx worker settings

Configure Nginx worker processes and connections for optimal Redis cluster communication performance.

# Performance optimizations: add to the top-level (main) context
# of /etc/nginx/nginx.conf
worker_rlimit_nofile 65535;

Upstream keepalive settings

upstream redis_cluster {
    server 127.0.0.1:7000 max_fails=3 fail_timeout=30s weight=1;
    server 127.0.0.1:7001 max_fails=3 fail_timeout=30s weight=1;
    server 127.0.0.1:7002 max_fails=3 fail_timeout=30s weight=1;
    keepalive 64;
    keepalive_requests 1000;
    keepalive_timeout 60s;
}

Cache performance settings

proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
proxy_cache_background_update on;
proxy_cache_lock on;
proxy_cache_lock_timeout 5s;
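Together, proxy_cache_use_stale with the updating flag and proxy_cache_background_update implement a stale-while-revalidate pattern: an expired entry keeps being served while a single refresh runs, so clients never wait on the slow path once the cache is warm. A simplified single-threaded sketch of the decision logic (the clock is passed in explicitly so the timing is easy to follow; a real server would refresh asynchronously):

```python
def serve(cache, key, now, ttl, stale_window, fetch):
    """Stale-while-revalidate decision logic, reduced to one thread.

    cache maps key -> (value, fetched_at). Returns (value, status).
    """
    entry = cache.get(key)
    if entry is not None:
        value, fetched_at = entry
        age = now - fetched_at
        if age < ttl:
            return value, "fresh"
        if age < ttl + stale_window:
            # serve the stale copy immediately while refreshing the entry
            cache[key] = (fetch(key), now)
            return value, "stale-while-revalidate"
    # no usable entry: fetch synchronously (the slow path clients wait on)
    value = fetch(key)
    cache[key] = (value, now)
    return value, "miss"

cache = {}
print(serve(cache, "/page", now=0,  ttl=60, stale_window=300, fetch=lambda k: "v1")[1])  # miss
print(serve(cache, "/page", now=30, ttl=60, stale_window=300, fetch=lambda k: "v2")[1])  # fresh
print(serve(cache, "/page", now=90, ttl=60, stale_window=300, fetch=lambda k: "v2"))     # ('v1', 'stale-while-revalidate')
```

proxy_cache_lock plays the companion role: it collapses concurrent misses for the same key into one backend fetch instead of a thundering herd.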

Verify your setup

Test the Redis cluster functionality and Nginx caching integration to ensure everything works correctly.

# Check Nginx status
sudo systemctl status nginx
sudo nginx -t

Verify Redis cluster status

redis-cli -h 127.0.0.1 -p 7000 cluster info
redis-cli -h 127.0.0.1 -p 7000 cluster nodes

Test cache functionality

curl -I http://localhost/
curl -H "Host: example.com" http://localhost/

Check cache hit statistics

for port in 7000 7001 7002; do
    echo "Node $port stats:"
    redis-cli -h 127.0.0.1 -p $port info stats | grep keyspace
done

Monitor cache performance

tail -f /var/log/nginx/access.log
tail -f /var/log/redis-cluster-monitor.log

Common issues

Symptom: Redis cluster fails to form
Cause: Network connectivity issues
Fix: Check firewall rules; ensure ports 7000-7005 and the cluster bus ports 17000-17005 (data port + 10000) are open between nodes

Symptom: Nginx fails to connect to Redis
Cause: Module not compiled correctly
Fix: Verify the redis2 module appears in the nginx -V output

Symptom: Cache misses are high
Cause: Insufficient memory or wrong eviction policy
Fix: Increase maxmemory and check the maxmemory-policy setting

Symptom: Cluster nodes show as failed
Cause: Timeout settings too aggressive
Fix: Increase cluster-node-timeout in the Redis config

Symptom: Permission denied on cache directory
Cause: Wrong ownership or permissions
Fix: sudo chown -R nginx:nginx /var/cache/nginx && sudo chmod 755 /var/cache/nginx

Symptom: Monitoring script not logging
Cause: Log file permissions
Fix: sudo touch /var/log/redis-cluster-monitor.log && sudo chmod 644 /var/log/redis-cluster-monitor.log
