Deploy a production-ready ELK stack for centralized log aggregation with Elasticsearch 8, Logstash 8, and Kibana 8. Configure secure log shipping from multiple sources with authentication and SSL encryption.
Prerequisites
- Server with minimum 8GB RAM
- Root or sudo access
- Java 17 or higher
What this solves
Centralized log aggregation with the ELK stack allows you to collect, parse, and visualize logs from multiple servers and applications in one location. This tutorial sets up Elasticsearch 8 for log storage, Logstash 8 for log processing, and Kibana 8 for visualization with security features enabled.
Step-by-step installation
Update system packages
Start by updating your package manager to ensure you get the latest versions of all dependencies.
sudo apt update && sudo apt upgrade -y
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates
Install Java OpenJDK 17
The ELK stack targets Java 17 or higher. Install OpenJDK 17, the version Elasticsearch 8 is built against. (Elasticsearch and Logstash 8 ship a bundled JDK, so a system-wide Java is mainly for consistent command-line tooling.)
sudo apt install -y openjdk-17-jdk
Verify Java installation:
java -version
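If you script this check, the major version can be parsed from the `java -version` banner. A minimal sketch (the `java_major` helper is our own, not a standard tool):

```shell
#!/usr/bin/env bash
# Extract the major version from a `java -version` banner line.
# Handles both the modern "17.0.9" and the legacy "1.8.0" schemes.
java_major() {
  # $1: first line of `java -version` output, e.g. openjdk version "17.0.9"
  local ver major
  ver=$(echo "$1" | sed -E 's/.*version "([0-9]+)(\.([0-9]+))?.*/\1.\3/')
  major=${ver%%.*}
  if [ "$major" = "1" ]; then
    # Legacy scheme: "1.8.0" means Java 8
    major=${ver#1.}
    major=${major%%.*}
  fi
  echo "$major"
}

java_major 'openjdk version "17.0.9" 2023-10-17'   # prints 17
```

A wrapper can then compare the result against 17 and abort the install early instead of failing later when Elasticsearch starts.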
Add Elasticsearch APT repository
Add the official Elasticsearch repository to install the latest version 8.x packages.
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update
Install Elasticsearch 8
Install Elasticsearch which will serve as the search and analytics engine for storing and indexing log data.
sudo apt install -y elasticsearch
The installation will generate security credentials. Save the elastic user password that appears in the output.
Configure Elasticsearch
Configure Elasticsearch for production use with appropriate memory settings and network binding. Edit /etc/elasticsearch/elasticsearch.yml:
cluster.name: elk-cluster
node.name: elk-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: localhost
http.port: 9200
discovery.type: single-node
# Security settings
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/transport.p12
xpack.security.transport.ssl.truststore.path: certs/transport.p12
Configure Elasticsearch JVM heap
Set the JVM heap to half of available RAM, capped at 32GB; for a system with 8GB RAM, use a 4GB heap. Create /etc/elasticsearch/jvm.options.d/heap.options containing:
-Xms4g
-Xmx4g
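The half-of-RAM rule can be computed rather than hard-coded. A short sketch (the `heap_gb` function name is ours) that halves total memory and clamps the result to the 1–32GB range:

```shell
#!/usr/bin/env bash
# Recommend an Elasticsearch heap size: half of RAM, clamped to 1..32 GB.
# $1: total system memory in kilobytes (the unit used by /proc/meminfo MemTotal)
heap_gb() {
  local gb=$(( $1 / 1024 / 1024 / 2 ))
  if [ "$gb" -gt 32 ]; then gb=32; fi
  if [ "$gb" -lt 1 ]; then gb=1; fi
  echo "$gb"
}

# On a live system:
#   heap_gb "$(awk '/MemTotal/ {print $2}' /proc/meminfo)"
heap_gb 8388608   # 8 GB of RAM -> prints 4
```

The 32GB cap matters because heaps above roughly that size lose compressed object pointers, which usually costs more than the extra memory gains.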
Start and enable Elasticsearch
Enable Elasticsearch to start automatically on boot and start the service.
sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
sudo systemctl status elasticsearch
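Note that `systemctl start` returns once the process launches, not once the cluster is accepting connections, so any automation should poll rather than sleep a fixed interval. A generic retry helper (our own sketch, not part of any Elastic tooling):

```shell
#!/usr/bin/env bash
# Retry a command until it succeeds or the attempt budget runs out.
# $1: max attempts, $2: delay in seconds, remaining args: the command to run.
wait_for() {
  local attempts=$1 delay=$2; shift 2
  local i
  for (( i = 0; i < attempts; i++ )); do
    if "$@" >/dev/null 2>&1; then
      return 0
    fi
    sleep "$delay"
  done
  return 1
}

# Example: wait up to 2 minutes for the HTTPS endpoint to answer.
# wait_for 24 5 curl -s --cacert /etc/elasticsearch/certs/http_ca.crt https://localhost:9200
```

The `curl` probe exits non-zero only while the TLS endpoint is unreachable; an HTTP 401 from the security layer still counts as "up", which is exactly what a readiness check wants.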
Install Kibana 8
Install Kibana for web-based log visualization and dashboard creation.
sudo apt install -y kibana
Configure Kibana
Edit /etc/kibana/kibana.yml so Kibana connects to Elasticsearch with the proper security settings:
server.port: 5601
server.host: "0.0.0.0"
server.name: "elk-kibana"
elasticsearch.hosts: ["https://localhost:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/http_ca.crt"]
elasticsearch.ssl.verificationMode: "certificate"
xpack.security.encryptionKey: "something_at_least_32_characters_long_for_encryption"
xpack.encryptedSavedObjects.encryptionKey: "something_at_least_32_characters_long_for_saved_objects"
xpack.reporting.encryptionKey: "something_at_least_32_characters_long_for_reporting"
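Do not ship the placeholder keys above: each must be a random string of at least 32 characters. A sketch that prints ready-to-paste settings (assumes `openssl` is installed):

```shell
#!/usr/bin/env bash
# Generate three independent 32-byte (64 hex character) encryption keys,
# one per xpack.*.encryptionKey setting, formatted as kibana.yml lines.
for setting in xpack.security.encryptionKey \
               xpack.encryptedSavedObjects.encryptionKey \
               xpack.reporting.encryptionKey; do
  echo "${setting}: \"$(openssl rand -hex 32)\""
done
```

Kibana 8 also ships its own generator at /usr/share/kibana/bin/kibana-encryption-keys (`kibana-encryption-keys generate`), which produces the same three settings.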
Copy Elasticsearch CA certificate
Copy the Elasticsearch CA certificate to Kibana configuration directory for SSL verification.
sudo mkdir -p /etc/kibana/certs
sudo cp /etc/elasticsearch/certs/http_ca.crt /etc/kibana/certs/
sudo chown -R kibana:kibana /etc/kibana/certs
sudo chmod 600 /etc/kibana/certs/http_ca.crt
Set Kibana system password
Generate a password for the kibana_system user that Kibana uses to connect to Elasticsearch.
sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system
Update the kibana.yml file with the generated password:
sudo sed -i 's/#elasticsearch.password: "pass"/elasticsearch.password: "GENERATED_PASSWORD"/' /etc/kibana/kibana.yml
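The sed command above silently does nothing if the commented template line is absent or already uncommented. A more robust replace-or-append helper (the `set_yaml_key` name is our own, and this is a sketch for flat `key: value` files, not a general YAML editor):

```shell
#!/usr/bin/env bash
# Set key: "value" in a flat YAML-style config file, replacing an existing
# (possibly commented-out) entry or appending one if none is present.
# Note: value must not contain sed metacharacters such as | or &.
set_yaml_key() {
  local file=$1 key=$2 value=$3
  if grep -qE "^#?\s*${key}:" "$file"; then
    sed -i -E "s|^#?\s*${key}:.*|${key}: \"${value}\"|" "$file"
  else
    echo "${key}: \"${value}\"" >> "$file"
  fi
}

# Usage: set_yaml_key /etc/kibana/kibana.yml elasticsearch.password 'GENERATED_PASSWORD'
```

Running it twice with different values is idempotent, which makes it safer to use from the automated install script than a one-shot sed.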
Start and enable Kibana
Enable Kibana to start on boot and start the service.
sudo systemctl enable kibana
sudo systemctl start kibana
sudo systemctl status kibana
Install Logstash 8
Install Logstash for log processing, parsing, and forwarding to Elasticsearch.
sudo apt install -y logstash
Create Logstash role and user in Elasticsearch
Create a role granting write access to the logstash-* indices, then a dedicated user that holds it. Define the role first so the user is never left referencing a role that does not exist yet.
curl -X POST "https://localhost:9200/_security/role/logstash_writer" \
-H "Content-Type: application/json" \
-u elastic:ELASTIC_PASSWORD \
--cacert /etc/elasticsearch/certs/http_ca.crt \
-d '{
"cluster": ["monitor", "manage_index_templates", "manage_ilm"],
"indices": [
{
"names": [ "logstash-*" ],
"privileges": ["write", "create", "create_index", "manage", "manage_ilm"]
}
]
}'
curl -X POST "https://localhost:9200/_security/user/logstash_writer" \
-H "Content-Type: application/json" \
-u elastic:ELASTIC_PASSWORD \
--cacert /etc/elasticsearch/certs/http_ca.crt \
-d '{
"password" : "LogstashWriter123!",
"roles" : [ "logstash_writer" ],
"full_name" : "Logstash Writer"
}'
Configure Logstash pipeline
Create a pipeline file under /etc/logstash/conf.d/ (for example, 50-syslog.conf) to process syslog data and send it to Elasticsearch.
input {
beats {
port => 5044
}
syslog {
port => 5514
}
}
filter {
if [type] == "syslog" {
grok {
match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{IPORHOST:server} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:message}" }
overwrite => [ "message" ]
}
date {
match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
}
}
}
output {
elasticsearch {
hosts => ["https://localhost:9200"]
user => "logstash_writer"
password => "LogstashWriter123!"
ssl => true
cacert => "/etc/logstash/certs/http_ca.crt"
index => "logstash-%{+YYYY.MM.dd}"
}
stdout { codec => rubydebug }
}
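The beats input above expects a shipper such as Filebeat on each client. A minimal /etc/filebeat/filebeat.yml fragment (paths and the input `id` are illustrative; replace 203.0.113.10 with your Logstash server):

```yaml
filebeat.inputs:
  - type: filestream
    id: system-logs
    paths:
      - /var/log/*.log

output.logstash:
  hosts: ["203.0.113.10:5044"]
```

Start Filebeat with `sudo systemctl enable --now filebeat` after installing it from the same Elastic APT repository added earlier.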
Copy CA certificate for Logstash
Copy the Elasticsearch CA certificate for Logstash to verify SSL connections.
sudo mkdir -p /etc/logstash/certs
sudo cp /etc/elasticsearch/certs/http_ca.crt /etc/logstash/certs/
sudo chown -R logstash:logstash /etc/logstash/certs
sudo chmod 600 /etc/logstash/certs/http_ca.crt
Configure Logstash JVM heap
Set appropriate JVM heap size for Logstash based on available memory.
-Xms2g
-Xmx2g
Start and enable Logstash
Enable Logstash to start on boot and start the service.
sudo systemctl enable logstash
sudo systemctl start logstash
sudo systemctl status logstash
Configure firewall rules
Open necessary ports for ELK stack communication and external access.
sudo ufw allow 5601/tcp comment 'Kibana'
sudo ufw allow 5044/tcp comment 'Logstash Beats'
sudo ufw allow 5514/tcp comment 'Logstash Syslog'
sudo ufw allow 5514/udp comment 'Logstash Syslog UDP'
sudo ufw reload
Configure rsyslog for log forwarding
Configure rsyslog on client systems to forward logs to your Logstash server.
# Forward all logs to Logstash over TCP (@@ = TCP, a single @ = UDP)
*.* @@203.0.113.10:5514
# Optional: stop local processing of forwarded messages
# (omit this line if you still want local copies under /var/log)
& stop
Restart rsyslog on client systems:
sudo systemctl restart rsyslog
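With the plain forwarding rule above, messages are dropped whenever the Logstash server is unreachable. rsyslog can buffer to disk with an action queue; an example using the advanced action() syntax (queue filename and size are illustrative):

```
# Forward with a disk-assisted queue so messages survive outages
*.* action(type="omfwd" target="203.0.113.10" port="5514" protocol="tcp"
           queue.type="LinkedList" queue.filename="logstash_fwd"
           queue.maxDiskSpace="100m" queue.saveOnShutdown="on"
           action.resumeRetryCount="-1")
```

`action.resumeRetryCount="-1"` retries forever instead of discarding after a fixed number of attempts.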
Verify your setup
Check that all ELK stack components are running properly:
# Check Elasticsearch cluster health
curl -X GET "https://localhost:9200/_cluster/health?pretty" \
-u elastic:ELASTIC_PASSWORD \
--cacert /etc/elasticsearch/certs/http_ca.crt
Check Logstash is receiving data
sudo tail -f /var/log/logstash/logstash-plain.log
Check all services are running
sudo systemctl status elasticsearch kibana logstash
Test log forwarding
logger "Test message from $(hostname)"
Access Kibana web interface
echo "Access Kibana at: http://your-server-ip:5601"
echo "Username: elastic"
echo "Password: ELASTIC_PASSWORD"
Common issues
| Symptom | Cause | Fix |
|---|---|---|
| Elasticsearch won't start | Insufficient heap memory | Check JVM heap settings in /etc/elasticsearch/jvm.options.d/ |
| Kibana can't connect to Elasticsearch | SSL certificate issues | Verify CA certificate path and kibana_system password |
| Logstash not processing logs | Configuration syntax error | Test config with: sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t |
| No logs appearing in Kibana | Missing index patterns | Create index patterns in Kibana: Management → Index Patterns → Create |
| Permission denied errors | Incorrect file ownership | Check ownership with ls -la and fix with appropriate chown commands |
| High memory usage | JVM heap size too large | Reduce heap size to max 50% of available RAM |
Next steps
Automated install script
The following script automates the steps above on Debian/Ubuntu and RHEL-family systems.
#!/usr/bin/env bash
set -euo pipefail
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Function to print colored output
print_status() {
echo -e "${GREEN}[INFO]${NC} $1"
}
print_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
print_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
# Usage function
usage() {
echo "Usage: $0 [OPTIONS]"
echo "Options:"
echo " -h, --help Show this help message"
echo " -m, --memory Set Elasticsearch heap size in GB (default: auto-detect)"
echo ""
echo "Example: $0 --memory 4"
exit 1
}
# Cleanup function for rollback
cleanup() {
print_error "Installation failed. Cleaning up..."
systemctl stop elasticsearch || true
systemctl stop logstash || true
systemctl stop kibana || true
exit 1
}
trap cleanup ERR
# Parse arguments
HEAP_SIZE=""
while [[ $# -gt 0 ]]; do
case $1 in
-h|--help)
usage
;;
-m|--memory)
HEAP_SIZE="$2"
shift 2
;;
*)
print_error "Unknown option: $1"
usage
;;
esac
done
# Check if running as root or with sudo
if [[ $EUID -ne 0 ]]; then
print_error "This script must be run as root or with sudo"
exit 1
fi
# Detect distribution
if [ -f /etc/os-release ]; then
. /etc/os-release
case "$ID" in
ubuntu|debian)
PKG_MGR="apt"
PKG_INSTALL="apt install -y"
PKG_UPDATE="apt update"
PKG_UPGRADE="apt upgrade -y"
JAVA_PKG="openjdk-17-jdk"
;;
almalinux|rocky|centos|rhel|ol|fedora)
PKG_MGR="dnf"
PKG_INSTALL="dnf install -y"
PKG_UPDATE="dnf check-update || true"
PKG_UPGRADE="dnf update -y"
JAVA_PKG="java-17-openjdk java-17-openjdk-devel"
;;
amzn)
PKG_MGR="yum"
PKG_INSTALL="yum install -y"
PKG_UPDATE="yum check-update || true"
PKG_UPGRADE="yum update -y"
JAVA_PKG="java-17-openjdk java-17-openjdk-devel"
;;
*)
print_error "Unsupported distribution: $ID"
exit 1
;;
esac
else
print_error "Cannot detect distribution. /etc/os-release not found."
exit 1
fi
print_status "Detected distribution: $ID"
# Auto-detect memory for heap size
if [[ -z "$HEAP_SIZE" ]]; then
TOTAL_MEM_KB=$(grep MemTotal /proc/meminfo | awk '{print $2}')
TOTAL_MEM_GB=$((TOTAL_MEM_KB / 1024 / 1024))
HEAP_SIZE=$((TOTAL_MEM_GB / 2))
if [[ $HEAP_SIZE -gt 32 ]]; then
HEAP_SIZE=32
fi
if [[ $HEAP_SIZE -lt 1 ]]; then
HEAP_SIZE=1
fi
fi
print_status "Using ${HEAP_SIZE}GB heap size for Elasticsearch"
print_status "[1/10] Updating system packages..."
$PKG_UPDATE
$PKG_UPGRADE
print_status "[2/10] Installing prerequisites..."
if [[ "$PKG_MGR" == "apt" ]]; then
$PKG_INSTALL curl gnupg2 software-properties-common apt-transport-https ca-certificates
else
$PKG_INSTALL curl gnupg2 ca-certificates
fi
print_status "[3/10] Installing Java OpenJDK 17..."
$PKG_INSTALL $JAVA_PKG
java -version
print_status "[4/10] Adding Elasticsearch repository..."
curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
if [[ "$PKG_MGR" == "apt" ]]; then
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" > /etc/apt/sources.list.d/elastic-8.x.list
apt update
else
cat > /etc/yum.repos.d/elasticsearch.repo << EOF
[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF
fi
print_status "[5/10] Installing Elasticsearch..."
$PKG_INSTALL elasticsearch
# Save the elastic password
print_warning "Please save the elastic user password shown above!"
print_status "[6/10] Configuring Elasticsearch..."
cat > /etc/elasticsearch/elasticsearch.yml << EOF
cluster.name: elk-cluster
node.name: elk-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: localhost
http.port: 9200
discovery.type: single-node
# Security settings
xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/transport.p12
xpack.security.transport.ssl.truststore.path: certs/transport.p12
EOF
# Configure JVM heap
cat > /etc/elasticsearch/jvm.options.d/heap.options << EOF
-Xms${HEAP_SIZE}g
-Xmx${HEAP_SIZE}g
EOF
chown root:elasticsearch /etc/elasticsearch/elasticsearch.yml
chmod 660 /etc/elasticsearch/elasticsearch.yml
print_status "[7/10] Starting Elasticsearch..."
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
print_status "[8/10] Installing Kibana..."
$PKG_INSTALL kibana
print_status "[9/10] Installing Logstash..."
$PKG_INSTALL logstash
# Basic Logstash configuration. The heredoc delimiter is quoted so that
# ${ELASTIC_PASSWORD:changeme} is written literally and resolved by
# Logstash from its environment at runtime, not expanded by bash here.
cat > /etc/logstash/conf.d/basic.conf << 'EOF'
input {
beats {
port => 5044
}
}
filter {
if [fileset][module] == "system" {
if [fileset][name] == "auth" {
grok {
match => { "message" => ["%{SYSLOGTIMESTAMP:[system][auth][timestamp]} %{IPORHOST:[system][auth][hostname]} sshd(?:\[%{POSINT:[system][auth][pid]}\])?: %{DATA:[system][auth][ssh][event]} %{DATA:[system][auth][ssh][method]} for (invalid user )?%{DATA:[system][auth][user]} from %{IPORHOST:[system][auth][ssh][ip]} port %{INT:[system][auth][ssh][port]} ssh2"] }
}
}
}
}
output {
elasticsearch {
hosts => ["https://localhost:9200"]
ssl => true
# NOTE: disabling verification is a bootstrap shortcut; point cacert
# at /etc/elasticsearch/certs/http_ca.crt for production use.
ssl_certificate_verification => false
user => "elastic"
password => "${ELASTIC_PASSWORD:changeme}"
}
}
EOF
chown root:logstash /etc/logstash/conf.d/basic.conf
# 640, not 644: the file contains credentials
chmod 640 /etc/logstash/conf.d/basic.conf
print_status "[10/10] Starting services..."
systemctl enable kibana
systemctl enable logstash
systemctl start logstash
# Wait for Elasticsearch to answer on its HTTPS port (up to ~2 minutes)
print_status "Waiting for Elasticsearch to be ready..."
for i in $(seq 1 24); do
    if curl -s -o /dev/null --cacert /etc/elasticsearch/certs/http_ca.crt https://localhost:9200; then
        break
    fi
    sleep 5
done
# Configure firewall if firewalld is active
if systemctl is-active --quiet firewalld 2>/dev/null; then
print_status "Configuring firewall..."
firewall-cmd --permanent --add-port=9200/tcp --add-port=5601/tcp --add-port=5044/tcp
firewall-cmd --reload
fi
# Configure UFW if it's active
if command -v ufw >/dev/null 2>&1 && ufw status | grep -q "Status: active"; then
print_status "Configuring UFW firewall..."
ufw allow 9200/tcp
ufw allow 5601/tcp
ufw allow 5044/tcp
fi
print_status "Verifying installation..."
systemctl is-active elasticsearch || print_error "Elasticsearch is not running"
systemctl is-active logstash || print_error "Logstash is not running"
print_status "ELK Stack installation completed successfully!"
print_warning "Next steps:"
echo "1. Get the Kibana enrollment token: /usr/share/elasticsearch/bin/elasticsearch-create-enrollment-token -s kibana"
echo "2. Configure Kibana: /usr/share/kibana/bin/kibana-setup --enrollment-token <token>"
echo "3. Start Kibana: systemctl start kibana"
echo "4. Access Kibana at: http://localhost:5601"
echo "5. Default elastic user password was shown during Elasticsearch installation"
Review the script before running, then execute it with: sudo bash install.sh