Set up centralized log aggregation with Elasticsearch 8, Logstash 8, and Kibana 8 (ELK Stack)

Intermediate · 45 min · Apr 04, 2026
Ubuntu 24.04 Debian 12 AlmaLinux 9 Rocky Linux 9

Deploy a production-ready ELK stack for centralized log aggregation with Elasticsearch 8, Logstash 8, and Kibana 8. Configure secure log shipping from multiple sources with authentication and SSL encryption.

Prerequisites

  • Server with minimum 8GB RAM
  • Root or sudo access
  • Java 17 or higher

What this solves

Centralized log aggregation with the ELK stack allows you to collect, parse, and visualize logs from multiple servers and applications in one location. This tutorial sets up Elasticsearch 8 for log storage, Logstash 8 for log processing, and Kibana 8 for visualization with security features enabled.

Step-by-step installation

Update system packages

Start by updating your package manager to ensure you get the latest versions of all dependencies.

On Ubuntu/Debian:

sudo apt update && sudo apt upgrade -y
sudo apt install -y curl gnupg2 software-properties-common apt-transport-https ca-certificates

On AlmaLinux/Rocky Linux:

sudo dnf update -y
sudo dnf install -y curl gnupg2 ca-certificates

Install Java OpenJDK 17

The ELK stack requires Java 17 or higher. Install OpenJDK 17, the recommended version for Elasticsearch 8.

On Ubuntu/Debian:

sudo apt install -y openjdk-17-jdk

On AlmaLinux/Rocky Linux:

sudo dnf install -y java-17-openjdk java-17-openjdk-devel

Verify Java installation:

java -version
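If you want a script to confirm the major version is at least 17, the version string can be parsed. A small sketch; the java_major helper below is illustrative, not a standard tool:

```shell
# Extract the major version number from a `java -version` style string.
java_major() {
  # Example input: openjdk version "17.0.10" 2024-01-16
  printf '%s\n' "$1" | sed -n 's/.*version "\([0-9]*\).*/\1/p'
}

java_major 'openjdk version "17.0.10" 2024-01-16'   # prints 17
```

In practice you would feed it `"$(java -version 2>&1 | head -n1)"` and compare the result against 17.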

Add the Elasticsearch repository

Add the official Elastic repository to install the latest 8.x packages.

On Ubuntu/Debian:

curl -fsSL https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo gpg --dearmor -o /usr/share/keyrings/elasticsearch-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/elasticsearch-keyring.gpg] https://artifacts.elastic.co/packages/8.x/apt stable main" | sudo tee /etc/apt/sources.list.d/elastic-8.x.list
sudo apt update

On AlmaLinux/Rocky Linux, create /etc/yum.repos.d/elasticsearch.repo with the following content:

[elasticsearch]
name=Elasticsearch repository for 8.x packages
baseurl=https://artifacts.elastic.co/packages/8.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=0
autorefresh=1
type=rpm-md

Install Elasticsearch 8

Install Elasticsearch which will serve as the search and analytics engine for storing and indexing log data.

On Ubuntu/Debian:

sudo apt install -y elasticsearch

On AlmaLinux/Rocky Linux (--enablerepo is required because the repository is installed disabled):

sudo dnf install -y --enablerepo=elasticsearch elasticsearch

The installation will generate security credentials. Save the elastic user password that appears in the output.

Note: Elasticsearch 8 enables security by default with SSL and authentication. Save the generated elastic password and enrollment token for Kibana setup.

Configure Elasticsearch

Configure Elasticsearch for production use with appropriate memory settings and network binding. Edit /etc/elasticsearch/elasticsearch.yml:

cluster.name: elk-cluster
node.name: elk-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: localhost
http.port: 9200
discovery.type: single-node

Security settings

xpack.security.enabled: true
xpack.security.enrollment.enabled: true
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: certs/http.p12
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.client_authentication: required
xpack.security.transport.ssl.keystore.path: certs/transport.p12
xpack.security.transport.ssl.truststore.path: certs/transport.p12

Configure Elasticsearch JVM heap

Set the JVM heap size to half of the available RAM, up to a maximum of about 31 GB. For a system with 8 GB RAM, use a 4 GB heap. Create /etc/elasticsearch/jvm.options.d/heap.options with:

-Xms4g
-Xmx4g

Warning: Never set the heap larger than about 31 GB or more than 50% of available RAM. Oversized heaps disable compressed object pointers and starve the filesystem cache, both of which hurt performance.
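If you want to derive the heap size programmatically, half of MemTotal capped at 31 GB is a common rule of thumb. A sketch that assumes a Linux host with /proc/meminfo:

```shell
# Compute half of total RAM in whole GB, clamped to the 1-31 GB range,
# and print it in the -Xms/-Xmx format used by jvm.options files.
total_kb=$(awk '/MemTotal/ {print $2}' /proc/meminfo)
heap_gb=$(( total_kb / 1024 / 1024 / 2 ))
[ "$heap_gb" -lt 1 ] && heap_gb=1
[ "$heap_gb" -gt 31 ] && heap_gb=31
printf -- '-Xms%sg\n-Xmx%sg\n' "$heap_gb" "$heap_gb"
```

Redirect the output into /etc/elasticsearch/jvm.options.d/heap.options if the computed value looks sensible for your workload.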

Start and enable Elasticsearch

Enable Elasticsearch to start automatically on boot and start the service.

sudo systemctl enable elasticsearch
sudo systemctl start elasticsearch
sudo systemctl status elasticsearch

Install Kibana 8

Install Kibana for web-based log visualization and dashboard creation.

On Ubuntu/Debian:

sudo apt install -y kibana

On AlmaLinux/Rocky Linux:

sudo dnf install -y --enablerepo=elasticsearch kibana

Configure Kibana

Configure Kibana to connect to Elasticsearch with proper security settings. Edit /etc/kibana/kibana.yml:

server.port: 5601
server.host: "0.0.0.0"
server.name: "elk-kibana"

elasticsearch.hosts: ["https://localhost:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.ssl.certificateAuthorities: ["/etc/kibana/certs/http_ca.crt"]
elasticsearch.ssl.verificationMode: "certificate"

xpack.security.encryptionKey: "something_at_least_32_characters_long_for_encryption"
xpack.encryptedSavedObjects.encryptionKey: "something_at_least_32_characters_long_for_saved_objects"
xpack.reporting.encryptionKey: "something_at_least_32_characters_long_for_reporting"
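The three encryption keys above are placeholders; replace them with your own random strings of at least 32 characters. One way to generate them, using openssl as a convenient randomness source:

```shell
# Generate three independent 64-character hex keys for kibana.yml.
security_key=$(openssl rand -hex 32)
saved_objects_key=$(openssl rand -hex 32)
reporting_key=$(openssl rand -hex 32)

printf 'xpack.security.encryptionKey: "%s"\n' "$security_key"
printf 'xpack.encryptedSavedObjects.encryptionKey: "%s"\n' "$saved_objects_key"
printf 'xpack.reporting.encryptionKey: "%s"\n' "$reporting_key"
```

Keep the generated values somewhere safe: if xpack.encryptedSavedObjects.encryptionKey changes, Kibana can no longer decrypt previously saved objects that used it.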

Copy Elasticsearch CA certificate

Copy the Elasticsearch CA certificate to Kibana configuration directory for SSL verification.

sudo mkdir -p /etc/kibana/certs
sudo cp /etc/elasticsearch/certs/http_ca.crt /etc/kibana/certs/
sudo chown -R kibana:kibana /etc/kibana/certs
sudo chmod 600 /etc/kibana/certs/http_ca.crt

Set Kibana system password

Generate a password for the kibana_system user that Kibana uses to connect to Elasticsearch.

sudo /usr/share/elasticsearch/bin/elasticsearch-reset-password -u kibana_system

Update the kibana.yml file with the generated password:

sudo sed -i 's/#elasticsearch.password: "pass"/elasticsearch.password: "GENERATED_PASSWORD"/' /etc/kibana/kibana.yml

Start and enable Kibana

Enable Kibana to start on boot and start the service.

sudo systemctl enable kibana
sudo systemctl start kibana
sudo systemctl status kibana

Install Logstash 8

Install Logstash for log processing, parsing, and forwarding to Elasticsearch.

On Ubuntu/Debian:

sudo apt install -y logstash

On AlmaLinux/Rocky Linux:

sudo dnf install -y --enablerepo=elasticsearch logstash

Create Logstash user in Elasticsearch

Create a dedicated user for Logstash to write data to Elasticsearch with appropriate permissions. Create the role first, then the user that references it. Replace ELASTIC_PASSWORD with your elastic user's password.

curl -X POST "https://localhost:9200/_security/role/logstash_writer" \
  -H "Content-Type: application/json" \
  -u elastic:ELASTIC_PASSWORD \
  --cacert /etc/elasticsearch/certs/http_ca.crt \
  -d '{
    "cluster": ["monitor", "manage_index_templates", "manage_ilm"],
    "indices": [
      {
        "names": [ "logstash-*" ],
        "privileges": ["write", "create", "create_index", "manage", "manage_ilm"]
      }
    ]
  }'

curl -X POST "https://localhost:9200/_security/user/logstash_writer" \
  -H "Content-Type: application/json" \
  -u elastic:ELASTIC_PASSWORD \
  --cacert /etc/elasticsearch/certs/http_ca.crt \
  -d '{
    "password" : "LogstashWriter123!",
    "roles" : [ "logstash_writer" ],
    "full_name" : "Logstash Writer"
  }'

Configure Logstash pipeline

Create a basic Logstash configuration to process syslog data and send it to Elasticsearch. Save it as, for example, /etc/logstash/conf.d/syslog.conf; Logstash loads every .conf file in that directory by default.

input {
  beats {
    port => 5044
  }
  syslog {
    port => 5514
    type => "syslog"
  }
}

filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:timestamp} %{IPORHOST:server} %{DATA:program}(?:\[%{POSINT:pid}\])?: %{GREEDYDATA:message}" }
      overwrite => [ "message" ]
    }
    date {
      match => [ "timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}

output {
  elasticsearch {
    hosts => ["https://localhost:9200"]
    user => "logstash_writer"
    password => "LogstashWriter123!"
    ssl => true
    cacert => "/etc/logstash/certs/http_ca.crt"
    index => "logstash-%{+YYYY.MM.dd}"
  }
  stdout { codec => rubydebug }  # remove this output once the pipeline is verified
}
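The beats input above listens on port 5044, but this tutorial only configures rsyslog clients. To ship log files with Filebeat instead, a minimal client-side sketch (the paths and the Logstash address are placeholders for your environment):

```yaml
# /etc/filebeat/filebeat.yml on a client machine
filebeat.inputs:
  - type: filestream
    id: system-logs
    paths:
      - /var/log/*.log

output.logstash:
  hosts: ["203.0.113.10:5044"]
```

With this in place, events arrive in the same logstash-* indices as the syslog traffic.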

Copy CA certificate for Logstash

Copy the Elasticsearch CA certificate for Logstash to verify SSL connections.

sudo mkdir -p /etc/logstash/certs
sudo cp /etc/elasticsearch/certs/http_ca.crt /etc/logstash/certs/
sudo chown -R logstash:logstash /etc/logstash/certs
sudo chmod 600 /etc/logstash/certs/http_ca.crt

Configure Logstash JVM heap

Set the Logstash JVM heap size in /etc/logstash/jvm.options based on available memory; 2 GB is a reasonable starting point alongside a 4 GB Elasticsearch heap on an 8 GB host.

-Xms2g
-Xmx2g

Start and enable Logstash

Enable Logstash to start on boot and start the service.

sudo systemctl enable logstash
sudo systemctl start logstash
sudo systemctl status logstash

Configure firewall rules

Open necessary ports for ELK stack communication and external access.

On Ubuntu/Debian (ufw):

sudo ufw allow 5601/tcp comment 'Kibana'
sudo ufw allow 5044/tcp comment 'Logstash Beats'
sudo ufw allow 5514/tcp comment 'Logstash Syslog'
sudo ufw allow 5514/udp comment 'Logstash Syslog UDP'
sudo ufw reload

On AlmaLinux/Rocky Linux (firewalld):

sudo firewall-cmd --permanent --add-port=5601/tcp
sudo firewall-cmd --permanent --add-port=5044/tcp
sudo firewall-cmd --permanent --add-port=5514/tcp
sudo firewall-cmd --permanent --add-port=5514/udp
sudo firewall-cmd --reload

Configure rsyslog for log forwarding

Configure rsyslog on client systems to forward logs to your Logstash server.

# Forward all logs to Logstash over TCP (@@ = TCP, @ = UDP)
*.* @@203.0.113.10:5514

# Stop processing after forwarding
& stop
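With the plain forwarding rule above, messages are dropped whenever Logstash is unreachable. rsyslog can buffer them to disk instead; a sketch using rsyslog's advanced action syntax (the queue file name and size limits are illustrative):

```
# Forward all logs to Logstash over TCP with a disk-assisted queue
*.* action(type="omfwd" target="203.0.113.10" port="5514" protocol="tcp"
           queue.type="LinkedList" queue.filename="logstash_fwd"
           queue.maxDiskSpace="100m" queue.saveOnShutdown="on"
           action.resumeRetryCount="-1")
```

action.resumeRetryCount="-1" makes rsyslog retry forever instead of discarding the action after a fixed number of failures.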

Restart rsyslog on client systems:

sudo systemctl restart rsyslog

Verify your setup

Check that all ELK stack components are running properly:

# Check Elasticsearch cluster health
curl -X GET "https://localhost:9200/_cluster/health?pretty" \
  -u elastic:ELASTIC_PASSWORD \
  --cacert /etc/elasticsearch/certs/http_ca.crt

Check Logstash is receiving data

sudo tail -f /var/log/logstash/logstash-plain.log

Check all services are running

sudo systemctl status elasticsearch kibana logstash

Test log forwarding

logger "Test message from $(hostname)"

Access Kibana web interface

echo "Access Kibana at: http://your-server-ip:5601"
echo "Username: elastic"
echo "Password: ELASTIC_PASSWORD"
Note: Replace ELASTIC_PASSWORD with the password generated during Elasticsearch installation.

Common issues

Symptom | Cause | Fix
Elasticsearch won't start | Insufficient heap memory | Check JVM heap settings in /etc/elasticsearch/jvm.options.d/
Kibana can't connect to Elasticsearch | SSL certificate issues | Verify the CA certificate path and the kibana_system password
Logstash not processing logs | Configuration syntax error | Test the config with: sudo -u logstash /usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
No logs appearing in Kibana | Missing data view | Create a data view in Kibana: Stack Management → Data Views → Create
Permission denied errors | Incorrect file ownership | Check ownership with ls -la and fix with the appropriate chown commands
High memory usage | JVM heap size too large | Reduce the heap to at most 50% of available RAM
Warning: Never use chmod 777. It gives every user on the system full access to your files. Instead, fix ownership with chown and use minimal permissions such as 644 for files and 755 for directories.
