Limit the Exposure of Your ELK Stack Superuser Credentials

Issue Overview

The Elastic Stack X-Pack module uses role-based access control (RBAC) defined within Elasticsearch to provide authentication and authorization for the Kibana GUI used to visualize data. The relationship between the two products creates a chicken-and-egg scenario in the GUI when it comes to assigning granular permissions to custom roles before any data exists.

[Image: Kibana index pattern page showing a "no data" message]

One of the more useful X-Pack features lets Kibana limit which Elasticsearch indexes a custom role can access once the role is defined within Elasticsearch. Kibana determines which indexes a user has permission to access from the index pattern populated in the Indices drop-down on the Create Role page.

[Image: creating an Elasticsearch role in Kibana]

In the Kibana GUI, index patterns can be created based on available indexes within Elasticsearch.

One might think the Kibana Dev Console could be used to create a zero-byte index to establish a viable index pattern for assigning role permissions, but the process of creating an index pattern on a zero-byte index in Kibana cannot be completed.

[Image: Kibana Dev Console]

[Image: zero-byte index allowed in Index Management]
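In Dev Console syntax, the attempt looks something like this (the index name is a hypothetical example):

```
PUT /winlogbeat-staging
```

Elasticsearch acknowledges the request and the empty index appears, but as described above, Kibana still will not complete an index pattern against an index that contains no data.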

Staged data can be used to establish a viable index pattern, but this of course requires additional API calls and work to create, and later clean up, the unnecessary data.

The staged data allows the desired index pattern to be set in the Role object.
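As a sketch of that staged-data workaround (the index name and document fields below are hypothetical), a single throw-away document is enough for Kibana to complete the pattern, and the index is deleted once the role is configured:

```
POST /winlogbeat-staging/_doc
{
  "@timestamp": "2020-01-01T00:00:00Z",
  "message": "staged event for index pattern creation"
}

DELETE /winlogbeat-staging
```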

In order to populate an index with real data, a log shipper such as Filebeat, Winlogbeat or Logstash is used to send event data to Elasticsearch. Typically, in a deployment using the Elastic Cloud service, authentication is required to use the available APIs. The configuration files used by the log shippers may be YAML or text files that contain credentials for the cloud environment, and these files are usually stored in clear text, which can leave the credentials vulnerable to malicious users.

Using only the Kibana GUI to configure RBAC, it is necessary to use credentials that already exist in Elasticsearch with elevated permissions, such as the superuser credentials, in order to create and populate indexes with real data. This means the log shipper configuration files must be configured twice! The first pass uses the elevated credentials and privileges to create the index with event data such as Windows security log events, create the index pattern, create the role, assign the appropriate index pattern to the role, and assign the role to the non-superuser. Only then can the log shipper configuration file(s) be updated with the new non-superuser credentials, limiting the risk to your cluster in the event a configuration file is compromised by a malicious user.
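For illustration, a minimal Filebeat output section during that first pass might look like the following, with the superuser credentials sitting in clear text (all values are hypothetical placeholders):

```yaml
# filebeat.yml excerpt -- hypothetical placeholder values
output.elasticsearch:
  hosts: ["https://my-deployment.es.us-east-1.aws.found.io:9243"]
  username: "elastic"    # elevated credentials, readable by anyone with file access
  password: "changeme"
```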

In an environment with many clients, and no centralized configuration management system such as Ansible or Puppet, this can also lead to forgotten and missed config files that could contain credentials with elevated permissions.


A more secure way to set up the log shipper configuration files is to first use the Kibana API, from a secure host, to create the index pattern. When the API is used to create the index pattern instead of the Kibana GUI, the limited role and user credentials can then be configured within the Kibana GUI before any indexed data exists.

When this more secure method is used to assign granular permissions, a malicious user who compromises a configuration file containing clear-text credentials will only be able to access the data permitted by the index pattern assigned to the role, rather than compromise your entire cluster.

Remember to execute the call against the Kibana API, not Elasticsearch. Kibana uses users and roles defined in Elasticsearch to authorize access to indexes based on the patterns assigned in Kibana.

On the host used to execute the API calls, create adminpass.txt containing the admin password, read the file contents into an environment variable, and then delete the file. This method prevents the password from being saved in shell history or captured by other CLI loggers.

Once real data exists, the pattern can be updated at any time in Kibana with all fields and types found in the real indexes.


# ADMIN_PASS=$(cat adminpass.txt)
# rm adminpass.txt
# curl -X POST \
-u "elastic:${ADMIN_PASS}" \
-H "kbn-xsrf: true" \
-H "Content-Type: application/json" \
"${KIBANA_URL}/api/saved_objects/index-pattern/<PATTERN_NAME>" \
-d '{"attributes": {"title": "<PATTERN_NAME>"}}'

The index pattern now exists with no data and no indexes.
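To confirm the saved object exists, the same endpoint can be read back with a GET. This is a sketch: the pattern name is a hypothetical example, and the guard ensures the request only fires when KIBANA_URL is actually set.

```shell
# Hypothetical pattern name; KIBANA_URL and ADMIN_PASS as set earlier.
PATTERN_NAME="winlogbeat-*"

# Only query Kibana when the environment is actually configured.
if [ -n "${KIBANA_URL:-}" ]; then
  curl -s -u "elastic:${ADMIN_PASS}" \
    -H "kbn-xsrf: true" \
    "${KIBANA_URL}/api/saved_objects/index-pattern/${PATTERN_NAME}"
fi
```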

Create a limited role for the client, scoped to the specific indexes matching the initial pattern. Create a user, assign the limited role, and then use the non-superuser credentials in the Beats or Logstash YAML config files.
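The role and user can also be created from the command line via the Elasticsearch security API rather than the Kibana GUI. The sketch below is illustrative only: the role name, user name, privilege list, password, and ES_URL variable are all assumptions, and the requests only fire when ES_URL is set.

```shell
# Hypothetical names; ES_URL points at Elasticsearch, not Kibana.
# Privileges shown are an example set for a shipper limited to one pattern.
ROLE_BODY='{"indices":[{"names":["winlogbeat-*"],"privileges":["read","write","create_index"]}]}'
USER_BODY='{"password":"a-strong-unique-password","roles":["winlogbeat_reader"]}'

# Only create the objects when the environment is actually configured.
if [ -n "${ES_URL:-}" ]; then
  curl -X POST -u "elastic:${ADMIN_PASS}" \
    -H "Content-Type: application/json" \
    "${ES_URL}/_security/role/winlogbeat_reader" -d "${ROLE_BODY}"

  curl -X POST -u "elastic:${ADMIN_PASS}" \
    -H "Content-Type: application/json" \
    "${ES_URL}/_security/user/winlogbeat_shipper" -d "${USER_BODY}"
fi
```

Note that these calls target Elasticsearch, where users and roles live; only the index-pattern saved object belongs to the Kibana API.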



