
Limit the Exposure of Your ELK Stack Superuser Credentials

Issue Overview

The Elastic Stack X-Pack module uses role-based access control (RBAC) defined within Elasticsearch to provide authentication and authorization to the Kibana GUI for visualizing data. The relationship between the two products leads to a chicken-and-egg scenario within the GUI when it comes to assigning granular permissions to custom roles before any data exists.

[Screenshot: Kibana index pattern page showing the "no data" message]

One of the more useful X-Pack features lets a custom role, once defined within Elasticsearch, limit which indexes that role can access in Kibana. The mechanism Kibana uses to determine which Elasticsearch indexes a user has permission to access is the index pattern populated in the Indices drop-down on the Create Role page.

[Screenshot: creating an Elasticsearch role on the Kibana Create Role page]

In the Kibana GUI, index patterns can be created based on available indexes within Elasticsearch.

One might think the Kibana Dev Console could be used to create a zero-byte index and establish a viable index pattern for assigning role permissions, but the process of creating an index pattern on a zero-byte index in Kibana cannot be completed.

[Screenshot: the Kibana Dev Console]

[Screenshot: the zero-byte index listed in Kibana]
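
For illustration, the zero-byte index itself is easy to create. A minimal sketch, assuming a hypothetical winlogbeat-staging index name, an ES_URL variable pointing at the Elasticsearch endpoint, and the ADMIN_USER/ADMIN_PASS credentials used in the Commands section below:

# Create an empty index with no documents or field mappings:
# curl -X PUT \
-u ADMIN_USER:${ADMIN_PASS} \
${ES_URL}/winlogbeat-staging

The call is acknowledged and the zero-byte index exists in Elasticsearch, but with no documents or field mappings the Create Index Pattern wizard in Kibana still cannot be completed against it.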

Staged data can be used to establish a viable index pattern, but this of course requires additional API calls and work to create and clean up the unnecessary data.

The staged data allows the desired index pattern to be set in the Role object.
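
A minimal sketch of that staging workflow, again assuming the hypothetical winlogbeat-staging index, the ES_URL variable, and the ADMIN_USER/ADMIN_PASS credentials from the Commands section below:

# Index a throwaway document so the index and a field mapping exist:
# curl -X POST \
-H "Content-type: application/json" \
-u ADMIN_USER:${ADMIN_PASS} \
${ES_URL}/winlogbeat-staging/_doc \
-d '{"message": "staged placeholder event"}'
# After the index pattern and role are created, delete the staged index:
# curl -X DELETE \
-u ADMIN_USER:${ADMIN_PASS} \
${ES_URL}/winlogbeat-staging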

In order to populate an index with real data, a log shipper such as Filebeat, Winlogbeat or Logstash is used to send event data to Elasticsearch. In a typical deployment using the Elastic Cloud service, authentication is required to use the available APIs. The configuration files used by the log shippers are YAML or plain-text files that contain credentials for the cloud environment, and because the files are usually clear-text, the credentials are vulnerable to malicious users.
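
For example, a Filebeat configuration shipping to Elastic Cloud carries its credentials in clear text like this (all values below are placeholders):

# Excerpt from filebeat.yml (placeholder values):
cloud.id: "<DEPLOYMENT_NAME>:<CLOUD_ID_DATA>"
cloud.auth: "<USERNAME>:<CLEAR_TEXT_PASSWORD>"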

Using only the Kibana GUI to configure RBAC, it is necessary to use credentials that already exist in Elasticsearch with elevated permissions, such as the superuser credentials, in order to create and populate indexes with real data. This means the log shipper configuration files must be written twice! The first pass uses the elevated credentials to create the index with event data such as Windows security log events, create the index pattern, create the role, assign the appropriate index pattern to the role, and assign the role to the non-superuser. Only then can the log shipper configuration file(s) be updated with the new non-superuser credentials to limit the risk to your cluster in the event a configuration file is compromised by a malicious user.

In an environment with many clients and no centralized configuration management system such as Ansible or Puppet, this two-pass process can also lead to forgotten and missed config files that still contain credentials with elevated permissions.

Solution

A more secure way to set up the log shipper configuration files is to first use the Kibana API, from a secure host, to create the index pattern. When the API is used to create the index pattern instead of the Kibana GUI, the limited role and user credentials can then be configured within the Kibana GUI before any indexed data exists.

When this more secure method is used to assign granular permissions, a malicious user who compromises a configuration file containing clear-text credentials can only access the data permitted by the index pattern assigned to the role, rather than compromise your entire cluster.

Remember to execute the call against the Kibana API, not Elasticsearch. Kibana uses the users and roles defined in Elasticsearch to authorize access to indexes based on the patterns assigned in Kibana.

On the host used to execute the API calls, create adminpass.txt containing the admin password, read the file contents into an environment variable, and then delete the file. This method prevents the password from being saved in shell history or other CLI loggers.

Once real data exists, the pattern can be refreshed at any time in Kibana to pick up all fields and types found in the real indexes.

Commands

# Read the password into a variable, then remove the file:
# ADMIN_PASS=$(cat adminpass.txt)
# rm adminpass.txt
# Create the index pattern via the Kibana saved objects API
# (replace ADMIN_USER, KIBANA_URL and <PATTERN_NAME> with your values):
# curl -X POST \
-H "kbn-xsrf: true" \
-H "Content-type: application/json" \
-u ADMIN_USER:${ADMIN_PASS} \
${KIBANA_URL}/api/saved_objects/index-pattern/<PATTERN_NAME> \
-d '{"attributes": {"title": "<PATTERN_NAME>"}}'

The index pattern now exists with no data and no indexes.
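
To confirm the pattern was created, the same saved object can be read back; a quick check using the same variables:

# curl -X GET \
-u ADMIN_USER:${ADMIN_PASS} \
${KIBANA_URL}/api/saved_objects/index-pattern/<PATTERN_NAME>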

Create a limited role for the client that grants access to the specific indexes matching the initial pattern. Create a user, assign the limited role, and then the non-superuser credentials can be used in the Beats or Logstash YAML config files, as sketched below.
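
The role and user can be created in the Kibana GUI as described, or scripted from the same secure host against the Elasticsearch security API. A minimal sketch, assuming hypothetical role and user names, an ES_URL variable for the Elasticsearch endpoint, and write-oriented index privileges you should adjust to your needs:

# Create a role limited to indexes matching the pattern:
# curl -X PUT \
-H "Content-type: application/json" \
-u ADMIN_USER:${ADMIN_PASS} \
${ES_URL}/_security/role/winlogbeat_writer \
-d '{"indices": [{"names": ["winlogbeat-*"], "privileges": ["create_index", "create_doc"]}]}'
# Create a user with only the limited role assigned:
# curl -X PUT \
-H "Content-type: application/json" \
-u ADMIN_USER:${ADMIN_PASS} \
${ES_URL}/_security/user/winlogbeat_client \
-d '{"password": "<NEW_USER_PASS>", "roles": ["winlogbeat_writer"]}'

The winlogbeat_client credentials, not the superuser's, are what go into the shipper configuration files.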

