Using ODFE and OpenSearch on Kubernetes

OpenSearch

2022-01-19

In this article I will show you how to set up OpenSearch and Dashboards on Kubernetes using Helm charts.

Having looked over the ODFE and OpenSearch forums, I came across many instances of users having a difficult time running ODFE/OpenSearch on Kubernetes using Helm. I decided to write this blog, which hopefully will provide a little bit more clarity and will serve as a good starting point for anyone having similar issues.
In this blog we are going to cover how to:
    Install custom certificates
    Load custom Elasticsearch configuration
    Load custom Kibana configuration
    Load custom security configuration
    Troubleshoot common issues
The instructions will be based on ODFE; however, the changes needed for OpenSearch/Dashboards are listed at the end. And, rest assured, apart from some syntactical differences the process is identical.
We will be running Kubernetes locally using minikube, so you should be able to follow along without needing an external cloud provider.

Let’s Begin

The initial step is to follow the ODFE docs and clone the latest helm chart using:
git clone https://github.com/opendistro-for-elasticsearch/opendistro-build
In my case I am using ODFE 1.13.2; your version might differ depending on when you clone the repository. Once we cd into opendistro-build/helm/opendistro-es/ we see three files and a templates directory:
    Chart.yaml
    values.yaml
    values-nonroot.yaml
    templates (directory)
For our use case we don’t need to worry about Chart.yaml and values-nonroot.yaml. Most of the magic will happen in values.yaml. However, in order to load our custom certificates we are going to put a secrets.yaml file in the templates/elasticsearch directory.

Install Custom Certificates

The official docs cover creating custom certificates well. However, for simplicity and reproducibility, I am going to use the demo certificates that were generated in a separate demo install of ODFE.
There are 5 files that we need to mount in order to get everything working. These are:
    Node certificate and key to establish TLS connection on transport layer. This is mandatory.
    Admin certificate and key to load configuration to security index
    Root certificate for both of the above
The files I have extracted from demo setup are:
    esnode.pem
    esnode-key.pem
    admin-crt.pem
    admin-key.pem
    root-ca.pem
We need to load these files as secrets into Kubernetes. This is done using the secrets.yaml file below, which I put into the templates/elasticsearch directory:
apiVersion: v1
kind: Secret
metadata:
  name: elasticsearch-certs
  namespace: default
  labels:
    app: elasticsearch
data:
  esnode.pem: LS0tLS1CRUdJTiBDRVJ...
  esnode-key.pem: LS0tLS1CRUdJTiBQUklWQVRF...
  root-ca.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ...
  admin-crt.pem: LS0tLS1CRUdJTiBDRVJU...
  admin-key.pem: LS0tLS1CRUdJTiB....
Please note that the full contents of the files mentioned above are base64 encoded and entered on a single line without line breaks.
To reiterate, the entire contents below need to be base64 encoded:
-----BEGIN CERTIFICATE-----
MIIEQjCCAyqgAwIBAgIGAXkYont5MA0GCSqGSIb3DQEBCwUAMF8xEjAQBgoJkiaJ
k/IsZAEZFgJkZTENMAsGA1UEBwwEdGVzdDENMAsGA1UECgwEbm9kZTENMAsGA1UE
CwwEbm9kZTEcMBoGA1UEAwwTcm9vdC5jYS5leGFtcGxlLmNvbTAeFw0yMTA0Mjgx
MzE5MzRaFw0zMTA0MjYxMzE5MzRaMF0xEjAQBgoJkiaJk/IsZAEZFgJkZTENMAsG
A1UEBwwEdGVzdDENMAsGA1UECgwEbm9kZTENMAsGA1UECwwEbm9kZTEaMBgGA1UE
...
S1gMggCM+OFW93D0tnvzHPyxidjTlsviGh9I8gNzWm3Hm71G7sKg8o7n4eBPFO89
2/X1UVoZK7bTqtRiujXkj36Z6iUzQYAOxMnBOBEaT9y96ESUtQbDVq/BPHw4kgew
qInlktCttxD5CKDBnUzpeNHzVEI1ZDCQYsucpFWrBq3OuTNfoKY=
-----END CERTIFICATE-----
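If you would rather not hand-encode each file, the values can be produced on the command line. A minimal sketch, assuming GNU base64 (the -w0 flag disables line wrapping); /tmp/demo.pem is a stand-in for one of the real .pem files:

```shell
# Create a stand-in file; in practice, point base64 at esnode.pem,
# esnode-key.pem, root-ca.pem, admin-crt.pem, and admin-key.pem instead.
printf -- '-----BEGIN CERTIFICATE-----\n' > /tmp/demo.pem

# -w0 emits the whole encoding on one line, as secrets.yaml requires.
base64 -w0 < /tmp/demo.pem
# → LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCg==
```

Paste each encoded string as the value of the matching key under data:.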
Once the file is saved in that directory, run the command below to create a package which we are going to install later:
helm package .
Make sure you run the command in the same directory as Chart.yaml.
This will create a tgz file with a name like opendistro-es-1.13.2.tgz.
Now we need to reference these files in values.yaml. In the elasticsearch.extraVolumes and elasticsearch.extraVolumeMounts sections enter the following:
extraVolumes: 
   - name: elasticsearch-certs
     secret:
       secretName: elasticsearch-certs

extraVolumeMounts:
   - name: elasticsearch-certs
     mountPath: /usr/share/elasticsearch/config/certs
     readOnly: true
On startup, this will create the directory /usr/share/elasticsearch/config/certs and load the files as they were named in the secrets.yaml file (esnode.pem, esnode-key.pem, etc.).

Load Custom Elasticsearch Configuration

The Elasticsearch configuration, which you are probably already familiar with, is loaded via the elasticsearch.config section in values.yaml (usually found after the elasticsearch.client section) and should look something like this:
config:
    ######## Start OpenDistro for Elasticsearch Security Demo Configuration ########
    # WARNING: revise all the lines below before you go into production
    opendistro_security.ssl.transport.pemcert_filepath: /usr/share/elasticsearch/config/certs/esnode.pem
    opendistro_security.ssl.transport.pemkey_filepath: /usr/share/elasticsearch/config/certs/esnode-key.pem
    opendistro_security.ssl.transport.pemtrustedcas_filepath: /usr/share/elasticsearch/config/certs/root-ca.pem
    opendistro_security.ssl.transport.enforce_hostname_verification: false
    opendistro_security.ssl.http.enabled: false
    #opendistro_security.ssl.http.pemcert_filepath: /usr/share/elasticsearch/config/certs/esnode.pem
    #opendistro_security.ssl.http.pemkey_filepath: /usr/share/elasticsearch/config/certs/esnode-key.pem
    #opendistro_security.ssl.http.pemtrustedcas_filepath: /usr/share/elasticsearch/config/certs/root-ca.pem
    opendistro_security.allow_unsafe_democertificates: true
    opendistro_security.allow_default_init_securityindex: true
    opendistro_security.authcz.admin_dn:
      - CN=kirk,OU=node,O=node,L=test,DC=de
    opendistro_security.nodes_dn:
     - "CN=node*.example.com,OU=node,O=node,L=test,DC=de"
    opendistro_security.audit.type: internal_elasticsearch
    opendistro_security.enable_snapshot_restore_privilege: true
    opendistro_security.check_snapshot_restore_write_privileges: true
    opendistro_security.restapi.roles_enabled: ["all_access", "security_rest_api_access"]
    opendistro_security.system_indices.enabled: true
    opendistro_security.system_indices.indices: [".opendistro-alerting-config", ".opendistro-alerting-alert*", ".opendistro-anomaly-results*", ".opendistro-anomaly-detector*", ".opendistro-anomaly-checkpoints", ".opendistro-anomaly-detection-state", ".opendistro-reports-*", ".opendistro-notifications-*", ".opendistro-notebooks", ".opendistro-asynchronous-search-response*"]
    cluster.routing.allocation.disk.threshold_enabled: false
    node.max_local_storage_nodes: 3
This will be added to each node (master, data, and client).
As you can see, in this case we disabled SSL on the HTTP layer for simplicity, which means Elasticsearch will be available via http://localhost:9200, not https.
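One note on the admin_dn and nodes_dn values above: if you later switch from the demo certificates to your own, these strings must match the certificate subjects exactly. Assuming openssl is available, the subject can be read straight off a certificate; the sketch below generates a throwaway cert with the demo subject just to illustrate:

```shell
# Stand-in for admin-crt.pem: a throwaway self-signed certificate carrying
# the same subject the demo certificates use.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-admin-key.pem -out /tmp/demo-admin-crt.pem \
  -subj "/DC=de/L=test/O=node/OU=node/CN=kirk" 2>/dev/null

# RFC 2253 formatting yields the comma-separated form used by admin_dn.
openssl x509 -in /tmp/demo-admin-crt.pem -noout -subject -nameopt RFC2253
```

Run the second command against your real admin and node certificates and copy the printed DN (minus any "subject=" prefix) into the config.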

Load Custom Kibana Configuration

Before we can provide configuration to Kibana, we need to know the name of the Elasticsearch service to point it to. However, we might not know the service name until after startup, which is no good.
To work around this, provide a name of your choice at the very bottom of the values.yaml file in the fullnameOverride section. In my case I will use test-elasticsearch, like so:
fullnameOverride: "test-elasticsearch"
Now that the name is set, we can provide the configuration for Kibana, which is done in the kibana.config section:
config:
      elasticsearch.hosts: "http://test-elasticsearch-client-service:9200"
      elasticsearch.ssl.verificationMode: none
      elasticsearch.username: kibanaserver
      elasticsearch.password: kibanaserver
      elasticsearch.requestHeadersWhitelist: ["securitytenant","Authorization"]

      opendistro_security.multitenancy.enabled: true
      opendistro_security.multitenancy.tenants.preferred: ["Private", "Global"]
      opendistro_security.readonly_mode.roles: ["kibana_read_only"]
Note that we are using http, and that “-client-service” is automatically appended to the name.

Load Custom Security Configuration

Now we are getting to the more interesting part, which seems to cause a number of issues for users: the security configuration.
The way I chose to do this was to simply provide all the files in the elasticsearch.securityConfig.config.data section, as in the example below:
elasticsearch:

  extraVolumeMounts:
   - name: elasticsearch-certs
     mountPath: /usr/share/elasticsearch/config/certs
     readOnly: true

  discoveryOverride: ""
  securityConfig:
    enabled: true
    path: "/usr/share/elasticsearch/plugins/opendistro_security/securityconfig"
    config:
      securityConfigSecret: "some-name"
      data:
        config.yml: |-
          _meta:
              type: "config"
              config_version: 2
          config:
            dynamic:
            ...
        internal_users.yml: |-
          _meta:
            type: "internalusers"
            config_version: 2
          testuser1:
            hash: "$2y$12$jgUJCj5pK33ziTFeD7B5pu6mMfhbMSGAUCiUoBT2y4hpkMH/Rmq6K"
There are a couple of caveats that seem to throw a lot of people off when it comes to this config:
securityConfigSecret
Some string (any string) needs to be provided here; otherwise no error will be generated, but the config will not be loaded (more on this in the troubleshooting section).
All Files Are Required
All config files need to be supplied, even files that have no specific settings in them. See below an example of my “empty” files:
    action_groups.yml: |-
      _meta:
        type: "actiongroups"
        config_version: 2
    nodes_dn.yml: |-
      _meta:
        type: "nodesdn"
        config_version: 2
    whitelist.yml: |-
      _meta:
        type: "whitelist"
        config_version: 2

Meta Tags
The _meta tags are very important, and the config will fail without them. For each file they are:
    tenants.yml: |-
      _meta:
        type: "tenants"
        config_version: 2
    action_groups.yml: |-
      _meta:
        type: "actiongroups"
        config_version: 2
    nodes_dn.yml: |-
      _meta:
        type: "nodesdn"
        config_version: 2
    whitelist.yml: |-
      _meta:
        type: "whitelist"
        config_version: 2
    roles.yml: |-
      _meta:
        type: "roles"
        config_version: 2
    roles_mapping.yml: |-
      _meta:
        type: "rolesmapping"
        config_version: 2
    internal_users.yml: |-
      _meta:
        type: "internalusers"
        config_version: 2
    config.yml: |-
      _meta:
        type: "config"
        config_version: 2
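Since a malformed file fails silently (see the troubleshooting section), it can be worth grepping each file for the _meta header before packaging the chart. A hypothetical sketch, demonstrated on a throwaway tenants.yml:

```shell
# Hypothetical helper: check that a security config file declares _meta
# with config_version 2 before it is pasted into values.yaml.
has_meta() {
  grep -q '^_meta:' "$1" && grep -q 'config_version: 2' "$1"
}

# Throwaway example file standing in for a real tenants.yml.
cat > /tmp/tenants.yml <<'EOF'
_meta:
  type: "tenants"
  config_version: 2
EOF

has_meta /tmp/tenants.yml && echo "tenants.yml: _meta present"
# → tenants.yml: _meta present
```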
Once this is done, you are ready to install the chart using the command below:
helm install --values=values.yaml opendistro-es-1.13.2.tgz --generate-name
Once everything starts up, after a couple of minutes (depending on your resources), you can run the port-forward command:
kubectl port-forward deployment/test-elasticsearch-kibana 5601
and access Kibana from your local browser at http://localhost:5601.
If everything was entered correctly, there is no need to manually connect to any nodes and initialize the security index, assuming you have securityConfig.enabled set to true.

Troubleshooting

Failed Liveness Check

I had issues with the liveness check, and my master, data, and client pods would restart over and over. This is caused by the limited resources of my local minikube cluster and can be resolved by simply increasing initialDelaySeconds of the livenessProbe in values.yaml from 60 to something like 120.
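For reference, the probe block in values.yaml looks roughly like the fragment below; treat the exact fields as an assumption, since they can differ between chart versions. Only initialDelaySeconds needs to change:

```yaml
livenessProbe:
  tcpSocket:
    port: transport
  initialDelaySeconds: 120  # raised from the default of 60
```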

Check Configuration

If the security config is incorrect, no error is displayed and the security index is not initialized. In order to troubleshoot:
1) Connect to a master, data, or client pod:
kubectl exec -it test-elasticsearch-master-0 -- bash
2) cd into /usr/share/elasticsearch/plugins/opendistro_security/tools
3) Give execute permissions to securityadmin.sh:
chmod +x securityadmin.sh
4) Run the securityadmin.sh script:
./securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert /usr/share/elasticsearch/config/certs/root-ca.pem -cert /usr/share/elasticsearch/config/certs/admin-crt.pem -key /usr/share/elasticsearch/config/certs/admin-key.pem
Any errors that you see during execution need to be fixed.
It’s also possible that, due to misconfiguration, the default initialization took place and the config you supplied is not the one that was loaded into the security index. In this case, you can use the same method to retrieve the current configuration and compare it to yours. The command is:
  ./securityadmin.sh -cd /usr/share/elasticsearch/plugins/opendistro_security/securityconfig/ -icl -nhnv -cacert /usr/share/elasticsearch/config/certs/root-ca.pem -cert /usr/share/elasticsearch/config/certs/admin-crt.pem -key /usr/share/elasticsearch/config/certs/admin-key.pem -r

OpenSearch and Dashboards

For this part, OpenSearch and Dashboards are broken down into two separate charts. Let’s cover OpenSearch first.
1) Download the necessary charts using the command below:
git clone https://github.com/opensearch-project/opensearch-devops.git
2) cd opensearch-devops/Helm/opensearch

Install Custom Certificates

Follow the process from the ODFE section above to create the secrets.yaml file; the process is identical, except the mount path for the certs is slightly different:
extraVolumes: 
   - name: elasticsearch-certs
     secret:
       secretName: elasticsearch-certs

extraVolumeMounts:
   - name: elasticsearch-certs
     mountPath: /usr/share/opensearch/config/certs
     readOnly: true

Load Custom OpenSearch Configuration

The configuration is provided via the config.opensearch.yml section and should look similar to this:
config:
  opensearch.yml:
    cluster.name: opensearch-cluster

    network.host: 0.0.0.0

    plugins:
      security:
        ssl:
          transport:
            pemcert_filepath: certs/esnode.pem
            pemkey_filepath: certs/esnode-key.pem
            pemtrustedcas_filepath: certs/root-ca.pem
            enforce_hostname_verification: false
          http:
            enabled: false
        allow_unsafe_democertificates: true
        allow_default_init_securityindex: true
        authcz:
          admin_dn:
            - CN=kirk,OU=node,O=node,L=test,DC=de
        nodes_dn:
          - "CN=node*.example.com,OU=node,O=node,L=test,DC=de"
        audit.type: internal_opensearch
        enable_snapshot_restore_privilege: true
        check_snapshot_restore_write_privileges: true
        restapi:
          roles_enabled: ["all_access", "security_rest_api_access"]
        system_indices:
          enabled: true
          indices:
            [
              ".opendistro-alerting-config",
              ".opendistro-alerting-alert*",
              ".opendistro-anomaly-results*",
              ".opendistro-anomaly-detector*",
              ".opendistro-anomaly-checkpoints",
              ".opendistro-anomaly-detection-state",
              ".opendistro-reports-*",
              ".opendistro-notifications-*",
              ".opendistro-notebooks",
              ".opendistro-asynchronous-search-response*",
            ]

Load Custom Security Configuration

This process is identical to ODFE; the configuration is stored in the securityConfig.config.data section and should look similar to this:
securityConfig:
    enabled: true
    path: "/usr/share/opensearch/plugins/opensearch-security/securityconfig"
    config:
      securityConfigSecret: "any_name"
      data:
        config.yml: |-
          _meta:
              type: "config"
              config_version: 2
          config:
            dynamic:
            ...
Now that this is complete, package the chart and install it:
helm package .
helm install --values=values.yaml opensearch-1.0.0.tgz --generate-name
Give it some time (depending on your download speed), then connect to one of the containers using the command below:
kubectl exec -it opensearch-cluster-master-0 -- /bin/bash
Check whether OpenSearch is running using the command below:
curl -XGET http://localhost:9200 -u 'admin:admin' --insecure
Expected output should look like this:
{
  "name" : "opensearch-cluster-master-2",
  "cluster_name" : "opensearch-cluster",
  "cluster_uuid" : "EK8KjixGRJmFuXbwaGH7gA",
  "version" : {
    "distribution" : "opensearch",
    "number" : "1.0.0",
    "build_type" : "tar",
    "build_hash" : "34550c5b17124ddc59458ef774f6b43a086522e3",
    "build_date" : "2021-07-02T23:22:21.383695Z",
    "build_snapshot" : false,
    "lucene_version" : "8.8.2",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "The OpenSearch Project: https://opensearch.org/"
}

Dashboard Configuration

The OpenSearch hosts are entered in a separate field in values.yaml, as below:
opensearchHosts: "http://opensearch-cluster-master:9200"
The rest of the config is entered in the config section as before:
config:
  opensearch_dashboards.yml:
    server:
      name: dashboards
      host: 0.0.0.0
      ssl.enabled: false
    opensearch.ssl.verificationMode: none
    opensearch_security.multitenancy.enabled: true
    opensearch_security.multitenancy.tenants.enable_global: true
    opensearch_security.multitenancy.tenants.enable_private: true
    opensearch.username: kibanaserver
    opensearch.password: kibanaserver
Once this is entered, package the chart and install it as we did with OpenSearch:
helm package .
helm install opensearch-dashboards-1.0.0.tgz --values=values.yaml --generate-name
After a couple of minutes, run the commands below to access Dashboards:
kubectl get deployment
Example response:
NAME                                 READY   UP-TO-DATE   AVAILABLE   AGE
opensearch-dashboards-1-1631545895   1/1     1            1           7m27s

kubectl port-forward deployment/opensearch-dashboards-1-1631545895 5601
You should now be able to access Dashboards at localhost:5601.