For a general quick start, please refer to
Part 1 – Getting Started with Helm Charts.
Prerequisites
You must have Docker and a Kubernetes cluster installed on your local machine. In the scope of this blog post, we will install Kubernetes locally with minikube, but in the following parts of this blog post series, we will leverage
AWS EKS for a production-grade setup.
The local minikube Kubernetes cluster will need 16 GB of memory and 4 CPUs. If you do not have such resources available on your local machine, we recommend trying this on an AWS m5.xlarge Ubuntu EC2 instance.
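Before starting, you can quickly verify that the machine meets these requirements. The commands below are a sketch for Linux (on macOS, use `sysctl -n hw.ncpu` and `sysctl -n hw.memsize` instead):

```shell
# Check available CPUs and memory (Linux: nproc and /proc/meminfo).
cpus=$(nproc)
mem_mb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 ))
echo "CPUs: ${cpus}, Memory: ${mem_mb} MB"
```

If the output shows fewer than 4 CPUs or less than 16 GB of memory, switch to a larger machine such as the m5.xlarge instance mentioned above.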
Install Helm if not already installed.
Install Docker on your local machine if not already installed. Ensure the Docker engine has
the necessary resources assigned.
Install minikube if not already installed.
Download the Eliatra Suite TLS Tool, which we will use later to create a root CA.
Start Docker and check that it is running by executing docker run hello-world.
Start minikube:
minikube delete -p eliatra-suite
minikube start --driver docker --memory 16384 --cpus 4 --kubernetes-version "v1.23.3" --nodes 1 -p "eliatra-suite" --wait=true
minikube -p eliatra-suite kubectl -- apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/v0.0.24/deploy/local-path-storage.yaml
minikube -p eliatra-suite kubectl -- patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
minikube -p eliatra-suite kubectl -- delete storageclass standard
This starts minikube with 16 GB of memory, 4 CPUs, and one node, and installs the Rancher local-path-provisioner as the default storage class, because minikube's own local-path storage class does not work reliably.
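The `kubectl patch` command above merges the following fragment into the local-path StorageClass; this annotation is what Kubernetes checks when a PersistentVolumeClaim does not name a storage class explicitly:

```yaml
metadata:
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"
```

Deleting the standard storage class afterwards ensures there is exactly one default.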
Get Eliatra Suite Helm Charts
We provide free, Open Source-licensed
Helm charts to install Eliatra Suite.
Get the Eliatra Suite Helm charts:
git clone https://git.eliatra.com/eliatra-suite/eliatra-suite-helm.git
In contrast to the last article, we deploy the charts from a local checkout instead of from a chart repository, because we need to copy the root CA files into the chart.
Create a Certificate Authority (CA)
In a production-grade setup, you probably want to use your own PKI to generate TLS certificates. For that, you need your root CA certificate and its key. If you do not already have a root CA, you can create one with the Eliatra Suite TLS Tool. We already downloaded the TLS Tool in the prerequisites, so let's generate a root CA by creating a config.yml file for the TLS Tool:
---
ca:
  root:
    dn: CN=root.ca.example.com,OU=CA,O=Example Com\, Inc.,DC=example,DC=com
    keysize: 2048
    validityDays: 365
    pkPassword: myrootcapassword
    file: root-ca.pem
Then, use the Eliatra Suite TLS Tool to create the root CA:
chmod +x eliatra-suite-tlstool-1.0.0.sh
./eliatra-suite-tlstool-1.0.0.sh --create-ca -c config.yml -t .
This creates two files, root-ca.key and root-ca.pem, in your current working directory. Move them into the chart's secrets/ca directory, renaming root-ca.key to key.pem and root-ca.pem to crt.pem:
mkdir -p eliatra-suite-helm/secrets/ca
mv root-ca.key eliatra-suite-helm/secrets/ca/key.pem
mv root-ca.pem eliatra-suite-helm/secrets/ca/crt.pem
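If the TLS Tool is not available, a root CA with the same parameters (2048-bit key, 365 days of validity, encrypted private key) can also be generated with plain openssl. This is only a sketch; the subject below mirrors the `dn` from config.yml and is an example value:

```shell
# Generate an encrypted 2048-bit RSA key and a self-signed root certificate,
# matching the keysize/validityDays/pkPassword settings from config.yml,
# written directly into the chart's secrets/ca directory.
mkdir -p eliatra-suite-helm/secrets/ca
openssl req -x509 -newkey rsa:2048 -days 365 \
  -passout pass:myrootcapassword \
  -keyout eliatra-suite-helm/secrets/ca/key.pem \
  -out eliatra-suite-helm/secrets/ca/crt.pem \
  -subj "/DC=com/DC=example/O=Example Com, Inc./OU=CA/CN=root.ca.example.com"

# Inspect the resulting certificate.
openssl x509 -in eliatra-suite-helm/secrets/ca/crt.pem -noout -subject
```

Either way, the chart only needs crt.pem and key.pem in secrets/ca.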
Install Eliatra Suite
We will now deploy an Eliatra Suite-secured OpenSearch cluster with the following configuration:
OpenSearch version 2.5.0
One master node, one data node, and one client node
Our own root CA
helm install --set master.replicas=1 \
--set data.replicas=1 \
--set client.replicas=1 \
--set common.osversion=2.5.0 \
--set common.ca_certificates_enabled=true \
--set common.spctl_certificates_enabled=false \
--set common.ca_password=myrootcapassword \
--timeout 30m \
esuite eliatra-suite-helm/.
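Instead of a long list of --set flags, the same configuration can be kept in a values file. The file name values-local.yaml below is hypothetical; the keys are exactly the ones set on the command line above:

```yaml
# values-local.yaml — same settings as the --set flags above
master:
  replicas: 1
data:
  replicas: 1
client:
  replicas: 1
common:
  osversion: "2.5.0"
  ca_certificates_enabled: true
  spctl_certificates_enabled: false
  ca_password: myrootcapassword
```

With that file in place, the install becomes helm install -f values-local.yaml --timeout 30m esuite eliatra-suite-helm/.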
Then run minikube -p eliatra-suite kubectl -- get pods --watch
to check if all OpenSearch nodes are up and running. This will take a few minutes, depending on your hardware.
The result of the above command should look similar to:
NAME                                            READY   STATUS      RESTARTS   AGE
esuite-eliatra-suite-cleanup-job-28090890-xxx   0/1     Completed   0          5m23s
esuite-eliatra-suite-cleanup-job-28090895-xxx   1/1     Running     0          23s
esuite-eliatra-suite-client-0                   1/1     Running     0          7m5s
esuite-eliatra-suite-data-0                     1/1     Running     0          7m5s
esuite-eliatra-suite-master-0                   1/1     Running     0          7m5s
esuite-eliatra-suite-osd-0                      1/1     Running     0          7m5s
esuite-eliatra-suite-spctl-initialize-rx4bd     0/1     Completed   0          7m5s
Log in to OpenSearch Dashboards
The password for the admin OpenSearch Dashboards user is auto-generated. To get the password, execute the following:
minikube -p eliatra-suite kubectl -- get secrets esuite-eliatra-suite-passwd-secret -o jsonpath='{.data.ES_ADMIN_PWD}' | base64 --decode
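Kubernetes stores Secret values base64-encoded, which is why the command pipes the jsonpath output through `base64 --decode`. A quick illustration with a made-up password:

```shell
# Secret data is base64-encoded at rest; decoding restores the raw value.
encoded=$(printf 'myS3cretPwd' | base64)
echo "encoded: ${encoded}"
printf '%s' "$encoded" | base64 --decode
echo
```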
As a next step, execute
export POD_NAME=$(kubectl get pods --namespace default -l "component=esuite-eliatra-suite,role=osd" -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward --namespace default $POD_NAME 5601:5601
Now point your browser to
https://localhost:5601, accept the self-signed TLS certificate, and log in with the username
admin
and the password from above.
Congratulations. You are now running an Eliatra Suite-protected OpenSearch cluster.
Scale-out
With the following helm upgrade
command, we scale the cluster to 2 client nodes, 2 data nodes, and 3 master nodes without any interruption:
helm upgrade --reuse-values \
--set client.replicas=2 \
--set data.replicas=2 \
--set master.replicas=3 \
--timeout 30m \
esuite eliatra-suite-helm/.
The --reuse-values
option is critical because it preserves the user-defined values from the previous release of the Helm chart.
Upgrade
To upgrade OpenSearch to 2.6.0, issue the following command:
helm upgrade --reuse-values \
--set common.osversion=2.6.0 \
--timeout 30m \
esuite eliatra-suite-helm/.
This will upgrade the cluster node-by-node without any interruption following the
rolling restart (or rolling upgrade) procedure.
Cleanup
Run minikube delete -p eliatra-suite to tear down and clean up everything. This deletes the minikube nodes and all associated data.
Next Steps
In our next article, we will set up a production-ready Eliatra Suite-protected OpenSearch cluster on Amazon EKS.