Maintaining an OpenAppStack cluster


Logs from pods and containers can be read in different ways:

  • In the cluster filesystem at /var/log/pods/ or /var/log/containers/.
  • Using kubectl logs.
  • Querying aggregated logs with Grafana (see below).
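For the kubectl route, a few invocations covering the common cases (a sketch; the pod and namespace names are placeholders, look yours up with kubectl get pods --all-namespaces):

```shell
# Show the logs of a pod (placeholder pod and namespace names):
kubectl logs -n oas-apps wordpress-0

# Stream new log lines as they arrive, like tail -f:
kubectl logs -f -n oas-apps wordpress-0

# If the pod runs several containers, pick one with -c:
kubectl logs -n oas-apps wordpress-0 -c wordpress

# Show the logs of the previous, crashed instance of a container:
kubectl logs --previous -n oas-apps wordpress-0
```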

Central log aggregation

We use promtail, Loki and Grafana for easy access to aggregated logs. The Loki documentation is a good starting point for how this setup works, and the Using Loki in Grafana guide gets you started with querying your cluster logs with Grafana.

You will find the Loki integration in your cluster’s Grafana instance, together with some generic query examples.

LogQL query examples

Please also refer to the LogQL documentation.
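The queries below all follow the same two-part shape: a stream selector in curly braces, followed by optional line filters (a reminder sketch; the label names, such as app and container_name, depend on how promtail attaches labels in this setup):

```
{app="flux"}                          # stream selector: all lines with label app=flux
{app="flux"} |= "wordpress"           # line filter: keep lines containing "wordpress"
{app="flux"} != "unchanged"           # negative filter: drop lines containing "unchanged"
{app="flux"} |~ "level=(warn|error)"  # regex filter: keep lines matching the regex
```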


Flux

Flux is responsible for installing applications. It uses helm-operator to deploy the desired Helm releases.

Query all messages from flux:

{app="flux"}

Query all messages from flux and helm-operator:

{app=~"flux|helm-operator"}

flux messages containing wordpress:

{app = "flux"} |= "wordpress"

flux messages containing wordpress without unchanged events (to only show the installation messages):

{app = "flux"} |= "wordpress" != "unchanged"

Filter out redundant flux messages:

{app="flux"} !~ "(unchanged|event=refreshed|method=Sync|component=checkpoint)"

Debug OAuth2 single sign-on with Rocket.Chat (a sketch; this assumes the relevant containers are named rocketchat and hydra):

{container_name=~"rocketchat|hydra"} |~ "(?i)oauth"


cert-manager

cert-manager is responsible for requesting Let’s Encrypt TLS certificates.

Query cert-manager messages containing chat:

{app="cert-manager"} |= "chat"


Hydra

Hydra is the single sign-on system.

Show only warnings and errors from hydra:

{container_name="hydra"} != "level=info"


Backup

Please take care to back up the following locations:

On your provisioning machine

  • Your cluster config directory, located in the top-level sub-directory clusters in your clone of the openappstack git repository. Here you can find all the files generated during the create and install commands of the CLI, together with the generated secrets that are stored during installation.

On your cluster

  • The local storage directories under /var/lib/OpenAppStack/local-storage. This is where all persistent volumes are stored. Some are more important than others; if you want to hand-pick which volumes to back up, use kubectl get pvc --all-namespaces to see which volumes are used by which application. The prometheus and alertmanager volumes contain metrics, so you could choose not to back those up to save space.
  • The rke directory /var/lib/OpenAppStack/rke where the rke config and state file of your cluster is stored.
  • At this moment, recurring, automated etcd snapshots are not configured. Please refer to the rke etcd snapshot documentation if you would like to back up etcd.

If you don’t care too much about backup disk usage, the easiest way is to back up the whole /var/lib/OpenAppStack/ directory.
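For example, a minimal backup sketch along those lines (the backup_oas function name and the /var/backups destination are assumptions, not part of OpenAppStack):

```shell
# Sketch: archive a directory into a date-stamped tarball.
# Usage: backup_oas <source-dir> <destination-dir>
backup_oas() {
  src=$1
  dest=$2
  mkdir -p "$dest"
  # -C keeps the paths in the archive relative to the source's parent directory.
  tar -czf "$dest/openappstack-$(date +%F).tar.gz" \
      -C "$(dirname "$src")" "$(basename "$src")"
}

# On the cluster node you would run, e.g.:
# backup_oas /var/lib/OpenAppStack /var/backups
```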


Restore

Restore instructions will follow; please reach out to us if you need assistance.

Change the IP of your cluster

In case your cluster needs to migrate to another IP address, use these steps to make OpenAppStack and rke adopt it:

  • rke etcd snapshot-save --config /var/lib/OpenAppStack/rke/cluster.yml --name test (save an etcd snapshot under the name test)
  • Change the IP address in /var/lib/OpenAppStack/rke/cluster.yml
  • /usr/local/bin/rke up --config=/var/lib/OpenAppStack/rke/cluster.yml (reprovision the cluster with the new address)
  • rke etcd snapshot-restore --config /var/lib/OpenAppStack/rke/cluster.yml --name test (restore the snapshot into the reprovisioned cluster)
  • /usr/local/bin/rke up --config=/var/lib/OpenAppStack/rke/cluster.yml

Delete evicted pods

In case your cluster’s disk usage is over 80%, Kubernetes taints the node with DiskPressure. It then tries to evict pods, which is pointless in a single-node setup but happens anyway. Sometimes hundreds of pods end up in the Evicted state and still show up after the DiskPressure condition has recovered. See also the out of resource handling with kubelet documentation.

You can delete all evicted pods with this:

kubectl get pods --all-namespaces -ojson | jq -r '.items[] | select(.status.reason!=null) | select(.status.reason | contains("Evicted")) | .metadata.name + " " + .metadata.namespace' | xargs -n2 -l bash -c 'kubectl delete pods $0 --namespace=$1'
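You can check the jq selection offline against a mock of the kubectl output (the pod names below are made up):

```shell
# Mock of `kubectl get pods --all-namespaces -ojson`:
# one Evicted pod and one healthy pod.
cat > /tmp/mock-pods.json <<'EOF'
{
  "items": [
    {"metadata": {"name": "evicted-pod", "namespace": "oas"},
     "status": {"reason": "Evicted"}},
    {"metadata": {"name": "healthy-pod", "namespace": "kube-system"},
     "status": {}}
  ]
}
EOF

# The same jq filter as above prints "<name> <namespace>" per Evicted pod:
jq -r '.items[]
       | select(.status.reason != null)
       | select(.status.reason | contains("Evicted"))
       | .metadata.name + " " + .metadata.namespace' /tmp/mock-pods.json
# → evicted-pod oas
```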