OpenAppStack Design

This article covers the basic design of OpenAppStack.

Application build pipeline

The following diagram illustrates the process of going from an application’s source code to a deployment on OpenAppStack.

[Figure: Application build process]

These are the steps in more detail:

  • Build container (this process should be maintained by the application developer, by providing a Dockerfile with the application)
    1. Get application package (source code, installation package, etc.)
      1. If not part of the package: get default configuration for the application
    2. Build container with application package installed
      1. Install application dependencies
      2. Install application package
      3. Set up the default configuration
      4. Set up a pluggable configuration override, which can be:
        • Reading environment variables
        • An extra configuration file mounted elsewhere in the container
  • Helm chart
    • Deployment configuration to specify:
      • The container(s) that should be deployed.
      • The port(s) that they expose.
      • Volume mounts for configuration files and secrets.
      • Liveness/readiness probes
      • Persistent storage locations and methods
      • Many other settings
    • Service configuration to specify:
      • Ports exposed to the user of the application
    • Ingress configuration to specify:
      • How to proxy to the application (which hostname or URL)
      • Some authentication plugins (http auth, for example)
    • Custom files:
      • Add file templates for mountable application configuration files
      • Files that specify integrations with other services
  • Deploy
    1. Create a values.yaml file with the variables for the Helm deployment to the Kubernetes cluster (see the sketch after this list)
    2. “Manually” add secrets to the Kubernetes cluster.
    3. Run helm install to install the customised application.
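
To sketch what step 1 looks like: a values.yaml file collects the variables that the Helm chart’s templates fill in. The keys below are hypothetical and depend entirely on the chart in question:

    # Hypothetical values.yaml for deploying a single application.
    # The exact keys are defined by the application's Helm chart.
    image:
      repository: registry.example.org/exampleapp
      tag: "1.2.3"
    service:
      port: 8080
    ingress:
      enabled: true
      host: exampleapp.oas.example.org
    persistence:
      enabled: true
      size: 5Gi
    # External configuration that overrides the container's defaults,
    # e.g. injected as environment variables or a mounted file.
    config:
      siteTitle: "My OpenAppStack app"

Helm is then pointed at this file during installation, for example via the --values flag of helm install.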

Configuration

As can be seen in the diagram above, applications are expected to have two different types of configuration. Containers should provide a default configuration that at least configures things like the port the application runs on, the locations of log files, etc.

What we call the external configuration is provided by the user. This includes overrides of the default application configuration, as well as variables like the hostname that the application will run on and listen to, and the title of the web interface.

OpenAppStack will use Helm charts to provide the external configuration for the “Deploy” step. Helm charts can contain configuration file templates with default values that can be overridden during installation or upgrade of a Helm chart.
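
As an illustration of how such a template could look (the file name and value names are made up for this example), a chart might render a mountable configuration file from values like this:

    # templates/configmap.yaml (hypothetical chart template)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: exampleapp-config
    data:
      app.cfg: |
        # Defaults apply unless overridden in values.yaml at install/upgrade time
        listen_port: {{ .Values.port | default 8080 }}
        log_dir: {{ .Values.logDir | default "/var/log/exampleapp" }}
        site_title: {{ .Values.siteTitle | default "OpenAppStack" | quote }}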

Application containers

For inclusion in OpenAppStack, it is required that the application developers provide Docker containers for their applications. There are several reasons for this:

  • If application developers do not provide a container, chances are they also do not think about how their application would update itself after a new container is deployed. This can lead to problems with things like database migrations.
  • In most cases, maintaining the containerisation of an application cannot be fully automated.

Container updates

When an application update is available, it needs to be rolled out to OpenAppStack instances. This will be done according to the following steps:

  1. The application container is built with the new application source and tagged for testing.
  2. The Helm chart for the application is updated to reference the new container (see the sketch below).
  3. The Helm chart is deployed to an OpenAppStack test cluster following the steps in the diagram above.
  4. The application is tested with automated tests.
  5. If the tests succeed, the new container is tagged for release.
  6. The OpenAppStack automated update job fetches the new Helm chart and upgrades the current instance using Helm.

Most of these steps can be implemented by configuring a CI system and by setting up Kubernetes and Helm correctly. The automated update job that will run on OpenAppStack clusters will be developed by us.
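
To sketch step 2 above: updating the chart to provide the new container usually comes down to bumping the image tag and the chart version. The file contents below are illustrative, not taken from an actual OpenAppStack chart:

    # Chart.yaml: bump the chart version so deployments see an upgrade
    version: 0.2.1
    appVersion: "1.2.4"

    # values.yaml: reference the newly built container image
    image:
      repository: registry.example.org/exampleapp
      tag: "1.2.4-testing"  # retagged for release once the tests pass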

Persistent data

Containerised applications are normally “stateless” (meaning no data is saved inside the containers). However, it is possible to mount persistent volumes to specific directories in the container, basically adding a persistent layer on top of the containerised application. To provide this in OAS’s simple setup, we use a local storage provisioner that automatically provides persistent storage on the VPS running OAS to any application that requests it.
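
For example, an application (or rather its Helm chart) requests such storage with a PersistentVolumeClaim. The storage class name below is an assumption; it depends on the local storage provisioner that is installed:

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: exampleapp-data
    spec:
      accessModes:
        - ReadWriteOnce
      # Assumed storage class; determined by the local storage provisioner.
      storageClassName: local-path
      resources:
        requests:
          storage: 5Gi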

Automatic updates

OpenAppStack has an auto-update mechanism that performs unattended upgrades to applications. Flux is the system running in the cluster that is responsible for these updates.

Technically, flux is split into two components: flux and helm-operator. flux watches a git repository (or a subdirectory thereof) for source files that prescribe which application versions should be installed, and stores a copy of those prescriptions inside the cluster as Kubernetes manifests (of kind HelmRelease).

helm-operator watches those in-cluster HelmRelease manifests, checks whether the listed applications are already installed – including correct versions and other settings – and performs any actions that are necessary to make sure that the cluster state matches the prescriptions: installing new applications, upgrading others.

Which git repository is watched by flux is configurable. For typical production OpenAppStack deployments, this is set to https://open.greenhost.net/openappstack/openappstack – the HelmRelease files are stored in the flux directory. The OpenAppStack team considers available upstream updates (say, an update to Nextcloud). If the new Nextcloud version passes our tests, the team changes the corresponding application description file in the git repository (in this case flux/nextcloud.yaml) to reference the new version. OpenAppStack deployments that are configured to watch this git repository will see the change, record it in the in-cluster HelmRelease manifest, and have their helm-operator perform the Nextcloud upgrade.
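
For reference, a HelmRelease manifest is roughly of the following form. The chart repository URL and version number are illustrative, and the exact apiVersion depends on the helm-operator version in use; the valuesFrom section shows one way the settings secret described in the next section can be wired in:

    apiVersion: helm.fluxcd.io/v1
    kind: HelmRelease
    metadata:
      name: nextcloud
      namespace: oas-apps
    spec:
      releaseName: nextcloud
      chart:
        # Illustrative chart source; the real repository and version differ.
        repository: https://helm-charts.example.org/
        name: nextcloud
        version: 1.2.3
      valuesFrom:
        - secretKeyRef:
            name: nextcloud-settings
            key: values.yaml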

Customising which applications are installed

The HelmRelease files in the flux directory determine the applications that are available for installation. There is an additional mechanism, though, that allows the cluster administrator to choose which applications are actually installed. You might want to leave out some apps that you think you won’t use, and save some resources that way. You can choose which apps to enable or disable by modifying the enabled_applications list in CLUSTERDIR/group_vars/all/settings.yml and afterwards running the OAS install procedure.
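
A minimal sketch of that list, assuming the application names match the HelmRelease files in the flux directory:

    # CLUSTERDIR/group_vars/all/settings.yml (fragment)
    enabled_applications:
      - nextcloud
      - wordpress   # illustrative; include only the apps you want installed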

Every HelmRelease file should have a corresponding Kubernetes Secret, which we call a “settings secret”. For the apps that are part of OpenAppStack, these secrets are created by the OpenAppStack installation procedure, so you don’t need to handle them unless you want to customise the set of installed applications.

The subdirectory of /flux where the HelmRelease file is located corresponds to the namespace of the secret. For example, the HelmRelease file /flux/oas-apps/nextcloud.yml corresponds to the Kubernetes secret nextcloud-settings in the namespace oas-apps.

This Kubernetes secret contains two keys:

  • enabled: this should contain a simple string: true to enable the application, or false to disable it;
  • values.yaml: this contains a yaml-formatted string with Helm values that are supplied to Helm when the application is installed.
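
Put together, a settings secret for the Nextcloud example could look roughly like this (the values themselves are illustrative and depend on the chart):

    apiVersion: v1
    kind: Secret
    metadata:
      name: nextcloud-settings
      namespace: oas-apps
    stringData:
      enabled: "true"
      values.yaml: |
        # Helm values passed to the chart; keys depend on the chart in question.
        nextcloud:
          host: files.oas.example.org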

Local development

When developing OpenAppStack, it’s nice to be able to change the application versions for your development cluster only, without affecting production clusters. One way to do that is to set the flux_source.repo and/or flux_source.branch ansible variables to point to another branch of the open.greenhost.net/openappstack/openappstack repository, or to a different repository.
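
As a sketch, assuming these variables live in your cluster’s ansible settings file, that could look like:

    # e.g. in CLUSTERDIR/group_vars/all/settings.yml; the branch name is made up
    flux_source:
      repo: "https://open.greenhost.net/openappstack/openappstack"
      branch: "my-development-branch"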

To make this easier, we included a way to serve up a git repository with HelmRelease manifests from the cluster itself, so you don’t need an external Gitlab or Github project. This feature is disabled by default, and can be enabled by setting the local_flux ansible variable to true. If enabled, this will change several things:

  • when you run the OpenAppStack installation ansible playbook from your workstation, the current contents of the flux directory on your workstation are copied to a directory (/var/lib/OpenAppStack/local-flux) on the OpenAppStack host machine; a git commit is also created in that directory from these updated contents;
  • as part of the OpenAppStack installation, an nginx instance is deployed that serves up the contents of /var/lib/OpenAppStack/local-flux over http;
  • flux is configured to read the HelmRelease files from that git repository, served by the in-cluster nginx. In particular, the flux_source variables are ignored if local_flux is enabled.
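
Under the same assumption about where ansible variables are set, enabling this feature is a single variable:

    # Serve HelmRelease manifests from the cluster itself instead of an external repository
    local_flux: true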

The local-flux feature is also used by our CI pipeline, in order to be as similar as possible to a production installation (in particular, using flux and helm-operator for the installation process), while still being completely separate from the production application versions prescribed by the master repository at open.greenhost.net/openappstack/openappstack.

Triggering an update

Both flux and helm-operator check at regular intervals (currently 1 hour and 20 minutes, respectively) whether there is an upgrade to perform. If you don’t want to wait for that after making a change, you can trigger an update:

  • To let flux re-read the HelmRelease files from the git repo (be it the OpenAppStack master one, a local-flux one, or yet another one), log in to the host machine and do:

    $ fluxctl --k8s-fwd-ns=oas sync
    

    If there are any changes to HelmRelease manifests, helm-operator is notified of that through the Kubernetes API and should act on them more or less instantly (though the actual installation or upgrade could take quite a while, depending on what needs to be done).

  • If, for some reason, the HelmRelease manifests are still in sync between git repository and cluster, but you’d still like to let helm-operator check whether these HelmReleases match what’s actually installed, you can force that by deleting the helm-operator pod:

    $ kubectl delete pod -n oas -l app=helm-operator
    

    A new helm-operator pod should be created automatically, and in our experience will do a reconciliation run soon.