Deployment

The default deployment strategy for the TSG components uses Kubernetes as the container orchestration solution. This is partly because the feature set of Kubernetes makes managing and administrating larger deployments easier, but also because Kubernetes provides a layer of abstraction over the underlying (cloud) infrastructure. This makes the deployments highly portable and deployable on a wide variety of ecosystems.

Another aspect of using Kubernetes is the capability of the core container to communicate with the Kubernetes API to orchestrate data apps.

Non-Kubernetes deployments are possible, since the deployments rely solely on Docker containers. However, we highly discourage non-Kubernetes deployments: they add complexity throughout the lifetime of the deployment, whereas Kubernetes primarily adds complexity at deploy time while improving maintainability over time.
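For completeness, a non-Kubernetes setup could start the core container directly with Docker. The snippet below is an illustrative sketch only: IMAGE is a placeholder for the actual TSG core container image, and the container-side mount paths are assumptions, not verified values.

```shell
# Illustrative sketch: run the core container directly with Docker.
# IMAGE is a placeholder; substitute the actual TSG core container image.
# The container-side paths for the configuration file and identity
# certificates are assumptions and may differ in the real image.
docker run -d \
  --name tsg-core-container \
  -p 8080:8080 \
  -p 8082:8082 \
  -v "$PWD/application.yaml:/app/application.yaml" \
  -v "$PWD/ids-identity:/app/secrets" \
  IMAGE
```

The published ports match the defaults described in the Ports section: 8080 for the Camel routes and 8082 for the API.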

System requirements for deploying the TSG core container are:

  • Platform: Linux-based OS, capable of running OCI images (e.g. via Docker or Kubernetes)
  • CPU: minimum 1 core available for the container
  • RAM: minimum 1 GB available for the container

Helm

All of the provided components can be deployed via Helm. Helm Charts provide a package of resources that can be easily configured and deployed to Kubernetes. These Charts make it easy to deploy several Kubernetes resources that have to be linked together (e.g. Deployments & Services). Helm uses a template engine, based on Go templates, so that the relevant Kubernetes resources are generated from configuration properties.

This configuration allows the TSG components to be deployed on almost all Kubernetes environments, for instance by exposing services via Ingress resources (preferred) or via so-called LoadBalancer or NodePort services.

An example of the configurability can be seen in the main Helm chart, with snippets for both Ingress-based and NodePort service-based configuration:

coreContainer:
  # Ingress Service configuration
  ingress:
    path: /(.*)
    rewriteTarget: /$1
    clusterIssuer: letsencrypt
Ingress-based core container deployment
coreContainer:
  # NodePort Service configuration
  nodePort:
    api: 31000
    camel: 31001
NodePort service-based core container deployment

See the Connector Helm chart for the complete overview of configuration options.

Another powerful aspect of using Helm charts is the capability of including Helm charts as dependencies of another Helm chart. This allows you, for instance, to include a database solution next to your application, or to create a full TSG environment with one chart that includes the relevant sub-charts.
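Such dependencies are declared in the Chart.yaml of an umbrella chart. The sketch below is a hypothetical example: the umbrella chart name, the database chart, and its version are illustrative assumptions; only the connector chart coordinates come from the installation instructions below.

```yaml
# Hypothetical umbrella Chart.yaml combining the connector with a database.
# Chart name and the database dependency are illustrative assumptions.
apiVersion: v2
name: my-tsg-environment
version: 0.1.0
dependencies:
  - name: connector
    version: 3.0.0
    repository: https://nexus.dataspac.es/repository/tsg-helm
  - name: postgresql
    version: 12.12.10
    repository: https://charts.bitnami.com/bitnami
```

Running helm dependency update then fetches the sub-charts, and the whole environment can be installed as a single release.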

Helm Charts

The available Helm charts that can be used are:

  • connector: The main Helm chart for deploying the core container and relevant deploy-time data apps. This chart can be used standalone, or as a dependency to create a full local environment for testing purposes.
  • daps: The Helm chart for deploying a Dynamic Attribute Provisioning Service (DAPS) for creating a new identity provider. This chart will in most scenarios be deployed for a test environment, or for a newly created dataspace.

Connector chart

The Connector Helm Chart is the primary starting point for TSG Connector deployments on Kubernetes clusters. The sources of the Helm Chart can be found on Gitlab.

The basic steps to install the Connector Helm chart are as follows:

  1. Request an identity for the dataspace you want to participate in. This should result in component.crt, component.key, and cachain.crt files.
  2. Create the configuration for the chart. See the documentation in the Connector repository; for an easy starting point, examples are provided in the examples folder.
  3. Determine the Kubernetes namespace you want to deploy the connector in, as well as the deployment name of the connector, and substitute these in the commands below in place of NAMESPACE and DEPLOYMENT_NAME, respectively.
  4. kubectl create secret generic \
        -n NAMESPACE \
        ids-identity-secret \
        --from-file=ids.crt=./component.crt \
        --from-file=ids.key=./component.key \
        --from-file=ca.crt=./cachain.crt
    helm repo add tsg https://nexus.dataspac.es/repository/tsg-helm
    helm install --create-namespace -n NAMESPACE DEPLOYMENT_NAME tsg/connector --version 3.0.0 -f values.yaml
    

More information on the connector Helm chart can be found in the README of the chart; the configuration properties of the core container can be found in the Configuration section.
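After the install command has completed, the release and its pods can be inspected with standard Helm and kubectl commands, using the same NAMESPACE and DEPLOYMENT_NAME placeholders as above:

```shell
# Show the status of the installed Helm release
helm status -n NAMESPACE DEPLOYMENT_NAME

# Watch the connector pods until they reach the Running state
kubectl get pods -n NAMESPACE -w
```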

DAPS chart

The DAPS chart is published on Gitlab.

TLS Encryption

For the TLS encryption between two connectors, the most common approach is to use the Automatic Certificate Management Environment (ACME), for example with the free Certificate Authority Let's Encrypt. Since the trust aspect of IDS does not originate from the TLS encryption but from the identity certificates, Let's Encrypt is a simple, automated, and free solution for encrypting the network traffic.

ACME is, however, not a requirement. If you already have a system or process in place for acquiring and managing TLS certificates, it is advised to use that same system for the TSG Connector deployments as well. The examples assume the usage of Let's Encrypt, but this can easily be adapted to your own certificates. Especially for Kubernetes deployments with Ingresses enabled, you only need to provide the certificate as a secret in the cluster and point the Ingress to that secret.
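A certificate obtained outside of ACME can be stored as a standard Kubernetes TLS secret. The secret name and file names below are illustrative; how the Ingress references the secret depends on the chart configuration:

```shell
# Store an externally obtained certificate and private key as a TLS secret
# in the namespace of the connector deployment (names are illustrative)
kubectl create secret tls connector-tls \
  -n NAMESPACE \
  --cert=./tls.crt \
  --key=./tls.key
```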

Spring Configuration

Most of the Kotlin- or Java-based applications use Spring Boot. This also means that the configuration for these components follows the Spring configuration mechanism, which allows the configuration to be manipulated in a couple of different ways. Most relevant:

  • application.yaml file: the primary configuration file for the components, created by the Helm charts. It can be modified (e.g. for the core container, everything in the ids key will be placed in this file) to add configuration properties required in certain scenarios (e.g. to modify the allowed HTTP POST sizes to allow large files to be transferred).
  • Environment variables: configured environment variables override default properties and properties set in the application.yaml file. Spring uses relaxed binding for properties, which means that properties are matched case-insensitively, ignoring dots, dashes, and underscores. For example, the security.users[0].password property can be provided via the SECURITY_USERS_0_PASSWORD environment variable, which in turn can be configured with a Kubernetes secret, for example the secret ids-cc-secret (in the same namespace as the deployment) with key password:

    coreContainer:
      environment:
      - name: SECURITY_USERS_0_PASSWORD
        valueFrom:
          secretKeyRef:
            name: ids-cc-secret
            key: password
    

The documentation of the core container and data apps provide all of the possible configuration properties that are used by the components.
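The secret referenced in the snippet above can be created up front with kubectl; the password value below is a placeholder:

```shell
# Create the ids-cc-secret referenced by the environment variable snippet;
# the password value is a placeholder and should be replaced
kubectl create secret generic ids-cc-secret \
  -n NAMESPACE \
  --from-literal=password='change-me'
```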

Configuration tampering recommendations

For administrators, we recommend using a system that watches the configuration files of the core container. This ensures that administrators are always aware of configuration changes, even if the connector has been offline. A tool that can be used for this is auditctl. Given that you are in the directory where the application.yaml is located, you can execute the following command:

auditctl -w application.yaml -p war -k config_changes

The -p war option makes sure that all write, attribute change, and read operations are logged. -k adds a key which makes the changes searchable via ausearch. The logs can be searched by using ausearch on the key:

ausearch -ts today -k config_changes
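Note that rules added with auditctl do not survive a reboot. On most distributions they can be persisted via a rules file under /etc/audit/rules.d/; the file name and the absolute path of the configuration file below are placeholders:

```shell
# Persist the watch across reboots; replace /path/to/application.yaml with
# the absolute path of the configuration file (rule file name is arbitrary)
echo "-w /path/to/application.yaml -p war -k config_changes" | \
  sudo tee /etc/audit/rules.d/tsg-config.rules

# Reload the persistent audit rules
sudo augenrules --load
```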

Ports

The default ports for which Kubernetes services are set up when deploying the Core Container are 8080 for the Camel routes and 8082 for the API. Optionally, these can be configured as NodePort services to allow access to these services from outside the cluster. However, using Ingresses is the recommended way of exposing these ports.

Next to this, port 8081 is used to expose metrics. This port is not connected to a service, so it is not reachable within the Kubernetes cluster, except for specific services that collect metric information.
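For ad-hoc inspection, the metrics port can be forwarded to your local machine. POD_NAME is a placeholder for the actual core container pod, and the endpoint path is an assumption based on common Spring Boot Actuator defaults; it may differ per configuration:

```shell
# Forward the metrics port of a connector pod to localhost
kubectl port-forward -n NAMESPACE POD_NAME 8081:8081

# In another terminal; the path is an assumption (Spring Boot Actuator
# with the Prometheus registry) and may differ in your configuration
curl http://localhost:8081/actuator/prometheus
```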
