Kubernetes prerequisites

All of the deployments use Kubernetes as the primary deployment environment. This page lays out the infrastructure requirements and the Kubernetes concepts used, for reference purposes. This allows you to set up a Kubernetes environment that fits your needs.

TIP: Debugging and monitoring Kubernetes clusters can be made easier by using Lens, which allows you to quickly navigate through the cluster and view the logs of containers.

Overview

The Helm charts allow for high configurability to ensure the TSG components can run on almost any Kubernetes environment. However, some recommendations are made to ensure a secure and production-ready environment.

First of all, allowing ingress traffic (i.e. network traffic from outside the cluster into the cluster) is the most important aspect to carefully set up. The preferred way of doing this is by leveraging Ingress resources and an accompanying Ingress Controller. By default, the Helm charts assume an NGINX Ingress Controller to be present in the cluster when enabling Ingress resources for the components. Other Ingress Controllers might work, but often require specific annotations to be provided for the Ingress. The Deployment Environments section below lists several tutorials to set up clusters on different cloud and on-premises environments.

Another important aspect, especially in production environments, is the setup of persistent volumes. Most of the TSG components don’t use Persistent Volume Claims, since the components themselves don’t hold state. However, when deploying data apps or databases (e.g. MongoDB) next to these components, ensuring that this data is securely backed up is very important. The most convenient way of doing this is to enable backups at the cluster level, for instance by using Velero or Longhorn.

The TSG Helm charts regularly use a number of Kubernetes resources, which are briefly described below; a minimal example that combines several of them follows the list. For a complete overview, the Kubernetes Documentation is quite extensive.

  • Pods: The smallest deployable unit of computing within Kubernetes, similar to a Docker Container. A Pod contains one (or more) containers that are always co-located and co-scheduled and run in a shared context. In practice, Pods often contain only a single container and are then equivalent to a Docker Container. In some scenarios, multiple containers can live in the same Pod, for instance when a container is added that executes some initial steps (so-called init containers).
  • Deployments: Describe the desired state of Pods, allowing, for instance, the horizontal scaling of Pods across the cluster. While the containers in a Pod share the same context, the Pods described by a Deployment can be scheduled anywhere on the cluster.
  • Services: An abstract way of describing exposed services within the cluster. Services allow for exposing ports of Pods (e.g. related to Deployments), so that if there are multiple similar Pods the Service can distribute the load over the instances. The default Service type is ClusterIP, which means the service is only reachable from inside the cluster. These services are also used when opting for Ingress resources, as the Ingress Controller will communicate with Pods via the ClusterIP service. Cloud providers offer the LoadBalancer service type, which connects an external load balancer to the service. NodePort services are exposed on all of the nodes in the cluster; in most cases this type should be avoided and only used in local clusters (e.g. Docker Desktop) or in specific on-premises deployments.
  • Config Maps: Key-value pairs that store non-confidential data, in most cases the configuration used in Pods. These can be mounted to Pods by means of volumes (i.e. mounted to the filesystem of the containers) or environment variables. Updates to ConfigMaps that are mounted as volumes are automatically propagated to running Pods; when mounted as environment variables, Pods must be restarted to reflect the new state of the ConfigMap.
  • Secrets: Sensitive data that can be used in Pods as configuration (either as volumes or as environment variables). Secrets can be used for a wide variety of configuration (e.g. passwords, tokens, keys, etc.). Especially in production environments, distinguishing between secrets and application configuration is very important: you’ll often check the application configuration into Git (for instance in a Helm chart), and by separating the secrets from the configuration the risk of leaking secrets is reduced.
  • Ingresses: Manage external access to Pods in the cluster, primarily via HTTP. Ingresses are closely related to Ingress Controllers: the Ingress resource describes how traffic should be routed, while the Ingress Controller actually acts on these descriptions and routes the traffic in the cluster.
  • Persistent Volume (Claims): Manage persistent storage that survives Pods being rescheduled or moved. PersistentVolumes are the actual provisioned storage in the cluster, while PersistentVolumeClaims are the requests for storage by users.
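
To illustrate how these resources fit together, the following is a minimal sketch of a Deployment exposed through a ClusterIP Service. All names and the image are hypothetical placeholders and not part of the TSG Helm charts:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: example-app
          image: nginx:1.27 # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  type: ClusterIP # default type, only reachable from inside the cluster
  selector:
    app: example-app
  ports:
    - port: 80
      targetPort: 80

An Ingress resource could then route external traffic to this Service, as described above.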

Ingress Controller & Cert-Manager configuration

In most cases you will need to configure an Ingress Controller and a Certificate Manager (if you don’t have existing TLS certificates). This can be done in a generic way by using the Nginx Ingress Controller and Cert Manager.

The following instructions will work on most clusters, but in certain situations or certain configurations where tighter integration with the rest of the (cloud-)resources is required, different solutions might be more suitable.

First, the Ingress Controller can be deployed using Helm. The following command installs the Nginx Ingress Controller in the ingress-nginx namespace with a single replica. For production deployments it is recommended to have multiple replicas to balance the load and to allow for failures in one of the controllers.

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.replicaCount=1 \
  --set controller.service.externalTrafficPolicy=Local \
  --create-namespace

After installing the controller, an external IP address is assigned to the ingress controller’s service. This is the address at which the controller is reachable by external entities. To retrieve this IP address, execute:

kubectl --namespace ingress-nginx get services -o wide

Configure your (wildcard) DNS to point to this IP address so that it can be used by the Ingress resources in the cluster. For instance, when configuring a DNS A record for *.domain.name to the IP address, Ingress resources are able to use any direct subdomain of domain.name.
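
To verify the DNS configuration, you can resolve an arbitrary subdomain covered by the wildcard and check that it returns the external IP address of the ingress controller (connector.domain.name is a placeholder):

nslookup connector.domain.name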

To enable cert-manager, e.g. so that LetsEncrypt certificates are automatically requested, the following Helm command can be used, which installs cert-manager in the same namespace as the ingress controller.

helm upgrade --install cert-manager cert-manager \
  --repo https://charts.jetstack.io \
  --namespace ingress-nginx \
  --create-namespace \
  --set installCRDs=true

To configure Cert Manager to use the correct ACME service, a ClusterIssuer resource is required in the cluster. Create a cluster-issuer.yaml file with the following contents, replacing MY_EMAIL_ADDRESS with your own so that you’ll be informed if certificates are about to expire. This shouldn’t happen as long as your Ingress resource is still active, since Cert Manager will automatically renew the certificate well before the expiration date. In the example below LetsEncrypt is used, but other ACME services can be configured as well; see the cert-manager docs for more information on this configuration.

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: MY_EMAIL_ADDRESS
    privateKeySecretRef:
      name: letsencrypt
    solvers:
    - http01:
        ingress:
          class: nginx
          podTemplate:
            spec:
              nodeSelector:
                "kubernetes.io/os": linux

After creating the file, the resource can be uploaded to your cluster with the following kubectl command:

kubectl apply -f cluster-issuer.yaml
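
To verify that the issuer was accepted and its ACME account registered, inspect the status of the resource; the READY column should report True:

kubectl get clusterissuer letsencrypt -o wide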

After everything is set up, you can use the ingress configuration in the TSG Helm charts with the clusterIssuer field corresponding to the name of the resource (default: letsencrypt).

The Ingress controller provides a default logging format for all incoming requests that it handles. More information on this log format can be found here.

Security considerations

To ensure a secure deployment on a Kubernetes cluster, some security measures should be taken. A definitive list can’t be provided, since there are different paths to a secure environment; the list below provides open-source solutions together with some well-known cloud vendors that might provide alternatives.

Secure secrets

To make sure secrets within Kubernetes are actually secure, an additional mechanism should be in place, since Kubernetes by default stores Secrets unencrypted in the etcd backend.

A good and well-supported solution for securing secrets in Kubernetes is HashiCorp Vault. Guides for deploying it on your cluster can be found in the documentation, and a good overview of the security considerations is given in a tutorial.
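
As a minimal sketch, Vault can be installed with its official Helm chart, analogous to the commands above. Note that this deploys Vault in its default standalone mode, which still needs to be initialised and unsealed; consult the Vault documentation before relying on it in production:

helm upgrade --install vault vault \
  --repo https://helm.releases.hashicorp.com \
  --namespace vault \
  --create-namespace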

Alternatives to HashiCorp Vault are Azure Key Vault, Google Secret Manager, and AWS Secrets Manager.

Identification, Authentication, Authorization

A Kubernetes cluster must always be deployed with Role Based Access Control (RBAC) enabled, with the least privilege principle in mind.

On top of that, additional systems can be useful to embed existing authorization frameworks into the cluster, for example Azure Active Directory or Google Identity Services.
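
As an illustration of the least-privilege principle, the sketch below defines a Role that only allows reading Pods and their logs in a single namespace, bound to a service account. All names (the tsg namespace, the monitoring service account) are hypothetical placeholders:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
  namespace: tsg # placeholder namespace
rules:
  - apiGroups: [""]
    resources: ["pods", "pods/log"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-reader
  namespace: tsg
subjects:
  - kind: ServiceAccount
    name: monitoring # placeholder service account
    namespace: tsg
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: pod-reader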

Limit TLS protocols and ciphers

The Nginx Ingress Controller is by default deployed with TLSv1.2 and TLSv1.3 support and a secure set of ciphers.

The TSG components are aimed at deployments that support only TLSv1.3, to increase the security of the components. For TLSv1.3, all ciphers are currently considered secure, while for TLSv1.2 only a subset of ciphers is secure.

To configure the Nginx Ingress controller to only support TLSv1.3, follow the documentation.
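
As a sketch, this can be done via the ssl-protocols option in the controller’s ConfigMap, for instance by upgrading the Helm release installed earlier; the linked documentation lists the accompanying cipher options:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --reuse-values \
  --set controller.config.ssl-protocols="TLSv1.3"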

Note: this might introduce compatibility issues with other applications that are still relying on TLSv1.2 or older.

Firewalling

In order to deploy Nginx with ModSecurity and OWASP Core Rule Set enabled, more configuration is needed.

The ModSecurity add-on allows the Nginx Ingress Controller to also act as a firewall for incoming requests; combined with the widely used OWASP Core Rule Set, the majority of malicious requests can be blocked. Furthermore, enabling ModSecurity increases the Zone Boundary protection for deployments.

For the deployments of the TSG, some OWASP rules (932115, 933210, 921110) produce false positives. In general, for Nginx ingress controllers, localhost access should be permitted to allow for health checking, while all requests not containing a hostname in the Host header should be blocked since they cannot be routed anyway.

You can fine-tune the behaviour of ModSecurity via the modsecurity-snippet in the command below; all of the options are described in the ModSecurity Manual.

Finally, install the Nginx ingress controller with the following properties:

helm upgrade --install ingress-nginx ingress-nginx \
  --repo https://kubernetes.github.io/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.replicaCount=1 \
  --set controller.service.externalTrafficPolicy=Local \
  --set controller.config.enable-modsecurity=true \
  --set controller.config.enable-owasp-modsecurity-crs=true \
  --set controller.config.modsecurity-snippet="SecRuleEngine On
SecRequestBodyAccess On
SecAuditEngine RelevantOnly
SecAuditLog /tmp/modsec_audit.log
SecRule REMOTE_ADDR \"@contains 127.0.0.1\" \"id:1\,phase:1\,nolog\,allow\,ctl:ruleEngine=Off\"
SecRuleUpdateActionById 920350 \"deny\,status:403\"
SecRuleRemoveById 932115 933210 921110" \
  --create-namespace

Note: make sure to escape both double quotes and commas inside the modsecurity-snippet, or create a separate YAML file with the same configuration properties.

Alternatively, external firewalling can be used, like Azure Firewall, preferably with OWASP rule sets, for which the same configuration as above can be used to bypass false-positive rules and to add desired rules.

Network Policies

Advanced network policies are supported by Kubernetes, depending on the network plugin that is used in your cluster.

With Network Policies you can limit the communication between services within the cluster, as well as communication of services with external services. Especially when the Kubernetes cluster is more widely used, also hosting services that are not directly related to the IDS connector, Network Policies are a very good way of making sure that only the services you intend to communicate are communicating with each other.

For each deployment scenario, these network policies should be created independently, as it depends heavily on which services the connector uses; e.g. connected services might be deployed within the same Kubernetes cluster or as an external service.

For more information on Network Policies and how to set these up, see the Network Policies in the Kubernetes documentation.
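
As a starting point, the sketch below denies all other incoming traffic to Pods in a namespace while still allowing traffic from the ingress-nginx namespace. The tsg namespace is a hypothetical placeholder, and the policy will need to be extended for the services your connector actually uses:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-ingress-controller
  namespace: tsg # placeholder namespace
spec:
  podSelector: {} # applies to all Pods in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ingress-nginx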

Rate Limiting

To mitigate DDoS attacks, the recommended measures are related to the Ingress Controller. By rate limiting at the ingress level, attacks can be stopped at the border of the cluster. With this approach, the services inside the cluster remain reachable, for instance by port-forwarding services directly from the Kubernetes cluster.

To use rate limiting at the Ingress level, either set global rate limits for all ingresses in the cluster (recommended) or set rate limits on a per-Ingress-resource basis, as sketched below. The latter should only be used if you expect certain services to handle a lot more load than others.
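
For the per-Ingress variant, ingress-nginx provides annotations such as limit-rps. The sketch below limits each client IP to 10 requests per second on a single, hypothetical Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress # placeholder
  annotations:
    # ingress-nginx annotation: requests per second per client IP
    nginx.ingress.kubernetes.io/limit-rps: "10"
spec:
  ingressClassName: nginx
  rules:
    - host: connector.domain.name # placeholder host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-app # placeholder service
                port:
                  number: 80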

Persistent Volume Back-ups

To ensure no data is lost in case of cluster failures, backing up persistent volumes is an important security measure that should be taken.

Different solutions are available to do this, depending on your cluster environment. Two open-source solutions that can be applied to almost all environments are:

  • Kubernetes Volume Snapshots: Built-in support for snapshotting persistent volumes, although CSI drivers that support this might be required (see the sketch after this list).
  • Velero: An open-source solution for disaster recovery, data migration, and data protection.
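
As a sketch of the first option, a VolumeSnapshot requests a snapshot of an existing PersistentVolumeClaim. This assumes a CSI driver with snapshot support and an existing VolumeSnapshotClass; all names are hypothetical placeholders:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: mongodb-snapshot # placeholder
  namespace: tsg # placeholder
spec:
  volumeSnapshotClassName: csi-snapclass # placeholder class
  source:
    persistentVolumeClaimName: mongodb-data # placeholder PVC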

General security mechanisms for Kubernetes

In addition to the aspects above, the general security recommendations should be kept in mind.

In most cases managed Kubernetes clusters are configured by default with a lot of these recommendations in place.

Deployment Environments

Since Kubernetes runs on a wide variety of environments, some examples of how to get started are provided for different types of environments.

Azure Kubernetes Service

Deployment of TSG components on Azure is the current default method of deployment, although there are no hard requirements on Azure.

For a basic scenario the following components are required:

  • Azure Kubernetes Service (AKS): Information for AKS and the accompanying tutorials
  • Nginx Ingress Controller & Cert Manager: For deploying the ingress controller and cert-manager, see the steps above. For configuring the DNS A record in an Azure DNS zone, the easiest way is to add a record set with name *, select alias record set, and select the Public IP address starting with kubernetes- (assuming there are no other LoadBalancer services already present in the cluster).

The recommended deployment uses a wildcard A record set pointing to the Ingress Controller, which allows for dynamic addition of new Ingress resources on the subdomains covered by the wildcard.

Amazon Elastic Kubernetes Service

Deployments of TSG components on Elastic Kubernetes Service are also possible, by using a Network Load Balancer (NLB); a tutorial on how to set this up together with an Nginx Ingress Controller can be found here.

After that, the structure follows the Azure example, although specific annotations could be required for Ingress resources.

Rancher

Deployments of TSG components on Rancher are also possible. An example tutorial shows the deployment of an Nginx Ingress Controller with Cert Manager, but this depends on which version of Rancher you’re using. Given Rancher’s flexibility, a single tutorial that always works can’t be provided, but there is a lot of information available; the most important components are the Nginx Ingress Controller and, if you want to use LetsEncrypt for TLS termination, Cert Manager.

Docker Desktop

Docker Desktop has built-in support for a local Kubernetes cluster. The TSG deployments do work on such a Docker Desktop environment.

For exposing services on Docker Desktop, the NodePort service type is sufficient in most scenarios, since the Docker Desktop cluster should be used for development only, as it is a single-node cluster without configuration options. To expose services outside of the machine, Ingress resources can be used in combination with an NGINX Ingress Controller.
