Kubernetes Agent + Ingress Controller + Module

IMPORTANT

This guide only applies to Next-Gen WAF customers with access to the Next-Gen WAF control panel. If you have access to the Next-Gen WAF product in the Fastly control panel, you can only deploy the Next-Gen WAF with the Edge WAF deployment method.

In this example, the Next-Gen WAF agent is installed as a Docker sidecar, communicating with a Next-Gen WAF native module for NGINX installed on an ingress-nginx Kubernetes ingress controller.

Integrating the Next-Gen WAF agent into an ingress controller

In addition to installing the Next-Gen WAF per application, it is also possible to install the Next-Gen WAF into a Kubernetes ingress controller that will receive all external traffic to your applications. Doing this is similar to installing into an application with a Next-Gen WAF module:

  • Install and configure the Next-Gen WAF module into the ingress controller.
  • Add the sigsci-agent container to the ingress pod and mount a sigsci-agent volume.
  • Add an emptyDir: {} volume as a place for the sigsci-agent to write temporary data.

Kubernetes NGINX ingress controller

The Kubernetes NGINX Ingress Controller is an NGINX-based implementation of the Kubernetes Ingress API. The Next-Gen WAF provides a native module for NGINX, which makes it straightforward to wrap the existing ingress-nginx controller image and install the Next-Gen WAF module.

Wrap the base nginx-ingress-controller to install the Next-Gen WAF module

Wrapping the nginx-ingress-controller is done by using the base controller and installing the Next-Gen WAF native NGINX module. Our sigsci-nginx-ingress-controller repository contains two examples of this.

A prebuilt container can be pulled from Docker Hub:

$ docker pull signalsciences/sigsci-nginx-ingress-controller:latest

Installation

There are two methods for installing: via Helm using an override file, or with a custom deployment file (mandatory.yaml).

Prerequisites

Copy the agent keys for the site that you want the agent to be able to access. You will use the agent keys when configuring the Next-Gen WAF agent package.

Install via Helm using overrides

The following steps cover installing sigsci-nginx-ingress-controller + sigsci-agent via the official ingress-nginx charts with an override file.

  1. Add the ingress-nginx repository:

    $ helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
  2. In the sigsci-values.yaml override file, add the agent keys as SIGSCI_ACCESSKEYID and SIGSCI_SECRETACCESSKEY. A sample override file is shown after this procedure.

  3. Install with the release name my-ingress in the default namespace:

    $ helm install -f sigsci-values.yaml my-ingress ingress-nginx/ingress-nginx

    You can specify a namespace with the -n flag:

    $ helm install -n NAMESPACE -f sigsci-values.yaml my-ingress ingress-nginx/ingress-nginx
  4. After a few minutes, the agent will be listed in your Next-Gen WAF control panel.

  5. Create an Ingress resource. This step varies depending on your setup and supports many configurations. See the official ingress-nginx documentation on Basic usage - host based routing.

    Here is an example Ingress file:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx
        nginx.ingress.kubernetes.io/rewrite-target: /
      name: hello-kubernetes-ingress
      #namespace: SET THIS IF NOT IN DEFAULT NAMESPACE
    spec:
      rules:
      - host: example.com
        http:
          paths:
          - pathType: Prefix
            path: /testpath
            backend:
              service:
                name: NAME OF SERVICE
                port:
                  number: 80
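
For reference, here is a minimal sketch of what the sigsci-values.yaml override file might contain. This is an illustrative example rather than the exact file from the sigsci-nginx-ingress-controller repository: extraContainers, extraVolumeMounts, extraVolumes, config, and image are standard ingress-nginx chart settings, but the module filename/version and the inline placeholder keys are assumptions (prefer Kubernetes secrets for the keys, as described later in this guide, and note that some chart versions also require overriding controller.image.digest when changing the image).

controller:
  image:
    repository: signalsciences/sigsci-nginx-ingress-controller
    tag: "latest"
    pullPolicy: IfNotPresent
  config:
    # Load the Next-Gen WAF NGINX module and point it at the agent's RPC socket
    main-snippet: load_module /usr/lib/nginx/modules/ngx_http_sigsci_nxo_module-1.17.7.so;
    http-snippet: sigsci_agent_host unix:/sigsci/tmp/sigsci.sock;
  extraVolumeMounts:
  - name: sigsci-tmp
    mountPath: /sigsci/tmp
  extraVolumes:
  - name: sigsci-tmp
    emptyDir: {}
  extraContainers:
  # Next-Gen WAF agent running as a sidecar in default RPC mode
  - name: sigsci-agent
    image: signalsciences/sigsci-agent:latest
    imagePullPolicy: IfNotPresent
    env:
    - name: SIGSCI_ACCESSKEYID
      value: "AGENT_ACCESS_KEY_HERE"
    - name: SIGSCI_SECRETACCESSKEY
      value: "AGENT_SECRET_KEY_HERE"
    securityContext:
      readOnlyRootFilesystem: true
    volumeMounts:
    - name: sigsci-tmp
      mountPath: /sigsci/tmp

Once the release is installed, you can check that the agent sidecar started by looking at its logs:

$ kubectl logs -l app.kubernetes.io/name=ingress-nginx -c sigsci-agent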

Helm upgrade with override file

  1. In the sigsci-values.yaml file, update the sigsci-nginx-ingress-controller image to the latest version before upgrading the ingress-nginx chart:

    controller:
      # Replaces the default nginx-controller image with a custom image that contains the Next-Gen WAF NGINX module
      image:
        repository: signalsciences/sigsci-nginx-ingress-controller
        tag: "latest"
        pullPolicy: IfNotPresent
  2. Run helm upgrade with the override file. The following examples run helm upgrade against the my-ingress release created in the previous section:

    $ helm upgrade -f sigsci-values.yaml my-ingress ingress-nginx/ingress-nginx

    or

    $ helm upgrade -f sigsci-nginxinc-values.yaml my-ingress ingress-nginx/ingress-nginx

    If the ingress is not in the default namespace, use -n to specify the namespace:

    $ helm upgrade -n NAMESPACE -f sigsci-values.yaml my-ingress ingress-nginx/ingress-nginx

    or

    $ helm upgrade -n NAMESPACE -f sigsci-nginxinc-values.yaml my-ingress ingress-nginx/ingress-nginx

Uninstall release

  1. Uninstall the my-ingress release:

    $ helm uninstall my-ingress
  2. If it's not in the default namespace, use -n to specify the namespace:

    $ helm uninstall -n NAMESPACE my-ingress

Install with custom file

Integrating the Next-Gen WAF agent

The Next-Gen WAF agent can be installed as a sidecar into each pod or as a service for some specialized needs.

The recommended way of installing the Next-Gen WAF agent in Kubernetes is by integrating the sigsci-agent into a pod as a sidecar. This means adding the sigsci-agent as an additional container to the Kubernetes pod. As a sidecar, the agent scales with the app or service in the pod rather than needing to be scaled separately. However, in some situations it may make more sense to install the sigsci-agent container as a service and scale it separately from the application; a hedged sketch of that pattern follows.
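
The sketch below shows one way the service-based pattern could look. It assumes the agent listens for module RPC connections over TCP by setting SIGSCI_RPC_ADDRESS to a host:port value; the Deployment and Service names, labels, replica count, and port 9999 are illustrative placeholders, and the secret referenced here is described under Agent credentials below. The module configuration would then reference the agent service address instead of a local unix socket.

# Hedged sketch: sigsci-agent running as its own Deployment and Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sigsci-agent
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sigsci-agent
  template:
    metadata:
      labels:
        app: sigsci-agent
    spec:
      containers:
      - name: sigsci-agent
        image: signalsciences/sigsci-agent:latest
        env:
        # Listen for module RPC connections over TCP instead of the default unix socket
        - name: SIGSCI_RPC_ADDRESS
          value: "0.0.0.0:9999"
        - name: SIGSCI_ACCESSKEYID
          valueFrom:
            secretKeyRef:
              name: sigsci.my-site-name-here
              key: accesskeyid
        - name: SIGSCI_SECRETACCESSKEY
          valueFrom:
            secretKeyRef:
              name: sigsci.my-site-name-here
              key: secretaccesskey
---
apiVersion: v1
kind: Service
metadata:
  name: sigsci-agent
spec:
  selector:
    app: sigsci-agent
  ports:
  - port: 9999
    targetPort: 9999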

The sigsci-agent container can be configured in various ways depending on the installation type and module being used.

You can use the preStop container hook to slow the pod's shutdown and ensure drain timeouts are met.

preStop:
  exec:
    command:
    - sleep
    - "30"

By default, the agent prioritizes quick startup and is ready to perform preliminary inspection right away. However, quick startup isn't always desirable if you only want the agent to inspect traffic after it has loaded your rules and configuration data. If you want to delay agent startup, consider configuring a startup probe, as in the sketch below.
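
A minimal sketch of such a startup probe on the sigsci-agent container, assuming the agent image includes a shell and using the presence of the agent's default RPC socket file as a rough readiness signal (both are assumptions, not an official readiness check):

startupProbe:
  exec:
    command:
    - sh
    - -c
    # The agent creates this socket at the default path once it is up and serving RPC
    - test -S /sigsci/tmp/sigsci.sock
  periodSeconds: 5
  failureThreshold: 30

With this probe in place, the kubelet delays the container's other probes (and thus pod readiness) until the check succeeds.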

Getting and updating the agent container image

An official signalsciences/sigsci-agent container image is available on Docker Hub.

Alternatively, if you want to build your own image or need to customize the image, then follow the sigsci-agent build instructions.

These instructions reference the latest version of the agent with imagePullPolicy: Always, which pulls the latest agent version even if one already exists locally. This keeps the documentation from falling out of date and prevents installations from running a stagnant agent. However, this may not be what you want if you need to keep installations consistent or pinned to a specific agent version. In those cases, specify an agent version. Images on Docker Hub are tagged with their versions, and a list of versions is available on Docker Hub.

Whether you choose to use the latest image or a specific version, there are a few items to consider to keep the agent up-to-date:

Using the latest container image

If you do choose to use the latest image, then you will want to consider how you will keep the agent up to date.

  • If you have used the imagePullPolicy: Always option, then the latest image will be pulled on each startup and your agent will continue to get updates.

  • Alternatively, you may instead choose to manually update the local cache by periodically forcing a pull instead of always pulling on startup:

    $ docker pull signalsciences/sigsci-agent:latest

    Then, use latest with imagePullPolicy: Never set in the configuration so that pulls are never done on startup (only manually as above):

    - name: sigsci-agent
      image: signalsciences/sigsci-agent:latest
      imagePullPolicy: Never
      ...

Using a versioned container image

To use a specific version of the agent, replace latest with the agent version (represented here by x.xx.x). You may also want to change imagePullPolicy to IfNotPresent in this case, as the image should not change.

- name: sigsci-agent
  image: signalsciences/sigsci-agent:x.xx.x
  imagePullPolicy: IfNotPresent
  ...

This will pull the specified agent version and cache it locally. If you use this method, then it is recommended that you parameterize the agent image, using Helm or similar, so that it is easier to update the agent images later on.
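
For example, a minimal way to parameterize the agent image in a Helm chart of your own (the sigsciAgent value names below are hypothetical placeholders, not predefined chart values):

# values.yaml (hypothetical values defined for your own chart)
sigsciAgent:
  repository: signalsciences/sigsci-agent
  tag: "x.xx.x"

and in the pod template for the sidecar:

- name: sigsci-agent
  image: "{{ .Values.sigsciAgent.repository }}:{{ .Values.sigsciAgent.tag }}"
  imagePullPolicy: IfNotPresent

Updating the agent is then a matter of changing sigsciAgent.tag and running helm upgrade.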

Using a custom tag for the container image

It is also possible to apply a custom tag to a local agent image. To do this, pull the agent image (by version or use latest), apply a custom tag, then use that custom tag in the configuration. You will need to specify imagePullPolicy: Never so local images are only updated manually. After doing so, you will need to periodically update the local image to keep the agent up-to-date.

For example:

$ docker pull signalsciences/sigsci-agent:latest
$ docker tag signalsciences/sigsci-agent:latest signalsciences/sigsci-agent:testing

Then use this image tag in the configuration:

- name: sigsci-agent
  image: signalsciences/sigsci-agent:testing
  imagePullPolicy: Never
  ...

Configuring the agent container

Agent configuration is normally done via the environment. Most configuration options are available as environment variables. Environment variable names are the configuration option name capitalized, prefixed with SIGSCI_, and with any dashes (-) changed to underscores (_). For example, the max-procs option becomes the SIGSCI_MAX_PROCS environment variable, as in the snippet below. For more details on the available options, see the Agent Configuration documentation.
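
For instance, setting max-procs on the agent container would look like the following (the value 4 is just an example):

env:
# Equivalent to the max-procs agent configuration option
- name: SIGSCI_MAX_PROCS
  value: "4"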

The sigsci-agent container has a few required options that need to be configured:

  • Agent credentials (Agent Access Key and Agent Secret Key).
  • A volume to write temporary files.

Agent credentials

The sigsci-agent credentials are configured with two environment variables. These variables must be set or the agent will not start.

  • SIGSCI_ACCESSKEYID: The Agent Access Key identifies which site (also known as workspace) in the Next-Gen WAF control panel that the agent is configured for.
  • SIGSCI_SECRETACCESSKEY: The Agent Secret Key is the shared secret key to authenticate and authorize the agent.

Because of the sensitive nature of these values, we recommend you use the built-in secrets functionality of Kubernetes. With this configuration, the agent pulls the values from the secrets data instead of reading hardcoded values from the deployment configuration. This also makes agent credential rotation easier to manage, because the credentials only need to be changed in one place.

Use the valueFrom option instead of the value option to use the secrets functionality. For example:

env:
- name: SIGSCI_ACCESSKEYID
  valueFrom:
    secretKeyRef:
      # Update my-site-name-here to the correct site (workspace) name or similar identifier
      name: sigsci.my-site-name-here
      key: accesskeyid
- name: SIGSCI_SECRETACCESSKEY
  valueFrom:
    secretKeyRef:
      # Update my-site-name-here to the correct site (workspace) name or similar identifier
      name: sigsci.my-site-name-here
      key: secretaccesskey

The secrets functionality keeps secrets in various stores in Kubernetes. This guide uses the generic secret store in its examples, however any equivalent store can be used. Agent secrets can be added to the generic secret store using YAML similar to the following example:

apiVersion: v1
kind: Secret
metadata:
  name: sigsci.my-site-name-here
stringData:
  accesskeyid: 12345678-abcd-1234-abcd-1234567890ab
  secretaccesskey: abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD

The secret can also be created from the command line with kubectl, as in the following example:

$ kubectl create secret generic sigsci.my-site-name-here \
--from-literal=accesskeyid=12345678-abcd-1234-abcd-1234567890ab \
--from-literal=secretaccesskey=abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD
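
You can confirm the secret exists (without printing its values) with:

$ kubectl get secret sigsci.my-site-name-here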

Additional information about Kubernetes secrets functionality can be found in the Kubernetes documentation.

Agent temporary volume

For added security, we recommend the sigsci-agent container be executed with the root filesystem mounted read only. However, the agent still needs to write some temporary files, such as the socket file for RPC communication and periodically updated files such as geolocation data.

To accomplish this with a read only root filesystem, there needs to be a writeable volume mounted. This writeable volume can also be shared to expose the RPC socket file to other containers in the same pod.

The recommended way of creating a writeable volume is to use the built-in emptyDir volume type. This is typically configured in the volumes section of a deployment, as shown in the following example:

volumes:
- name: sigsci-tmp
  emptyDir: {}

Containers will then mount this volume at /sigsci/tmp:

volumeMounts:
- name: sigsci-tmp
  mountPath: /sigsci/tmp

The default in the official agent container image is to have the temporary volume mounted at /sigsci/tmp. If this needs to be moved for the agent container, then the following agent configuration options should also be changed from their defaults to match the new mount location:

  • rpc-address defaults to /sigsci/tmp/sigsci.sock
  • shared-cache-dir defaults to /sigsci/tmp/cache
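
For example, if the writeable volume were instead mounted at a hypothetical path such as /var/run/sigsci, the agent container could override both defaults via the corresponding environment variables:

env:
- name: SIGSCI_RPC_ADDRESS
  value: /var/run/sigsci/sigsci.sock
- name: SIGSCI_SHARED_CACHE_DIR
  value: /var/run/sigsci/cache

Any module configuration that references the socket (such as the sigsci_agent_host directive shown below) would need to point at the same path.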

The NGINX ingress controller is installed with the mandatory.yaml file. This file contains a modified template of the Generic Ingress Controller Deployment. The main additions are:

  1. Change the ingress container image to the custom ingress container and add volume mounts for socket-file communication between the module/ingress container and the agent sidecar container:

    ...
    containers:
    - name: nginx-ingress-controller
      image: signalsciences/sigsci-nginx-ingress-controller:latest
      ...
      volumeMounts:
      - name: sigsci-tmp
        mountPath: /sigsci/tmp
    ...
    ...
  2. Load the Next-Gen WAF module in the NGINX configuration file (nginx.conf) via ConfigMap:

    kind: ConfigMap
    apiVersion: v1
    data:
      main-snippet: load_module /usr/lib/nginx/modules/ngx_http_sigsci_nxo_module-1.17.7.so;
      http-snippet: sigsci_agent_host unix:/sigsci/tmp/sigsci.sock;
    metadata:
      name: nginx-configuration
      namespace: ingress-nginx
      labels:
        app.kubernetes.io/name: ingress-nginx
        app.kubernetes.io/part-of: ingress-nginx
  3. Add a container for the Next-Gen WAF agent:

    ...
    containers:
    ...
    # Next-Gen WAF agent running in default RPC mode
    - name: sigsci-agent
      image: signalsciences/sigsci-agent:latest
      imagePullPolicy: IfNotPresent
      env:
      - name: SIGSCI_ACCESSKEYID
        valueFrom:
          secretKeyRef:
            # This secret needs to be added (see docs on sigsci secrets)
            name: sigsci.my-site-name-here
            key: accesskeyid
      - name: SIGSCI_SECRETACCESSKEY
        valueFrom:
          secretKeyRef:
            # This secret needs to be added (see docs on sigsci secrets)
            name: sigsci.my-site-name-here
            key: secretaccesskey
      securityContext:
        # The sigsci-agent container should run with its root filesystem read only
        readOnlyRootFilesystem: true
      volumeMounts:
      # Default volume mount location for sigsci-agent writeable data (do not change mount path)
      - name: sigsci-tmp
        mountPath: /sigsci/tmp
    ...
  4. Define the volume used above:

    ...
    volumes:
    # Define a volume where sigsci-agent will write temp data and share the socket file,
    # which is required when the root filesystem is mounted read only
    - name: sigsci-tmp
      emptyDir: {}
    ...

Setup

The mandatory.yaml file creates the resources in the ingress-nginx namespace. If using Kubernetes Secrets to store the agent access keys, you will need to create the namespace and access keys before running the mandatory.yaml file.
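
For example, using the secret name referenced in mandatory.yaml and the placeholder key values from earlier in this guide:

$ kubectl create namespace ingress-nginx
$ kubectl create secret generic sigsci.my-site-name-here -n ingress-nginx \
  --from-literal=accesskeyid=12345678-abcd-1234-abcd-1234567890ab \
  --from-literal=secretaccesskey=abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD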

  1. Set the name of the secret that holds the agent keys in mandatory.yaml:

    ...
    env:
    - name: SIGSCI_ACCESSKEYID
      valueFrom:
        secretKeyRef:
          # This secret needs to be added (see docs on sigsci secrets)
          name: sigsci.my-site-name-here
          key: accesskeyid
    - name: SIGSCI_SECRETACCESSKEY
      valueFrom:
        secretKeyRef:
          # This secret needs to be added (see docs on sigsci secrets)
          name: sigsci.my-site-name-here
          key: secretaccesskey
    ...
  2. Pull or build the NGINX ingress + Signal Sciences Module container. Set any preferred registry and repository name, and set the image to match in mandatory.yaml:

    $ docker pull signalsciences/sigsci-nginx-ingress-controller:latest
  3. Deploy using the modified Generic Deployment:

    $ kubectl apply -f mandatory.yaml
  4. Create the service to expose the Ingress Controller. The steps necessary are dependent on your cloud provider. Official instructions can be found at https://kubernetes.github.io/ingress-nginx/deploy/#environment-specific-instructions.

    Below is an example service.yaml file:

    kind: Service
    apiVersion: v1
    metadata:
      name: ingress-nginx
      namespace: ingress-nginx
    spec:
      externalTrafficPolicy: Cluster
      selector:
        app.kubernetes.io/name: ingress-nginx
      type: LoadBalancer
      ports:
      - name: http
        port: 80
        targetPort: http
      - name: https
        port: 443
        targetPort: https
  5. Create the Ingress Resource. Below is an example Ingress Resource:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: test-ingress
      namespace: ingress-nginx
      annotations:
        nginx.ingress.kubernetes.io/rewrite-target: /
    spec:
      rules:
      - http:
          paths:
          - path: /testpath
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 80