Kubernetes Istio

In this example, the Signal Sciences agent runs as a Docker sidecar and integrates directly with an Istio service mesh deployed with the application. In this configuration, the Next-Gen WAF can inspect east/west (service-to-service) web requests in addition to the traditional north/south (client-to-server) requests.

Integrating the Signal Sciences agent

The Signal Sciences agent can be installed as a sidecar into each pod or as a service for some specialized needs.

The recommended way of installing the Signal Sciences agent in Kubernetes is by integrating the sigsci-agent into a pod as a sidecar. This means adding the sigsci-agent as an additional container to the Kubernetes pod. As a sidecar, the agent scales with the app/service in the pod rather than having to be scaled separately. However, in some situations, it may make more sense to install the sigsci-agent container as a service and scale it separately from the application.

The sigsci-agent container can be configured in various ways depending on the installation type and module being used.

You can use the preStop container hook to slow the pod's shutdown and ensure drain timeouts are met.

preStop:
  exec:
    command:
      - sleep
      - "30"

By default, the agent prioritizes quick startup so it is ready to perform preliminary inspection as early as possible. However, quick startup is not always desirable if you only want the agent to inspect traffic after it has loaded your rules and configuration data. If you want to delay agent startup, consider configuring a startup probe, as sketched below.
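The following is a minimal sketch of such a probe, not an official configuration. It assumes the agent runs in its default RPC mode with the temp volume mounted at /sigsci/tmp (so the RPC socket file only appears once the agent has finished initializing) and that a test binary is available in the image; adjust the check and the timing values to whatever signal makes sense for your configuration.

startupProbe:
  exec:
    # Assumption: default rpc-address of /sigsci/tmp/sigsci.sock; the socket
    # only exists once the agent has finished initializing
    command:
      - test
      - -S
      - /sigsci/tmp/sigsci.sock
  periodSeconds: 5
  failureThreshold: 12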

Getting and updating the Signal Sciences agent container image

An official signalsciences/sigsci-agent container image is available from the Docker Hub.

Alternatively, if you want to build your own image or need to customize the image, then follow the sigsci-agent build instructions.

These instructions reference the latest version of the agent with imagePullPolicy: Always, which will pull the latest agent version even if one already exists locally. This keeps the documentation from falling out of date and prevents installations that follow it from running a stale agent. However, this may not be what you want if you need to keep installations consistent or pinned to a specific version of the agent. In these cases, you should specify an agent version. Images on Docker Hub are tagged with their versions, and a list of versions is available on Docker Hub.

Whether you choose to use the latest image or a specific version, there are a few items to consider to keep the agent up-to-date.

Using the latest Signal Sciences container image

If you choose to use the latest image, consider how you will keep the agent up to date:

  • If you have used the imagePullPolicy: Always option, then the latest image will be pulled on each startup and your agent will continue to get updates.

  • Alternatively, you may instead choose to manually update the local cache by periodically forcing a pull instead of always pulling on startup:

    $ docker pull signalsciences/sigsci-agent:latest

    Then, use latest with imagePullPolicy: Never set in the configuration so that pulls are never done on startup (only manually as above):

- name: sigsci-agent
  image: signalsciences/sigsci-agent:latest
  imagePullPolicy: Never
  ...

Using a versioned Signal Sciences container image

To use a specific version of the agent, replace latest with the agent version (represented here by x.xx.x). You may also want to change imagePullPolicy to IfNotPresent in this case, as the image for a given version should not change.

- name: sigsci-agent
  image: signalsciences/sigsci-agent:x.xx.x
  imagePullPolicy: IfNotPresent
  ...

This will pull the specified agent version and cache it locally. If you use this method, it is recommended that you parameterize the agent image, using Helm or similar (as sketched below), so that it is easier to update the agent image later on.
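As an illustration only, a Helm-style parameterization might look like the following. The values keys (sigsciAgent.image, sigsciAgent.tag) are hypothetical names chosen for this sketch, not part of any official chart.

# values.yaml (hypothetical keys)
sigsciAgent:
  image: signalsciences/sigsci-agent
  tag: x.xx.x

# deployment template snippet
- name: sigsci-agent
  image: "{{ .Values.sigsciAgent.image }}:{{ .Values.sigsciAgent.tag }}"
  imagePullPolicy: IfNotPresent

Updating the agent then becomes a matter of bumping the tag value in one place rather than editing each deployment.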

Using a custom tag for the Signal Sciences container image

It is also possible to apply a custom tag to a local agent image. To do this, pull the agent image (by version or latest), apply a custom tag to it, and then use that custom tag in the configuration. You will need to specify imagePullPolicy: Never so that local images are only updated manually. After doing so, you will need to periodically update the local image to keep the agent up-to-date.

For example:

$ docker pull signalsciences/sigsci-agent:latest
$ docker tag signalsciences/sigsci-agent:latest signalsciences/sigsci-agent:testing

Then use this image tag in the configuration:

- name: sigsci-agent
  image: signalsciences/sigsci-agent:testing
  imagePullPolicy: Never
  ...

Configuring the Signal Sciences agent container

Agent configuration is normally done via the environment. Most configuration options are available as environment variables. An environment variable name is the configuration option name in all capitals, prefixed with SIGSCI_, with any dashes (-) changed to underscores (_). For example, the max-procs option becomes the SIGSCI_MAX_PROCS environment variable, as shown in the sketch below. For more details on the available options, see the Agent Configuration documentation.
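For instance, a minimal sketch of setting the max-procs option through its environment variable form in a container spec (the value 4 here is only an example):

env:
  # max-procs option expressed as an environment variable
  - name: SIGSCI_MAX_PROCS
    value: "4"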

The sigsci-agent container has a few required options that need to be configured:

  • Agent credentials (Agent Access Key and Agent Secret Key).
  • A volume to write temporary files.

Agent credentials

The sigsci-agent credentials are configured with two environment variables. These variables must be set or the agent will not start.

  • SIGSCI_ACCESSKEYID: The Agent Access Key identifies which site in the Next-Gen WAF console that the agent is configured for.
  • SIGSCI_SECRETACCESSKEY: The Agent Secret Key is the shared secret key to authenticate and authorize the agent.

Because of the sensitive nature of these values, we recommend using the built-in secrets functionality of Kubernetes. With this configuration, the agent pulls the values from the secrets data instead of reading hardcoded values from the deployment configuration. This also makes any desired agent credential rotation easier to manage, because the credentials only need to be changed in one place.

Use the valueFrom option instead of the value option to use the secrets functionality. For example:

env:
  - name: SIGSCI_ACCESSKEYID
    valueFrom:
      secretKeyRef:
        # Update my-site-name-here to the correct site name or similar identifier
        name: sigsci.my-site-name-here
        key: accesskeyid
  - name: SIGSCI_SECRETACCESSKEY
    valueFrom:
      secretKeyRef:
        # Update my-site-name-here to the correct site name or similar identifier
        name: sigsci.my-site-name-here
        key: secretaccesskey

The secrets functionality keeps secrets in various stores in Kubernetes. This guide uses the generic secret store in its examples; however, any equivalent store can be used. Agent secrets can be added to the generic secret store with YAML similar to the following example:

apiVersion: v1
kind: Secret
metadata:
  name: sigsci.my-site-name-here
stringData:
  accesskeyid: 12345678-abcd-1234-abcd-1234567890ab
  secretaccesskey: abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD

The secret can also be created from the command line with kubectl, as in the following example:

$ kubectl create secret generic sigsci.my-site-name-here \
--from-literal=accesskeyid=12345678-abcd-1234-abcd-1234567890ab \
--from-literal=secretaccesskey=abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD

Additional information about Kubernetes secrets functionality can be found in the Kubernetes documentation.

Agent temporary volume

For added security, we recommend that the sigsci-agent container be executed with its root filesystem mounted read only. However, the agent still needs to write some temporary files, such as the socket file for RPC communication, and some periodically updated files, such as geolocation data.

To accomplish this with a read-only root filesystem, a writeable volume must be mounted. This writeable volume can also be shared to expose the RPC socket file to other containers in the same pod.

The recommended way of creating a writeable volume is to use the built-in emptyDir volume type. This is typically configured in the volumes section of a deployment, as shown in the following example:

volumes:
  - name: sigsci-tmp
    emptyDir: {}

Containers will then mount this volume at /sigsci/tmp:

volumeMounts:
  - name: sigsci-tmp
    mountPath: /sigsci/tmp

The default in the official agent container image is to have the temporary volume mounted at /sigsci/tmp. If this location needs to be moved for the agent container, then the following agent configuration options should also be changed from their defaults to match the new mount location (see the sketch after this list):

  • rpc-address defaults to /sigsci/tmp/sigsci.sock
  • shared-cache-dir defaults to /sigsci/tmp/cache
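As an illustration only, if the temp volume were mounted at a hypothetical /data/sigsci-tmp path instead, both options could be overridden through their environment variable forms, following the naming convention described above:

volumeMounts:
  - name: sigsci-tmp
    # Hypothetical alternate mount location used for this sketch
    mountPath: /data/sigsci-tmp
env:
  # rpc-address pointed at the socket file under the new mount location
  - name: SIGSCI_RPC_ADDRESS
    value: /data/sigsci-tmp/sigsci.sock
  # shared-cache-dir pointed at the cache directory under the new mount location
  - name: SIGSCI_SHARED_CACHE_DIR
    value: /data/sigsci-tmp/cache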

Integrating the Signal Sciences agent using external authorization

As of Istio v1.9, support has been added to set up an authorization policy that delegates access control to an external authorization system.

The snippets below follow Istio's example and adapt the process, replacing the example ext-authz service with the Signal Sciences agent. Refer to the Istio documentation for the initial namespace and test workloads, as those are referenced in the snippets below. All files are applied to the foo namespace unless otherwise indicated.

Deploy the external authorizer

This assumes the secrets have already been applied.

apiVersion: v1
kind: Service
metadata:
  name: sigsci-agent
  labels:
    app: sigsci-agent
spec:
  ports:
    - name: grpc
      port: 9999
      targetPort: 9999
  selector:
    app: sigsci-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sigsci-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sigsci-agent
  template:
    metadata:
      labels:
        app: sigsci-agent
    spec:
      containers:
        - name: sigsci-agent
          image: signalsciences/sigsci-agent:latest
          imagePullPolicy: IfNotPresent
          # Configure the agent to use Envoy gRPC on port 9999
          env:
            - name: SIGSCI_ACCESSKEYID
              valueFrom:
                secretKeyRef:
                  # This secret needs to be added (see docs on sigsci secrets)
                  name: sigsci-agent-accesskey
                  key: accesskeyid
            - name: SIGSCI_SECRETACCESSKEY
              valueFrom:
                secretKeyRef:
                  # This secret needs to be added (see docs on sigsci secrets)
                  name: sigsci-agent-accesskey
                  key: secretaccesskey
            # Configure Envoy to expect response data (if using a gRPC access log config for Envoy)
            - name: SIGSCI_ENVOY_EXPECT_RESPONSE_DATA
              value: "1"
            - name: SIGSCI_ENVOY_GRPC_ADDRESS
              value: ":9999"
          ports:
            - containerPort: 9999
          securityContext:
            # The sigsci-agent container should run with its root filesystem read only
            readOnlyRootFilesystem: true
---

Verify the agent is running:

$ kubectl logs "$(kubectl get pod -l app=sigsci-agent -n foo -o jsonpath={.items..metadata.name})" -n foo -c sigsci-agent

Define the external authorizer

Edit the mesh config with the following command and add the extension provider definitions.

$ kubectl edit configmap istio -n istio-system

data:
  mesh: |-
    # Add the following content to define the external authorizers.
    extensionProviders:
      - name: "sigsci-agent-ext-authz"
        envoyExtAuthzGrpc:
          service: "sigsci-agent.foo.svc.cluster.local"
          port: "9999"
          timeout: 0.2s
          failOpen: true
      - name: "sigsci-agent-access-log"
        envoyHttpAls:
          service: "sigsci-agent.foo.svc.cluster.local"
          port: "9999"
          additionalRequestHeadersToLog:
            - "x-sigsci-request-id"
            - "x-sigsci-waf-response"
            - "accept"
            - "content-type"
            - "content-length"
          additionalResponseHeadersToLog:
            - "date"
            - "server"
            - "content-type"
            - "content-length"

Enable external authorization

Enable external authorization and apply the logging configuration.

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ext-authz
spec:
  selector:
    matchLabels:
      app: httpbin
  action: CUSTOM
  provider:
    # The provider name must match the extension provider defined in the mesh config.
    name: sigsci-agent-ext-authz
  rules:
    # The rules specify when to trigger the external authorizer.
    - to:
        - operation:
            paths: ["/headers"]
# kubectl apply -f logging.yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
    - providers:
        - name: sigsci-agent-access-log
# In another terminal, curl the httpbin app:
$ kubectl exec "$(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name})" -c sleep -n foo -- curl -v "http://httpbin.foo:8000/headers" -s

# Tail the agent logs:
$ kubectl logs -f "$(kubectl get pod -l app=sigsci-agent -n foo -o jsonpath={.items..metadata.name})" -n foo -c sigsci-agent

Integrating the Signal Sciences agent using EnvoyFilter

Istio uses the Envoy proxy under the hood. Because of this, Istio can use the Signal Sciences agent in gRPC mode in the same way as a generic Envoy install. The method of installing and configuring the Signal Sciences agent is similar to a generic Envoy install, except that the Envoy proxy is automatically deployed as a sidecar and is then configured using Istio's EnvoyFilter. Full Istio integration is only possible in Istio v1.3 or later due to the required extensions to EnvoyFilter.

To add Signal Sciences support to an Istio based application deployment, you will need to:

  • Add the sigsci-agent container to the pod, configured in Envoy gRPC listener mode.
  • Add an emptyDir{} volume as a place for the sigsci-agent to write temporary data.
  • Add an Istio EnvoyFilter for the app to allow the required Envoy configuration to be injected into the generated istio-proxy config.

Add the Signal Sciences agent as an Envoy gRPC service

...
  containers:
    # Example helloworld app running on port 8080 without sigsci configured
    - name: helloworld
      image: signalsciences/example-helloworld:latest
      imagePullPolicy: IfNotPresent
      args:
        # Address for the app to listen on
        - localhost:8080
      ports:
        - containerPort: 8080
    # Signal Sciences agent running in Envoy gRPC mode (SIGSCI_ENVOY_GRPC_ADDRESS configured)
    - name: sigsci-agent
      image: signalsciences/sigsci-agent:latest
      imagePullPolicy: IfNotPresent
      # Configure the agent to use Envoy gRPC on port 9999
      env:
        - name: SIGSCI_ACCESSKEYID
          valueFrom:
            secretKeyRef:
              # This secret needs to be added (see docs on sigsci secrets)
              name: sigsci.my-site-name-here
              key: accesskeyid
        - name: SIGSCI_SECRETACCESSKEY
          valueFrom:
            secretKeyRef:
              # This secret needs to be added (see docs on sigsci secrets)
              name: sigsci.my-site-name-here
              key: secretaccesskey
        # Configure Envoy to expect response data (if using a gRPC access log config for Envoy)
        - name: SIGSCI_ENVOY_EXPECT_RESPONSE_DATA
          value: "1"
        # Configure the Envoy gRPC listener address on any unused port
        - name: SIGSCI_ENVOY_GRPC_ADDRESS
          value: localhost:9999
      ports:
        - containerPort: 9999
      securityContext:
        # The sigsci-agent container should run with its root filesystem read only
        readOnlyRootFilesystem: true

Adding the Signal Sciences agent temp volume definition to the deployment

The agent temp volume needs to be defined for use by the other containers in the pod, using the built-in emptyDir: {} volume type:

...
  volumes:
    # Define a volume where sigsci-agent will write temp data and share the socket file,
    # which is required when the root filesystem is mounted read only
    - name: sigsci-tmp
      emptyDir: {}

Adding the Istio EnvoyFilter object to inject the required Envoy config into the Istio proxy

Istio's EnvoyFilter object is a feature-rich way of customizing the Envoy configuration for the istio-proxy.

You will need to set the EnvoyFilter metadata.name field and the spec.workloadSelector.labels.app field to your application's name in the example below. Additional Envoy configuration options are outlined in the Envoy install guide. These sections are highlighted with comments in the example YAML.

Example example-helloworld-sigsci_envoyfilter.yaml:

# The following adds the required Envoy configuration into the istio-proxy configuration
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  # This needs to be adjusted to the app name protected by sigsci
  name: helloworld
spec:
  workloadSelector:
    labels:
      # This needs to be adjusted to the app name protected by sigsci
      app: helloworld

  # Patch the Envoy configuration, adding in the required sigsci config
  configPatches:

    # Adds the ext_authz HTTP filter for the sigsci-agent ext_authz API
    - applyTo: HTTP_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          name: virtualInbound
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
      patch:
        operation: INSERT_BEFORE
        value:
          # Configure the envoy.ext_authz here:
          name: envoy.filters.http.ext_authz
          typed_config:
            "@type": "type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz"
            transport_api_version: "V3"
            grpc_service:
              # NOTE: *SHOULD* use envoy_grpc as ext_authz can use dynamic clusters and has connection pooling
              envoy_grpc:
                cluster_name: sigsci-agent-grpc
              timeout: 0.2s
            failure_mode_allow: true
            with_request_body:
              max_request_bytes: 8192
              allow_partial_message: true

    # Adds the access_log entry for the sigsci-agent http_grpc_access_log API
    - applyTo: NETWORK_FILTER
      match:
        context: SIDECAR_INBOUND
        listener:
          name: virtualInbound
          filterChain:
            filter:
              name: "envoy.http_connection_manager"
      patch:
        operation: MERGE
        value:
          name: "envoy.http_connection_manager"
          typed_config:
            "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
            access_log:
              # Configure the envoy.http_grpc_access_log here:
              - name: "envoy.http_grpc_access_log"
                typed_config:
                  "@type": "type.googleapis.com/envoy.extensions.access_loggers.grpc.v3.HttpGrpcAccessLogConfig"
                  common_config:
                    log_name: "sigsci-agent-grpc"
                    transport_api_version: "V3"
                    grpc_service:
                      # NOTE: *MUST* use google_grpc as envoy_grpc cannot handle a dynamic cluster for ALS (yet)
                      google_grpc:
                        # The address *MUST* be 127.0.0.1 so that communication is intra-pod
                        # Configure the sigsci-agent port number here:
                        target_uri: 127.0.0.1:9999
                        stat_prefix: "sigsci-agent"
                      timeout: 0.2s
                  additional_request_headers_to_log:
                    # These are required:
                    - "x-sigsci-request-id"
                    - "x-sigsci-waf-response"
                    # These are additional headers you want recorded:
                    - "accept"
                    - "content-type"
                    - "content-length"
                  additional_response_headers_to_log:
                    # These are additional headers you want recorded:
                    - "date"
                    - "server"
                    - "content-type"
                    - "content-length"

    # Adds a dynamic cluster for the sigsci-agent via CDS for the sigsci-agent ext_authz API
    - applyTo: CLUSTER
      patch:
        operation: ADD
        value:
          name: sigsci-agent-grpc
          type: STRICT_DNS
          connect_timeout: 0.5s
          http2_protocol_options: {}
          load_assignment:
            cluster_name: sigsci-agent-grpc
            endpoints:
              - lb_endpoints:
                  - endpoint:
                      address:
                        socket_address:
                          # The address *MUST* be 127.0.0.1 so that communication is intra-pod
                          address: 127.0.0.1
                          # Configure the agent port here:
                          port_value: 9999

The application can then be deployed as you normally would with Istio. For example:

$ istioctl kube-inject -f example-helloworld-sigsci.yaml | kubectl apply -f -
service/helloworld created
deployment.apps/helloworld created
$ kubectl apply -f example-helloworld-sigsci_envoyfilter.yaml
envoyfilter.networking.istio.io/helloworld created
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
helloworld-7954bb57bc-pfr22 3/3 Running 2 33s
$ kubectl get pod helloworld-7954bb57bc-pfr22 -o jsonpath='{.spec.containers[*].name}'
helloworld sigsci-agent istio-proxy
$ kubectl logs helloworld-7954bb57bc-pfr22 sigsci-agent | head
2019/10/01 21:04:57.540047 Signal Sciences Agent 4.39.0 starting as user sigsci with PID 1, Max open files=1048576, Max data size=unlimited, Max address space=unlimited, Max stack size=8388608
2019/10/01 21:04:57.541987 =====================================================
2019/10/01 21:04:57.542028 Agent: helloworld-7954bb57bc-pfr22
2019/10/01 21:04:57.542034 System: alpine 3.9.4 (linux 4.9.184-linuxkit)
2019/10/01 21:04:57.542173 Memory: 1.672G / 3.854G RAM available
2019/10/01 21:04:57.542187 CPU: 6 MaxProcs / 12 CPU cores available
2019/10/01 21:04:57.542257 =====================================================
2019/10/01 21:04:57.630755 Envoy gRPC server on 127.0.0.1:9999 starting

Note that there are three containers running in the pod: app=helloworld, sigsci-agent, and the istio-proxy.
