Kubernetes Istio
Last updated 2024-04-03
IMPORTANT
This guide only applies to Next-Gen WAF customers with access to the Next-Gen WAF control panel. If you have access to the Next-Gen WAF product in the Fastly control panel, you can only deploy the Next-Gen WAF with the Edge WAF deployment method.
In this example, the Next-Gen WAF agent runs in a Docker sidecar and integrates directly with an Istio service mesh deployed on the application. With this configuration, the Next-Gen WAF can inspect east/west (service-to-service) web requests in addition to the traditional north/south (client-to-server) requests.
Integrating the Next-Gen WAF agent
The Next-Gen WAF agent can be installed as a sidecar into each pod or, for some specialized needs, as a service.
The recommended way of installing the Next-Gen WAF agent in Kubernetes is by integrating the sigsci-agent into a pod as a sidecar. This means adding the sigsci-agent as an additional container to the Kubernetes pod. As a sidecar, the agent scales with the app/service in the pod instead of having to be scaled separately. However, in some situations, it may make more sense to install the sigsci-agent container as a service and scale it separately from the application.
The sigsci-agent container can be configured in various ways depending on the installation type and module being used.
You can use the preStop container hook to slow the pod's shutdown and ensure drain timeouts are met.

preStop:
  exec:
    command:
    - sleep
    - "30"
By default, the agent prioritizes quick startup and readiness to perform preliminary inspection. However, quick startup isn't always desirable if you only want the agent to inspect traffic after it has loaded your rules and configuration data. If you want to delay agent startup, consider configuring a startup probe.
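For example, here is a minimal sketch of a startup probe that holds the agent container back until its RPC socket file appears under the shared temp volume; the check command, path, and timings are assumptions and should be adjusted to whatever signal fits your configuration:

startupProbe:
  exec:
    command:
    # Hypothetical check: wait for the agent's RPC socket file to exist
    - test
    - -e
    - /sigsci/tmp/sigsci.sock
  periodSeconds: 5
  failureThreshold: 12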
Getting and updating the agent container image
An official signalsciences/sigsci-agent container image is available on Docker Hub.
Alternatively, if you want to build your own image or need to customize the image, then follow the sigsci-agent build instructions.
These instructions reference the latest version of the agent with imagePullPolicy: Always, which will pull the latest agent version even if one already exists locally. This keeps the documentation from falling out of date and prevents anyone following it from running a stagnant agent. However, this may not be what you want if you need to keep installations consistent or pinned to a specific version of the agent. In these cases, you should specify an agent version. Images on Docker Hub are tagged with their versions, and a list of versions is available on Docker Hub.
Whether you choose to use the latest image or a specific version, there are a few items to consider to keep the agent up-to-date.
Using the latest container image
If you do choose to use the latest image, then you will want to consider how you will keep the agent up to date.
If you have used the imagePullPolicy: Always option, then the latest image will be pulled on each startup and your agent will continue to get updates.

Alternatively, you may instead choose to manually update the local cache by periodically forcing a pull instead of always pulling on startup:
$ docker pull signalsciences/sigsci-agent:latest

Then, use latest with imagePullPolicy: Never set in the configuration so that pulls are never done on startup (only manually, as above):

- name: sigsci-agent
  image: signalsciences/sigsci-agent:latest
  imagePullPolicy: Never
  ...
Using a versioned container image
To use a specific version of the agent, replace latest with the agent version (represented here by x.xx.x). You may also want to set imagePullPolicy: IfNotPresent in this case, as the image should not change.

- name: sigsci-agent
  image: signalsciences/sigsci-agent:x.xx.x
  imagePullPolicy: IfNotPresent
  ...
This will pull the specified agent version and cache it locally. If you use this method, then it is recommended that you parameterize the agent image, using Helm or similar, so that it is easier to update the agent images later on.
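For example, with Helm you might template the agent image and tag from your values file so the version can be bumped in one place. This is only a sketch; the sigsciAgent values keys below are hypothetical and should match however your chart is structured:

# values.yaml (hypothetical keys)
sigsciAgent:
  image: signalsciences/sigsci-agent
  tag: "x.xx.x"

# In the deployment template:
- name: sigsci-agent
  image: "{{ .Values.sigsciAgent.image }}:{{ .Values.sigsciAgent.tag }}"
  imagePullPolicy: IfNotPresent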
Using a custom tag for the container image
It is also possible to apply a custom tag to a local agent image. To do this, pull the agent image (by version or latest), apply a custom tag, then use that custom tag in the configuration. You will need to specify imagePullPolicy: Never so local images are only updated manually. After doing so, you will need to periodically update the local image to keep the agent up-to-date.
For example:
$ docker pull signalsciences/sigsci-agent:latest
$ docker tag signalsciences/sigsci-agent:latest signalsciences/sigsci-agent:testing
Then use this image tag in the configuration:
- name: sigsci-agent
  image: signalsciences/sigsci-agent:testing
  imagePullPolicy: Never
  ...
Configuring the agent container
Agent configuration is normally done via the environment. Most configuration options are available as environment variables. Environment variable names take the configuration option name in all capitals, prefixed with SIGSCI_ and with any dashes (-) changed to underscores (_). For example, the max-procs option becomes the SIGSCI_MAX_PROCS environment variable. For more details on what options are available, see the Agent Configuration documentation.
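For example, a brief sketch of setting the max-procs option through the container environment; the value shown is purely illustrative:

env:
  - name: SIGSCI_MAX_PROCS
    # Illustrative value; tune for your workload
    value: "2"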
The sigsci-agent container has a few required options that need to be configured:
- Agent credentials (Agent Access Key and Agent Secret Key).
- A volume to write temporary files.
Agent credentials
The sigsci-agent credentials are configured with two environment variables. These variables must be set or the agent will not start.
- SIGSCI_ACCESSKEYID: The Agent Access Key identifies which site (also known as workspace) in the Next-Gen WAF control panel that the agent is configured for.
- SIGSCI_SECRETACCESSKEY: The Agent Secret Key is the shared secret key to authenticate and authorize the agent.
Because of the sensitive nature of these values, we recommend you use the built-in secrets functionality of Kubernetes. With this configuration, the agent will pull the values from the secrets data instead of reading hardcoded values from the deployment configuration. This also makes any desired agent credential rotation easier to manage, because the credentials only need to be changed in one place.
Use the valueFrom option instead of the value option to use the secrets functionality. For example:
env:
  - name: SIGSCI_ACCESSKEYID
    valueFrom:
      secretKeyRef:
        # Update my-site-name-here to the correct site (workspace) name or similar identifier
        name: sigsci.my-site-name-here
        key: accesskeyid
  - name: SIGSCI_SECRETACCESSKEY
    valueFrom:
      secretKeyRef:
        # Update my-site-name-here to the correct site (workspace) name or similar identifier
        name: sigsci.my-site-name-here
        key: secretaccesskey
The secrets functionality keeps secrets in various stores in Kubernetes. This guide uses the generic secret store in its examples; however, any equivalent store can be used. Agent secrets can be added to the generic secret store using YAML similar to the following example:
apiVersion: v1
kind: Secret
metadata:
  name: sigsci.my-site-name-here
stringData:
  accesskeyid: 12345678-abcd-1234-abcd-1234567890ab
  secretaccesskey: abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD
This can also be created from the command line with kubectl, as in the following example:
$ kubectl create secret generic sigsci.my-site-name-here \
    --from-literal=accesskeyid=12345678-abcd-1234-abcd-1234567890ab \
    --from-literal=secretaccesskey=abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD
Additional information about the Kubernetes secrets functionality can be found in the Kubernetes documentation.
Agent temporary volume
For added security, we recommend the sigsci-agent container be executed with the root filesystem mounted read-only. However, the agent still needs to write some temporary files, such as the socket file for RPC communication and some periodically updated files such as geolocation data.
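A minimal sketch of this setting on the agent container; the same securityContext appears in the full deployment examples later in this guide:

securityContext:
  # Run the sigsci-agent container with its root filesystem read only
  readOnlyRootFilesystem: true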
To accomplish this with a read-only root filesystem, there needs to be a writeable volume mounted. This writeable volume can also be shared to expose the RPC socket file to other containers in the same pod.
The recommended way of creating a writeable volume is to use the built-in emptyDir volume type. This is typically configured in the volumes section of a deployment, as shown in the following example:
volumes:
  - name: sigsci-tmp
    emptyDir: {}
Containers will then mount this volume at /sigsci/tmp:
volumeMounts:
  - name: sigsci-tmp
    mountPath: /sigsci/tmp
The default in the official agent container image is to have the temporary volume mounted at /sigsci/tmp. If this needs to be moved for the agent container, then the following agent configuration options should also be changed from their defaults to match the new mount location:
- rpc-address defaults to /sigsci/tmp/sigsci.sock
- shared-cache-dir defaults to /sigsci/tmp/cache
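For example, if the volume were instead mounted at /var/run/sigsci (a hypothetical path used only for illustration), the corresponding environment variables, following the naming convention described above, could be set as in this sketch:

env:
  - name: SIGSCI_RPC_ADDRESS
    value: /var/run/sigsci/sigsci.sock
  - name: SIGSCI_SHARED_CACHE_DIR
    value: /var/run/sigsci/cache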
Integrating the Next-Gen WAF agent using external authorization
As of Istio v1.9, support has been added to set up an authorization policy that delegates access control to an external authorization system.
The snippets below follow Istio's example and adapt the process, replacing the example ext-authz service with the Next-Gen WAF agent. Refer to the Istio documentation for the initial namespace and test workloads, as those are referenced in the snippets below. All files are applied to the foo namespace unless otherwise indicated.
Deploy the external authorizer
This assumes the secrets have already been applied.
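For example, a secret matching the secretKeyRef references in the deployment below could be created in the foo namespace with kubectl; the key values shown are placeholders:

$ kubectl create secret generic sigsci-agent-accesskey -n foo \
    --from-literal=accesskeyid=12345678-abcd-1234-abcd-1234567890ab \
    --from-literal=secretaccesskey=abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD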
apiVersion: v1
kind: Service
metadata:
  name: sigsci-agent
  labels:
    app: sigsci-agent
spec:
  ports:
  - name: grpc
    port: 9999
    targetPort: 9999
  selector:
    app: sigsci-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sigsci-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sigsci-agent
  template:
    metadata:
      labels:
        app: sigsci-agent
    spec:
      containers:
      - name: sigsci-agent
        image: signalsciences/sigsci-agent:latest
        imagePullPolicy: IfNotPresent
        # Configure the agent to use Envoy gRPC on port 9999
        env:
        - name: SIGSCI_ACCESSKEYID
          valueFrom:
            secretKeyRef:
              # This secret needs to be added (see docs on sigsci secrets)
              name: sigsci-agent-accesskey
              key: accesskeyid
        - name: SIGSCI_SECRETACCESSKEY
          valueFrom:
            secretKeyRef:
              # This secret needs to be added (see docs on sigsci secrets)
              name: sigsci-agent-accesskey
              key: secretaccesskey
        # Configure Envoy to expect response data (if using a gRPC access log config for Envoy)
        - name: SIGSCI_ENVOY_EXPECT_RESPONSE_DATA
          value: "1"
        - name: SIGSCI_ENVOY_GRPC_ADDRESS
          value: :9999
        ports:
        - containerPort: 9999
        securityContext:
          # The sigsci-agent container should run with its root filesystem read only
          readOnlyRootFilesystem: true
---
Verify that the agent is running:
$ kubectl logs "$(kubectl get pod -l app=sigsci-agent -n foo -o jsonpath={.items..metadata.name})" -n foo -c sigsci-agent
Define the external authorizer
Edit the mesh config with the following command and add the extension provider definitions.
$ kubectl edit configmap istio -n istio-system
data:
  mesh: |-
    # Add the following content to define the external authorizers.
    extensionProviders:
    - name: "sigsci-agent-ext-authz"
      envoyExtAuthzGrpc:
        service: "sigsci-agent.foo.svc.cluster.local"
        port: "9999"
        timeout: 0.2s
        failOpen: true
        includeRequestBodyInCheck:
          packAsBytes: true
          # use `allowPartialMessage: false` if you want to inspect larger payloads
          allowPartialMessage: true
          maxRequestBytes: 8192
    - name: "sigsci-agent-access-log"
      envoyHttpAls:
        service: "sigsci-agent.foo.svc.cluster.local"
        port: "9999"
        additionalRequestHeadersToLog:
        - "x-sigsci-request-id"
        - "x-sigsci-waf-response"
        - "accept"
        - "content-type"
        - "content-length"
        additionalResponseHeadersToLog:
        - "date"
        - "server"
        - "content-type"
        - "content-length"
Enable with external authorization
Enable the external authorization and apply logging.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ext-authz
spec:
  selector:
    matchLabels:
      app: httpbin
  action: CUSTOM
  provider:
    # The provider name must match the extension provider defined in the mesh config.
    name: sigsci-agent-ext-authz
  rules:
  # The rules specify when to trigger the external authorizer.
  - to:
    - operation:
        paths: ["/headers"]
# kubectl apply -f logging.yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
  - providers:
    - name: sigsci-agent-access-log
# In another terminal, curl the httpbin app:
$ kubectl exec "$(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name})" -c sleep -n foo -- curl -v "http://httpbin.foo:8000/headers" -s
# tail the logs
$ kubectl logs -f "$(kubectl get pod -l app=sigsci-agent -n foo -o jsonpath={.items..metadata.name})" -n foo -c sigsci-agent
Integrating the Next-Gen WAF agent using EnvoyFilter
Istio uses the Envoy proxy under the hood. Because of this, Istio can use the Next-Gen WAF agent in gRPC mode in the same way as a generic Envoy install. The method of installing and configuring the Next-Gen WAF agent is similar to a generic Envoy install, except the Envoy proxy is automatically deployed as a sidecar. Envoy is then configured using Istio's EnvoyFilter. Full Istio integration is only possible in Istio v1.3 or later due to the required extensions to EnvoyFilter.
To add Next-Gen WAF support to an Istio-based application deployment, you will need to:
- Add the sigsci-agent container to the pod, configured in Envoy gRPC listener mode.
- Add an emptyDir: {} volume as a place for the sigsci-agent to write temporary data.
- Add an Istio EnvoyFilter for the app to allow the required Envoy configuration to be injected into the generated istio-proxy config.
Add the Next-Gen WAF agent as an Envoy gRPC service
...
  containers:
  # Example helloworld app running on port 8000 without sigsci configured
  - name: helloworld
    image: signalsciences/example-helloworld:latest
    imagePullPolicy: IfNotPresent
    args:
    # Address for the app to listen on
    - localhost:8080
    ports:
    - containerPort: 8080
  # Next-Gen WAF agent running in Envoy gRPC mode (SIGSCI_ENVOY_GRPC_ADDRESS configured)
  - name: sigsci-agent
    image: signalsciences/sigsci-agent:latest
    imagePullPolicy: IfNotPresent
    # Configure the agent to use Envoy gRPC on port 9999
    env:
    - name: SIGSCI_ACCESSKEYID
      valueFrom:
        secretKeyRef:
          # This secret needs to be added (see docs on sigsci secrets)
          name: sigsci.my-site-name-here
          key: accesskeyid
    - name: SIGSCI_SECRETACCESSKEY
      valueFrom:
        secretKeyRef:
          # This secret needs to be added (see docs on sigsci secrets)
          name: sigsci.my-site-name-here
          key: secretaccesskey
    # Configure Envoy to expect response data (if using a gRPC access log config for Envoy)
    - name: SIGSCI_ENVOY_EXPECT_RESPONSE_DATA
      value: "1"
    # Configure the Envoy gRPC listener address on any unused port
    - name: SIGSCI_ENVOY_GRPC_ADDRESS
      value: localhost:9999
    ports:
    - containerPort: 9999
    securityContext:
      # The sigsci-agent container should run with its root filesystem read only
      readOnlyRootFilesystem: true
Adding the Next-Gen WAF agent temp volume definition to the deployment
The agent temp volume needs to be defined for use by the other containers in the pod, using the built-in emptyDir: {} volume type:
...
  volumes:
  # Define a volume where sigsci-agent will write temp data and share the socket file,
  # which is required when the root filesystem is mounted read only
  - name: sigsci-tmp
    emptyDir: {}
Adding the Istio EnvoyFilter object to inject the required Envoy config into the Istio proxy
Istio's EnvoyFilter object is a feature-rich way of customizing the Envoy configuration for the istio-proxy.
You will need to set the EnvoyFilter metadata.name field and the spec.workloadSelector.labels.app field to the application name below. Additional Envoy configuration options are outlined in the Envoy install guide. These sections are highlighted with comments in the example YAML.
Example example-helloworld_sigsci-envoyfilter.yaml:
# The following adds the required Envoy configuration into the istio-proxy configuration
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  # This needs to be adjusted to the app name protected by sigsci
  name: helloworld
spec:
  workloadSelector:
    labels:
      # This needs to be adjusted to the app name protected by sigsci
      app: helloworld

  # Patch the Envoy configuration, adding in the required sigsci config
  configPatches:

  # Adds the ext_authz HTTP filter for the sigsci-agent ext_authz API
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        name: virtualInbound
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: INSERT_BEFORE
      value:
        # Configure the envoy.ext_authz here:
        name: envoy.filters.http.ext_authz
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz"
          transport_api_version: "V3"
          grpc_service:
            # NOTE: *SHOULD* use envoy_grpc as ext_authz can use dynamic clusters and has connection pooling
            envoy_grpc:
              cluster_name: sigsci-agent-grpc
            timeout: 0.2s
          failure_mode_allow: true
          with_request_body:
            max_request_bytes: 8192
            allow_partial_message: true

  # Adds the access_log entry for the sigsci-agent http_grpc_access_log API
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        name: virtualInbound
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        name: "envoy.http_connection_manager"
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          access_log:
            # Configure the envoy.http_grpc_access_log here:
            - name: "envoy.http_grpc_access_log"
              typed_config:
                "@type": "type.googleapis.com/envoy.extensions.access_loggers.grpc.v3.HttpGrpcAccessLogConfig"
                common_config:
                  log_name: "sigsci-agent-grpc"
                  transport_api_version: "V3"
                  grpc_service:
                    # NOTE: *MUST* use google_grpc as envoy_grpc cannot handle a dynamic cluster for ALS (yet)
                    google_grpc:
                      # The address *MUST* be 127.0.0.1 so that communication is intra-pod
                      # Configure the sigsci-agent port number here:
                      target_uri: 127.0.0.1:9999
                      stat_prefix: "sigsci-agent"
                    timeout: 0.2s
                additional_request_headers_to_log:
                  # These are required:
                  - "x-sigsci-request-id"
                  - "x-sigsci-waf-response"
                  # These are additional headers you want recorded:
                  - "accept"
                  - "content-type"
                  - "content-length"
                additional_response_headers_to_log:
                  # These are additional headers you want recorded:
                  - "date"
                  - "server"
                  - "content-type"
                  - "content-length"

  # Adds a dynamic cluster for the sigsci-agent via CDS for the sigsci-agent ext_authz API
  - applyTo: CLUSTER
    patch:
      operation: ADD
      value:
        name: sigsci-agent-grpc
        type: STRICT_DNS
        connect_timeout: 0.5s
        http2_protocol_options: {}
        load_assignment:
          cluster_name: sigsci-agent-grpc
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    # The address *MUST* be 127.0.0.1 so that communication is intra-pod
                    address: 127.0.0.1
                    # Configure the agent port here:
                    port_value: 9999
The application can then be deployed as you normally would with Istio. For example:
$ istioctl kube-inject -f example-helloworld-sigsci.yaml | kubectl apply -f -
service/helloworld created
deployment.apps/helloworld created
$ kubectl apply -f example-helloworld-sigsci_envoyfilter.yaml
envoyfilter.networking.istio.io/helloworld created
$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
helloworld-7954bb57bc-pfr22   3/3     Running   2          33s
$ kubectl get pod helloworld-7954bb57bc-pfr22 -o jsonpath='{.spec.containers[*].name}'
helloworld sigsci-agent istio-proxy
$ kubectl logs helloworld-7954bb57bc-pfr22 sigsci-agent | head
2019/10/01 21:04:57.540047 Signal Sciences Agent 4.39.0 starting as user sigsci with PID 1, Max open files=1048576, Max data size=unlimited, Max address space=unlimited, Max stack size=8388608
2019/10/01 21:04:57.541987 =====================================================
2019/10/01 21:04:57.542028 Agent: helloworld-7954bb57bc-pfr22
2019/10/01 21:04:57.542034 System: alpine 3.9.4 (linux 4.9.184-linuxkit)
2019/10/01 21:04:57.542173 Memory: 1.672G / 3.854G RAM available
2019/10/01 21:04:57.542187 CPU: 6 MaxProcs / 12 CPU cores available
2019/10/01 21:04:57.542257 =====================================================
2019/10/01 21:04:57.630755 Envoy gRPC server on 127.0.0.1:9999 starting
Note that there are three containers running in the pod: app=helloworld, sigsci-agent, and istio-proxy.