Kubernetes Istio
Last updated 2024-04-03
IMPORTANT
This guide only applies to Next-Gen WAF customers with access to the Next-Gen WAF control panel. If you have access to the Next-Gen WAF product in the Fastly control panel, you can only deploy the Next-Gen WAF with the Edge WAF deployment method.
In this example, the Next-Gen WAF agent runs in a Docker sidecar and integrates directly with an Istio service mesh deployed on the application. With this setup, you can configure the Next-Gen WAF to inspect east/west (service-to-service) web requests along with the traditional north/south (client-to-server) requests.
Integrating the Next-Gen WAF agent
The Next-Gen WAF agent can be installed as a sidecar into each pod or as a service for some specialized needs.
The recommended way of installing the Next-Gen WAF agent in Kubernetes is to integrate the sigsci-agent into a pod as a sidecar. This means adding the sigsci-agent as an additional container in the Kubernetes pod. As a sidecar, the agent scales with the app/service in the pod instead of having to be scaled separately. However, in some situations, it may make more sense to install the sigsci-agent container as a service and scale it separately from the application.
The sigsci-agent container can be configured in various ways depending on the installation type and module being used.
You can use the preStop container hook to slow the pod's shutdown and ensure drain timeouts are met:

preStop:
  exec:
    command:
    - sleep
    - "30"
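For context, a hedged sketch of where this hook sits in the sidecar container spec; the 30-second sleep is illustrative and should match your drain timeout:

- name: sigsci-agent
  image: signalsciences/sigsci-agent:latest
  lifecycle:
    preStop:
      exec:
        command:
        - sleep
        - "30"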
By default, the agent prioritizes quick startup so it is ready to inspect traffic as soon as possible. However, quick startup isn't always desirable if you only want the agent to inspect traffic after it has loaded your rules and configuration data. If you want to delay agent startup, consider configuring a startup probe.
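As a minimal sketch, assuming the default RPC socket path (/sigsci/tmp/sigsci.sock) is an acceptable readiness signal for your deployment; the timings shown are illustrative:

startupProbe:
  exec:
    command:
    # Hypothetical check: wait until the agent has created its RPC socket
    - sh
    - -c
    - test -S /sigsci/tmp/sigsci.sock
  periodSeconds: 5
  failureThreshold: 12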
Getting and updating the agent container image
An official signalsciences/sigsci-agent container image is available on Docker Hub.
Alternatively, if you want to build your own image or need to customize the image, then follow the sigsci-agent build instructions.
These instructions reference the latest version of the agent with imagePullPolicy: Always, which pulls the latest agent version even if one already exists locally. This keeps the documentation from falling out of date and prevents installations from running a stagnant agent. However, this may not be what you want if you need to keep installations consistent or pinned to a specific version of the agent. In these cases, you should specify an agent version. Images on Docker Hub are tagged with their versions, and a list of versions is available on Docker Hub.
Whether you choose to use the latest image or a specific version, there are a few items to consider to keep the agent up to date.
Using the latest container image
If you do choose to use the latest image, then you will want to consider how you will keep the agent up to date.
If you have used the imagePullPolicy: Always option, then the latest image will be pulled on each startup and your agent will continue to get updates.

Alternatively, you may instead choose to manually update the local cache by periodically forcing a pull instead of always pulling on startup:

$ docker pull signalsciences/sigsci-agent:latest

Then, use latest with imagePullPolicy: Never set in the configuration so that pulls are never done on startup (only manually as above):

- name: sigsci-agent
  image: signalsciences/sigsci-agent:latest
  imagePullPolicy: Never
  ...
Using a versioned container image
To use a specific version of the agent, replace latest with the agent version (represented here by x.xx.x). You may also want to use imagePullPolicy: IfNotPresent in this case, as the image should not change.
- name: sigsci-agent
  image: signalsciences/sigsci-agent:x.xx.x
  imagePullPolicy: IfNotPresent
  ...
This will pull the specified agent version and cache it locally. If you use this method, then it is recommended that you parameterize the agent image, using Helm or similar, so that it is easier to update the agent images later on.
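As a minimal sketch of such parameterization, assuming a Helm chart (the values.yaml keys shown here are illustrative, not part of an official chart):

# values.yaml (illustrative)
sigsciAgent:
  image: signalsciences/sigsci-agent
  tag: x.xx.x

# container snippet in the deployment template
- name: sigsci-agent
  image: "{{ .Values.sigsciAgent.image }}:{{ .Values.sigsciAgent.tag }}"
  imagePullPolicy: IfNotPresent

Updating the agent then only requires changing the tag value.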
Using a custom tag for the container image
It is also possible to apply a custom tag to a local agent image. To do this, pull the agent image (by version or use latest), apply a custom tag, then use that custom tag in the configuration. You will need to specify imagePullPolicy: Never so local images are only updated manually. After doing so, you will need to periodically update the local image to keep the agent up to date.
For example:
$ docker pull signalsciences/sigsci-agent:latest
$ docker tag signalsciences/sigsci-agent:latest signalsciences/sigsci-agent:testing
Then use this image tag in the configuration:
- name: sigsci-agent
  image: signalsciences/sigsci-agent:testing
  imagePullPolicy: Never
  ...
Configuring the agent container
Agent configuration is normally done via the environment. Most configuration options are available as environment variables. Environment variable names are the configuration option name in all capitals, prefixed with SIGSCI_ and with any dashes (-) changed to underscores (_). For example, the max-procs option becomes the SIGSCI_MAX_PROCS environment variable. For more details on the available options, see the Agent Configuration documentation.
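For example, following that naming convention, the max-procs option could be set on the agent container like this (the value shown is illustrative):

env:
- name: SIGSCI_MAX_PROCS
  value: "4"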
The sigsci-agent container has a few required options that need to be configured:
- Agent credentials (Agent Access Key and Agent Secret Key).
- A volume to write temporary files.
Agent credentials
The sigsci-agent credentials are configured with two environment variables. These variables must be set or the agent will not start.
- SIGSCI_ACCESSKEYID: The Agent Access Key identifies which site (also known as workspace) in the Next-Gen WAF control panel that the agent is configured for.
- SIGSCI_SECRETACCESSKEY: The Agent Secret Key is the shared secret key to authenticate and authorize the agent.
Because of the sensitive nature of these values, we recommend using the built-in secrets functionality of Kubernetes. With this configuration, the agent pulls the values from the secrets data instead of hardcoding them in the deployment configuration. This also makes any agent credential rotation easier to manage, since the credentials only need to be changed in one place.
Use the valueFrom option instead of the value option to use the secrets functionality. For example:
env:
  - name: SIGSCI_ACCESSKEYID
    valueFrom:
      secretKeyRef:
        # Update my-site-name-here to the correct site (workspace) name or similar identifier
        name: sigsci.my-site-name-here
        key: accesskeyid
  - name: SIGSCI_SECRETACCESSKEY
    valueFrom:
      secretKeyRef:
        # Update my-site-name-here to the correct site (workspace) name or similar identifier
        name: sigsci.my-site-name-here
        key: secretaccesskey
The secrets functionality keeps secrets in various stores in Kubernetes. This guide uses the generic secret store in its examples; however, any equivalent store can be used. Agent secrets can be added to the generic secret store using YAML similar to the following example:
apiVersion: v1
kind: Secret
metadata:
  name: sigsci.my-site-name-here
stringData:
  accesskeyid: 12345678-abcd-1234-abcd-1234567890ab
  secretaccesskey: abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD
This can also be created from the command line with kubectl, as in the following example:
$ kubectl create secret generic sigsci.my-site-name-here \
    --from-literal=accesskeyid=12345678-abcd-1234-abcd-1234567890ab \
    --from-literal=secretaccesskey=abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD
Additional information about the Kubernetes secrets functionality can be found in the Kubernetes documentation.
Agent temporary volume
For added security, we recommend running the sigsci-agent container with its root filesystem mounted read-only. However, the agent still needs to write some temporary files, such as the socket file for RPC communication and periodically updated files such as geolocation data.

To accomplish this with a read-only root filesystem, a writeable volume must be mounted. This writeable volume can also be shared to expose the RPC socket file to other containers in the same pod.
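In the agent container spec, the read-only root filesystem is enabled with a securityContext, the same setting used in the full examples later in this guide:

securityContext:
  # The sigsci-agent container should run with its root filesystem read only
  readOnlyRootFilesystem: true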
The recommended way of creating a writeable volume is to use the built-in emptyDir volume type. This is typically configured in the volumes section of a deployment, as shown in the following example:
volumes:
  - name: sigsci-tmp
    emptyDir: {}
Containers will then mount this volume at /sigsci/tmp:
volumeMounts:
  - name: sigsci-tmp
    mountPath: /sigsci/tmp
The default in the official agent container image is to have the temporary volume mounted at /sigsci/tmp. If this needs to be moved for the agent container, then the following agent configuration options should also be changed from their defaults to match the new mount location:

- rpc-address defaults to /sigsci/tmp/sigsci.sock
- shared-cache-dir defaults to /sigsci/tmp/cache
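For example, a hedged sketch of moving the temporary volume to a hypothetical /data/sigsci-tmp mount point; following the environment variable convention described earlier, rpc-address and shared-cache-dir map to SIGSCI_RPC_ADDRESS and SIGSCI_SHARED_CACHE_DIR:

- name: sigsci-agent
  env:
  # Keep these in sync with the mountPath below (/data/sigsci-tmp is illustrative)
  - name: SIGSCI_RPC_ADDRESS
    value: /data/sigsci-tmp/sigsci.sock
  - name: SIGSCI_SHARED_CACHE_DIR
    value: /data/sigsci-tmp/cache
  volumeMounts:
  - name: sigsci-tmp
    mountPath: /data/sigsci-tmp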
Integrating the Next-Gen WAF agent using external authorization
As of Istio v1.9, support has been added for setting up an authorization policy that delegates access control to an external authorization system.
The snippets below follow Istio's example, replacing the example ext-authz service with the Next-Gen WAF agent. Refer to the Istio documentation for the initial namespace and test workloads, as those are referenced in the snippets below. All files are applied to the 'foo' namespace unless otherwise indicated.
Deploy the external authorizer
This assumes the secrets have already been applied.
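For example, the secret referenced by the deployment below could be created in the foo namespace like this (the key values are placeholders):

$ kubectl create secret generic sigsci-agent-accesskey -n foo \
    --from-literal=accesskeyid=12345678-abcd-1234-abcd-1234567890ab \
    --from-literal=secretaccesskey=abcdefg_hijklmn_opqrstuvwxy_z0123456789ABCD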
apiVersion: v1
kind: Service
metadata:
  name: sigsci-agent
  labels:
    app: sigsci-agent
spec:
  ports:
  - name: grpc
    port: 9999
    targetPort: 9999
  selector:
    app: sigsci-agent
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sigsci-agent
spec:
  replicas: 1
  selector:
    matchLabels:
      app: sigsci-agent
  template:
    metadata:
      labels:
        app: sigsci-agent
    spec:
      containers:
      - name: sigsci-agent
        image: signalsciences/sigsci-agent:latest
        imagePullPolicy: IfNotPresent
        # Configure the agent to use Envoy gRPC on port 9999
        env:
        - name: SIGSCI_ACCESSKEYID
          valueFrom:
            secretKeyRef:
              # This secret needs to be added (see docs on sigsci secrets)
              name: sigsci-agent-accesskey
              key: accesskeyid
        - name: SIGSCI_SECRETACCESSKEY
          valueFrom:
            secretKeyRef:
              # This secret needs to be added (see docs on sigsci secrets)
              name: sigsci-agent-accesskey
              key: secretaccesskey
        # Configure Envoy to expect response data (if using a gRPC access log config for Envoy)
        - name: SIGSCI_ENVOY_EXPECT_RESPONSE_DATA
          value: "1"
        - name: SIGSCI_ENVOY_GRPC_ADDRESS
          value: :9999
        ports:
        - containerPort: 9999
        securityContext:
          # The sigsci-agent container should run with its root filesystem read only
          readOnlyRootFilesystem: true
---
Verify the agent is running:
$ kubectl logs "$(kubectl get pod -l app=sigsci-agent -n foo -o jsonpath={.items..metadata.name})" -n foo -c sigsci-agent
Define the external authorizer
Edit the mesh config with the following command and add the extension provider definitions.
$ kubectl edit configmap istio -n istio-system
data:
  mesh: |-
    # Add the following content to define the external authorizers.
    extensionProviders:
    - name: "sigsci-agent-ext-authz"
      envoyExtAuthzGrpc:
        service: "sigsci-agent.foo.svc.cluster.local"
        port: "9999"
        timeout: 0.2s
        failOpen: true
        includeRequestBodyInCheck:
          packAsBytes: true
          # use `allowPartialMessage: false` if you want to inspect larger payloads
          allowPartialMessage: true
          maxRequestBytes: 8192
    - name: "sigsci-agent-access-log"
      envoyHttpAls:
        service: "sigsci-agent.foo.svc.cluster.local"
        port: "9999"
        additionalRequestHeadersToLog:
        - "x-sigsci-request-id"
        - "x-sigsci-waf-response"
        - "accept"
        - "content-type"
        - "content-length"
        additionalResponseHeadersToLog:
        - "date"
        - "server"
        - "content-type"
        - "content-length"
Enable with external authorization
Enable the external authorization and apply logging.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: ext-authz
spec:
  selector:
    matchLabels:
      app: httpbin
  action: CUSTOM
  provider:
    # The provider name must match the extension provider defined in the mesh config.
    name: sigsci-agent-ext-authz
  rules:
  # The rules specify when to trigger the external authorizer.
  - to:
    - operation:
        paths: ["/headers"]
# kubectl apply -f logging.yaml
apiVersion: telemetry.istio.io/v1alpha1
kind: Telemetry
metadata:
  name: mesh-default
  namespace: istio-system
spec:
  accessLogging:
  - providers:
    - name: sigsci-agent-access-log
# In another terminal, curl the httpbin app:
$ kubectl exec "$(kubectl get pod -l app=sleep -n foo -o jsonpath={.items..metadata.name})" -c sleep -n foo -- curl -v "http://httpbin.foo:8000/headers" -s
# Tail the agent logs:
$ kubectl logs -f "$(kubectl get pod -l app=sigsci-agent -n foo -o jsonpath={.items..metadata.name})" -n foo -c sigsci-agent
Integrating the Next-Gen WAF agent using EnvoyFilter
Istio uses the Envoy proxy under the hood. Because of this, Istio can use the Next-Gen WAF agent in gRPC mode in the same way as a generic Envoy install. The method of installing and configuring the Next-Gen WAF agent is similar to a generic Envoy install, except the Envoy proxy is automatically deployed as a sidecar. Envoy is then configured using Istio's EnvoyFilter. Full Istio integration is only possible in Istio v1.3 or later due to the required extensions to EnvoyFilter.
To add Next-Gen WAF support to an Istio based application deployment, you will need to:
- Add the sigsci-agent container to the pod, configured in Envoy gRPC listener mode.
- Add an emptyDir: {} volume as a place for the sigsci-agent to write temporary data.
- Add an Istio EnvoyFilter for the app to allow the required Envoy configuration to be injected into the generated istio-proxy config.
Add the Next-Gen WAF agent as an Envoy gRPC service
...
  containers:
  # Example helloworld app running on port 8080 without sigsci configured
  - name: helloworld
    image: signalsciences/example-helloworld:latest
    imagePullPolicy: IfNotPresent
    args:
    # Address for the app to listen on
    - localhost:8080
    ports:
    - containerPort: 8080
  # Next-Gen WAF agent running in Envoy gRPC mode (SIGSCI_ENVOY_GRPC_ADDRESS configured)
  - name: sigsci-agent
    image: signalsciences/sigsci-agent:latest
    imagePullPolicy: IfNotPresent
    # Configure the agent to use Envoy gRPC on port 9999
    env:
    - name: SIGSCI_ACCESSKEYID
      valueFrom:
        secretKeyRef:
          # This secret needs to be added (see docs on sigsci secrets)
          name: sigsci.my-site-name-here
          key: accesskeyid
    - name: SIGSCI_SECRETACCESSKEY
      valueFrom:
        secretKeyRef:
          # This secret needs to be added (see docs on sigsci secrets)
          name: sigsci.my-site-name-here
          key: secretaccesskey
    # Configure Envoy to expect response data (if using a gRPC access log config for Envoy)
    - name: SIGSCI_ENVOY_EXPECT_RESPONSE_DATA
      value: "1"
    # Configure the Envoy gRPC listener address on any unused port
    - name: SIGSCI_ENVOY_GRPC_ADDRESS
      value: localhost:9999
    ports:
    - containerPort: 9999
    securityContext:
      # The sigsci-agent container should run with its root filesystem read only
      readOnlyRootFilesystem: true
Adding the Next-Gen WAF agent temp volume definition to the deployment
The agent temp volume needs to be defined for use by the other containers in the pod, using the built-in emptyDir: {} volume type:
...
  volumes:
  # Define a volume where sigsci-agent will write temp data and share the socket file,
  # which is required when the root filesystem is mounted read only
  - name: sigsci-tmp
    emptyDir: {}
Adding the Istio EnvoyFilter object to inject the required Envoy config into the Istio proxy
Istio's EnvoyFilter object is a feature-rich way of customizing the Envoy configuration for the istio-proxy.
You will need to set the EnvoyFilter metadata.name field and the spec.workloadSelector.labels.app field to the application name in the example below. Additional Envoy configuration options are outlined in the Envoy install guide. These sections are highlighted with comments in the example YAML.
Example example-helloworld_sigsci-envoyfilter.yaml:
# The following adds the required Envoy configuration into the istio-proxy configuration
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  # This needs to be adjusted to be the app name protected by sigsci
  name: helloworld
spec:
  workloadSelector:
    labels:
      # This needs to be adjusted to be the app name protected by sigsci
      app: helloworld

  # Patch the Envoy configuration, adding in the required sigsci config
  configPatches:

  # Adds the ext_authz HTTP filter for the sigsci-agent ext_authz API
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        name: virtualInbound
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: INSERT_BEFORE
      value:
        # Configure the envoy.ext_authz here:
        name: envoy.filters.http.ext_authz
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.http.ext_authz.v3.ExtAuthz"
          transport_api_version: "V3"
          grpc_service:
            # NOTE: *SHOULD* use envoy_grpc as ext_authz can use dynamic clusters and has connection pooling
            envoy_grpc:
              cluster_name: sigsci-agent-grpc
            timeout: 0.2s
          failure_mode_allow: true
          with_request_body:
            max_request_bytes: 8192
            allow_partial_message: true

  # Adds the access_log entry for the sigsci-agent http_grpc_access_log API
  - applyTo: NETWORK_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        name: virtualInbound
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
    patch:
      operation: MERGE
      value:
        name: "envoy.http_connection_manager"
        typed_config:
          "@type": "type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager"
          access_log:
            # Configure the envoy.http_grpc_access_log here:
            - name: "envoy.http_grpc_access_log"
              typed_config:
                "@type": "type.googleapis.com/envoy.extensions.access_loggers.grpc.v3.HttpGrpcAccessLogConfig"
                common_config:
                  log_name: "sigsci-agent-grpc"
                  transport_api_version: "V3"
                  grpc_service:
                    # NOTE: *MUST* use google_grpc as envoy_grpc cannot handle a dynamic cluster for ALS (yet)
                    google_grpc:
                      # The address *MUST* be 127.0.0.1 so that communication is intra-pod
                      # Configure the sigsci-agent port number here:
                      target_uri: 127.0.0.1:9999
                      stat_prefix: "sigsci-agent"
                    timeout: 0.2s
                additional_request_headers_to_log:
                  # These are required:
                  - "x-sigsci-request-id"
                  - "x-sigsci-waf-response"
                  # These are additional headers you want recorded:
                  - "accept"
                  - "content-type"
                  - "content-length"
                additional_response_headers_to_log:
                  # These are additional headers you want recorded:
                  - "date"
                  - "server"
                  - "content-type"
                  - "content-length"

  # Adds a dynamic cluster for the sigsci-agent via CDS for sigsci-agent ext_authz API
  - applyTo: CLUSTER
    patch:
      operation: ADD
      value:
        name: sigsci-agent-grpc
        type: STRICT_DNS
        connect_timeout: 0.5s
        http2_protocol_options: {}
        load_assignment:
          cluster_name: sigsci-agent-grpc
          endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    # The address *MUST* be 127.0.0.1 so that communication is intra-pod
                    address: 127.0.0.1
                    # Configure the agent port here:
                    port_value: 9999
The application can then be deployed as you normally would with Istio. For example:
$ istioctl kube-inject -f example-helloworld-sigsci.yaml | kubectl apply -f -
service/helloworld created
deployment.apps/helloworld created
$ kubectl apply -f example-helloworld-sigsci_envoyfilter.yaml
envoyfilter.networking.istio.io/helloworld created
$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
helloworld-7954bb57bc-pfr22   3/3     Running   2          33s
$ kubectl get pod helloworld-7954bb57bc-pfr22 -o jsonpath='{.spec.containers[*].name}'
helloworld sigsci-agent istio-proxy
$ kubectl logs helloworld-7954bb57bc-pfr22 sigsci-agent | head
2019/10/01 21:04:57.540047 Signal Sciences Agent 4.39.0 starting as user sigsci with PID 1, Max open files=1048576, Max data size=unlimited, Max address space=unlimited, Max stack size=8388608
2019/10/01 21:04:57.541987 =====================================================
2019/10/01 21:04:57.542028 Agent: helloworld-7954bb57bc-pfr22
2019/10/01 21:04:57.542034 System: alpine 3.9.4 (linux 4.9.184-linuxkit)
2019/10/01 21:04:57.542173 Memory: 1.672G / 3.854G RAM available
2019/10/01 21:04:57.542187 CPU: 6 MaxProcs / 12 CPU cores available
2019/10/01 21:04:57.542257 =====================================================
2019/10/01 21:04:57.630755 Envoy gRPC server on 127.0.0.1:9999 starting
Note that there are three containers running in the pod: app=helloworld, sigsci-agent, and the istio-proxy.