Agent scaling and running as a service

IMPORTANT

This guide only applies to Next-Gen WAF customers with access to the Next-Gen WAF control panel. If you have access to the Next-Gen WAF product in the Fastly control panel, you can only deploy the Next-Gen WAF with the Edge WAF deployment method.

If the sigsci-agent is installed as a sidecar in a pod, the agent scales however you have chosen to scale the application in that pod. This is the recommended method of installing the agent because the agent scales with the application and does not require a separate scaling mechanism. For some installations, however, the agent may need to scale at a different rate than the application. In these cases you can install the agent as a service used by the application pods, subject to the limitations described below.
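
For reference, a sidecar install runs the agent container in the same pod as the application, typically sharing a temp volume for the Unix domain socket. A minimal sketch of that pattern (the application container and image are hypothetical, agent credentials are omitted for brevity, and the socket path and module settings should follow the sidecar install guide):

    spec:
      containers:
        - name: my-app                          # hypothetical application container
          image: my-app:latest
          volumeMounts:
            - name: sigsci-tmp                  # shared volume for the agent socket
              mountPath: /sigsci/tmp
        - name: sigsci-agent                    # agent sidecar scales with the pod
          image: signalsciences/sigsci-agent:latest
          # credentials env omitted for brevity
          volumeMounts:
            - name: sigsci-tmp
              mountPath: /sigsci/tmp
      volumes:
        - name: sigsci-tmp
          emptyDir: {}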

Limitations

  • The sigsci-agent can only be configured for a single site (also known as a workspace). This means that an agent service can only send to a single site (workspace), and all of the agents in the service will have the same configuration.
  • The sigsci-agent keeps some request state while processing responses. The agent that processed the request data must therefore be the same agent that processes the response data, so load-balanced agents require affinity, which makes the service more complex to scale.
  • Using the sigsci-agent as a service means configuring the communication channel over TCP instead of a Unix domain socket, which is slightly less efficient.

Installing the Next-Gen WAF agent as a service

The sigsci-agent can be installed as a service, but care must be taken when configuring the service due to the limitations above. The service will be tied to a single site (workspace). If you will have multiple sites (workspaces), name each service based on the site (workspace) name. To scale the service, it must be configured so that the same agent processes both the request and response data for a transaction. To do this, configure the service to use affinity based on the pod that is sending data to the agent by setting the affinity to use the Client IP.

Below is an example service tied to a site (workspace) named my-site-name using Client IP affinity:

apiVersion: v1
kind: Service
metadata:
  name: sigsci-agent-my-site-name
  labels:
    app: sigsci-agent-my-site-name
spec:
  ports:
    # Port names and numbers are arbitrary
    # 737 is the default RPC port
    # 8000 may be more appropriate for gRPC used with Envoy
    - name: rpc
      port: 737
      targetPort: 737
  selector:
    app: sigsci-agent-my-site-name
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 60

The service must then be backed by a deployment with any number of replicas. The sigsci-agent container is configured as in a typical sidecar install, but must use TCP instead of a shared Unix domain socket. This is done by setting the SIGSCI_RPC_ADDRESS configuration option. Note that if you are using Envoy, you must set SIGSCI_ENVOY_GRPC_ADDRESS instead.

Example deployment corresponding to the service above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sigsci-agent-my-site-name
  labels:
    app: sigsci-agent-my-site-name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sigsci-agent-my-site-name
  template:
    metadata:
      labels:
        app: sigsci-agent-my-site-name
    spec:
      containers:
        - name: sigsci-agent
          image: signalsciences/sigsci-agent:latest
          imagePullPolicy: IfNotPresent
          env:
            - name: SIGSCI_ACCESSKEYID
              valueFrom:
                secretKeyRef:
                  name: sigsci.my-site-name
                  key: accesskeyid
            - name: SIGSCI_SECRETACCESSKEY
              valueFrom:
                secretKeyRef:
                  name: sigsci.my-site-name
                  key: secretaccesskey
            # Use RPC via TCP instead of default Unix Domain Socket
            - name: SIGSCI_RPC_ADDRESS
              value: "0.0.0.0:737"
            # Use all available resources.limits.cpu cores
            - name: SIGSCI_MAX_PROCS
              value: "100%"
          securityContext:
            readOnlyRootFilesystem: true
          volumeMounts:
            - name: sigsci-tmp
              mountPath: /sigsci/tmp
          # Set CPU resource limits (required for autoscaling)
          resources:
            limits:
              cpu: 4
            requests:
              cpu: 1
      volumes:
        - name: sigsci-tmp
          emptyDir: {}

The above example deploys two sigsci-agent pods for the sigsci-agent-my-site-name service to use for the my-site-name site (workspace). Each agent is limited to 4 CPU cores and requests at least 1 core.
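
If Envoy is used instead of one of the RPC-based modules, the same deployment would set SIGSCI_ENVOY_GRPC_ADDRESS in place of SIGSCI_RPC_ADDRESS. A minimal sketch of the changed env entry (port 8000 is only an assumption matching the comment in the service example; the service's port and targetPort must match whatever value you choose):

            # Listen for Envoy gRPC over TCP instead of the default Unix domain socket
            # (port 8000 is an assumed value; keep it in sync with the Service)
            - name: SIGSCI_ENVOY_GRPC_ADDRESS
              value: "0.0.0.0:8000"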

Each application pod must then have its module configured to send to a sigsci-agent at the host name and port defined by the service. In this example, the module would be configured to send to host sigsci-agent-my-site-name and port 737. These values are defined by the service as well as the SIGSCI_RPC_ADDRESS configuration option (or SIGSCI_ENVOY_GRPC_ADDRESS if Envoy is being used).
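
How the agent host and port are supplied to the module varies by module, so one common approach is to pass them into the application container as environment variables and reference them from the module configuration. A minimal sketch under that assumption (the variable names SIGSCI_AGENT_HOST and SIGSCI_AGENT_PORT are purely illustrative, not settings the module reads on its own):

      containers:
        - name: my-app                         # hypothetical application container
          image: my-app:latest                 # hypothetical image
          env:
            # Point the module at the agent service rather than a local socket;
            # the exact module setting these values feed depends on the module in use.
            - name: SIGSCI_AGENT_HOST
              value: "sigsci-agent-my-site-name"
            - name: SIGSCI_AGENT_PORT
              value: "737"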

As for scaling, each pod that connects to this service will be assigned a sigsci-agent running in the service, and affinity will be locked to that agent. If that agent is later updated or otherwise removed from the service (such as during a scale-down event), the client application pod will be assigned to another agent. Because of how agents are assigned to pods with affinity, the maximum number of active agents will never exceed the number of pods connecting to the service. This should be considered when determining the number of replicas and autoscaling parameters.

The deployment can be autoscaled. For example, it is possible to autoscale with a Horizontal Pod Autoscaler via kubectl autoscale. In the example below, the deployment will run a minimum of 2 agents and scale up to a maximum of 6 agents when overall CPU usage exceeds 60%. Note again, however, that all of these agents will only handle a single site (workspace).

$ kubectl autoscale deployment sigsci-agent-my-site-name --cpu-percent=60 --min=2 --max=6
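
The same policy can also be expressed declaratively. A minimal sketch of an equivalent autoscaling/v2 HorizontalPodAutoscaler manifest, assuming the deployment name from the example above:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: sigsci-agent-my-site-name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sigsci-agent-my-site-name
  minReplicas: 2
  maxReplicas: 6
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          # Scale out when average CPU utilization exceeds 60%
          averageUtilization: 60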

The status of the Horizontal Pod Autoscaler can be viewed via the kubectl get hpa command:

$ kubectl get hpa

That command produces output similar to the following:

NAME                        REFERENCE                               TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
sigsci-agent-my-site-name   Deployment/sigsci-agent-my-site-name    42%/60%   2         6         2          53m42s

There are some limitations to this type of scaling. When scaling (by manually setting the replica count or by autoscaling), the number of sigsci-agent pods behind the service changes. When an agent pod is added, new connections to the service may be assigned affinity to the new pod, but application pods that already have affinity to a specific agent pod will not be rebalanced unless the affinity timeout (sessionAffinityConfig.clientIP.timeoutSeconds) is reached. Because of this, scaling works best when the application pods are also scaled, so that new application pods are balanced onto new agent pods. Similarly, when an agent pod is removed from the service due to scaling down, the application pods that were assigned to it are reassigned to another agent and affinity is set again; when scaling back up, these will not be rebalanced. If this happens often, consider reducing the affinity timeout (sessionAffinityConfig.clientIP.timeoutSeconds) to allow rebalancing during idle periods.
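
For example, the service could use a shorter affinity timeout (the 10-second value below is only an illustrative assumption; choose a value appropriate for your traffic patterns):

  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      # Allow affinity to expire after 10 seconds of inactivity so pods can rebalance
      timeoutSeconds: 10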
