
Agent Scaling and Running as a Service

Updated Jun 20, 2021

Scaling the Agent

If the sigsci-agent is installed as a sidecar in a pod, then the agent will scale however you have chosen to scale the application in that pod. This is the recommended method of installing the agent, as it does not require a separate scaling strategy for the agent. However, for some installations the agent may need to be scaled at a different rate than the application. In this case you may consider installing the agent as a service used by the application pods. Doing so, however, has some limitations and challenges of its own.

Limitations

  • The sigsci-agent can only be configured for a single site, so an agent service can only send data to a single site. All agents behind the service share the same configuration.
  • The sigsci-agent keeps request state while processing responses. The agent that processed the request data must also process the corresponding response data, so load balancing agents requires session affinity, which makes the service more complex to scale.
  • Running the sigsci-agent as a service means configuring the communication channel over TCP instead of a Unix domain socket, which is slightly less efficient.

Installing the Signal Sciences Agent as a Service

The sigsci-agent can be installed as a service, but care needs to be taken when configuring the service due to the limitations above. The service will be tied to a single site, so if you have multiple sites, name each service after its Signal Sciences site. To scale the service, it must be configured so that the same agent processes both the request and response data for a transaction. To do this, configure the service to use affinity based on the pod sending data to the agent, by setting session affinity to Client IP.

Example service tied to a site named “my-site-name” using Client IP affinity:

apiVersion: v1
kind: Service
metadata:
  name: sigsci-agent-my-site-name
  labels:
    app: sigsci-agent-my-site-name
spec:
  ports:
  # Port names and numbers are arbitrary
  #  737 is the default RPC port
  #  8000 may be more appropriate for gRPC used with envoy
  - name: rpc
    port: 737
    targetPort: 737
  selector:
    app: sigsci-agent-my-site-name
  sessionAffinity: ClientIP
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 60

The service would then be backed by a deployment with any number of replicas. The sigsci-agent container would be configured as in a typical sidecar install, but would listen on TCP instead of a shared Unix domain socket. This is done by setting the SIGSCI_RPC_ADDRESS configuration option. Note that when using Envoy, you would set SIGSCI_ENVOY_GRPC_ADDRESS instead.

Example deployment corresponding with the service above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: sigsci-agent-my-site-name
  labels:
    app: sigsci-agent-my-site-name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: sigsci-agent-my-site-name
  template:
    metadata:
      labels:
        app: sigsci-agent-my-site-name
    spec:
      containers:
      - name: sigsci-agent
        image: signalsciences/sigsci-agent:latest
        imagePullPolicy: IfNotPresent
        env:
        - name: SIGSCI_ACCESSKEYID
          valueFrom:
            secretKeyRef:
              name: sigsci.my-site-name
              key: accesskeyid
        - name: SIGSCI_SECRETACCESSKEY
          valueFrom:
            secretKeyRef:
              name: sigsci.my-site-name
              key: secretaccesskey
        # Use RPC via TCP instead of default Unix Domain Socket
        - name: SIGSCI_RPC_ADDRESS
          value: "0.0.0.0:737"
        # Use all available resources.limits.cpu cores
        - name: SIGSCI_MAX_PROCS
          value: "100%"
        securityContext:
          readOnlyRootFilesystem: true
        volumeMounts:
        - name: sigsci-tmp
          mountPath: /sigsci/tmp
        # Set CPU resource limits (required for autoscaling)
        resources:
          limits:
            cpu: 4
          requests:
            cpu: 1
      volumes:
      - name: sigsci-tmp
        emptyDir: {}

The above example deploys two sigsci-agent pods for the sigsci-agent-my-site-name service to use with the my-site-name Signal Sciences site. Each agent is limited to 4 CPU cores and requests at least one core.

Each application pod can then have its module configured to send to a sigsci-agent at the service name and port defined by the service. In this example, the module would be configured to send to host sigsci-agent-my-site-name on port 737. These values are defined by the service and must match the agent's SIGSCI_RPC_ADDRESS (or SIGSCI_ENVOY_GRPC_ADDRESS if Envoy is being used).
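How the module is pointed at the agent depends on which module is in use, so consult your module's documentation for its actual settings. As an illustrative sketch only, an application container might pass the service address via environment variables (the SIGSCI_AGENT_HOST and SIGSCI_AGENT_PORT names below are hypothetical, not real module settings):

# Fragment of an application Deployment's pod spec (illustrative only;
# the env var names here are hypothetical -- the real setting names
# depend on the module you are using)
containers:
- name: my-app
  image: my-app:latest
  env:
  - name: SIGSCI_AGENT_HOST   # hypothetical name
    value: "sigsci-agent-my-site-name"
  - name: SIGSCI_AGENT_PORT   # hypothetical name
    value: "737"

Whatever the mechanism, the values must resolve to the Service name and port defined above.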

As for scaling, each pod that connects to this service will be assigned a sigsci-agent running in the service, and affinity will be locked to that agent. If that agent is then updated or otherwise removed from the service (such as during a scale-down event), the application pod will be reassigned to another agent. Because agents are assigned to pods with affinity, the number of active agents will never exceed the number of pods connecting to the service. This should be considered when determining the number of replicas and/or autoscaling parameters.

The deployment can be autoscaled. As an example, it is possible to autoscale with a Horizontal Pod Autoscaler via kubectl autoscale. In the example below the deployment will utilize a minimum of 2 agents and be scaled up to 6 agents whenever the overall CPU usage reaches 60%. Note again, however, that all of these agents will only be handling a single Signal Sciences site.

kubectl autoscale deployment sigsci-agent-my-site-name --cpu-percent=60 --min=2 --max=6
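Equivalently, the autoscaler can be declared as a manifest rather than created imperatively. A sketch using the autoscaling/v1 API, matching the deployment name above:

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: sigsci-agent-my-site-name
spec:
  # Scale the agent deployment defined earlier
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: sigsci-agent-my-site-name
  minReplicas: 2
  maxReplicas: 6
  # Target average CPU utilization across agent pods
  targetCPUUtilizationPercentage: 60

Note that CPU-based autoscaling requires the CPU resource requests set in the deployment above.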

The status of the Horizontal Pod Autoscaler can be viewed via the kubectl get hpa command:

$ kubectl get hpa
NAME                      REFERENCE                            TARGETS   MINPODS   MAXPODS   REPLICAS   AGE
sigsci-agent-my-site-name Deployment/sigsci-agent-my-site-name 42%/60%   2         6         2          53m42s 

There are some limitations to this type of scaling, however. When scaling (whether manually setting the replica count or autoscaling), the sigsci-agent pod count for the service will change. When an agent pod is added, new connections to the service may be assigned affinity to the new pod, but application pods whose affinity is already set to a specific agent pod will not be rebalanced unless the affinity timeout (sessionAffinityConfig.clientIP.timeoutSeconds) expires. Because of this, scaling works best when the application pods are also scaled, so that new application pods get balanced onto new agent pods. Similarly, when an agent pod is removed during a scale-down, the application pods assigned to it will be reassigned to another agent and their affinity reset; when scaling back up, these will not be rebalanced. If this occurs often, consider reducing the affinity timeout (sessionAffinityConfig.clientIP.timeoutSeconds) to allow rebalancing during idle periods.
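For example, to let idle application pods be rebalanced onto different agent pods sooner, the affinity timeout in the Service spec shown earlier could be lowered (60 seconds was used in that example; the Kubernetes default is much longer):

# Fragment of the Service spec: a shorter ClientIP affinity timeout lets
# idle application pods be rebalanced to different agent pods sooner
sessionAffinity: ClientIP
sessionAffinityConfig:
  clientIP:
    timeoutSeconds: 30   # example value; tune to your traffic's idle periods

The tradeoff is that a pod idle longer than this timeout may land on a different agent for its next request, so the value should exceed typical request/response processing gaps.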