
Shielding

Fastly's shielding feature allows you to designate a specific Point of Presence (POP) as a shield node for your origins. Once designated, all requests to your origin go through that datacenter, increasing the likelihood of a cache HIT for a given resource. If a different POP doesn't have a specific object, it queries the shield (provided it's not down for maintenance) instead of your origins.

How shielding works

When a user requests brand new content from a customer's server and that content has never been cached by any Fastly POP, this is what happens to their request when shielding is and is not enabled.

Without shielding enabled

Without shielding enabled, when the first request for content arrives at POP A, the POP does not have the content cached. It passes the request along to a customer's origin server to get the content. Once the content is retrieved, POP A caches it and sends it on to the user.

caching behavior on the first request, without shielding enabled

When a second request for that same content arrives at POP A, the content is already cached. No request goes to the customer's origin server. It's merely sent from the cached copy.

caching behavior on subsequent requests, without shielding enabled

If the second request were to arrive at POP B instead of POP A, however, the request would once again be passed along to the customer's origin server. It would then be cached and passed back to the end user, just as POP A did when the content was first requested.

With shielding enabled

With shielding enabled, when the first request for content arrives at POP A, that POP does not have the content cached. It passes the request along to the shield POP, which also doesn't have the content cached. The shield POP passes the request along to the customer's origin server, caches the content that's retrieved, and passes it along to POP A. POP A then passes the content along to the user.

caching behavior on the first request, with shielding enabled

When a second request for that same content arrives at POP A, the content is already cached, so no request goes to the shield POP or the customer's origin server.

caching behavior on subsequent requests, with shielding enabled

If the second request were to arrive at POP B instead of POP A, however, the request would be passed along to the shield POP. That shield POP already has a cached copy from the first request to POP A. No future requests for the content would be passed along to the customer's origin server until the shield POP's cached copy of it expires.

Enabling shielding

Enable shielding with these steps:

  1. Read the Caveats of shielding section below for details about the implications of and potential pitfalls involved with enabling shielding for your organization.
  2. Log in to the Fastly web interface and click the Configure link.
  3. From the service menu, select the appropriate service.
  4. Click the Edit configuration button and then select Clone active. The service version page appears.
  5. Click the Origins tab. The Origins page appears.
  6. Click the name of the host you want to edit. The Edit a host page appears.

    the Edit Backend window

  7. From the Shielding menu, select the datacenter to use as your shield.

  8. Click Update to save your changes.

  9. If you have changed the default host or have added a header to change the host, add the modified hostname to your list of domains. Do this by clicking the Domains tab and checking to make sure the host in question appears on the page. If it isn't included, add it by clicking the Create domain button. (A sketch of this kind of host override appears after these steps.)

    the Domains area

    With shielding enabled, queries from other POPs appear as incoming requests to the shield. If the shield doesn't know about the modified hostname, it doesn't know which service to match the request to. Including the origin's hostname in the domain list eliminates this concern.

  10. Click the Activate button to deploy your configuration changes.
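
If you change the host header as mentioned in step 9, the override in custom VCL typically looks something like the sketch below. The hostname origin.example.com is hypothetical; with shielding enabled, whatever hostname you set here must also appear in your domain list so the shield can match incoming edge requests to the correct service.

sub vcl_recv {
  # Hypothetical host override. With shielding enabled, origin.example.com
  # must also be listed as a domain on this service; otherwise the shield
  # cannot match requests arriving from other POPs to the service.
  set req.http.Host = "origin.example.com";
}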

Caveats of shielding

Shielding not only impacts traffic and hit ratios; it also affects configuration and performance. When you configure shielding, be aware of the following caveats.

Traffic and hit ratio caveats

Inbound traffic to a shield will be billed as regular traffic, including requests made to populate remote POPs. Enabling shielding will incur some additional Fastly bandwidth charges, but these will be offset by savings in your origin bandwidth (and reduced origin server load). Pass-through requests will not go directly to the origin; they will go through the shield first.

The global HIT ratio may appear lower than it actually is. Shielding is not taken into account when calculating the global hit ratio: if an edge node doesn't have an object in its cache, it reports a miss. A local MISS with a shield HIT is reported as both a miss and a hit in the statistics, even though there is no call to the backend; it also results in one request from the edge node to the shield. A local MISS with a shield MISS results in two requests, because the resource is subsequently fetched from your origin. For more information about caching with shielding, see our article Understanding Cache HIT and MISS with Shielding Services.
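
If you want to observe this behavior directly, one option is to expose the cache state at each hop with a debugging header. The following is a minimal sketch, not part of Fastly's default configuration, and the X-Cache-Debug header name is hypothetical. Because vcl_deliver runs on the shield first and then on the edge, the header accumulates one entry per POP, so an edge MISS that was answered by a shield HIT shows up as two entries.

sub vcl_deliver {
  # Append this POP's datacenter code and cache state to a debugging header.
  # With shielding enabled, the shield's entry appears first, followed by the
  # edge's entry (for example, a shield HIT followed by an edge MISS).
  if (resp.http.X-Cache-Debug) {
    set resp.http.X-Cache-Debug = resp.http.X-Cache-Debug ", " server.datacenter " " fastly_info.state;
  } else {
    set resp.http.X-Cache-Debug = server.datacenter " " fastly_info.state;
  }
}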

Configuration caveats

You will be unable to manually define backends using VCL. Shielding depends on backends being defined as actual objects through the web interface or API. Other custom VCL will work just fine.
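
For clarity, the kind of declaration this caveat refers to is a backend defined directly in custom VCL, as in the hypothetical sketch below. With shielding enabled, this origin would need to be created as a backend object via the web interface or API instead.

# Not compatible with shielding: a backend declared manually in custom VCL.
# The backend name and host shown here are hypothetical.
backend manually_defined_origin {
  .host = "origin.example.com";
  .port = "443";
}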

You can only set one shield if automatic load balancing is selected via the web interface. To use multiple shields with automatic load balancing enabled, you must use custom VCL.

Enabling sticky load balancing and shielding at the same time requires custom VCL. Sticky load balancers use client.identity to choose where to send the session. By default, client.identity is based on the client's IP address. That's fine under normal circumstances, but with shielding enabled, the IP the shield sees is the edge POP's IP, not the client's IP. Thus, to enable shielding and a custom sticky load balancer, you want to use the following:

# On the shield, the connection comes from the edge POP rather than the end
# user, so fall back to the real client IP forwarded in Fastly-Client-IP.
if (req.http.Fastly-FF) {
  set client.identity = req.http.Fastly-Client-IP;
}

You'll need to use caution when changing the host header before it reaches the shield. Fastly matches a request to a service using the host header. If the host header doesn't match a domain within the service, a 500 error is expected. Purging conflicts can also occur if the host header is changed to a domain that exists in a different service.

For example, say Service A has the hostname a.example.com and Service B has the hostname b.example.com. If Service B changes the host header to a.example.com, the edge will think the request is for Service B, but the shield will think the request is for Service A.

If you then purge the object from Service B but not from Service A, the shield will continue to serve the old copy to the edge, because the purge only applied to Service B. You would need to purge the object from both Service A and Service B, which opens the door for confusion and error.
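
As a sketch of the pattern that creates this conflict (using the hypothetical hostnames from the example above), the problematic override in Service B's VCL would look something like this:

sub vcl_recv {
  # In Service B's configuration: rewriting the Host header to a domain owned
  # by Service A. The edge matches the request to Service B, but the shield
  # matches the rewritten host to Service A, which is what causes the purging
  # conflict described above.
  set req.http.Host = "a.example.com";
}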

VCL gets executed twice: once on the shield node and again on the edge node. Changes to beresp and resp can affect the caching of a URL on the shield and edge. Consider the following examples.

Say you want Fastly to cache an object for one hour (3600 seconds) and the browser to cache it for ten seconds. The origin sends Cache-Control: max-age=3600. You unset beresp.http.Cache-Control and then set Cache-Control to max-age=10. With shielding enabled, however, the result will not be what you expect: the object will be cached with max-age=3600 on the shield, but it will reach the edge with max-age=10, so the edge will only cache it for ten seconds.
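
In VCL, the rewrite described above would look roughly like the following sketch:

sub vcl_fetch {
  # Replace the origin's Cache-Control: max-age=3600 with a browser-facing
  # max-age=10. With shielding, this also runs on the shield, so the edge
  # receives max-age=10 and caches the object for only ten seconds.
  unset beresp.http.Cache-Control;
  set beresp.http.Cache-Control = "max-age=10";
}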

A better option in this instance would be to use the Surrogate-Control and Cache-Control response headers together. Surrogate-Control overrides Cache-Control for Fastly caches and is stripped before the response leaves the edge node, so the max-age in Cache-Control is what the browser honors. The origin response headers would look like this:

Surrogate-Control: max-age=3600
Cache-Control: max-age=10

Another common pitfall involves sending the wrong Vary header to an edge POP. For example, suppose you have VCL that takes a specific value from a cookie, puts it in a header, and then adds that header to the Vary header. To maximize compatibility with caches outside of your control (such as shared proxies, commonly seen in large enterprises), the Vary header is updated in vcl_deliver, replacing the custom header with Cookie. The code might look like this:

sub vcl_recv {
  # Set the custom header
  if (req.http.Cookie ~ "ABtesting=B") {
    set req.http.X-ABtesting = "B";
  } else {
    set req.http.X-ABtesting = "A";
  }
  ...
}

...

sub vcl_fetch {
  # Vary on the custom header
  if (beresp.http.Vary) {
    set beresp.http.Vary = beresp.http.Vary ", X-ABtesting";
  } else {
    set beresp.http.Vary = "X-ABtesting";
  }
  ...
}

...

sub vcl_deliver {
  # Hide the existence of the header from downstream
  if (resp.http.Vary) {
    set resp.http.Vary = regsub(resp.http.Vary, "X-ABtesting", "Cookie");
  }
}

When combined with shielding, however, the effect of the above code is that edge POPs will have Cookie in the Vary header and thus a terrible hit rate. To work around this, amend the above VCL so that Vary is only updated with Cookie when the request is not coming from another Fastly cache; the Fastly-FF header is a good way to tell. The code would look something like this (using the same vcl_recv from the example above):

# Same vcl_recv from above code example

sub vcl_fetch {
  # Vary on the custom header, don't add if shield POP already added
  if (beresp.http.Vary !~ "X-ABtesting") {
    if (beresp.http.Vary) {
      set beresp.http.Vary = beresp.http.Vary ", X-ABtesting";
    } else {
      set beresp.http.Vary = "X-ABtesting";
    }
  }
  ...
}

...

sub vcl_deliver {
  # Hide the existence of the header from downstream
  if (resp.http.Vary && !req.http.Fastly-FF) {
    set resp.http.Vary = regsub(resp.http.Vary, "X-ABtesting", "Cookie");
  }
}
