    Configuration guidelines for live streaming

      Last updated April 23, 2019

    The Fastly network can deliver live streams for any HTTP streaming technology, archived or recorded on any public or private cloud storage service. When configuring VCL to deliver live streams, we recommend following these guidelines; Customer Support can help you with them.

    Configure shielding

    Configure shielding by designating a specific shield POP for your origin to ensure live streams remain highly available within the Fastly network. If your setup includes primary and alternate origins (e.g., for high-profile live streams), select a shield POP close to each origin you define.

    Configure video manifest and segment caching TTLs

    In live streams, video manifests are refreshed periodically as new segments become available, especially for HLS. We recommend setting manifest file TTLs to less than half of the video segment duration, typically 1-2 seconds for 5-second video segments. For long DVR windows and live-to-VOD transitions, set segment TTLs longer on shields and shorter on edge POPs so that segments are served from memory (that is, less than 3600s).

    The following VCL sample may help you implement different TTLs for video manifests and segments. It can also be added to your service using VCL Snippets:

    sub vcl_fetch {
    #FASTLY fetch
    
      # Set a 1s TTL for video manifests and a 3600s TTL for segments of HTTP streaming formats.
      # Microsoft Smooth Streaming manifests and segments do not have file extensions, so
      # look for the keywords "Manifest" and "QualityLevel" to identify those requests.
      if (req.url.ext ~ "m3u8|mpd" || req.url.path ~ "Manifest") {
        set beresp.ttl = 1s;
        return (deliver);
      }
      else if (req.url.ext ~ "aac|dash|m4s|mp4|ts" || req.url.path ~ "QualityLevel") {
        set beresp.ttl = 3600s;
        return (deliver);
      }
    
      return (deliver);
    }
    

    Optionally, identify video manifests and segments using the MIME type.
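
    For example, the following sketch keys the TTLs off the Content-Type header returned by the origin rather than the file extension. The MIME types shown are assumptions; adjust them to whatever your origin actually sends. It can also be added to your service using VCL Snippets:

    sub vcl_fetch {
    #FASTLY fetch

      # Identify manifests and segments by the origin's Content-Type header.
      # The MIME types below are examples only; match them to your origin's output.
      if (beresp.http.Content-Type ~ "(?i)(mpegurl|dash\+xml|vnd\.ms-sstr)") {
        set beresp.ttl = 1s;
        return (deliver);
      }
      else if (beresp.http.Content-Type ~ "(?i)(video/mp4|video/mp2t|audio/aac)") {
        set beresp.ttl = 3600s;
        return (deliver);
      }

      return (deliver);
    }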

    Configure lower TTLs for errors

    By default, Fastly honors the Cache-Control header from the origin to set TTLs for cacheable objects. However, origins may not send Cache-Control headers for responses with HTTP status codes other than 200 or 206. As a result, Fastly caches only a small set of status code responses, using a default TTL (usually 3600s), to prevent large numbers of requests from hitting the origin. Normally uncacheable status code responses can be made cacheable by setting the beresp.cacheable flag to true.

    For live streams, new video segments are added every few seconds. Typically, live stream transcoders are configured to generate 5s segments, and manifests are refreshed after each new segment becomes available. Video players frequently request segments that are not yet available, or requests return errors such as 500 or 503 status codes. In such cases, these status code responses should be made cacheable, but only with TTLs small enough to give the origin time to recover (around 1s).

    The following VCL sample may help you implement this and can also be added to your service using VCL Snippets:

    sub vcl_fetch {
    #FASTLY fetch
    
      # Set a 1s TTL if the origin response HTTP status code is anything other than 200 or 206
      if (!http_status_matches(beresp.status, "200,206")) {
        set beresp.ttl = 1s;
        set beresp.cacheable = true;
        return (deliver);
      }
    
      return (deliver);
    }
    

    Configure Streaming Miss

    Configure Streaming Miss to reduce the time clients (players) must wait to begin downloading streams when Fastly's edge servers must fetch content from your origin. Streaming Miss should be enabled for video or audio objects only (these are sometimes called "chunks" or "segments").

    The following VCL sample may help you implement this. It can also be added to your service using VCL Snippets:

    sub vcl_fetch {
    #FASTLY fetch
    
      # Enable Streaming Miss only for video or audio objects.
      # The condition below checks for video and audio file extensions commonly used in
      # HTTP streaming formats.
      if (req.url.ext ~ "aac|dash|m4s|mp4|ts") {
        set beresp.do_stream = true;
      }
    
      return (deliver);
    }
    

    Configure automatic gzipping

    Configure automatic gzipping for manifest files based on their file extension or content-type using the following table as a guide:

    HTTP streaming format         File extension     Content-type
    Apple HLS                     m3u8               application/x-mpegurl, application/vnd.apple.mpegurl
    MPEG-DASH                     mpd                application/dash+xml
    Adobe HDS                     f4m, bootstrap     application/f4m (for manifest), application/octet-stream (for bootstrap)
    Microsoft Smooth Streaming    N/A                application/vnd.ms-sstr+xml
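
    Gzipping is usually configured through the gzip settings in the web interface. If you manage it in VCL instead, the following sketch is one way to do it; it assumes the beresp.gzip variable is available in vcl_fetch and uses the extensions and MIME types from the table above:

    sub vcl_fetch {
    #FASTLY fetch

      # Gzip manifest responses on the fly. The extensions and MIME types below
      # mirror the table above; adjust them to match your origin.
      if (req.url.ext ~ "m3u8|mpd|f4m|bootstrap" || beresp.http.Content-Type ~ "(?i)(mpegurl|dash\+xml|f4m|vnd\.ms-sstr)") {
        set beresp.gzip = true;
      }

      return (deliver);
    }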

    Configure a CORS header

    Configure a CORS header on your service to play audio or video content on a different domain.
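
    A minimal sketch, assuming you want to allow playback from any domain (replace the wildcard with your player's domain to restrict access), which can be added to your service using VCL Snippets:

    sub vcl_deliver {
    #FASTLY deliver

      # Allow cross-origin requests from video players hosted on other domains.
      # "*" is permissive; use a specific domain to restrict access.
      set resp.http.Access-Control-Allow-Origin = "*";

      return(deliver);
    }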

    Advanced TCP optimizations

    You can enable TCP optimizations between cache servers and clients to improve response time, specifically metrics like video startup time (a.k.a. "time-to-first-frame") and rebuffering percentage. Consider implementing the following optional TCP-related configurations to improve a client's experience.

    Change the default value for client.socket.cwnd

    Set the TCP socket initial congestion window (a.k.a., initial CWND) to 30. The default value is 10. To do this, add the following VCL to your service using VCL Snippets:

    sub vcl_deliver {
    #FASTLY deliver
    
      # Increase the initial cwnd only for client requests (not shield-to-edge fetches)
      if (!req.http.Fastly-FF && client.requests == 1) {
        set client.socket.cwnd = 30;
      }
    
      return(deliver);
    }
    

    Enable the experimental BBR congestion algorithm

    Enable the BBR TCP congestion control algorithm. Unlike the default CUBIC congestion control algorithm, which is packet-loss-based and latency-insensitive, BBR is designed to maximize bandwidth while controlling latency.

    You can implement this algorithm by adding the following VCL to your service using VCL Snippets:

    sub vcl_deliver {
    #FASTLY deliver
    
      # Set the congestion control algorithm only for client requests (not shield-to-edge fetches)
      if (!req.http.Fastly-FF && client.requests == 1) {
        set client.socket.congestion_algorithm = "bbr";
      }
    
      return(deliver);
    }
    

    Configure origin timeouts

    Set appropriate origin timeouts to ensure new live stream segments are downloaded from the origin in a timely manner. For example, for a live stream with 5s video segments, set the Origin Connect value to 1s and the First Byte and Between Bytes timeout values to 2s. Typically, these values should be configured so that Fastly can also retry another origin (if configured) before responding to the client request.
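
    These timeouts are normally set per origin in the web interface. If you define origins in custom VCL instead, the following backend declaration is a sketch using a hypothetical origin name and hostname:

    backend F_origin_live {
      .host = "origin.example.com";        # hypothetical origin hostname
      .port = "443";
      .ssl = true;
      .connect_timeout = 1s;               # Origin Connect
      .first_byte_timeout = 2s;            # First Byte
      .between_bytes_timeout = 2s;         # Between Bytes
    }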

    Consider setting up failover (fallback) origins

    Consider configuring your VCL so that high-profile live streams can fail over from a primary origin to an alternate origin in case of encoder failures or other issues (e.g., high resource utilization).
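
    The following sketch is one way to do this, assuming two backends named F_primary and F_alternate (hypothetical names); the request is restarted and routed to the alternate origin when the primary returns a server error:

    sub vcl_recv {
    #FASTLY recv

      # On a restart, switch from the primary to the alternate origin.
      if (req.restarts > 0) {
        set req.backend = F_alternate;
      } else {
        set req.backend = F_primary;
      }

      return(lookup);
    }

    sub vcl_fetch {
    #FASTLY fetch

      # If the primary origin returns a 5xx error, retry against the alternate.
      if (beresp.status >= 500 && req.restarts == 0) {
        restart;
      }

      return (deliver);
    }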

    Configure real-time log streaming

    For troubleshooting and debugging live streaming delivery issues, configure real-time log streaming and include TCP connection, caching, and time-related metrics in vcl_log.

    These metrics can help you analyze throughput and may help you determine reasons a video player might switch quality levels during ABR playback.
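
    A sketch of such a log statement is shown below; "my_logging_endpoint" is a placeholder for the name of a logging endpoint configured on your service, and the fields can be adjusted to suit your analysis:

    sub vcl_log {
    #FASTLY log

      # Log cache state, timing, and TCP metrics for each request.
      log {"syslog "} req.service_id {" my_logging_endpoint :: "}
          "url=" req.url
          " status=" resp.status
          " cache_state=" fastly_info.state
          " ttfb=" time.to_first_byte
          " elapsed_usec=" time.elapsed.usec
          " bytes=" resp.body_bytes_written
          " rtt_usec=" client.socket.tcpi_rtt
          " cwnd=" client.socket.tcpi_snd_cwnd
          " delta_retrans=" client.socket.tcpi_delta_retrans;
    }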

    Take advantage of surrogate key purging

    All video segments and the manifest for a live stream can be purged with a single API call by taking advantage of Fastly's surrogate key feature.
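
    For example, the following sketch tags every object belonging to a stream with a surrogate key derived from its URL. It assumes URLs of the form /live/<stream-name>/<file>; adjust the regular expression to match your own path layout:

    sub vcl_fetch {
    #FASTLY fetch

      # Tag manifests and segments with a per-stream surrogate key.
      # The /live/<stream-name>/ path structure is an assumption.
      if (req.url.path ~ "^/live/([^/]+)/") {
        set beresp.http.Surrogate-Key = "stream-" re.group.1;
      }

      return (deliver);
    }

    A single purge request against the Fastly API for that key (for example, POST /service/<service ID>/purge/stream-<stream-name>) then invalidates the manifest and all segments for the stream at once.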

    Manage live-to-VOD smoothly

    Most encoders generate a separate video manifest when making the same live stream available for VOD. If your VOD manifest has the same URL as the live one, purge the live stream manifest or wait for the cached copy to expire (it will have a low TTL). If your setup archives the live stream as progressive MP4s, consider delivering them using Fastly's OTFP service.
