Log streaming: Amazon S3
Last updated May 31, 2019
Fastly's Real-Time Log Streaming feature can send log files to Amazon Simple Storage Service (Amazon S3). Amazon S3 is a static file storage service used by developers and IT teams. You can also use the instructions in this guide to configure log streaming to another S3-compatible service.
NOTE: Fastly does not provide direct support for third-party services. See Fastly's Terms of Service for more information.
Before adding Amazon S3 as a logging endpoint for Fastly services, we recommend creating an Amazon Identity and Access Management (IAM) user specifically for Fastly. Grant the user PutObject permissions for the directory in which you want to store logs. For more information, see Amazon's Getting Your Access Key ID and Secret Access Key page.
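For example, an IAM policy granting only PutObject on a logs prefix might look like the following sketch (the bucket name and path here are placeholders, not values from this guide; replace them with your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::example-logging-bucket/fastly-logs/*"
    }
  ]
}
```

Scoping the policy to a single prefix limits what Fastly's credentials can write, which is why a dedicated IAM user is recommended over reusing broader credentials.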
Adding Amazon S3 as a logging endpoint
After you've registered for an Amazon S3 account and created an IAM user in Amazon S3, follow these instructions to add Amazon S3 as a logging endpoint:
- Review the information in our Setting Up Remote Log Streaming guide.
- Click the Amazon Web Services S3 logo. The Create an Amazon S3 endpoint page appears.
- Fill out the Create an Amazon S3 endpoint fields as follows:
- In the Name field, type a human-readable name for the endpoint.
- In the Log format field, optionally type an Apache-style string or VCL variables to use for log formatting. The Apache Common Log format string appears in this field by default.
- In the Timestamp format field, optionally type a timestamp format for log files. The default is an strftime-compatible string. Our guide on changing where log files are written provides more information.
- In the Bucket name field, type the name of the Amazon S3 bucket in which to store the logs.
- In the Access key field, type the access key associated with the Amazon S3 bucket. See Amazon's documentation on security credentials for more information.
- In the Secret key field, type the secret key associated with the Amazon S3 bucket. See Amazon's documentation on security credentials for more information.
NOTE: Password management software may mistakenly treat the Secret key field as a password field because of the way your web browser works. As a result, that software may try to auto-fill this field with your Fastly account password. If this happens, the AWS integration with Fastly services won't work and you'll need to enter the secret key manually.
- In the Period field, optionally type an interval (in seconds) to control how frequently your log files are rotated. This value defaults to
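As a sketch of what the Log format field accepts, the Apache Common Log format looks like this, and request headers can be pulled in with Apache-style %{...}i directives (which directives are available depends on your service's log format version, so verify against Fastly's custom log formats guide):

```
%h %l %u %t "%r" %>s %b
%h %l %u %t "%r" %>s %b "%{Referer}i" "%{User-agent}i"
```

The first line is the default shown in the field; the second adds the Referer and User-Agent request headers, matching the Apache Combined format.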
- Click the Advanced options link on the Create an Amazon S3 endpoint page and decide which of the optional fields to change, if any.
- Fill out the Advanced options of the Create an Amazon S3 endpoint page as follows:
- In the Path field, optionally type the path within the bucket to store the files. The path must end with a trailing slash. If this field is left empty, the files will be saved in the bucket's root path. Our guide on changing where log files are written provides more information.
- In the Domain field, optionally type the domain of the Amazon S3 endpoint. If your Amazon S3 bucket was not created in the US Standard region, you must set the domain to match the appropriate endpoint URL. Use the table in the S3 section of the Regions and Endpoints Amazon S3 documentation page. If you want to use an S3-compatible storage system (such as Dreamhost's DreamObjects), set the domain to match the domain name for that service (for example, in the case of DreamObjects, the domain name would be
- In the PGP public key field, optionally type a PGP public key that Fastly will use to encrypt your log files before writing them to disk. You will only be able to read the contents by decrypting them with your private key. The PGP key should be in PEM (Privacy-Enhanced Mail) format. See our guide on log encryption for more information.
- In the Select a log line format area, select the log line format for your log messages. Our guide on changing log line formats provides more information.
- In the Placement area, select where the logging call should be placed in the generated VCL. Valid values are Format Version Default, None, and waf_debug (waf_debug_log). Selecting None creates a logging object that can only be used in custom VCL. See our guide on WAF logging for more information about
- In the Gzip level field, optionally type the level of gzip compression you want applied to the log files. You can specify any whole number from 1 (fastest and least compressed) to 9 (slowest and most compressed). This value defaults to
- From the Redundancy level menu, select a setting. This value defaults to Standard. Amazon's Using Reduced Redundancy Storage Guide provides more information on using reduced redundancy storage.
- From the ACL menu, optionally select an access control header. See Amazon's Access Control List (ACL) Specific Request Headers for more information.
- In the Server side encryption area, optionally select an encryption method to protect files that Fastly writes to your Amazon S3 bucket. Valid values are None, AES-256, and AWS Key Management Service. If you select AWS Key Management Service, you'll have to provide an AWS KMS Key ID. See Amazon's guide on protecting data using server-side encryption for more information. Our discussion of format strings also provides more information.
- Click the Create button to create the new logging endpoint.
- Click the Activate button to deploy your configuration changes.
NOTE: Although Fastly continuously streams logs into Amazon S3, the Amazon S3 website and API do not make files available for access until after their upload is complete.
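If you prefer to automate this setup instead of using the web interface, the Fastly API also exposes an S3 logging endpoint. The following is a hedged sketch, not a command to run verbatim: the service ID, version number, API token, and credential values are placeholders, and parameter names should be verified against the current Fastly API reference:

```
curl -X POST "https://api.fastly.com/service/<service_id>/version/<version>/logging/s3" \
  -H "Fastly-Key: $FASTLY_API_TOKEN" \
  -d "name=my-s3-logs" \
  -d "bucket_name=example-logging-bucket" \
  -d "access_key=<iam_access_key>" \
  -d "secret_key=<iam_secret_key>"
```

As with the web interface, the new endpoint takes effect only after you activate the service version it was added to.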