Compute@Edge log streaming: Amazon S3

Fastly's Real-Time Log Streaming feature for Compute@Edge services can send log files to Amazon Simple Storage Service (Amazon S3). Amazon S3 is an object storage service used by developers and IT teams. You can also use the instructions in this guide to configure log streaming to another S3-compatible service.


Fastly does not provide direct support for third-party services. Read Fastly's Terms of Service for more information.


Before adding Amazon S3 as a logging endpoint for Fastly Compute@Edge services, we recommend creating Identity and Access Management (IAM) credentials in your AWS account specifically for Fastly. We recommend doing this by creating an AWS IAM role, which lets you grant Fastly temporary credentials. For more information, see Creating an AWS IAM Role for Fastly Logging. Alternatively, create an IAM user and grant the user s3:PutObject permission on the logging bucket. For more information, see Amazon's guidance on understanding and getting your AWS credentials.
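If you take the IAM user approach, the policy you attach can be quite narrow. The sketch below grants only s3:PutObject on a single bucket; the bucket name my-fastly-logs is a placeholder, not a value from this guide.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::my-fastly-logs/*"
    }
  ]
}
```

Scoping the Resource to objects within the logging bucket (the /* suffix) means the credentials can write log files but cannot list, read, or delete anything.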

Adding Amazon S3 as a logging endpoint

After you've created your Amazon S3 bucket and set up IAM credentials in your AWS account, follow these instructions to add Amazon S3 as a logging endpoint:

  1. Review the information in our guide to setting up remote log streaming for Compute@Edge. Additionally, our developer documentation provides more information about logging with Compute@Edge code written in Rust, AssemblyScript, and JavaScript.
  2. Click the Amazon Web Services S3 Create endpoint button. The Create an Amazon S3 endpoint page appears.

  3. Fill out the Create an Amazon S3 endpoint fields as follows:

    • In the Name field, enter the endpoint name you specified in your Compute@Edge code. For example, in our Rust code example, the name is my_endpoint_name.
    • In the Timestamp format field, optionally enter an strftime-compatible string to control the timestamp portion of log file names. Our guide on changing where log files are written provides more information.
    • In the Bucket name field, enter the name of the Amazon S3 bucket in which to store the logs.
    • In the Access method field, select either User Credentials or IAM Role.
    • If you select User Credentials, enter the access key and secret key associated with the IAM user you created in your AWS account specifically for Fastly. See Amazon's documentation on security credentials for more information.

    Password management software may mistakenly treat the Secret Key field as a password field because of the way your web browser works, and may try to auto-fill it with your Fastly account password. If that happens, the AWS integration won't work; enter the secret key manually instead.

    • If you select IAM Role, enter the Amazon Resource Name (ARN) for the IAM role granting Fastly access to S3. For more information, see Creating an AWS IAM Role for Fastly Logging.
    • In the Period field, optionally enter an interval (in seconds) to control how frequently your log files are rotated. Rotation finalizes one file object and starts a new one; previously created file objects are never removed. This value defaults to 3600 seconds. Use the Period setting in conjunction with a known average log rate to approximate S3 log file objects of a preferred, uncompressed size. Actual sizes fluctuate with log volume.
  4. Click the Advanced options link on the Create an Amazon S3 endpoint page and decide which of the optional fields to change, if any.

  5. Fill out the Advanced options of the Create an Amazon S3 endpoint page as follows:

    • In the Path field, optionally enter the path within the bucket to store the files. The path must end with a trailing slash. If this field is left empty, the files will be saved in the bucket's root path. Our guide on changing where log files are written provides more information.
    • In the Domain field, optionally enter the domain of the Amazon S3 endpoint. If your Amazon S3 bucket was not created in the US Standard region, you must set the domain to match the appropriate endpoint URL. Use the table in the S3 section of the Regions and Endpoints Amazon S3 documentation page. To use an S3-compatible storage system (such as DreamHost's DreamObjects), set the domain to match the endpoint domain name published by that service.
    • In the PGP public key field, optionally enter a PGP public key that Fastly will use to encrypt your log files before writing them to disk. You will only be able to read the contents by decrypting them with your private key. The PGP key should be in PEM (Privacy-Enhanced Mail) format. Read our guide on log encryption for more information.
    • In the Select a log line format area, select the log line format for your log messages. Our guide on changing log line formats provides more information.
    • In the Compression field, optionally select the compression format you want applied to the log files. Our guide on changing log compression options provides more information.
    • From the Redundancy level menu, select a setting. This value defaults to Standard. Amazon's Using Reduced Redundancy Storage Guide provides more information on using reduced redundancy storage.
    • From the ACL menu, optionally select an access control header. See Amazon's Access Control List (ACL) Specific Request Headers for more information.
    • In the Server side encryption area, optionally select an encryption method to protect files that Fastly writes to your Amazon S3 bucket. Valid values are None, AES-256, and AWS Key Management Service. If you select AWS Key Management Service, you'll have to provide an AWS KMS Key ID. See Amazon's guide on protecting data using server-side encryption for more information.
  6. Click the Create button to create the new logging endpoint.
  7. Click the Activate button to deploy your configuration changes.
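Taken together, the Path and Timestamp format settings above determine where each log file object lands in your bucket. The Python sketch below (illustrative only; the logs/ prefix, format string, and .log suffix are assumptions, not defaults) shows how a trailing-slash path and an strftime string combine into an S3 object key.

```python
from datetime import datetime, timezone

def object_key(path: str, timestamp_format: str, suffix: str, when: datetime) -> str:
    """Approximate how a bucket path and an strftime format combine into an S3 key."""
    if path and not path.endswith("/"):
        raise ValueError("Path must end with a trailing slash")
    return path + when.strftime(timestamp_format) + suffix

# Illustrative values: a logs/ prefix and a second-resolution timestamp format.
when = datetime(2023, 5, 1, 14, 30, tzinfo=timezone.utc)
key = object_key("logs/", "%Y-%m-%dT%H:%M:%S", ".log", when)
print(key)  # logs/2023-05-01T14:30:00.log
```

Because the timestamp becomes part of the key, a coarser format (for example, one without seconds) groups more rotations under similar prefixes, while a finer one keeps every object distinct.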
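The Period guidance above amounts to simple arithmetic: average log line size, times log rate, times rotation period approximates the uncompressed size of each file object. A quick sketch, with made-up rates:

```python
def approx_object_size_bytes(lines_per_second: float, avg_line_bytes: int, period_seconds: int) -> int:
    """Estimate the uncompressed size of one rotated log file object."""
    return int(lines_per_second * avg_line_bytes * period_seconds)

# Example: 50 log lines/second at roughly 200 bytes each, rotated at the default 3600s.
size = approx_object_size_bytes(50, 200, 3600)
print(f"{size / 1_000_000:.0f} MB")  # 36 MB
```

Working the estimate backward is equally useful: divide your preferred object size by the per-second byte rate to pick a Period value.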

Although Fastly continuously streams logs into Amazon S3, the Amazon S3 website and API do not make files available for access until after their upload is complete.

Log messages can take on any format you choose as long as they can be processed from Amazon S3.
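One common choice is to emit each log message as a single-line JSON object, which downstream tools reading from Amazon S3 can parse line by line. A minimal sketch (the field names are illustrative, not a Fastly-defined schema):

```python
import json
from datetime import datetime, timezone

def format_log_line(client_ip: str, method: str, url: str, status: int) -> str:
    """Serialize one request record as a single-line JSON log message."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "client_ip": client_ip,
        "method": method,
        "url": url,
        "status": status,
    }
    # Compact separators and no embedded newlines keep one record per line.
    return json.dumps(record, separators=(",", ":"))

line = format_log_line("203.0.113.7", "GET", "/index.html", 200)
print(line)
```

Keeping each record on its own line matters because most S3-based log processing (Athena, Spark, command-line tools) splits objects on newlines.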
