Compute@Edge

The Compute@Edge platform helps you compile your custom code to WebAssembly (Wasm) and runs it at the Fastly edge using the WebAssembly System Interface (WASI), creating a fresh, isolated execution for each compute request. Per-request isolation and lightweight sandboxing create an environment focused on performance and security.

IMPORTANT

This feature is not permitted for use in cryptocurrency mining.

Serverless isolation technology

Compute@Edge runs WebAssembly (Wasm). When Fastly receives a compute request, it creates an instance and runs the serverless function, allowing developers to apply custom business logic on demand.

Global deployment

Deploying to a Compute@Edge service leverages Fastly’s software-defined network and globally distributed points of presence. A single deploy action makes customer logic available across the Fastly network.

Available programming languages

By running Wasm on the Fastly network, Compute@Edge provides a serverless environment suitable for multiple programming languages. Fastly collaborates with the Bytecode Alliance and other open source communities to actively grow the number of supported languages. Support levels vary by language, and per-language resources are available on developer.fastly.com.

Logging endpoint compatibility

Compute@Edge supports sending user-specified logs to a variety of logging endpoints. You can create and manage these connections via manage.fastly.com or programmatically from a supported language.

Continuous integration and deployment

Deployment to the Compute@Edge platform can be accomplished via the Fastly web interface, the Fastly API, or Fastly’s Terraform provider plugin (Limited Availability). The Fastly CLI (Limited Availability) also provides a local toolchain for creating, debugging, and deploying to Wasm services, including Log Tailing and Local Testing.
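As a sketch of that local toolchain, a typical CLI workflow might look like the following. The subcommands shown exist in the Fastly CLI, but exact flags, prompts, and output vary by CLI version, so treat this as illustrative rather than definitive.

```shell
# Sketch of a typical Fastly CLI workflow; flags and prompts vary by
# CLI version, so treat this as illustrative rather than definitive.

# Scaffold a new Compute@Edge project from a starter kit.
fastly compute init

# Compile the project to a Wasm package.
fastly compute build

# Local Testing: run the work-in-progress package on a local server.
fastly compute serve

# Deploy the package to a Compute@Edge service.
fastly compute deploy

# Log Tailing: stream the service's custom log messages to your terminal.
fastly log-tail
```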

NOTE

Some Compute@Edge features available through the Fastly CLI are disabled by default. To learn more about them, contact your account manager or email sales@fastly.com for details.

Core Cache API

The Core Cache API is a general purpose cache API for developers building apps on Compute@Edge. It gives you programmatic control over reading from and writing to the cache in languages like Rust and Go.

Dynamic Backends

Dynamic Backends extends the Fastly API and allows you to decide which origin to connect to at runtime instead of having to define it ahead of time in your configuration. With Dynamic Backends, you can dynamically adjust your origin definitions, dispatch to new hosts based on computed values, and connect to a wider variety of origin servers.
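The "dispatch to new hosts based on computed values" idea can be sketched as a small routing function. Everything below is hypothetical (the path rules and hostnames are invented for illustration), and actually registering the chosen host as a backend at runtime is done through the Fastly SDK's dynamic backend support, which is not shown here.

```rust
// Illustrative sketch only: the routing rules and hostnames below are
// hypothetical, and registering the chosen host as a dynamic backend
// is done through the Fastly SDK for your language, not shown here.

/// Pick an origin hostname at runtime based on a computed value
/// (here, the first path segment of the request URL).
fn choose_origin(path: &str) -> &'static str {
    match path.split('/').nth(1) {
        Some("images") => "images.example.com", // hypothetical host
        Some("api") => "api.example.com",       // hypothetical host
        _ => "www.example.com",                 // hypothetical default
    }
}

fn main() {
    assert_eq!(choose_origin("/images/logo.png"), "images.example.com");
    assert_eq!(choose_origin("/checkout"), "www.example.com");
    println!("origin for /api/v1: {}", choose_origin("/api/v1"));
}
```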

Log Tailing

Limited Availability

Log Tailing allows you to stream custom log messages from your Compute@Edge application so you can respond quickly when debugging the application without setting up a third-party logging tool.

Local Testing

Limited Availability

Local Testing allows you to run your work-in-progress applications locally on your laptop, server, or CI system, so you can test your Compute@Edge applications without hosting them on public staging or production environments. Local environments support a subset of Compute@Edge features.

Billing

NOTE

Billing limits for this product may be different if you've purchased it as part of a package offering or are using it as part of a free trial.

Fastly charges for Compute@Edge based on the total number of compute requests you make, their compute duration, and their CPU time. Each compute request uses the Fastly delivery architecture, so Compute@Edge charges apply in addition to any already applicable delivery charges.

  • Compute requests. Compute requests represent a count of the number of times you invoke a function instance. Each incoming request creates one function instance.

  • Compute duration. Compute duration represents the total memory allocation over time required to process a compute request. We measure memory allocation in GB-seconds and calculate it based on the time it takes for a function instance to execute, multiplied by the memory allocated to that function. We measure function instance time in wall clock time from the start of a function to its completion or termination, rounded up to the nearest 50ms. The minimum function size for billing purposes is 128MB, though actual memory allocation may differ.

  • CPU time. CPU time represents the total time the CPU spends running your code.
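The compute duration charge described above reduces to simple arithmetic: round the wall-clock time up to the nearest 50ms, apply the 128MB billing floor to allocated memory, and multiply to get GB-seconds. The function below is an illustrative model of that calculation, not Fastly's actual billing code.

```rust
// Illustrative model of the compute duration formula described above;
// this is not Fastly's actual billing code.

/// Round a wall-clock duration in milliseconds up to the nearest 50ms.
fn rounded_ms(wall_clock_ms: u64) -> u64 {
    ((wall_clock_ms + 49) / 50) * 50
}

/// Compute duration in GB-seconds: billed time multiplied by allocated
/// memory, with a 128MB minimum function size for billing purposes.
fn gb_seconds(wall_clock_ms: u64, memory_mb: u64) -> f64 {
    let billed_mb = memory_mb.max(128);
    let seconds = rounded_ms(wall_clock_ms) as f64 / 1000.0;
    seconds * (billed_mb as f64 / 1024.0)
}

fn main() {
    // A 72ms instance bills as 100ms at the 128MB minimum:
    // 0.1 s * 0.125 GB = 0.0125 GB-seconds.
    println!("{}", gb_seconds(72, 64));
}
```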

For specific pricing information, contact your Fastly account team.

Resource limits

Fastly services and individual instances are allowed a specific range of resources per service and per execution, as described in our developer learning resources. These limits may differ depending on whether you've purchased one of our packaged offerings or are using a free trial account.

Keep these limitations and constraints in mind, especially when testing and debugging on Compute@Edge, when sending Fastly logs to third-party logging providers, and when using Log Tailing.
