Last updated February 03, 2021

The Compute@Edge platform compiles your custom code to WebAssembly and runs it at the Fastly edge using the WebAssembly System Interface (WASI), creating a fresh execution environment for each compute request. Per-request isolation and lightweight sandboxing create an environment focused on performance and security.

Serverless isolation technology

Compute@Edge runs WebAssembly (Wasm) and leverages the Lucet compiler and runtime, which ahead-of-time compiles customers' Wasm modules to native code. When Fastly receives a compute request, a new instance is created and the serverless function is run, allowing developers to apply custom business logic on demand.
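As a sketch of what such a serverless function looks like, the following is a minimal request handler using the Fastly Rust SDK (the `fastly` crate). The handler body here is illustrative; it compiles to Wasm with the Fastly toolchain and runs only on the Compute@Edge platform, not as a standalone program.

```rust
use fastly::{Error, Request, Response};

// The `#[fastly::main]` attribute marks the entry point invoked once
// per compute request, inside its own isolated instance.
#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Custom business logic runs here; this sketch just echoes a greeting.
    println!("handling request for {}", req.get_path());
    Ok(Response::from_body("Hello from the edge!"))
}
```

Because a new instance is created per request, no state persists between invocations unless it is stored externally.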

Global deployment

Deploying to a Compute@Edge service leverages Fastly’s software-defined network and globally distributed points of presence. A single deploy action makes customer logic available across the Fastly network.

Available programming languages

By running Wasm on the Fastly network, Compute@Edge creates a serverless environment suitable for multiple programming languages. Fastly collaborates with the Bytecode Alliance and other open source communities to actively grow the number of supported languages. Resources for each supported language are available in Fastly's developer documentation.

Logging endpoint compatibility

Compute@Edge supports sending user-specified logs to a variety of logging endpoints. These endpoints can be created and managed on the service, and written to from Rust code using the log_fastly crate.
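To illustrate, the following sketch wires the standard `log` macros to a named logging endpoint via `log_fastly::init_simple`. The endpoint name `"my_endpoint"` is a placeholder for an endpoint configured on your service; like the handler itself, this code runs only on the Compute@Edge platform.

```rust
use fastly::{Error, Request, Response};
use log::LevelFilter;

#[fastly::main]
fn main(_req: Request) -> Result<Response, Error> {
    // Route `log` macro output to the endpoint named "my_endpoint"
    // (a placeholder; use an endpoint configured on your service).
    log_fastly::init_simple("my_endpoint", LevelFilter::Info);

    // This record is delivered to the configured logging endpoint.
    log::info!("request received");

    Ok(Response::from_body("ok"))
}
```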

Continuous integration and deployment

Deployment to the Compute@Edge platform can be accomplished via the Fastly API and via Fastly's Terraform provider plugin. The Fastly CLI also provides a local toolchain with features for creating, debugging, and deploying to Wasm services. Some of those features, like those related to log tailing, are disabled by default; contact your account manager for details.
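A typical CLI workflow looks like the following. These commands assume the Fastly CLI is installed and authenticated against your account; exact prompts and options may differ by CLI version.

```shell
fastly compute init     # scaffold a new Compute@Edge project
fastly compute build    # compile the project to a Wasm package
fastly compute deploy   # upload the package to a Fastly service
fastly log-tail         # stream logs (may require enablement on your account)
```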
