Last updated February 03, 2021
IMPORTANT: This information is part of a limited availability release. For more information, see our product and feature lifecycle descriptions.
The Compute@Edge platform compiles your custom code to WebAssembly and runs it at the Fastly edge, using the WebAssembly System Interface (WASI) for each compute request. Per-request isolation and lightweight sandboxing create an environment focused on performance and security.
Serverless isolation technology
Compute@Edge runs WebAssembly (Wasm) and leverages the Lucet compiler and runtime, which compiles customer Wasm code ahead of time to native machine code. When Fastly receives a compute request, a new instance is created and the serverless function is run, allowing developers to apply custom business logic on demand.
Deploying to a Compute@Edge service leverages Fastly’s software-defined network and globally distributed points of presence. A single deploy action makes customer logic available across the Fastly network.
Available programming languages
By running Wasm on the Fastly network, Compute@Edge creates a serverless environment suitable for multiple programming languages. Fastly collaborates with the Bytecode Alliance and other open source communities to actively grow the number of supported languages. Resources for each language are available on developer.fastly.com.
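As an illustration, a minimal Compute@Edge program in Rust (one of the supported languages) handles each request in its own short-lived Wasm instance. This is a sketch based on the `fastly` crate's documented entry point and request/response types, not a complete application:

```rust
use fastly::http::StatusCode;
use fastly::{Error, Request, Response};

// Each incoming compute request is passed to this function in its own
// isolated, short-lived Wasm instance.
#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Apply custom business logic per request; here we simply echo the
    // requested path back in a plain-text response.
    let body = format!("Hello from the edge! You requested {}", req.get_path());
    Ok(Response::from_status(StatusCode::OK).with_body_text_plain(&body))
}
```

Building this with the Fastly toolchain compiles it to Wasm, which the platform then ahead-of-time compiles to native code for execution at the edge.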
Logging endpoint compatibility
Continuous integration and deployment
Deployment to the Compute@Edge platform can be accomplished via manage.fastly.com, the Fastly API, or Fastly's Terraform provider plugin. The Fastly CLI also provides a local toolchain with features for creating, debugging, and deploying Wasm services. Some of these features, such as log tailing, are disabled by default; to learn more about them, contact your account manager or email firstname.lastname@example.org for details.
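A typical CLI workflow looks like the following sketch; exact flags and interactive prompts may vary by Fastly CLI version:

```shell
# Scaffold a new Compute@Edge project from a starter kit (interactive).
fastly compute init

# Build the project, producing a Wasm package.
fastly compute build

# Deploy the package to a Fastly service, making it available
# across the Fastly network in a single action.
fastly compute deploy
```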