Last updated 2021-10-29
The Compute@Edge platform helps you compile your custom code to WebAssembly (Wasm) and runs it at the Fastly edge using the WebAssembly System Interface (WASI) for each compute request. Per-request isolation and lightweight sandboxing create an environment focused on performance and security.
This information is part of a limited availability release. For more information, see our product and feature lifecycle descriptions. In addition, this feature is not permitted for use in cryptocurrency mining.
Serverless isolation technology
Compute@Edge runs Wasm and leverages the Lucet compiler and runtime, which compiles customer Wasm code to native code ahead of time. When Fastly receives a compute request, it creates a new instance and runs the serverless function, allowing developers to apply custom business logic on demand.
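For illustration, a minimal sketch of such a serverless function using Fastly's Rust SDK (the fastly crate) might look like the following; the response body and path echo are invustrative choices for this example, not part of the platform:

    // Each incoming request runs `main` in a fresh Wasm instance, so no
    // state persists between requests.
    use fastly::http::StatusCode;
    use fastly::{Error, Request, Response};

    #[fastly::main]
    fn main(req: Request) -> Result<Response, Error> {
        // Custom business logic runs here, on demand, per request.
        Ok(Response::from_status(StatusCode::OK)
            .with_body_text_plain(&format!("You requested {}\n", req.get_path())))
    }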
Deploying to a Compute@Edge service leverages Fastly’s software-defined network and globally distributed points of presence. A single deploy action makes customer logic available across the Fastly network.
Available programming languages
By running Wasm on the Fastly network, Compute@Edge creates a serverless environment suitable for multiple programming languages. Fastly collaborates with the Bytecode Alliance and other open source communities to actively grow the number of supported languages. Resources per language are available on developer.fastly.com.
Logging endpoint compatibility
Continuous integration and deployment
Deployment to the Compute@Edge platform can be accomplished via the Fastly web interface, the Fastly API, or Fastly’s Terraform provider plugin. The Fastly CLI also provides a local toolchain for creating, debugging, and deploying Wasm services, including Log Tailing and Local Testing features.
Some Compute@Edge features available through the Fastly CLI are disabled by default. To learn more about them, contact your account manager or email firstname.lastname@example.org for details.
Log Tailing allows you to stream custom log messages from your Compute@Edge application so you can respond quickly when debugging the application without setting up a third-party logging tool.
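As a sketch (assuming the Rust SDK; the message text here is arbitrary), anything the application writes to stdout or stderr becomes a custom log message that can be streamed from the CLI with the fastly log-tail command:

    use fastly::{Error, Request, Response};

    #[fastly::main]
    fn main(req: Request) -> Result<Response, Error> {
        // These lines are the custom log messages that Log Tailing streams.
        println!("received {} {}", req.get_method(), req.get_path());
        eprintln!("debug: responding with a static body");
        Ok(Response::from_body("ok"))
    }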
Local Testing allows you to run your work-in-progress applications locally on your laptop, server, or CI system, so you can test your Compute@Edge applications without hosting them on public staging or production environments. When using Local Testing, GeoIP and Dictionaries are unsupported. Additionally, caching directives are ignored (no caching is ever performed), and information about the client's TLS connection is not available.
Fastly charges for Compute@Edge based on the total number of compute requests you make, plus their compute duration and CPU time. Each compute request also uses the Fastly delivery architecture, so Compute@Edge charges apply in addition to any applicable delivery charges.
Compute requests. Compute requests represent a count of the number of times you invoke a function instance. Each incoming request creates one function instance.
Compute duration. Compute duration represents the total memory allocation over time required to process a compute request. We measure memory allocation in GB-seconds and calculate it based on the time it takes for a function instance to execute, multiplied by the memory allocated to that function. We measure function instance time in wall clock time from the start of a function to its completion or termination, rounded up to the nearest 50ms. The minimum function size for billing purposes is 128MB, though actual memory allocation may differ.
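As a back-of-the-envelope illustration of that calculation (a hypothetical sketch, not Fastly's billing code; it assumes 1024MB per GB along with the rounding and minimum described above):

    // Hypothetical compute-duration calculation; names and sample values
    // are assumptions for illustration only.
    fn duration_gb_seconds(wall_clock_ms: u64, memory_mb: u64) -> f64 {
        // Wall clock time is rounded up to the nearest 50ms.
        let billed_ms = ((wall_clock_ms + 49) / 50) * 50;
        // The minimum function size for billing purposes is 128MB.
        let billed_mb = memory_mb.max(128);
        // GB-seconds = seconds of execution * gigabytes allocated.
        (billed_ms as f64 / 1000.0) * (billed_mb as f64 / 1024.0)
    }

    fn main() {
        // A 7ms request at 128MB bills as 0.050s * 0.125GB = 0.00625 GB-seconds.
        println!("{}", duration_gb_seconds(7, 128));
    }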
CPU time. CPU time represents the total time the CPU spends running your code.
For specific pricing information, contact your Fastly account team.
Fastly services and individual instances are allowed a specific range of resources per service and per execution, as described in our developer learning resources. These limits are lower for free trial accounts.