Last updated 2023-10-13
Fastly’s Compute platform, formerly known as Compute@Edge, compiles your custom code to WebAssembly and runs it at the Fastly edge using the WebAssembly System Interface (WASI) for each compute request. Per-request isolation and lightweight sandboxing create an environment focused on performance and security.
This feature is not permitted for use in cryptocurrency mining.
Compute runs WebAssembly (Wasm). When Fastly receives a Compute request, it creates an instance and runs the serverless function, allowing developers to apply custom business logic on demand.
Deploying to a Compute service leverages Fastly’s software-defined network and globally distributed points of presence. A single deploy action makes customer logic available across the Fastly network.
By running Wasm on the Fastly network, Compute creates a serverless environment suitable for multiple programming languages. Fastly collaborates with the Bytecode Alliance and other open source communities to actively grow the number of supported languages. Support levels vary by language; per-language resources are available on developer.fastly.com.
Compute supports sending user-specified logs to a variety of logging endpoints. These connections can be created and managed via manage.fastly.com and by using a supported language.
You can deploy to the Compute platform via the Fastly web interface, the Fastly API, or Fastly’s Terraform provider plugin (Limited Availability). The Fastly CLI (Limited Availability) also provides a local toolchain with features for creating, debugging, and deploying to Wasm services, including Log Tailing and Local Testing.
Some Compute features available through the Fastly CLI are disabled by default. To learn more, contact your account manager or email email@example.com.
The following APIs enable developers to build apps on Compute using Fastly's global cache network (Limited Availability).
The Simple Cache API exposes a key-value cache interface developers can use to cache small, arbitrary data to help reduce repetitive compute costs.
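The get-or-compute pattern the Simple Cache API supports can be sketched in plain Python. This is an illustration of the caching pattern only, not the Fastly SDK; the function and key names here are hypothetical.

```python
# Illustrative only: a minimal get-or-compute key-value cache showing the
# pattern the Simple Cache API enables (not the actual Fastly SDK API).

cache: dict[str, bytes] = {}

calls = 0  # counts how many times the expensive computation actually runs

def get_or_set(key: str, compute) -> bytes:
    """Return the cached value for `key`, computing and storing it on a miss."""
    if key not in cache:
        cache[key] = compute()
    return cache[key]

def expensive_render() -> bytes:
    """Stand-in for a repetitive, costly computation worth caching."""
    global calls
    calls += 1
    return b"rendered-fragment"

first = get_or_set("page:/home", expensive_render)   # miss: computes and stores
second = get_or_set("page:/home", expensive_render)  # hit: served from cache
```

On the second call the expensive computation is skipped entirely, which is the cost reduction described above.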
Dynamic Backends (Limited Availability) extends the Fastly API and allows you to decide which origin to connect to at runtime instead of having to define it ahead of time in your configuration. With Dynamic Backends, you can dynamically adjust your origin definitions, dispatch to new hosts based on computed values, and connect to a wider variety of origin servers.
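The "dispatch to new hosts based on computed values" idea can be sketched generically in Python. This is not the Fastly SDK; the host names and the hash-based routing rule are hypothetical, chosen only to show an origin being selected at request time rather than fixed in configuration.

```python
# Illustrative only: picking an origin host at runtime from a computed
# value, the kind of decision Dynamic Backends makes possible at the edge.

import hashlib

# Hypothetical origin pool (not real hosts).
ORIGINS = ["origin-a.example.com", "origin-b.example.com"]

def pick_backend(customer_id: str) -> str:
    """Hash the customer ID to choose an origin consistently per customer."""
    digest = hashlib.sha256(customer_id.encode("utf-8")).digest()
    return ORIGINS[digest[0] % len(ORIGINS)]

chosen = pick_backend("cust-42")  # same customer always maps to the same origin
```

In a real Compute service, the computed host name would then be registered as a backend at runtime instead of being looked up in a static table.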
Log Tailing (Limited Availability) allows you to stream custom log messages from your Compute application so you can respond quickly when debugging the application without setting up a third-party logging tool.
Local Testing allows you to run your work-in-progress applications locally on your laptop, server, or CI system, so you can test your Compute applications without hosting them on public staging or production environments. Local environments support a subset of Compute features.
Fastly services and individual instances are allotted a specific range of resources per service and per execution, as described in our developer learning resources. These limits change based on whether you've purchased one of our packaged offerings or you're using a trial account.
Fastly charges for Compute based on the total number of compute requests you make plus their compute duration and CPU time. Each compute request uses the Fastly delivery architecture and the associated Compute charges apply in addition to any already applicable delivery charges.
Compute requests. Compute requests count the number of times you invoke a function instance. Each incoming request creates one function instance.
Compute duration. Compute duration represents the total memory allocation over time required to process a compute request. We measure memory allocation in GB-seconds and calculate it based on the time it takes for a function instance to execute, multiplied by the memory allocated to that function. We measure function instance time in wall clock time from the start of a function to its completion or termination, rounded up to the nearest 50ms. The minimum function size for billing purposes is 128MB, though actual memory allocation may differ.
CPU time. CPU time represents the sum of the total time spent by the CPU running your code.
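The compute duration rules above can be sketched numerically. The constants (50 ms round-up increment, 128 MB minimum function size, GB-seconds units) come from the text; the helper name is ours, and this is a sketch of the stated formula, not Fastly's billing code.

```python
# Sketch of the compute duration formula described above (not official
# billing code): GB-seconds = billed wall-clock time x billed memory.

import math

BILLING_INCREMENT_MS = 50   # wall-clock time rounds up to the nearest 50 ms
MIN_FUNCTION_SIZE_MB = 128  # minimum function size for billing purposes

def billed_gb_seconds(wall_clock_ms: float, memory_mb: float) -> float:
    """Compute duration for one request, per the rules stated in the text."""
    billed_ms = math.ceil(wall_clock_ms / BILLING_INCREMENT_MS) * BILLING_INCREMENT_MS
    billed_mb = max(memory_mb, MIN_FUNCTION_SIZE_MB)
    return (billed_ms / 1000) * (billed_mb / 1024)

# Example: a 30 ms request at 64 MB bills as 50 ms at 128 MB:
# 0.05 s * 0.125 GB = 0.00625 GB-seconds.
example = billed_gb_seconds(30, 64)
```

Note how both rounding rules matter: the short request is billed as a full 50 ms increment, and the small function is billed at the 128 MB floor.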