Compute@Edge

      Last updated October 23, 2020

    The Compute@Edge platform compiles your custom code to WebAssembly and runs it at the Fastly edge, using the WebAssembly System Interface, for each compute request. Per-request isolation and lightweight sandboxing create an environment focused on performance and security.

    Serverless isolation technology

    Compute@Edge runs WebAssembly (Wasm) using the Lucet compiler and runtime, which compiles customer Wasm code ahead of time. When Fastly receives a compute request, an instance is created and the serverless function runs, allowing developers to apply custom business logic on demand.
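    The per-request model above can be sketched in plain Rust. This is not the Fastly SDK: the Request and Response structs and the handle function below are stand-ins that only illustrate the shape of a stateless, per-request function.

```rust
// SDK-free sketch of per-request logic on Compute@Edge. In a real
// project the fastly crate's Request/Response types would be used;
// these placeholder structs just make the flow visible.

struct Request {
    path: String,
}

struct Response {
    status: u16,
    body: String,
}

// Each compute request gets a fresh, isolated instance, so this
// function holds no state between calls.
fn handle(req: &Request) -> Response {
    match req.path.as_str() {
        "/" => Response { status: 200, body: "Welcome to the edge!".to_string() },
        "/health" => Response { status: 200, body: "ok".to_string() },
        _ => Response { status: 404, body: "Not found".to_string() },
    }
}

fn main() {
    let req = Request { path: "/health".to_string() };
    let resp = handle(&req);
    println!("{} {}", resp.status, resp.body);
}
```

    In the real SDK, the equivalent function is the service's entry point and is invoked once per compute request.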

    Global deployment

    Deploying to a Compute@Edge service leverages Fastly’s software-defined network and globally distributed points of presence. A single deploy action makes customer logic available across the Fastly network.

    Available programming languages

    By running Wasm on the Fastly network, Compute@Edge creates a serverless environment suitable for multiple programming languages. Fastly collaborates with the Bytecode Alliance and other open source communities to actively grow the number of supported languages. Per-language resources are available on developer.fastly.com.

    Logging endpoint compatibility

    Compute@Edge supports sending user-specified logs to a variety of logging endpoints. Endpoints can be created and managed via manage.fastly.com, and log lines can be written to them from Rust code using the log_fastly crate.
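    In practice, log wiring amounts to naming a configured endpoint and emitting level-tagged lines to it. As a minimal, SDK-free sketch of that shape (the endpoint name "my_s3_logs" and the format_log_line helper are hypothetical; on Compute@Edge the log_fastly crate performs the actual routing):

```rust
// Hypothetical helper showing the shape of an endpoint-tagged log line.
// On Compute@Edge, the log_fastly crate routes such lines to the named
// endpoint configured at manage.fastly.com.
fn format_log_line(endpoint: &str, level: &str, msg: &str) -> String {
    format!("[{endpoint}] {level}: {msg}")
}

fn main() {
    // "my_s3_logs" stands in for a user-configured endpoint name.
    let line = format_log_line("my_s3_logs", "INFO", "request served");
    println!("{line}");
}
```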

    Continuous integration and deployment

    Deployment to the Compute@Edge platform can be accomplished via manage.fastly.com, the Fastly API, or Fastly’s Terraform provider plugin. The Fastly CLI also provides a local toolchain for creating Wasm services and deploying to them.
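    The CLI workflow can be sketched as the following command sequence; this assumes the Fastly CLI is installed and authenticated, and the comments describe the typical role of each step rather than exact output.

```shell
# Illustrative local workflow with the Fastly CLI.
fastly compute init     # scaffold a new Compute@Edge project
fastly compute build    # compile the project to a Wasm package
fastly compute deploy   # upload the package to a Fastly service
```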
