Log streaming: Kafka

Fastly's Real-Time Log Streaming feature can send logs to Apache Kafka. Kafka is an open-source, high-throughput, low-latency platform for handling real-time data feeds.

NOTE

Fastly does not provide direct support for third-party services. Read Fastly's Terms of Service for more information.

Before you begin

Before adding Apache Kafka as a logging endpoint for Fastly services, ensure Kafka is running on a remote server. You'll need to know the hostname or IP address of one or more servers (brokers) and the name of the topic (the category or feed to which messages will be written). For more information on setting up Kafka, see the Apache Kafka Quickstart guide.

Each emitted Fastly log constitutes an individual Kafka record, and Fastly follows standard Kafka client behavior, producing records for multiple Kafka partitions in batches. To ensure adequate log message throughput, be sure to adjust the Maximum bytes and Compression codec settings to match the capabilities of your log processing infrastructure.
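As a rough illustration of why these two settings interact (this is not Fastly's implementation; the 8 KB limit and sample records are arbitrary assumptions), the following Python sketch checks whether a batch of log records fits under a byte limit, with and without compression:

```python
import gzip

# Hypothetical limit -- analogous to the "Maximum bytes" setting.
MAX_REQUEST_BYTES = 8 * 1024

def fits_in_one_request(records, compress=True):
    """Return True if a batch of log records, optionally gzip-compressed
    (analogous to selecting a compression codec), fits under the limit."""
    payload = b"\n".join(r.encode("utf-8") for r in records)
    if compress:
        payload = gzip.compress(payload)
    return len(payload) <= MAX_REQUEST_BYTES

logs = ['{"url": "/index.html", "status": 200}'] * 500
print(fits_in_one_request(logs, compress=False))  # a large uncompressed batch may exceed the limit
print(fits_in_one_request(logs, compress=True))   # repetitive JSON compresses well
```

In practice you would tune the limit and codec against the throughput and storage capabilities of your own Kafka brokers rather than a fixed constant.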

Kafka is a shared resource by default, and other users of Kafka within your environment may affect throughput for Fastly logs. If you are operating your own Kafka infrastructure, Fastly recommends designating exclusive disk storage volumes to the partitions of the Kafka topic where you will send logs. You should also monitor IOPS and consumed bandwidth against the storage maximums to anticipate the need to expand capacity.

Adding Kafka as a logging endpoint

Follow these instructions to add Kafka as a logging endpoint:

  1. Review the information in our guide to setting up remote log streaming.
  2. In the Apache Kafka area, click Create endpoint.
  3. Fill out the Create an Apache Kafka endpoint fields as follows:
    • In the Name field, enter a human-readable name for the endpoint.
    • In the Placement area, select where the logging call should be placed in the generated VCL. Valid values are Format Version Default, waf_debug (waf_debug_log), and None. Read our guide on changing log placement for more information.
    • In the Log format field, optionally enter an Apache-style string or VCL variables to use for log formatting. Consult the example format section for details.
    • In the Brokers field, enter the hostname or IP address of one or more servers (Kafka brokers). Port 9092 is used by default. If a broker listens on a different port, append it to the hostname in the format :[port] (e.g., :9093). You can specify multiple servers as a comma-separated list.
    • In the Topic field, enter the name of the topic to send logs to.
    • (Optional) In the Maximum bytes field, enter the maximum size of a Kafka produce request message in bytes.
    • (Optional) From the Parse key-values controls, select whether or not to parse any key-value pairs within the log format into Kafka record headers. Key-value pairs must lead a formatted log line as key=val and must be comma-separated. Neither keys nor values can begin with a double-quote. Keys can be preceded by an arbitrary number of spaces, cannot contain spaces, and must have at least one character. Values can be empty as in key=. Specify a record key for partitioning by including a __record_key key-value pair in your log line. Logs will be partitioned according to Kafka's uniform sticky partitioner strategy. In the absence of a designated record key, Fastly's Kafka client will distribute log messages evenly across the available partitions of a topic.
    • (Optional) In the Write acknowledgement area, select the appropriate write acknowledgement a leader Kafka broker must receive for a produce request to be successful. Fastly's Kafka client will attempt limited redelivery of failed produce requests with exponential backoff and jitter to reduce thundering herd scenarios.
    • (Optional) In the Compression codec area, select the appropriate codec to use for compression of your logs.
    • (Optional) From the Use SASL controls, select whether or not to enable SASL authentication. SASL authentication can be enabled concurrently with TLS encryption. When you select Yes, additional SASL authentication fields appear.
    • From the SASL authentication mechanism menu, select the appropriate challenge-response mechanism to use for authenticating the SASL client authentication username and password.
    • In the User field, enter the SASL client authentication username.
    • In the Password field, enter the SASL client authentication password.
    • (Optional) From the Use TLS controls, select whether or not to enable TLS encryption for the Kafka endpoint. TLS encryption can be enabled concurrently with SASL authentication. When you select Yes, additional TLS fields appear.
    • In the TLS hostname field, optionally enter a hostname to verify the logging destination server's certificate. This should be one of the Subject Alternative Name (SAN) fields for the certificate. Common Names (CN) are not supported. If the hostname is not specified, the hostname of the first broker in the Brokers field will be used. This field only appears when you select Yes from the Use TLS menu.
    • (Optional) In the TLS CA certificate field, copy and paste the certification authority (CA) certificate used to verify that the Kafka broker's certificate is valid. The certificate you upload must be in PEM format. Consider uploading the certificate if it's not signed by a well-known certification authority. This value is not required if your TLS certificate is signed by a well-known authority. This field only appears when you select Yes from the Use TLS menu.
    • (Optional) In the TLS client certificate field, copy and paste the TLS client certificate used to authenticate to the Kafka broker. The TLS client certificate you upload must be in PEM format and must be accompanied by a client key. A TLS client certificate allows your Kafka broker to authenticate that Fastly is performing the connection. This field only appears when you select Yes from the Use TLS menu.
    • (Optional) In the TLS client key field, copy and paste the TLS client key used to authenticate to the Kafka broker. The TLS client key you upload must be in PEM format and must be accompanied by a TLS client certificate. A TLS client key allows your Kafka broker to authenticate that Fastly is performing the connection. This field only appears when you select Yes from the Use TLS menu.
  4. Click Create to create the new logging endpoint.
  5. Click Activate to deploy your configuration changes.
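The key-value parsing rules described in the Parse key-values step above can be sketched in Python. This is an illustrative approximation of the documented behavior, not Fastly's actual parser:

```python
def parse_leading_key_values(line):
    """Split a log line into (headers, record_key, remainder) following the
    documented rules: key=value pairs must lead the line and be
    comma-separated; keys may be preceded by spaces but contain none and
    must have at least one character; neither keys nor values may begin
    with a double quote; values may be empty; a __record_key pair selects
    the Kafka record key used for partitioning."""
    headers, record_key = {}, None
    rest = line
    while rest:
        pair, sep, remainder = rest.partition(",")
        key, eq, value = pair.partition("=")
        key = key.lstrip(" ")
        # Stop at the first token that is not a well-formed key=value pair;
        # everything from here on is the log message itself.
        if not eq or not key or " " in key or key.startswith('"') or value.startswith('"'):
            break
        if key == "__record_key":
            record_key = value
        else:
            headers[key] = value  # value may be empty, as in "key="
        rest = remainder
    return headers, record_key, rest

headers, key, body = parse_leading_key_values(
    'env=prod, dc=iad,__record_key=abc123,{"url": "/index.html"}'
)
# headers -> {"env": "prod", "dc": "iad"}; key -> "abc123"
```

With this input, the two leading pairs become record headers, __record_key becomes the partitioning key, and the trailing JSON is left as the record body.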

Example format

The following is an example format string for sending data to Apache Kafka. Our discussion of format strings provides more information.

{
"timestamp": "%{strftime(\{"%Y-%m-%dT%H:%M:%S%z"\}, time.start)}V",
"client_ip": "%{req.http.Fastly-Client-IP}V",
"geo_country": "%{client.geo.country_name}V",
"geo_city": "%{client.geo.city}V",
"host": "%{if(req.http.Fastly-Orig-Host, req.http.Fastly-Orig-Host, req.http.Host)}V",
"url": "%{json.escape(req.url)}V",
"request_method": "%{json.escape(req.method)}V",
"request_protocol": "%{json.escape(req.proto)}V",
"request_referer": "%{json.escape(req.http.referer)}V",
"request_user_agent": "%{json.escape(req.http.User-Agent)}V",
"response_state": "%{json.escape(fastly_info.state)}V",
"response_status": %{resp.status}V,
"response_reason": %{if(resp.response, "%22"+json.escape(resp.response)+"%22", "null")}V,
"response_body_size": %{resp.body_bytes_written}V,
"fastly_server": "%{json.escape(server.identity)}V",
"fastly_is_edge": %{if(fastly.ff.visits_this_service == 0, "true", "false")}V
}
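A log line rendered from the format string above is itself a JSON object, so a downstream Kafka consumer can decode each record value directly. The sample record below is fabricated for illustration; the deserializer is a sketch suitable for use as a value deserializer in your consumer of choice:

```python
import json

def decode_fastly_record(value: bytes) -> dict:
    """Decode one Kafka record value produced with the JSON format string
    above into a Python dict."""
    return json.loads(value.decode("utf-8"))

# Fabricated example of one rendered log line (field values are made up).
sample = (
    b'{"timestamp": "2023-01-01T12:00:00+0000", "client_ip": "203.0.113.10", '
    b'"url": "/index.html", "response_status": 200, "fastly_is_edge": true}'
)
log = decode_fastly_record(sample)
print(log["response_status"], log["url"])
```

Note that numeric fields such as response_status are emitted unquoted by the format string, so they arrive as JSON numbers rather than strings.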