Extracting your data

IMPORTANT

This guide only applies to Next-Gen WAF customers with access to the Next-Gen WAF control panel.

Next-Gen WAF stores requests that contain attacks and anomalies, with some qualifications. If you would like to extract this data in bulk for ingestion into your own systems, we offer a request feed API endpoint that provides a feed of recent data, suitable for calling from (for example) an hourly cron.

This functionality is typically used by security operations center (SOC) teams to automatically import data into security information and event management (SIEM) solutions such as Datadog, ELK, and other commercial systems.

Data extraction vs searching

We have a separate API endpoint for searching request data. Its use case is finding requests that meet specific criteria, as opposed to bulk data extraction (both calls are sketched after the table below):

Searching                          Data Extraction
Search using full query syntax     Returns all requests, optionally filtered by signals
Limited to 1,000 requests          Returns all requests
Window: up to 7 days at a time     Window: past 24 hours
Retention: 30 days                 Retention: 24 hours
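
As a quick illustration of the difference, here is a minimal sketch of both calls. The feed endpoint path and its from/until parameters match the full example at the end of this guide; the search endpoint path, its q parameter, and the query syntax are assumptions and may differ from the search API you have access to.

import requests

# Placeholder credentials and corp/site names for illustration only
headers = {'x-api-user': 'user@example.com', 'x-api-token': 'API_TOKEN'}
base = 'https://dashboard.signalsciences.net/api/v0/corps/MY_CORP/sites/MY_SITE'

# Searching: targeted query, capped at 1,000 results (assumed endpoint and q syntax)
search_resp = requests.get(f'{base}/requests', headers=headers,
                           params={'q': 'from:-6h tag:SQLI'})

# Data extraction: bulk feed of all requests in a time window (see the full example below)
feed_resp = requests.get(f'{base}/feed/requests', headers=headers,
                         params={'from': 1445434080, 'until': 1445437680})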

Time span restrictions

The following restrictions are in effect when using this endpoint:

  • The until parameter can be no more recent than five minutes in the past. This gives our data pipeline sufficient time to process incoming requests (see Delayed data below).
  • The from parameter can be no older than 24 hours and five minutes in the past.
  • Both the from and until parameters must fall on full minute boundaries.
  • Both the from and until parameters require Unix timestamps with second-level detail (e.g., 1445437680), as illustrated in the sketch after this list.
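
As a minimal sketch using only the Python standard library, the calculation below produces a from/until pair that satisfies these restrictions: both values land on full minute boundaries, until is five minutes in the past, and the window covers the hour before that.

import calendar
from datetime import datetime, timedelta, timezone

# Align "now" to the most recent full minute, then step back five minutes
# so until is never more recent than the enforced delay.
now = datetime.now(timezone.utc).replace(second=0, microsecond=0)
until_dt = now - timedelta(minutes=5)
from_dt = until_dt - timedelta(hours=1)   # any window within the past 24 hours works

# Convert to Unix timestamps with second-level detail, e.g. 1445437680
until_ts = calendar.timegm(until_dt.utctimetuple())
from_ts = calendar.timegm(from_dt.utctimetuple())
print(from_ts, until_ts)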

Delayed data

A five-minute delay is enforced to build in time to collect and aggregate data across all of your running agents, and then ingest, analyze, and augment the data in our systems. The five-minute delay is a tradeoff between timeliness and completeness of the data.

Pagination

This endpoint returns data in batches of 1,000 requests by default, or of the size specified in the limit query parameter. If the specified time span contains more requests than the batch size, the response includes a next URL for retrieving the next batch. Each next URL is valid for one minute from the time it is generated.

Retrieved data can vary in size, sometimes greatly. To avoid exceeding URL size limitations, send the next parameter and its value as POST parameters in a POST request using a Content-Type of application/x-www-form-urlencoded.
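
A minimal, self-contained sketch of this pagination step follows; the next.uri field and the feed endpoint path mirror the full example at the end of this guide, while the environment variables and timestamps are the same placeholders used there.

import os
import requests

headers = {
    'x-api-user': os.environ['NGWAF_USER_EMAIL'],
    'x-api-token': os.environ['NGWAF_TOKEN'],
}
feed_url = ("https://dashboard.signalsciences.net/api/v0/corps/"
            f"{os.environ['CORP_NAME']}/sites/{os.environ['SITE_NAME']}/feed/requests")

# First batch via GET; placeholder timestamps for brevity
first = requests.get(feed_url, headers=headers,
                     params={'from': 1445434080, 'until': 1445437680}).json()

next_uri = first.get('next', {}).get('uri', '')
if next_uri:
    # Send the next token form-encoded in the POST body to keep the URL short
    post_headers = {**headers, 'Content-Type': 'application/x-www-form-urlencoded'}
    second = requests.post(feed_url, headers=post_headers,
                           data={'next': next_uri.split('next=')[-1]}).json()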

Sort order

As a result of our data warehousing implementation, the data you get back from this endpoint will be complete for the time span specified, but is not guaranteed to be sorted. Once all data for the given time span has been accumulated, it can be sorted using the timestamp field, if necessary.
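
For instance, once all pages for a time span have been accumulated (as the fetch_paginated_data function in the example below does), a client-side sort might look like the following; the record layout here is a hypothetical stand-in, with timestamp assumed to be a sortable value on each request record.

# Toy records standing in for the feed's request objects; only the timestamp
# field matters here, and its exact format is whatever the feed returns.
data = [
    {'id': 'req-2', 'timestamp': '2015-10-21T14:30:00Z'},
    {'id': 'req-1', 'timestamp': '2015-10-21T14:28:00Z'},
]

# Sort the accumulated records by their timestamp field
data.sort(key=lambda record: record['timestamp'])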

Rate limiting

Limits for concurrent connections to this endpoint:

  • Two per site (also known as a workspace)
  • Five per corp (also known as an account); see the concurrency sketch after this list
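
If you pull the feed for several sites in the same corp in parallel, one simple way to stay under these limits is to cap concurrency on the client side. The sketch below uses only the standard library; the site names and the fetch_site_feed helper are hypothetical placeholders, and the worker cap of five matches the per-corp limit above.

from concurrent.futures import ThreadPoolExecutor

site_names = ['www-prod', 'api-prod']   # hypothetical site names in one corp

def fetch_site_feed(site_name):
    # Placeholder: fetch the feed for one site, e.g. with a function like
    # fetch_paginated_data in the example below (one connection per site).
    return []

# Cap concurrent feed calls at the per-corp limit of five connections
with ThreadPoolExecutor(max_workers=5) as pool:
    results = list(pool.map(fetch_site_feed, site_names))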

Example usage

A common way to use this endpoint is to set up a cron that runs at 5 minutes past each hour and fetches the previous full hour's worth of data. In the example below, we calculate the previous full hour's start and end timestamps and use them to call the API.

Python

import requests
import os
import json
import calendar
from datetime import datetime, timedelta, timezone

# Read credentials and the target corp/site from environment variables
NGWAF_EMAIL = os.getenv('NGWAF_USER_EMAIL')
NGWAF_TOKEN = os.getenv('NGWAF_TOKEN')
NGWAF_CORP = os.getenv('CORP_NAME')
NGWAF_SITE = os.getenv('SITE_NAME')

if not NGWAF_EMAIL or not NGWAF_TOKEN or not NGWAF_CORP or not NGWAF_SITE:
    raise EnvironmentError("Please set the NGWAF_USER_EMAIL, NGWAF_TOKEN, CORP_NAME, and SITE_NAME environment variables.")

# Base URL for the API
base_url = 'https://dashboard.signalsciences.net/api/v0'

# Set up headers with authentication
headers = {
    'x-api-user': NGWAF_EMAIL,
    'x-api-token': NGWAF_TOKEN
}

# Calculate UTC timestamps for the previous full hour
until_time = datetime.now(timezone.utc).replace(minute=0, second=0, microsecond=0)
from_time = until_time - timedelta(hours=1)
until_time = calendar.timegm(until_time.utctimetuple())
from_time = calendar.timegm(from_time.utctimetuple())

# Set up the initial URL for the GET request
get_url = f'{base_url}/corps/{NGWAF_CORP}/sites/{NGWAF_SITE}/feed/requests?from={from_time}&until={until_time}'

# Debugging: print the URL and timestamps
print(f"Fetching data from: {get_url}")
print(f"from_time: {from_time}, until_time: {until_time}")


def fetch_paginated_data(url):
    data_list = []
    post_url = f'{base_url}/corps/{NGWAF_CORP}/sites/{NGWAF_SITE}/feed/requests'

    # Fetch the first batch with a GET request
    response_raw = requests.get(url, headers=headers)
    if response_raw.status_code != 200:
        raise RuntimeError(f"Failed to fetch data from {url}. Status Code: {response_raw.status_code}")
    response = response_raw.json()
    data_list.extend(response.get('data', []))
    next_uri = response.get('next', {}).get('uri', '')

    # Follow next URLs with POST requests until no further pages remain
    while next_uri:
        # Extract the next parameter from the URI
        next_value = next_uri.split('next=')[-1]
        post_data = {'next': next_value}
        # Send the next value form-encoded in the POST body to avoid URL size limits
        post_headers = {**headers, 'Content-Type': 'application/x-www-form-urlencoded'}
        post_response_raw = requests.post(post_url, headers=post_headers, data=post_data)
        if post_response_raw.status_code != 200:
            raise RuntimeError(f"Failed to fetch paginated data from {post_url}. Status Code: {post_response_raw.status_code}")
        post_response = post_response_raw.json()
        data_list.extend(post_response.get('data', []))
        next_uri = post_response.get('next', {}).get('uri', '')

    return data_list


# Fetch data
data = fetch_paginated_data(get_url)

# Output the data or save to a file, etc.
print(json.dumps(data, indent=4))