# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Batch (Multi-URL Scraping)](https://scrapfly.io/docs/scrape-api/batch)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Monitoring


Scrapfly offers a detailed real-time monitoring dashboard that logs every scrape request and its result. The dashboard covers the selected Scrapfly project and environment, and it can be filtered and inspected to review overall scraping performance.

[See Your Monitoring Dashboard](https://scrapfly.io/dashboard/monitoring)

> **Logs Retention**
>
> | FREE | DISCOVERY | PRO | STARTUP | ENTERPRISE |
> |---|---|---|---|---|
> | 1 week | 1 week | 2 weeks | 3 weeks | 4 weeks |
> 
> [Screenshots](https://scrapfly.io/docs/scrape-api/screenshot), [Debug](https://scrapfly.io/docs/scrape-api/debug), and [Cache](https://scrapfly.io/docs/scrape-api/cache) data belong to the log and inherit the same retention: as soon as the log is deleted, they are deleted as well.

## Filters

Filters let you sample or search logs from the monitoring section to investigate issues or verify that everything works as expected.

### Time Frame

Eight pre-configured time frames are available (past month, past week, past day, past three hours, past hour, past 30 minutes, past 15 minutes, past 5 minutes). You can also define an arbitrary time frame.

### Dimensions

You can filter the following values:

| Name | Value | Supports Multiple | Description |
|---|---|---|---|
| url | **string** | Yes | Filter by URL; supports the glob operator, e.g. `https://*.wikipedia.org/wiki/*` |
| success | **bool** `true` or `false` | No | Filter for successful or failed requests; failures include status codes >= 500 and network errors |
| domain | **string** | Yes | Filter by domain (TLD+n, includes subdomains); supports the glob operator, e.g. `*.google.com` |
| root\_domain | **string** | Yes | Filter by root domain (TLD+1, excludes subdomains); supports the glob operator, e.g. `goo*.com` |
| method | **string** | Yes | Filter by HTTP method; supported values: GET, PUT, POST, PATCH |
| status\_code | **int** | Yes | Filter by status code, e.g. `401` or `502,504`; wildcards are also supported, e.g. `2**` |
| cost | **int** | No | Filter by the amount of API Credits spent |
| origin | **string** | Yes | Filter by origin; supported values: API, SCHEDULER |
| retries | **int** | No | Filter by the number of retries |
| duration | **int** | No | Filter by request duration |
| error\_code | **string** | Yes | Filter by [error codes](https://scrapfly.io/docs/scrape-api/errors), e.g. `ERR::SCRAPE::SCENARIO_EXECUTION` |
| errored | **bool** `true` or `false` | No | Filter for scrapes that raised an error |

For filters that support multiple values, simply separate the values with `,`, e.g. `status_code=200,204`.

### Chaining Filters & Multiple Values

You can chain multiple filters by separating them with a space.

 ```
status_code=200 host=*.wikipedia.org success=true duration>=5
```

You can filter over multiple values; in this case, the `OR` operator is applied.

 ```
status_code=401,500,502,503
```

## Operators

The following operators are supported; an illustrative query combining them follows the list:

- **`=`** Equal
- **`!`** Not equal
- **`>`** Greater than
- **`>=`** Greater or equal to
- **`<`** Lower than
- **`<=`** Lower or equal to
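
For example, the operators can be combined with the dimensions listed above. The query below is purely illustrative (field names and the wildcard come from the dimensions table; the threshold values are arbitrary) and keeps only failing scrapes that were slow and retried:

```
status_code=5** duration>=10 retries>0 errored=true
```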
 
## API — Web Scraping

> **Enterprise Plan Required** The Monitoring API is only available on **Enterprise** or **Custom** subscriptions. Calls from other plans return `HTTP 402 Payment Required` with `{"reason": "Enterprise or Custom subscription required"}`. See [pricing](https://scrapfly.io/pricing) to upgrade.
> 
>  **Time-window limit:** queries are capped at **90 days** (start to end). Wider windows return `HTTP 403` with `{"reason": "Period too large"}`. Use `period` or split into multiple calls.
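
As a quick, non-official sketch of how a client might surface these two errors (using Python's `requests` against the metrics endpoint documented below; the key placeholder is an assumption):

```python
import requests

API_KEY = "YOUR_API_KEY"  # assumption: substitute your Scrapfly API key

resp = requests.get(
    "https://api.scrapfly.io/scrape/monitoring/metrics",
    params={"key": API_KEY, "aggregation": "account", "period": "last24h"},
)

if resp.status_code == 402:
    # Non-Enterprise plans receive: {"reason": "Enterprise or Custom subscription required"}
    print("Plan upgrade required:", resp.json().get("reason"))
elif resp.status_code == 403:
    # Windows wider than 90 days receive: {"reason": "Period too large"}
    print("Shrink the start/end window:", resp.json().get("reason"))
else:
    resp.raise_for_status()
    print(resp.json().get("account_metrics"))
```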

### Monitoring for other products

- [Web Scraping](https://scrapfly.io/docs/monitoring) (current page): `origin=WEB_SCRAPING_API`
- [Screenshot](https://scrapfly.io/docs/screenshot-api/monitoring): `origin=SCREENSHOT_API`
- [Extraction](https://scrapfly.io/docs/extraction-api/monitoring): `origin=EXTRACTION_API`
- [Crawler](https://scrapfly.io/docs/crawler-api/monitoring): `origin=CRAWLER_API`
- [Cloud Browser](https://scrapfly.io/docs/cloud-browser-api/monitoring): session-based

> **Quick verification — does my key work?** Run this curl from your terminal. A valid Enterprise key returns a JSON envelope with `account_metrics`; a non-Enterprise key returns 402, and an invalid key returns 401.
> 
>  ```
> curl -sk 'https://api.scrapfly.io/scrape/monitoring/metrics?key=&period=last24h&aggregation=account' | jq .account_metrics
> ```

With the Monitoring API you can query aggregate or domain-specific metrics.

- All dates are in UTC.
- JSON and MessagePack response formats are available; use the `accept` header to choose between `application/json` and `application/msgpack`.
- Pass your API key via the `key=xxx` query parameter or the `Authorization: Bearer xxxx` header, as shown in the sketch below.
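
A minimal sketch of those conventions with Python's `requests` (the `Authorization` header and `accept` values come from the list above; the key placeholder is an assumption):

```python
import requests

API_KEY = "YOUR_API_KEY"  # assumption: substitute your Scrapfly API key

resp = requests.get(
    "https://api.scrapfly.io/scrape/monitoring/metrics",
    params={"aggregation": "account", "period": "last24h"},
    headers={
        "Authorization": f"Bearer {API_KEY}",  # alternative to the key= query parameter
        "accept": "application/json",          # or application/msgpack
    },
)
resp.raise_for_status()
print(resp.json()["account_metrics"])
```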
 
###  GET `/scrape/monitoring/metrics` 

 

Retrieve metrics from the current subscription period, with three available aggregation levels: account, project, and target (top 100 targets).

 ```
https://api.scrapfly.io/scrape/monitoring/metrics?key=&aggregation=account&period=last24h
```

#### Code Examples

**Python**

  ```
from scrapfly import ScrapflyClient, ScraperAPI

client = ScrapflyClient(key='')

stats = client.get_monitoring_metrics(
    aggregation=[ScraperAPI.MONITORING_ACCOUNT_AGGREGATION],
    period=ScraperAPI.MONITORING_PERIOD_LAST_24H,
)
print(stats['account_metrics'])
```

**TypeScript**

 ```
import { ScrapflyClient } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: '' });

const stats = await client.getMonitoringMetrics({
    aggregation: ['account'],
    period: 'last24h',
});
console.log(stats.account_metrics);
```

**Go**

 ```
package main

import (
    "fmt"
    "log"

    scrapfly "github.com/scrapfly/go-scrapfly"
)

func main() {
    client, err := scrapfly.New("")
    if err != nil {
        log.Fatal(err)
    }

    stats, err := client.GetMonitoringMetrics(scrapfly.MonitoringMetricsOptions{
        Aggregation: []scrapfly.MonitoringAggregation{scrapfly.MonitoringAggregationAccount},
        Period:      scrapfly.MonitoringPeriodLast24h,
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(stats["account_metrics"])
}
```

**Rust**

 ```
use scrapfly_sdk::{Client, MonitoringAggregation, MonitoringMetricsOptions, MonitoringPeriod};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder().api_key("").build()?;
    let stats = client
        .get_monitoring_metrics(&MonitoringMetricsOptions {
            aggregation: Some(vec![MonitoringAggregation::Account]),
            period: Some(MonitoringPeriod::Last24h),
            ..Default::default()
        })
        .await?;
    println!("{}", stats["account_metrics"]);
    Ok(())
}
```

**cURL**

 ```
curl 'https://api.scrapfly.io/scrape/monitoring/metrics?key=&aggregation=account&period=last24h'
```

#### Parameters

| Name | Description |
|---|---|
| format | `structured` (default) or `prometheus` |
| aggregation | Enable aggregations; possible values: `account` (default), `project`, `target`. Multiple aggregations can be combined at once, e.g. `account,project,target` |
| period | Mutually exclusive with `start` and `end`; possible values: `last5m`, `last1h`, `last24h` (default), `last7d`, `subscription` (*use the current subscription period*) |
| start | Format `Y-m-d H:i:s`, e.g. `2024-01-01 00:00:00`; mutually exclusive with `period`, and `end` must also be set |
| end | Format `Y-m-d H:i:s`, e.g. `2024-01-01 00:00:00`; mutually exclusive with `period`, and `start` must also be set |
| group\_subdomain | Enable (`true`, default) or disable (`false`) subdomain grouping when the `target` aggregation is requested. *When using subdomain grouping, provide the root domain as input, e.g. `web-scraping.dev` and not `www.web-scraping.dev`.* |
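
To illustrate the `start`/`end` and combined-aggregation parameters above, here is a hedged Python sketch that pulls account, project, and target metrics for an explicit window, splitting it into chunks of at most 90 days to stay under the time-window limit (the key placeholder and the chunking helper are assumptions, not part of the official SDK):

```python
from datetime import datetime, timedelta
import requests

API_KEY = "YOUR_API_KEY"   # assumption: substitute your Scrapfly API key
FMT = "%Y-%m-%d %H:%M:%S"  # matches the documented Y-m-d H:i:s format

def fetch_metrics(start: datetime, end: datetime) -> dict:
    """Fetch metrics for one explicit window (must span 90 days or less)."""
    resp = requests.get(
        "https://api.scrapfly.io/scrape/monitoring/metrics",
        params={
            "key": API_KEY,
            "aggregation": "account,project,target",  # combined aggregations
            "start": start.strftime(FMT),
            "end": end.strftime(FMT),
        },
    )
    resp.raise_for_status()
    return resp.json()

# Split a six-month range into <=90-day chunks to avoid the "Period too large" error.
cursor, final_end = datetime(2024, 1, 1), datetime(2024, 7, 1)
while cursor < final_end:
    chunk_end = min(cursor + timedelta(days=90), final_end)
    print(cursor, "->", chunk_end, fetch_metrics(cursor, chunk_end).get("account_metrics"))
    cursor = chunk_end
```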

 

 

###  GET `/scrape/monitoring/metrics/target` 

 

Retrieve metrics and timeseries for a given target domain.

 ```
https://api.scrapfly.io/scrape/monitoring/metrics/target?domain=httpbin.dev&key=&period=last24h
```

#### Code Examples

**Python**

  ```
from scrapfly import ScrapflyClient, ScraperAPI

client = ScrapflyClient(key='')

stats = client.get_monitoring_target_metrics(
    domain='httpbin.dev',
    period=ScraperAPI.MONITORING_PERIOD_LAST_24H,
)
print(stats)
```

**TypeScript**

 ```
import { ScrapflyClient } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: '' });

const stats = await client.getMonitoringTargetMetrics({
    domain: 'httpbin.dev',
    period: 'last24h',
});
console.log(stats);
```

**Go**

 ```
package main

import (
    "fmt"
    "log"

    scrapfly "github.com/scrapfly/go-scrapfly"
)

func main() {
    client, err := scrapfly.New("")
    if err != nil {
        log.Fatal(err)
    }

    stats, err := client.GetMonitoringTargetMetrics(scrapfly.MonitoringTargetMetricsOptions{
        Domain: "httpbin.dev",
        Period: scrapfly.MonitoringPeriodLast24h,
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(stats)
}
```

**Rust**

 ```
use scrapfly_sdk::{Client, MonitoringPeriod, MonitoringTargetMetricsOptions};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder().api_key("").build()?;
    let stats = client
        .get_monitoring_target_metrics(&MonitoringTargetMetricsOptions {
            domain: "httpbin.dev".into(),
            group_subdomain: false,
            period: Some(MonitoringPeriod::Last24h),
            start: None,
            end: None,
        })
        .await?;
    println!("{}", stats);
    Ok(())
}
```

**cURL**

 ```
curl 'https://api.scrapfly.io/scrape/monitoring/metrics/target?domain=httpbin.dev&key=&period=last24h'
```

#### Parameters

| Name | Description |
|---|---|
| domain (required) | Target domain, e.g. `httpbin.dev` or `web-scraping.dev` |
| period | Mutually exclusive with `start` and `end`; possible values: `last5m` (default), `last1h`, `last24h`, `last7d`, `subscription` (*use the current subscription period*) |
| group\_subdomain | Enable (`true`, default) or disable (`false`) subdomain grouping. *When using subdomain grouping, provide the root domain as input, e.g. `web-scraping.dev` and not `www.web-scraping.dev`.* |
| start | Format `Y-m-d H:i:s`, e.g. `2024-01-01 00:00:00`; mutually exclusive with `period`, and `end` must also be set |
| end | Format `Y-m-d H:i:s`, e.g. `2024-01-01 00:00:00`; mutually exclusive with `period`, and `start` must also be set |
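
For a concrete illustration of the target endpoint parameters, here is a hedged Python sketch using the plain HTTP endpoint (the key placeholder is an assumption; parameter names and values come from the table above):

```python
import requests

API_KEY = "YOUR_API_KEY"  # assumption: substitute your Scrapfly API key

# Query target metrics for the last 24 hours; group_subdomain toggles subdomain
# grouping (true is the default per the table above; false is used here for illustration).
resp = requests.get(
    "https://api.scrapfly.io/scrape/monitoring/metrics/target",
    params={
        "key": API_KEY,
        "domain": "web-scraping.dev",
        "group_subdomain": "false",
        "period": "last24h",
    },
)
resp.raise_for_status()
print(resp.json())
```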