# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)
##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)
##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Cloud Browser — Monitoring


 The Cloud Browser Monitoring API exposes session-level metrics the same way the [Cloud Browser dashboard](https://scrapfly.io/dashboard/cloud-browser) does: total runs, runtime, bandwidth, success rate, and a per-pool breakdown.

> **Why does Cloud Browser look different?** Cloud Browser is **session-based** (one allocation = one long-lived browser, billed by runtime + bandwidth) — unlike Web Scraping / Screenshot / Extraction / Crawler which are **request-based** (per-API-credit). As a result the response shape and parameter set differ: there is no `domain`, no `aggregation`, no `include_webhook`, no `/target` sub-endpoint. Instead you get an `account_metrics` object + a `proxy_pool_breakdown` array, and a separate `/timeseries` endpoint for per-bucket charts.

> **Enterprise Plan Required** The Monitoring API is only available on **Enterprise** or **Custom** subscriptions. Calls from other plans return `HTTP 402 Payment Required`. See [pricing](https://scrapfly.io/pricing) to upgrade.
> 
>  **Time-window limit:** queries are capped at **90 days** (start to end). Wider windows return `HTTP 403` with `{"reason": "Period too large"}`. Use `period` or split into multiple calls.

> **Quick verification:** Run this curl from your terminal. A valid Enterprise key returns a JSON envelope with `account_metrics`; a non-Enterprise key returns 402; an invalid key returns 401.
>
> ```shell
> curl -sk 'https://api.scrapfly.io/browser/monitoring/metrics?key=&period=last24h' | jq .account_metrics
> ```

### GET `/browser/monitoring/metrics`

Returns Cloud Browser session aggregates for the requested period, plus a per-pool breakdown.

#### Code Examples

**Python**

```python
from scrapfly import ScrapflyClient, ScraperAPI

client = ScrapflyClient(key='')

stats = client.get_browser_monitoring_metrics(
    period=ScraperAPI.MONITORING_PERIOD_LAST_24H,
)
print(stats['account_metrics'])
print(stats['proxy_pool_breakdown'])
```

 

   

 

 

**TypeScript**

```typescript
import { ScrapflyClient } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: '' });

const stats = await client.getBrowserMonitoringMetrics({ period: 'last24h' });
console.log(stats.account_metrics);
console.log(stats.proxy_pool_breakdown);
```

 

   

 

 

**Go**

```go
package main

import (
    "fmt"
    "log"

    scrapfly "github.com/scrapfly/go-scrapfly"
)

func main() {
    client, err := scrapfly.New("")
    if err != nil {
        log.Fatal(err)
    }
    stats, err := client.GetBrowserMonitoringMetrics(scrapfly.CloudBrowserMonitoringOptions{
        Period: scrapfly.MonitoringPeriodLast24h,
    })
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(stats["account_metrics"])
    fmt.Println(stats["proxy_pool_breakdown"])
}
```

 

   

 

 

**Rust**

```rust
use scrapfly_sdk::{Client, CloudBrowserMonitoringOptions, MonitoringPeriod};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder().api_key("").build()?;
    let stats = client
        .get_browser_monitoring_metrics(&CloudBrowserMonitoringOptions {
            period: Some(MonitoringPeriod::Last24h),
            ..Default::default()
        })
        .await?;
    println!("{}", stats["account_metrics"]);
    println!("{}", stats["proxy_pool_breakdown"]);
    Ok(())
}
```

 

   

 

 

**cURL**

```shell
curl 'https://api.scrapfly.io/browser/monitoring/metrics?key=&period=last24h'
```

 

   

 

 

 

#### Parameters

 | Name | Description |
|---|---|
| period | One of `last5m`, `last1h`, `last24h`, `last7d`, `subscription`. Mutually exclusive with `start`/`end`. |
| start | UTC `Y-m-d H:i:s`. Must be paired with `end`. |
| end | UTC `Y-m-d H:i:s`. Must be paired with `start`. |
| proxy\_pool | Optional. Filter to a single proxy pool (e.g. `public_datacenter_pool`, `public_residential_pool`, `byop`). |
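Since `start`/`end` queries are capped at 90 days, wider historical pulls have to be chunked into multiple calls. A minimal sketch using only the Python standard library (the `monitoring_windows` helper name is illustrative, not part of the SDK) that splits an arbitrary range into compliant windows in the UTC `Y-m-d H:i:s` format the parameters table specifies:

```python
from datetime import datetime, timedelta

MAX_WINDOW = timedelta(days=90)          # per-query cap documented above
FMT = "%Y-%m-%d %H:%M:%S"                # UTC `Y-m-d H:i:s`

def monitoring_windows(start: datetime, end: datetime):
    """Yield (start, end) string pairs, each spanning at most 90 days."""
    cursor = start
    while cursor < end:
        chunk_end = min(cursor + MAX_WINDOW, end)
        yield cursor.strftime(FMT), chunk_end.strftime(FMT)
        cursor = chunk_end

# A 181-day range splits into three compliant windows.
windows = list(monitoring_windows(datetime(2026, 1, 1), datetime(2026, 7, 1)))
for w_start, w_end in windows:
    print(w_start, "->", w_end)
```

Each pair can then be passed as the `start`/`end` query parameters of a separate call, and the resulting metrics summed client-side.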

#### Response Shape

```json
{
  "period_start": "2026-04-15T00:00:00Z",
  "period_end":   "2026-04-15T23:59:59Z",
  "account_metrics": {
    "total_runs":              123,
    "success_runs":            120,
    "error_runs":              3,
    "success_rate":            97.56,
    "total_runtime_ms":        452000,
    "avg_runtime_ms":          3674.80,
    "total_bandwidth_up_bytes":  1834000,
    "total_bandwidth_down_bytes": 9876000,
    "total_bandwidth_bytes":     11710000,
    "avg_bandwidth_bytes":       95203.25
  },
  "proxy_pool_breakdown": [
    {
      "proxy_pool":           "public_datacenter_pool",
      "total_runs":           90,
      "success_runs":         88,
      "error_runs":           2,
      "total_runtime_ms":     320000,
      "total_bandwidth_bytes": 8200000
    }
  ]
}
```
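For reporting it can help to derive a few human-friendly figures from this envelope. A sketch assuming the exact field names shown above, where `resp` is the already-decoded JSON (hard-coded here with the sample values for illustration):

```python
# Sample envelope with the documented field names and values.
resp = {
    "account_metrics": {
        "total_runs": 123,
        "success_runs": 120,
        "total_runtime_ms": 452000,
        "total_bandwidth_bytes": 11710000,
    },
    "proxy_pool_breakdown": [
        {"proxy_pool": "public_datacenter_pool", "total_runs": 90},
    ],
}

m = resp["account_metrics"]
print(f"success rate : {100 * m['success_runs'] / m['total_runs']:.2f}%")
print(f"avg runtime  : {m['total_runtime_ms'] / m['total_runs'] / 1000:.2f} s")
print(f"bandwidth    : {m['total_bandwidth_bytes'] / 1_000_000:.1f} MB")

# Per-pool share of total runs, from the breakdown array.
for pool in resp["proxy_pool_breakdown"]:
    share = 100 * pool["total_runs"] / m["total_runs"]
    print(f"{pool['proxy_pool']}: {share:.1f}% of runs")
```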

 

 

### GET `/browser/monitoring/metrics/timeseries`

Returns per-bucket session timeseries for the requested period. Use this to feed time-series charts.

#### Code Examples

**Python**

```python
from scrapfly import ScrapflyClient, ScraperAPI

client = ScrapflyClient(key='')

result = client.get_browser_monitoring_timeseries(
    period=ScraperAPI.MONITORING_PERIOD_LAST_24H,
)
for point in result['series']:
    print(point)
```

 

   

 

 

**TypeScript**

```typescript
import { ScrapflyClient } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: '' });

const result = await client.getBrowserMonitoringTimeseries({ period: 'last24h' });
for (const point of result.series) console.log(point);
```

 

   

 

 

**Go**

```go
package main

import (
    "fmt"
    "log"

    scrapfly "github.com/scrapfly/go-scrapfly"
)

func main() {
    client, err := scrapfly.New("")
    if err != nil {
        log.Fatal(err)
    }
    result, err := client.GetBrowserMonitoringTimeseries(scrapfly.CloudBrowserMonitoringOptions{
        Period: scrapfly.MonitoringPeriodLast24h,
    })
    if err != nil {
        log.Fatal(err)
    }
    for _, point := range result["series"].([]any) {
        fmt.Println(point)
    }
}
```

 

   

 

 

**Rust**

```rust
use scrapfly_sdk::{Client, CloudBrowserMonitoringOptions, MonitoringPeriod};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder().api_key("").build()?;
    let result = client
        .get_browser_monitoring_timeseries(&CloudBrowserMonitoringOptions {
            period: Some(MonitoringPeriod::Last24h),
            ..Default::default()
        })
        .await?;
    for point in result["series"].as_array().unwrap() {
        println!("{}", point);
    }
    Ok(())
}
```

 

   

 

 

**cURL**

```shell
curl 'https://api.scrapfly.io/browser/monitoring/metrics/timeseries?key=&period=last24h'
```
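To feed a charting tool or spreadsheet, the per-bucket series can be flattened to CSV. A sketch using the standard library only; the bucket keys shown here are illustrative assumptions, so adapt the field names to the actual timeseries payload:

```python
import csv
import io

# Hypothetical bucket shape -- substitute the real keys from the response.
series = [
    {"bucket": "2026-04-15T00:00:00Z", "total_runs": 10, "total_bandwidth_bytes": 900000},
    {"bucket": "2026-04-15T01:00:00Z", "total_runs": 14, "total_bandwidth_bytes": 1200000},
]

# Write one CSV row per time bucket.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["bucket", "total_runs", "total_bandwidth_bytes"])
writer.writeheader()
writer.writerows(series)
print(buf.getvalue())
```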

 

   

 

 

 

 

 

### Monitoring for other products

- [Web Scraping](https://scrapfly.io/docs/monitoring) (`origin=WEB_SCRAPING_API`)
- [Screenshot](https://scrapfly.io/docs/screenshot-api/monitoring) (`origin=SCREENSHOT_API`)
- [Extraction](https://scrapfly.io/docs/extraction-api/monitoring) (`origin=EXTRACTION_API`)
- [Crawler](https://scrapfly.io/docs/crawler-api/monitoring) (`origin=CRAWLER_API`)
- [Cloud Browser](https://scrapfly.io/docs/cloud-browser-api/monitoring) (current page, session-based)