# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)

##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Batch (Multi-URL Scraping)](https://scrapfly.io/docs/scrape-api/batch)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Schedule](https://scrapfly.io/docs/scrape-api/schedule)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Schedule](https://scrapfly.io/docs/crawler-api/schedule)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Schedule](https://scrapfly.io/docs/screenshot-api/schedule)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [Captcha Solver](https://scrapfly.io/docs/cloud-browser-api/captcha-solver)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)

##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Schedule

 Schedule recurring crawls. Each schedule pairs a [crawler configuration](https://scrapfly.io/docs/crawler-api/getting-started) with a recurrence rule and a [webhook](https://scrapfly.io/docs/crawler-api/webhook); every fire kicks off a fresh crawl and the result is delivered to your endpoint asynchronously.

 Use schedules to keep an index of a target site fresh, run a weekly site audit, or feed a downstream pipeline with new URLs without writing your own scheduler.

## Concepts

- **kind**: `api.crawler`; the crawler configuration is stored under `metadata.crawler_config`. Every parameter accepted by `/crawl` works inside a schedule (page limits, robots.txt handling, extraction rules, etc.).
- **recurrence**: how often the schedule fires. Either a 5-field cron expression or an interval+unit pair.
- **webhook**: the named webhook that will receive the crawl's status and downloadable artifact. Required.
- **status**: `ACTIVE` (firing), `PAUSED` (skipped until resumed), or `CANCELLED` (terminal).
 
> **Webhook is required and all times are UTC.** Schedules run in the background, so crawl artifacts and metadata are published to a configured webhook rather than returned in the API response. Create a webhook from the [webhook dashboard](https://scrapfly.io/dashboard/webhook) before creating a schedule.
> 
>  Every date and cron expression on this page is evaluated in **UTC**. The scheduler does not support a per-schedule timezone. If you need a local-clock cadence, convert it to UTC when building the cron expression or the `scheduled_date`.

> **Crawl duration.** Crawls take longer than single scrapes. When picking a recurrence, leave enough headroom for one fire to finish before the next is due. If fires may overlap, set `allow_concurrency=true`.
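
Because schedule results arrive asynchronously, the webhook you name must point at an HTTP endpoint you control. As a minimal sketch, a Python/Flask receiver that simply logs whatever the scheduler posts (the route name is an example; the payload shape is covered on the [webhook page](https://scrapfly.io/docs/crawler-api/webhook)):

```python
from flask import Flask, request

app = Flask(__name__)

@app.post("/scrapfly-webhook")  # example route; register its public URL as the webhook
def scrapfly_webhook():
    payload = request.get_json(force=True)  # crawl status + artifact location
    print("schedule fire received:", payload)
    return "", 204  # acknowledge quickly with a 2xx
```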

## Create a schedule

 `POST /crawl/schedules` creates a new schedule for the authenticated account. The body is your full `crawler_config` plus a recurrence and a webhook name.

The same request in Python, TypeScript, Go, Rust, the Scrapfly CLI, and cURL:

```python
from scrapfly import ScrapflyClient, CreateScheduleRequest, ScheduleRecurrence

client = ScrapflyClient(key='')

sched = client.create_crawler_schedule(
    {
        'url': 'https://web-scraping.dev',
        'page_limit': 50,
        'max_depth': 3,
        'follow_external_links': False,
        'respect_robots_txt': True,
        'asp': True,
        'country': 'us',
    },
    CreateScheduleRequest(
        webhook_name='my-webhook',
        recurrence=ScheduleRecurrence(cron='0 3 * * 1'),
        max_retries=1,
        notes='Weekly site refresh, Monday 03:00 UTC',
    ),
)
print(sched['id'], sched['status'])
```

```typescript
import { ScrapflyClient } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: '' });

const sched = await client.createCrawlerSchedule(
  {
    url: 'https://web-scraping.dev',
    page_limit: 50,
    max_depth: 3,
    follow_external_links: false,
    respect_robots_txt: true,
    asp: true,
    country: 'us',
  },
  {
    webhook_name: 'my-webhook',
    recurrence: { cron: '0 3 * * 1' },
    max_retries: 1,
    notes: 'Weekly site refresh, Monday 03:00 UTC',
  },
);
console.log(sched.id, sched.status);
```

```go
client, _ := scrapfly.New("")

sched, err := client.CreateCrawlerSchedule(
    map[string]interface{}{
        "url":                   "https://web-scraping.dev",
        "page_limit":            50,
        "max_depth":             3,
        "follow_external_links": false,
        "respect_robots_txt":    true,
        "asp":                   true,
        "country":               "us",
    },
    &scrapfly.CreateScheduleRequest{
        WebhookName: "my-webhook",
        Recurrence:  &scrapfly.ScheduleRecurrence{Cron: "0 3 * * 1"},
        MaxRetries:  1,
        Notes:       "Weekly site refresh, Monday 03:00 UTC",
    },
)
if err != nil {
    log.Fatal(err)
}
fmt.Println(sched.ID, sched.Status)
```

```rust
let client = Client::builder().api_key("").build()?;

let mut cfg: HashMap<String, Value> = HashMap::new();
cfg.insert("url".into(), json!("https://web-scraping.dev"));
cfg.insert("page_limit".into(), json!(50));
cfg.insert("max_depth".into(), json!(3));
cfg.insert("respect_robots_txt".into(), json!(true));
cfg.insert("asp".into(), json!(true));
cfg.insert("country".into(), json!("us"));

let sched = client.create_crawler_schedule(
    cfg,
    &CreateScheduleRequest {
        webhook_name: "my-webhook".into(),
        recurrence: Some(ScheduleRecurrence { cron: Some("0 3 * * 1".into()), ..Default::default() }),
        max_retries: Some(1),
        notes: Some("Weekly site refresh, Monday 03:00 UTC".into()),
        ..Default::default()
    },
).await?;
```

```bash
scrapfly --api-key  crawl schedule create \
    --config-inline '{"url":"https://web-scraping.dev","page_limit":50,"max_depth":3,"respect_robots_txt":true,"asp":true,"country":"us"}' \
    --webhook my-webhook \
    --cron '0 3 * * 1' \
    --max-retries 1 \
    --notes 'Weekly site refresh, Monday 03:00 UTC'
```

```bash
curl -X POST 'https://api.scrapfly.io/crawl/schedules?key=YOUR_API_KEY' \
    -H 'Content-Type: application/json' \
    -d '{
        "crawler_config": {
            "url": "https://web-scraping.dev",
            "page_limit": 50,
            "max_depth": 3,
            "follow_external_links": false,
            "respect_robots_txt": true,
            "asp": true,
            "country": "us"
        },
        "webhook_name": "my-webhook",
        "recurrence": {
            "cron": "0 3 * * 1"
        },
        "retry_on_failure": false,
        "max_retries": 1,
        "notes": "Weekly site refresh, Monday 03:00 UTC"
    }'

```

## Recurrence

The `recurrence` object accepts one of two shapes:

- **Cron mode** (`{ "cron": "0 3 * * 1" }`): a 5-field cron expression, evaluated in UTC. If both shapes are supplied, cron mode takes precedence.
- **Interval mode** (`{ "interval": 1, "unit": "week" }`): a fixed interval with units `minute`, `hour`, `day`, `week`, or `month`.
 
 Both modes accept an optional `ends` object to bound the schedule: `{ "type": "date", "date": "2027-01-01T00:00:00Z" }` stops at a specific date, and `{ "type": "count", "count": 12 }` stops after a fixed number of fires.
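
For illustration, the two shapes side by side as Python dicts (values are examples; the field names are as documented above):

```python
# Cron mode: the 5 fields are minute, hour, day-of-month, month, day-of-week (UTC).
cron_recurrence = {
    "cron": "0 3 * * 1",                     # Mondays at 03:00 UTC
    "ends": {"type": "count", "count": 12},  # stop after 12 fires
}

# Interval mode: a fixed cadence instead of a cron expression.
interval_recurrence = {
    "interval": 1,
    "unit": "week",  # minute / hour / day / week / month
    "ends": {"type": "date", "date": "2027-01-01T00:00:00Z"},  # stop at this UTC date
}
```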

### scheduled\_date

 `scheduled_date` is the next time the schedule fires, in **UTC**. If you omit it, the schedule fires immediately and then follows the recurrence. To delay the first crawl, set `scheduled_date` explicitly as an RFC3339 timestamp such as `2026-04-27T03:00:00Z` (the trailing `Z` declares UTC).
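
If the first fire is easier to reason about in a local timezone, convert it to UTC before sending. A sketch using Python's standard library (the timezone is just an example):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# First fire at 03:00 New York time on 2026-04-27, as the UTC RFC3339
# string that scheduled_date expects.
local = datetime(2026, 4, 27, 3, 0, tzinfo=ZoneInfo("America/New_York"))
print(local.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ"))
# 2026-04-27T07:00:00Z (New York is UTC-4 on that date)
```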

## List, get, update, delete

- `GET /crawl/schedules` lists every crawler schedule on the account.
- `GET /crawl/schedules/{id}` returns one schedule.
- `PATCH /crawl/schedules/{id}` updates an active schedule (only supplied fields change; paused or cancelled schedules cannot be patched).
- `DELETE /crawl/schedules/{id}` cancels a schedule. Returns `204 No Content`.
 
```bash
curl 'https://api.scrapfly.io/crawl/schedules?key=YOUR_API_KEY'

```

```bash
curl 'https://api.scrapfly.io/crawl/schedules/SCHEDULE_UUID?key=YOUR_API_KEY'

```

```bash
curl -X PATCH 'https://api.scrapfly.io/crawl/schedules/SCHEDULE_UUID?key=YOUR_API_KEY' \
    -H 'Content-Type: application/json' \
    -d '{
        "recurrence": { "cron": "0 3 * * 0" }
    }'

```

```bash
curl -X DELETE 'https://api.scrapfly.io/crawl/schedules/SCHEDULE_UUID?key=YOUR_API_KEY'

```

 For a cross-product view (Web Scraping + Screenshot + Crawler schedules in one list), use `GET /schedules` instead.
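
Without an SDK, the same lifecycle can be driven over plain HTTP. A sketch with Python's `requests` library (endpoints and payloads as documented above; response shapes and error handling are elided):

```python
import requests

API = "https://api.scrapfly.io"
KEY = {"key": "YOUR_API_KEY"}

# List all crawler schedules on the account.
schedules = requests.get(f"{API}/crawl/schedules", params=KEY).json()

# Fetch one schedule by id.
sched = requests.get(f"{API}/crawl/schedules/SCHEDULE_UUID", params=KEY).json()

# Move the weekly fire from Monday to Sunday (cron day-of-week 0).
requests.patch(
    f"{API}/crawl/schedules/SCHEDULE_UUID",
    params=KEY,
    json={"recurrence": {"cron": "0 3 * * 0"}},
)

# Cancel the schedule (returns 204 No Content).
requests.delete(f"{API}/crawl/schedules/SCHEDULE_UUID", params=KEY)
```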

## Pause, resume and execute now

 Pause stops future fires while preserving the schedule definition. Resume recomputes the next fire from the current time so missed ticks are not replayed. Execute now triggers an immediate crawl on top of the regular schedule; it respects `allow_concurrency` and is rejected if the same schedule fired in the last five minutes (set `allow_concurrency=true` to bypass).

```bash
curl -X POST 'https://api.scrapfly.io/crawl/schedules/SCHEDULE_UUID/pause?key=YOUR_API_KEY'
curl -X POST 'https://api.scrapfly.io/crawl/schedules/SCHEDULE_UUID/resume?key=YOUR_API_KEY'
curl -X POST 'https://api.scrapfly.io/crawl/schedules/SCHEDULE_UUID/execute?key=YOUR_API_KEY'

```

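When calling execute-now from code, it can be worth handling the five-minute guard explicitly. A sketch with Python's `requests`, matching on the documented error code (the exact response envelope is an assumption, so the check is kept deliberately loose):

```python
import requests

resp = requests.post(
    "https://api.scrapfly.io/crawl/schedules/SCHEDULE_UUID/execute",
    params={"key": "YOUR_API_KEY"},
)
# ERR::SCHEDULER::CONCURRENCY_BLOCKED is documented for fires within 5 minutes;
# checking the raw body avoids assuming the envelope's exact structure.
if not resp.ok and "CONCURRENCY_BLOCKED" in resp.text:
    print("Fired within the last 5 minutes; retry later or set allow_concurrency=true.")
else:
    resp.raise_for_status()
```
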
 

   

 

## Reliability

- **retry\_on\_failure**: when a crawl fails, the scheduler retries up to `max_retries` times before recording a failure. Retries replay the entire crawl, so set this conservatively (typically 0 or 1; see the sketch after this list).
- **allow\_concurrency**: when `false` (default), a fire is skipped if the previous crawl is still running. Important for crawls because long crawls can overlap their next tick.
- **consecutive\_failures**: the response includes a counter of consecutive failed fires. After repeated failures the webhook is surfaced in the dashboard for review.
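
Put together, a create payload tuned for a long weekly crawl might set these knobs as follows (an illustrative sketch as a Python dict; all field names are documented above):

```python
schedule_request = {
    "webhook_name": "my-webhook",
    "recurrence": {"cron": "0 3 * * 1"},  # weekly; leaves days of headroom
    "retry_on_failure": True,    # replay the whole crawl on failure...
    "max_retries": 1,            # ...but at most once
    "allow_concurrency": False,  # default: skip a fire while the previous crawl runs
}
```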
 
## Errors

 Schedule endpoints share a common error envelope. The full description and example response for each code is in the [Errors section](https://scrapfly.io/docs/crawler-api/errors#scheduler) of the documentation.

- [ERR::SCHEDULER::ALREADY\_CANCELLED](https://scrapfly.io/docs/crawler-api/error/ERR::SCHEDULER::ALREADY_CANCELLED "Schedule is already cancelled and cannot be cancelled again.") - Schedule is already cancelled and cannot be cancelled again.
- [ERR::SCHEDULER::BACKEND\_ERROR](https://scrapfly.io/docs/crawler-api/error/ERR::SCHEDULER::BACKEND_ERROR "The schedule operation could not be completed. Retry the request; if it persists, contact support.") - The schedule operation could not be completed. Retry the request; if it persists, contact support.
- [ERR::SCHEDULER::CANNOT\_MODIFY](https://scrapfly.io/docs/crawler-api/error/ERR::SCHEDULER::CANNOT_MODIFY "Only ACTIVE schedules can be modified. Resume the schedule first (POST /schedules/:id/resume) or recreate it.") - Only ACTIVE schedules can be modified. Resume the schedule first (POST /schedules/:id/resume) or recreate it.
- [ERR::SCHEDULER::CONCURRENCY\_BLOCKED](https://scrapfly.io/docs/crawler-api/error/ERR::SCHEDULER::CONCURRENCY_BLOCKED "An execute-now request was blocked because the same schedule fired within the last 5 minutes. Set allow_concurrency=true on the schedule to permit overlapping fires.") - An execute-now request was blocked because the same schedule fired within the last 5 minutes. Set allow\_concurrency=true on the schedule to permit overlapping fires.
- [ERR::SCHEDULER::CONFIG\_ERROR](https://scrapfly.io/docs/crawler-api/error/ERR::SCHEDULER::CONFIG_ERROR "Schedule request rejected due to invalid configuration. Common causes: missing or empty webhook_name, webhook_name does not match a webhook configured on the project, malformed scrape_config / screenshot_config / crawler_config, or missing recurrence and scheduled_date.") - Schedule request rejected due to invalid configuration. Common causes: missing or empty webhook\_name, webhook\_name does not match a webhook configured on the project, malformed scrape\_config / screenshot\_config / crawler\_config, or missing recurrence and scheduled\_date.
- [ERR::SCHEDULER::CRAWLER\_NOT\_IMPLEMENTED](https://scrapfly.io/docs/crawler-api/error/ERR::SCHEDULER::CRAWLER_NOT_IMPLEMENTED "Crawler schedule dispatch is not yet implemented in the worker. The fire was recorded but no crawl was started.") - Crawler schedule dispatch is not yet implemented in the worker. The fire was recorded but no crawl was started.
- [ERR::SCHEDULER::DISABLED](https://scrapfly.io/docs/crawler-api/error/ERR::SCHEDULER::DISABLED "The targeted schedule has been disabled") - The targeted schedule has been disabled.
- [ERR::SCHEDULER::NOT\_FOUND](https://scrapfly.io/docs/crawler-api/error/ERR::SCHEDULER::NOT_FOUND "Schedule not found, or not owned by the authenticated account.") - Schedule not found, or not owned by the authenticated account.
- [ERR::SCHEDULER::QUOTA\_REACHED](https://scrapfly.io/docs/crawler-api/error/ERR::SCHEDULER::QUOTA_REACHED "Your subscription's schedule quota is exhausted. Cancel an existing schedule or upgrade your plan to create more.") - Your subscription's schedule quota is exhausted. Cancel an existing schedule or upgrade your plan to create more.
- [ERR::SCHEDULER::WEBHOOK\_DISABLED](https://scrapfly.io/docs/crawler-api/error/ERR::SCHEDULER::WEBHOOK_DISABLED "Schedule fire skipped because its linked webhook is disabled. The schedule has been auto-paused; re-enable the webhook to resume it.") - Schedule fire skipped because its linked webhook is disabled. The schedule has been auto-paused; re-enable the webhook to resume it.