# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)
##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Batch (Multi-URL Scraping)](https://scrapfly.io/docs/scrape-api/batch)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Schedule](https://scrapfly.io/docs/scrape-api/schedule)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Schedule](https://scrapfly.io/docs/crawler-api/schedule)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Schedule](https://scrapfly.io/docs/screenshot-api/schedule)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [Captcha Solver](https://scrapfly.io/docs/cloud-browser-api/captcha-solver)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
- [Native Browser MCP](https://scrapfly.io/docs/cloud-browser-api/mcp)
- [DevTools Protocol](https://scrapfly.io/docs/cloud-browser-api/cdp-reference)
##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Schedule

 Schedule recurring scrapes through the Web Scraping API. Each schedule pairs a full [scrape configuration](https://scrapfly.io/docs/scrape-api/getting-started) with a recurrence rule and a [webhook](https://scrapfly.io/docs/scrape-api/webhook); every fire is processed asynchronously and the result is delivered to your endpoint.

 Use schedules to refresh a catalog, monitor a list of competitor pages, or sample a target on a fixed cadence without writing a cron job in your own infrastructure.

## Concepts

- **kind**: every Web Scraping API schedule has `kind = "api.scrape"` and stores the scrape configuration under `metadata.scrape_config`. The same scrape parameters that work on the live `/scrape` endpoint work inside a schedule.
- **recurrence**: when the schedule fires next. Either a 5-field cron expression or an interval+unit pair.
- **webhook**: the named webhook that will receive each fire's result. Required, because schedules are asynchronous.
- **status**: `ACTIVE` (firing), `PAUSED` (skipped until resumed), or `CANCELLED` (terminal).
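
The lifecycle above can be sketched as a small transition table. This is a hypothetical model for illustration, not the API's implementation; in particular, it assumes a paused schedule can still be cancelled:

```python
# Hypothetical model of the schedule status lifecycle: CANCELLED is terminal.
TRANSITIONS = {
    "ACTIVE": {"PAUSED", "CANCELLED"},
    "PAUSED": {"ACTIVE", "CANCELLED"},  # assumption: pausing does not block cancel
    "CANCELLED": set(),                 # terminal: no way out
}

def can_transition(current: str, target: str) -> bool:
    """True if moving from `current` to `target` is allowed in this model."""
    return target in TRANSITIONS.get(current, set())
```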
 
> **Webhook is required, and all times are UTC.** Schedules run in the background, so the result is published to a configured webhook rather than returned in the API response. Create a webhook from the [webhook dashboard](https://scrapfly.io/dashboard/webhook) before creating a schedule.
> 
>  Every date and cron expression on this page is evaluated in **UTC**. The scheduler does not support a per-schedule timezone. If you need a local-clock cadence, convert it to UTC when building the cron expression or the `scheduled_date`.
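
A local-clock cadence can be converted to UTC with the standard library; the daily 09:00 America/New_York cadence below is purely illustrative:

```python
from datetime import datetime
from zoneinfo import ZoneInfo

# Hypothetical cadence: fire daily at 09:00 America/New_York.
local = datetime(2026, 4, 27, 9, 0, tzinfo=ZoneInfo("America/New_York"))
utc = local.astimezone(ZoneInfo("UTC"))
cron = f"{utc.minute} {utc.hour} * * *"
print(cron)  # "0 13 * * *" while New York is on daylight time (UTC-4)
```

A fixed UTC cron cannot follow daylight-saving shifts, so recompute the expression whenever the local offset changes.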

## Create a schedule

 `POST /scrape/schedules` creates a new schedule for the authenticated account. The body is your full `scrape_config` plus a recurrence and a webhook name.

**Python**

```python
from scrapfly import ScrapflyClient, CreateScheduleRequest, ScheduleRecurrence

client = ScrapflyClient(key='')

sched = client.create_scrape_schedule(
    {
        'url': 'https://web-scraping.dev/products',
        'render_js': True,
        'asp': True,
        'country': 'us',
    },
    CreateScheduleRequest(
        webhook_name='my-webhook',
        recurrence=ScheduleRecurrence(cron='0 */6 * * *'),
        retry_on_failure=True,
        max_retries=3,
        notes='Refresh product catalog every 6 hours',
    ),
)
print(sched['id'], sched['status'])
```

**TypeScript**

```typescript
import { ScrapflyClient } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: '' });

const sched = await client.createScrapeSchedule(
  {
    url: 'https://web-scraping.dev/products',
    render_js: true,
    asp: true,
    country: 'us',
  },
  {
    webhook_name: 'my-webhook',
    recurrence: { cron: '0 */6 * * *' },
    retry_on_failure: true,
    max_retries: 3,
    notes: 'Refresh product catalog every 6 hours',
  },
);
console.log(sched.id, sched.status);
```

**Go**

```go
package main

import (
    "fmt"
    "log"

    scrapfly "github.com/scrapfly/go-scrapfly"
)

func main() {
    client, err := scrapfly.New("")
    if err != nil {
        log.Fatal(err)
    }

    sched, err := client.CreateScrapeSchedule(
        map[string]interface{}{
            "url":       "https://web-scraping.dev/products",
            "render_js": true,
            "asp":       true,
            "country":   "us",
        },
        &scrapfly.CreateScheduleRequest{
            WebhookName:    "my-webhook",
            Recurrence:     &scrapfly.ScheduleRecurrence{Cron: "0 */6 * * *"},
            RetryOnFailure: true,
            MaxRetries:     3,
            Notes:          "Refresh product catalog every 6 hours",
        },
    )
    if err != nil {
        log.Fatal(err)
    }
    fmt.Println(sched.ID, sched.Status)
}
```

**Rust**

```rust
use scrapfly_sdk::{Client, CreateScheduleRequest, ScheduleRecurrence};
use serde_json::{json, Value};
use std::collections::HashMap;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::builder().api_key("").build()?;

    let mut cfg: HashMap<String, Value> = HashMap::new();
    cfg.insert("url".into(), json!("https://web-scraping.dev/products"));
    cfg.insert("render_js".into(), json!(true));
    cfg.insert("asp".into(), json!(true));
    cfg.insert("country".into(), json!("us"));

    let sched = client.create_scrape_schedule(
        cfg,
        &CreateScheduleRequest {
            webhook_name: "my-webhook".into(),
            recurrence: Some(ScheduleRecurrence {
                cron: Some("0 */6 * * *".into()),
                ..Default::default()
            }),
            retry_on_failure: true,
            max_retries: Some(3),
            notes: Some("Refresh product catalog every 6 hours".into()),
            ..Default::default()
        },
    ).await?;
    println!("{} {}", sched.id, sched.status);
    Ok(())
}
```

**CLI**

```shell
scrapfly --api-key  scrape schedule create \
    --config-inline '{"url":"https://web-scraping.dev/products","render_js":true,"asp":true,"country":"us"}' \
    --webhook my-webhook \
    --cron '0 */6 * * *' \
    --retry-on-failure --max-retries 3 \
    --notes 'Refresh product catalog every 6 hours'
```

**cURL**

```shell
curl -X POST 'https://api.scrapfly.io/scrape/schedules?key=YOUR_API_KEY' \
    -H 'Content-Type: application/json' \
    -d '{
        "scrape_config": {
            "url": "https://web-scraping.dev/products",
            "render_js": true,
            "country": "us",
            "asp": true
        },
        "webhook_name": "my-webhook",
        "recurrence": {
            "cron": "0 */6 * * *"
        },
        "allow_concurrency": false,
        "retry_on_failure": true,
        "max_retries": 3,
        "notes": "Refresh product catalog every 6 hours"
    }'

```


The response is the full schedule object, including the generated `id` you will use for follow-up calls:

```json
{
    "id": "4e7f7470-923d-4193-8b87-4503a0ccd224",
    "kind": "api.scrape",
    "status": "ACTIVE",
    "next_scheduled_date": "2026-04-26T18:00:00Z",
    "scheduled_date": "2026-04-26T18:00:00Z",
    "recurrence": {
        "cron": "0 */6 * * *"
    },
    "metadata": {
        "scrape_config": {
            "url": "https://web-scraping.dev/products",
            "render_js": true,
            "country": "us",
            "asp": true
        },
        "webhook_name": "my-webhook",
        "user_uuid": "...",
        "project_uuid": "...",
        "env": "LIVE"
    },
    "notes": "Refresh product catalog every 6 hours",
    "created_by": "api:scp-live-...",
    "created_at": "2026-04-26T14:56:50Z",
    "updated_at": "2026-04-26T14:56:50Z",
    "cancelled_at": null,
    "allow_concurrency": false,
    "retry_on_failure": true,
    "max_retries": 3,
    "consecutive_failures": 0
}

```


## Recurrence

The `recurrence` object accepts one of two shapes:

- **Cron mode** (`{ "cron": "0 */6 * * *" }`): a 5-field cron expression evaluated in UTC. If both shapes are supplied, cron mode takes precedence.
- **Interval mode** (`{ "interval": 6, "unit": "hour" }`): fixed-interval mode with units `minute`, `hour`, `day`, `week`, `month`.
 
 Both modes accept an optional `ends` object to bound the schedule: `{ "type": "date", "date": "2027-01-01T00:00:00Z" }` stops firing at a specific date, and `{ "type": "count", "count": 10 }` stops after a fixed number of fires.
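
As a quick reference, the two shapes look like this as request fragments (the specific values are illustrative):

```python
# Cron mode: every 6 hours, stopping at a fixed UTC date.
cron_mode = {
    "cron": "0 */6 * * *",
    "ends": {"type": "date", "date": "2027-01-01T00:00:00Z"},
}

# Interval mode: every 6 hours, stopping after 10 fires.
interval_mode = {
    "interval": 6,
    "unit": "hour",  # minute, hour, day, week, month
    "ends": {"type": "count", "count": 10},
}
```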

### scheduled\_date

 `scheduled_date` is the next time the schedule fires, in **UTC**. If you omit it, the schedule fires immediately and then follows the recurrence. To delay the first fire, set `scheduled_date` explicitly as an RFC3339 timestamp such as `2026-04-27T09:00:00Z` (the trailing `Z` declares UTC).
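
One way to produce such a timestamp in Python (the one-day delay is arbitrary):

```python
from datetime import datetime, timedelta, timezone

# Delay the first fire by one day; format as RFC3339 with a literal Z for UTC.
first_fire = datetime.now(timezone.utc) + timedelta(days=1)
scheduled_date = first_fire.strftime("%Y-%m-%dT%H:%M:%SZ")
print(scheduled_date)
```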

## List schedules

`GET /scrape/schedules` returns every Web Scraping API schedule on the account. Optional query parameters:

- `status=ACTIVE` filters by status.
 
```shell
curl 'https://api.scrapfly.io/scrape/schedules?key=YOUR_API_KEY'

```


 For a cross-product view (Web Scraping + Screenshot + Crawler schedules in one list), use `GET /schedules` instead.

## Get a schedule

`GET /scrape/schedules/{id}` returns one schedule by id.

```shell
curl 'https://api.scrapfly.io/scrape/schedules/SCHEDULE_UUID?key=YOUR_API_KEY'

```


## Update a schedule

 `PATCH /scrape/schedules/{id}` updates an active schedule's recurrence, scrape configuration, retry settings, or notes. Only fields supplied in the body are changed; the rest are preserved. Paused or cancelled schedules cannot be modified.

```shell
curl -X PATCH 'https://api.scrapfly.io/scrape/schedules/SCHEDULE_UUID?key=YOUR_API_KEY' \
    -H 'Content-Type: application/json' \
    -d '{
        "recurrence": { "cron": "0 0 * * *" },
        "max_retries": 1
    }'

```


## Pause, resume and execute now

 Pause stops future fires while preserving the schedule definition. Resume recomputes the next fire from the current time so missed ticks are not replayed. Execute now triggers an immediate fire on top of the regular schedule; it respects `allow_concurrency` and is rejected if the same schedule fired in the last five minutes (set `allow_concurrency=true` to bypass).

```shell
# Pause an active schedule (no future fires until resumed)
curl -X POST 'https://api.scrapfly.io/scrape/schedules/SCHEDULE_UUID/pause?key=YOUR_API_KEY'

# Resume a paused schedule
curl -X POST 'https://api.scrapfly.io/scrape/schedules/SCHEDULE_UUID/resume?key=YOUR_API_KEY'

# Trigger an immediate fire regardless of next_scheduled_date
curl -X POST 'https://api.scrapfly.io/scrape/schedules/SCHEDULE_UUID/execute?key=YOUR_API_KEY'

```


## Cancel a schedule

 `DELETE /scrape/schedules/{id}` cancels a schedule. Cancellation is terminal: the row is preserved for audit but no further fires happen and the schedule cannot be resumed. Returns `204 No Content`.

```shell
curl -X DELETE 'https://api.scrapfly.io/scrape/schedules/SCHEDULE_UUID?key=YOUR_API_KEY'

```


## Reliability

- **retry\_on\_failure**: when a fire fails (timeout, upstream error, etc.) the scheduler retries up to `max_retries` times before recording a failure for that occurrence.
- **allow\_concurrency**: when `false` (default), a fire is skipped if the previous one is still running. Set `true` for fire-and-forget schedules where overlap is acceptable.
- **consecutive\_failures**: the response includes a counter of consecutive failed fires. After repeated failures the webhook is treated as unhealthy and the dashboard surfaces it for review.
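
The retry and failure accounting above can be modeled as a toy function; the real scheduler's bookkeeping may differ:

```python
def record_fire(attempts: list[bool], max_retries: int,
                consecutive_failures: int) -> tuple[bool, int]:
    """One fire gets the initial attempt plus up to max_retries retries.
    Returns (fire_succeeded, updated consecutive_failures);
    any success resets the failure counter."""
    succeeded = any(attempts[: 1 + max_retries])
    return succeeded, 0 if succeeded else consecutive_failures + 1
```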
 
## Errors

 Schedule endpoints share a common error envelope. The full description and an example response for each code are in the [Errors section](https://scrapfly.io/docs/scrape-api/errors#scheduler) of the documentation.

- [ERR::SCHEDULER::ALREADY\_CANCELLED](https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULER::ALREADY_CANCELLED "Schedule is already cancelled and cannot be cancelled again.") - Schedule is already cancelled and cannot be cancelled again.
- [ERR::SCHEDULER::BACKEND\_ERROR](https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULER::BACKEND_ERROR "The schedule operation could not be completed. Retry the request; if it persists, contact support.") - The schedule operation could not be completed. Retry the request; if it persists, contact support.
- [ERR::SCHEDULER::CANNOT\_MODIFY](https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULER::CANNOT_MODIFY "Only ACTIVE schedules can be modified. Resume the schedule first (POST /schedules/:id/resume) or recreate it.") - Only ACTIVE schedules can be modified. Resume the schedule first (POST /schedules/:id/resume) or recreate it.
- [ERR::SCHEDULER::CONCURRENCY\_BLOCKED](https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULER::CONCURRENCY_BLOCKED "An execute-now request was blocked because the same schedule fired within the last 5 minutes. Set allow_concurrency=true on the schedule to permit overlapping fires.") - An execute-now request was blocked because the same schedule fired within the last 5 minutes. Set allow\_concurrency=true on the schedule to permit overlapping fires.
- [ERR::SCHEDULER::CONFIG\_ERROR](https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULER::CONFIG_ERROR "Schedule request rejected due to invalid configuration. Common causes: missing or empty webhook_name, webhook_name does not match a webhook configured on the project, malformed scrape_config / screenshot_config / crawler_config, or missing recurrence and scheduled_date.") - Schedule request rejected due to invalid configuration. Common causes: missing or empty webhook\_name, webhook\_name does not match a webhook configured on the project, malformed scrape\_config / screenshot\_config / crawler\_config, or missing recurrence and scheduled\_date.
- [ERR::SCHEDULER::CRAWLER\_NOT\_IMPLEMENTED](https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULER::CRAWLER_NOT_IMPLEMENTED "Crawler schedule dispatch is not yet implemented in the worker. The fire was recorded but no crawl was started.") - Crawler schedule dispatch is not yet implemented in the worker. The fire was recorded but no crawl was started.
- [ERR::SCHEDULER::DISABLED](https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULER::DISABLED "The targeted schedule has been disabled.") - The targeted schedule has been disabled.
- [ERR::SCHEDULER::NOT\_FOUND](https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULER::NOT_FOUND "Schedule not found, or not owned by the authenticated account.") - Schedule not found, or not owned by the authenticated account.
- [ERR::SCHEDULER::QUOTA\_REACHED](https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULER::QUOTA_REACHED "Your subscription's schedule quota is exhausted. Cancel an existing schedule or upgrade your plan to create more.") - Your subscription's schedule quota is exhausted. Cancel an existing schedule or upgrade your plan to create more.
- [ERR::SCHEDULER::WEBHOOK\_DISABLED](https://scrapfly.io/docs/scrape-api/error/ERR::SCHEDULER::WEBHOOK_DISABLED "Schedule fire skipped because its linked webhook is disabled. The schedule has been auto-paused; re-enable the webhook to resume it.") - Schedule fire skipped because its linked webhook is disabled. The schedule has been auto-paused; re-enable the webhook to resume it.