# Scrapfly Documentation

## Table of Contents

### Dashboard

- [Intro](https://scrapfly.io/docs)
- [Project](https://scrapfly.io/docs/project)
- [Account](https://scrapfly.io/docs/account)
- [Workspace & Team](https://scrapfly.io/docs/workspace-and-team)
- [Billing](https://scrapfly.io/docs/billing)

### Products

#### MCP Server

- [Getting Started](https://scrapfly.io/docs/mcp/getting-started)
- [Tools & API Spec](https://scrapfly.io/docs/mcp/tools)
- [Authentication](https://scrapfly.io/docs/mcp/authentication)
- [Examples & Use Cases](https://scrapfly.io/docs/mcp/examples)
- [FAQ](https://scrapfly.io/docs/mcp/faq)
##### Integrations

- [Overview](https://scrapfly.io/docs/mcp/integrations)
- [Claude Desktop](https://scrapfly.io/docs/mcp/integrations/claude-desktop)
- [Claude Code](https://scrapfly.io/docs/mcp/integrations/claude-code)
- [ChatGPT](https://scrapfly.io/docs/mcp/integrations/chatgpt)
- [Cursor](https://scrapfly.io/docs/mcp/integrations/cursor)
- [Cline](https://scrapfly.io/docs/mcp/integrations/cline)
- [Windsurf](https://scrapfly.io/docs/mcp/integrations/windsurf)
- [Zed](https://scrapfly.io/docs/mcp/integrations/zed)
- [Roo Code](https://scrapfly.io/docs/mcp/integrations/roo-code)
- [VS Code](https://scrapfly.io/docs/mcp/integrations/vscode)
- [LangChain](https://scrapfly.io/docs/mcp/integrations/langchain)
- [LlamaIndex](https://scrapfly.io/docs/mcp/integrations/llamaindex)
- [CrewAI](https://scrapfly.io/docs/mcp/integrations/crewai)
- [OpenAI](https://scrapfly.io/docs/mcp/integrations/openai)
- [n8n](https://scrapfly.io/docs/mcp/integrations/n8n)
- [Make](https://scrapfly.io/docs/mcp/integrations/make)
- [Zapier](https://scrapfly.io/docs/mcp/integrations/zapier)
- [Vapi AI](https://scrapfly.io/docs/mcp/integrations/vapi)
- [Agent Builder](https://scrapfly.io/docs/mcp/integrations/agent-builder)
- [Custom Client](https://scrapfly.io/docs/mcp/integrations/custom-client)


#### Web Scraping API

- [Getting Started](https://scrapfly.io/docs/scrape-api/getting-started)
- [API Specification]()
- [Monitoring](https://scrapfly.io/docs/monitoring)
- [Customize Request](https://scrapfly.io/docs/scrape-api/custom)
- [Debug](https://scrapfly.io/docs/scrape-api/debug)
- [Anti Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)
- [Proxy](https://scrapfly.io/docs/scrape-api/proxy)
- [Proxy Mode](https://scrapfly.io/docs/scrape-api/proxy-mode)
- [Proxy Mode - Screaming Frog](https://scrapfly.io/docs/scrape-api/proxy-mode/screaming-frog)
- [Proxy Mode - Apify](https://scrapfly.io/docs/scrape-api/proxy-mode/apify)
- [(Auto) Data Extraction](https://scrapfly.io/docs/scrape-api/extraction)
- [Javascript Rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)
- [Javascript Scenario](https://scrapfly.io/docs/scrape-api/javascript-scenario)
- [SSL](https://scrapfly.io/docs/scrape-api/ssl)
- [DNS](https://scrapfly.io/docs/scrape-api/dns)
- [Cache](https://scrapfly.io/docs/scrape-api/cache)
- [Session](https://scrapfly.io/docs/scrape-api/session)
- [Webhook](https://scrapfly.io/docs/scrape-api/webhook)
- [Screenshot](https://scrapfly.io/docs/scrape-api/screenshot)
- [Errors](https://scrapfly.io/docs/scrape-api/errors)
- [Timeout](https://scrapfly.io/docs/scrape-api/understand-timeout)
- [Throttling](https://scrapfly.io/docs/throttling)
- [Troubleshoot](https://scrapfly.io/docs/scrape-api/troubleshoot)
- [Billing](https://scrapfly.io/docs/scrape-api/billing)
- [FAQ](https://scrapfly.io/docs/scrape-api/faq)

#### Crawler API

- [Getting Started](https://scrapfly.io/docs/crawler-api/getting-started)
- [API Specification]()
- [Retrieving Results](https://scrapfly.io/docs/crawler-api/results)
- [WARC Format](https://scrapfly.io/docs/crawler-api/warc-format)
- [Data Extraction](https://scrapfly.io/docs/crawler-api/extraction-rules)
- [Webhook](https://scrapfly.io/docs/crawler-api/webhook)
- [Billing](https://scrapfly.io/docs/crawler-api/billing)
- [Errors](https://scrapfly.io/docs/crawler-api/errors)
- [Troubleshoot](https://scrapfly.io/docs/crawler-api/troubleshoot)
- [FAQ](https://scrapfly.io/docs/crawler-api/faq)

#### Screenshot API

- [Getting Started](https://scrapfly.io/docs/screenshot-api/getting-started)
- [API Specification]()
- [Accessibility Testing](https://scrapfly.io/docs/screenshot-api/accessibility)
- [Webhook](https://scrapfly.io/docs/screenshot-api/webhook)
- [Billing](https://scrapfly.io/docs/screenshot-api/billing)
- [Errors](https://scrapfly.io/docs/screenshot-api/errors)

#### Extraction API

- [Getting Started](https://scrapfly.io/docs/extraction-api/getting-started)
- [API Specification]()
- [Rules Template](https://scrapfly.io/docs/extraction-api/rules-and-template)
- [LLM Extraction](https://scrapfly.io/docs/extraction-api/llm-prompt)
- [AI Auto Extraction](https://scrapfly.io/docs/extraction-api/automatic-ai)
- [Webhook](https://scrapfly.io/docs/extraction-api/webhook)
- [Billing](https://scrapfly.io/docs/extraction-api/billing)
- [Errors](https://scrapfly.io/docs/extraction-api/errors)
- [FAQ](https://scrapfly.io/docs/extraction-api/faq)

#### Proxy Saver

- [Getting Started](https://scrapfly.io/docs/proxy-saver/getting-started)
- [Fingerprints](https://scrapfly.io/docs/proxy-saver/fingerprints)
- [Optimizations](https://scrapfly.io/docs/proxy-saver/optimizations)
- [SSL Certificates](https://scrapfly.io/docs/proxy-saver/certificates)
- [Protocols](https://scrapfly.io/docs/proxy-saver/protocols)
- [Pacfile](https://scrapfly.io/docs/proxy-saver/pacfile)
- [Secure Credentials](https://scrapfly.io/docs/proxy-saver/security)
- [Billing](https://scrapfly.io/docs/proxy-saver/billing)

#### Cloud Browser API

- [Getting Started](https://scrapfly.io/docs/cloud-browser-api/getting-started)
- [Proxy & Geo-Targeting](https://scrapfly.io/docs/cloud-browser-api/proxy)
- [Unblock API](https://scrapfly.io/docs/cloud-browser-api/unblock)
- [File Downloads](https://scrapfly.io/docs/cloud-browser-api/file-downloads)
- [Session Resume](https://scrapfly.io/docs/cloud-browser-api/session-resume)
- [Human-in-the-Loop](https://scrapfly.io/docs/cloud-browser-api/human-in-the-loop)
- [Debug Mode](https://scrapfly.io/docs/cloud-browser-api/debug-mode)
- [Bring Your Own Proxy](https://scrapfly.io/docs/cloud-browser-api/bring-your-own-proxy)
- [Browser Extensions](https://scrapfly.io/docs/cloud-browser-api/extensions)
##### Integrations

- [Puppeteer](https://scrapfly.io/docs/cloud-browser-api/puppeteer)
- [Playwright](https://scrapfly.io/docs/cloud-browser-api/playwright)
- [Selenium](https://scrapfly.io/docs/cloud-browser-api/selenium)
- [Vercel Agent Browser](https://scrapfly.io/docs/cloud-browser-api/agent-browser)
- [Browser Use](https://scrapfly.io/docs/cloud-browser-api/browser-use)
- [Stagehand](https://scrapfly.io/docs/cloud-browser-api/stagehand)

- [Billing](https://scrapfly.io/docs/cloud-browser-api/billing)
- [Errors](https://scrapfly.io/docs/cloud-browser-api/errors)


### Tools

- [Antibot Detector](https://scrapfly.io/docs/tools/antibot-detector)

### SDK

- [Golang](https://scrapfly.io/docs/sdk/golang)
- [Python](https://scrapfly.io/docs/sdk/python)
- [Rust](https://scrapfly.io/docs/sdk/rust)
- [TypeScript](https://scrapfly.io/docs/sdk/typescript)
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy)

### Integrations

- [Getting Started](https://scrapfly.io/docs/integration/getting-started)
- [LangChain](https://scrapfly.io/docs/integration/langchain)
- [LlamaIndex](https://scrapfly.io/docs/integration/llamaindex)
- [CrewAI](https://scrapfly.io/docs/integration/crewai)
- [Zapier](https://scrapfly.io/docs/integration/zapier)
- [Make](https://scrapfly.io/docs/integration/make)
- [n8n](https://scrapfly.io/docs/integration/n8n)

### Academy

- [Overview](https://scrapfly.io/academy)
- [Web Scraping Overview](https://scrapfly.io/academy/scraping-overview)
- [Tools](https://scrapfly.io/academy/tools-overview)
- [Reverse Engineering](https://scrapfly.io/academy/reverse-engineering)
- [Static Scraping](https://scrapfly.io/academy/static-scraping)
- [HTML Parsing](https://scrapfly.io/academy/html-parsing)
- [Dynamic Scraping](https://scrapfly.io/academy/dynamic-scraping)
- [Hidden API Scraping](https://scrapfly.io/academy/hidden-api-scraping)
- [Headless Browsers](https://scrapfly.io/academy/headless-browsers)
- [Hidden Web Data](https://scrapfly.io/academy/hidden-web-data)
- [JSON Parsing](https://scrapfly.io/academy/json-parsing)
- [Data Processing](https://scrapfly.io/academy/data-processing)
- [Scaling](https://scrapfly.io/academy/scaling)
- [Walkthrough Summary](https://scrapfly.io/academy/walkthrough-summary)
- [Scraper Blocking](https://scrapfly.io/academy/scraper-blocking)
- [Proxies](https://scrapfly.io/academy/proxies)

---

# Scrapfly Screenshot API

 The Screenshot API captures screenshots of any web page or of specific parts of a page. It comes with many convenience features such as blocking bypass, viewport settings, full-page capture, auto scroll, banner blocking and much more!

> If you need advanced scraping capabilities for screenshot capture, such as browser interaction, cookies or headers, use the [full Web Scraping API to take screenshots instead](https://scrapfly.io/docs/scrape-api/screenshot). The Screenshot API is a simplified version suited for general screenshot capture.

 This API is designed to be as simple as possible while remaining flexible enough to capture any web page. It is fully controlled through GET requests and URL parameters, making it accessible from any environment.

 A minimal API call is a GET request with the `url` and `key` parameters:

 ```
https://api.scrapfly.io/screenshot?url=https%3A%2F%2Fweb-scraping.dev%2Fproduct%2F1&key=
```
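The same minimal call can be assembled programmatically. A small Python sketch using only the standard library (the helper name and the placeholder key are ours, not part of the API):

```python
from urllib.parse import urlencode

API_ENDPOINT = "https://api.scrapfly.io/screenshot"

def build_screenshot_url(key: str, target_url: str, **params) -> str:
    """Build a Screenshot API GET URL; urlencode() handles the
    required percent-encoding of the target `url` parameter."""
    query = {"key": key, "url": target_url, **params}
    return f"{API_ENDPOINT}?{urlencode(query)}"

# The target URL is percent-encoded automatically:
url = build_screenshot_url("scp-live-xxx", "https://web-scraping.dev/product/1")
print(url)
```

Any extra URL parameter from the specification below (e.g. `format` or `capture`) can be passed as a keyword argument.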

 

   

 

 

 



## On Steroids

- Automatically unblock websites without extra configuration.
- Gzip compression is available through the `Accept-Encoding: gzip` header.
- Direct screenshot results as `jpg`, `png`, `webp` or `gif` using [format](#api_param_format).
- Block ads, pop-ups, modals and banners with [options=block\_banners](#api_param_options).
- Auto scroll to the bottom of the page to load all page details with [auto\_scroll](#api_param_auto_scroll).
- Execute JavaScript code on the page before taking the screenshot using the [js parameter](#api_param_js).
 
## Quality of Life

- All screenshot requests and metadata are automatically tracked on a [Web Dashboard](https://scrapfly.io/docs/monitoring).
- Multi project/scraper support through [Project Management](https://scrapfly.io/docs/project).
- Ability to debug and replay scrape requests from the dashboard log page.
- API [Status page](https://scrapfly.statuspage.io/) with a notification subscription.
- Full API transparency through meta HTTP headers: 
    - **X-Scrapfly-Api-Cost** - API cost billed
    - **X-Scrapfly-Remaining-Api-Credit** - Remaining API credit; if 0, the request is billed in extra credits
    - **X-Scrapfly-Account-Concurrent-Usage** - Current concurrency usage of your account
    - **X-Scrapfly-Account-Remaining-Concurrent-Usage** - Remaining concurrency allowed for the account
    - **X-Scrapfly-Project-Concurrent-Usage** - Concurrency usage of the project
    - **X-Scrapfly-Project-Remaining-Concurrent-Usage** - Remaining project concurrency if a concurrency limit is set on the project; otherwise equal to the account value
    - **X-Scrapfly-Screenshot-Url** - URL of the screenshot stored on Scrapfly's servers for later retrieval
     
     Concurrency is defined by your subscription.
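As a sketch of how these meta headers can be consumed, here is a small Python helper that pulls billing and concurrency numbers out of a response-header mapping (the sample values are taken from the HEAD-request example further down this page; the function name is ours):

```python
def summarize_usage(headers: dict) -> dict:
    """Extract billing and concurrency info from Scrapfly meta headers.
    HTTP header names are case-insensitive, so normalize to lowercase first."""
    h = {k.lower(): v for k, v in headers.items()}
    return {
        "api_cost": int(h["x-scrapfly-api-cost"]),
        "remaining_credit": int(h["x-scrapfly-remaining-api-credit"]),
        "account_concurrency_used": int(h["x-scrapfly-account-concurrent-usage"]),
        "account_concurrency_left": int(h["x-scrapfly-account-remaining-concurrent-usage"]),
    }

# Sample header values as reported by the API in the HEAD-request example below
sample = {
    "X-Scrapfly-Api-Cost": "60",
    "X-Scrapfly-Remaining-Api-Credit": "526904",
    "X-Scrapfly-Account-Concurrent-Usage": "1",
    "X-Scrapfly-Account-Remaining-Concurrent-Usage": "9",
}
print(summarize_usage(sample))
```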
 
## Billing  

 Scrapfly uses a credit system to bill Screenshot API requests.

 Billing is reported in every screenshot response through the `X-Scrapfly-Api-Cost` header and the monitoring dashboard, and can be controlled through Scrapfly's budget settings.

 For more, see [Screenshot Billing](https://scrapfly.io/docs/screenshot-api/billing).

## Errors  

 Scrapfly uses conventional HTTP response codes to indicate the success or failure of an API request.

 **Codes in the 2xx** range indicate success.

 **Codes in the 4xx** range indicate a request that failed given the information provided (e.g., a required parameter was omitted, the operation was not permitted, max concurrency was reached, etc.).

 **Codes in the 5xx** range indicate an error with Scrapfly's servers.

---

 **HTTP 422 - Request Failed** responses provide extra headers to help as much as possible:

- **X-Scrapfly-Reject-Code:** Error code
- **X-Scrapfly-Reject-Description:** URL to the related documentation
- **X-Scrapfly-Reject-Retryable:** Indicates whether the screenshot is retryable
 
> It is important to properly handle HTTP client errors in order to access the error headers and body. These details contain valuable information for troubleshooting, resolving the issue or reaching out to support.
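A minimal Python sketch of such error handling, driven by the `X-Scrapfly-Reject-*` headers above (the decision strings and sample header values are illustrative, not part of the API):

```python
def handle_reject(status: int, headers: dict) -> str:
    """Classify a Screenshot API response: retry, fail, or ok.
    Header names are case-insensitive, so normalize to lowercase."""
    h = {k.lower(): v for k, v in headers.items()}
    if status == 422:
        code = h.get("x-scrapfly-reject-code", "unknown")
        # X-Scrapfly-Reject-Retryable tells us whether retrying makes sense
        retryable = h.get("x-scrapfly-reject-retryable", "no").lower() in ("1", "true", "yes")
        return f"retry:{code}" if retryable else f"fail:{code}"
    if 500 <= status < 600:
        return "retry:server-error"
    return "ok" if status == 200 else f"fail:http-{status}"

print(handle_reject(422, {
    "X-Scrapfly-Reject-Code": "ERR::SCREENSHOT::UNABLE_TO_TAKE_SCREENSHOT",
    "X-Scrapfly-Reject-Retryable": "true",
}))
```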

### HTTP Status Code Summary

| Code | Description |
|---|---|
| 200 - OK | Everything worked as expected. |
| 400 - Bad Request | The request was unacceptable, often due to a missing required parameter, a bad value or a bad format. |
| 401 - Unauthorized | No valid API key provided. |
| 402 - Payment Required | A payment issue occurred and needs to be resolved. |
| 403 - Forbidden | The API key doesn't have permission to perform the request. |
| 422 - Request Failed | The parameters were valid but the request failed. |
| 429 - Too Many Requests | All free quota used, max allowed concurrency reached, or domain throttled. |
| 500, 502, 503 - Server Errors | Something went wrong on Scrapfly's end. |
| 504 - Timeout | The screenshot timed out. |

You can check out the [full error list](https://scrapfly.io/docs/screenshot-api/errors) to learn more.

 

 

 

 

 

---

## Specification

 Scrapfly has loads of features and the best way to discover them is through the specification docs below.

The following **headers are available** with the screenshot response:

- **X-Scrapfly-Upstream-Http-Code:** The HTTP status code of the page
- **X-Scrapfly-Upstream-Url:** The actual URL of the screenshot after any redirection
- **X-Scrapfly-Screenshot-Url:** Screenshot storage URL on Scrapfly's servers. Append `?key=YOUR-SCRAPFLY-KEY` for retrieval

To start, you can try out the API directly using your browser:

 ```
https://api.scrapfly.io/screenshot?url=https%3A%2F%2Fweb-scraping.dev%2Fproduct%2F1&key=
```

 

   

 

 



Or by using curl in your terminal:

 ```
$ curl -G \
--request "GET" \
--url "https://api.scrapfly.io/screenshot" \
--data-urlencode "key=" \
--data-urlencode "url=https://web-scraping.dev/product/1" -o screenshot.jpg
```

 

   

 

 **Command explanation:**

- **`curl -G`**: 
    - `curl` is a command-line tool for transferring data with URLs.
    - `-G` specifies that the request should be a GET request and appends the data specified with `--data-urlencode` as query parameters.
- **`--request "GET"`**: 
    - `--request "GET"` explicitly sets the request method to GET. This is redundant since `-G` already indicates a GET request.
- **URL**: 
    - The URL of the API endpoint being accessed: `https://api.scrapfly.io/screenshot`.
- **`--data-urlencode "key=__API_KEY__"`**: 
    - `--data-urlencode` encodes data as a URL parameter.
    - `"key=__API_KEY__"` is the API key used for authentication.
- **`--data-urlencode "url=https://web-scraping.dev/product/1"`**: 
    - `--data-urlencode` encodes data as a URL parameter.
    - `"url=https://web-scraping.dev/product/1"` is the URL of the web page to be screenshotted.
- **`-o screenshot.jpg`**: 
    - `-o` specifies the output file for the response.
    - `screenshot.jpg` is the name of the file where the screenshot will be saved.
 This will save the results to `screenshot.jpg` in the current directory:

 ```
open screenshot.jpg
```

 

   

 

 



> **Only documents of content-type `text/*` are eligible** for screenshot, otherwise the error [ERR::SCREENSHOT::INVALID\_CONTENT\_TYPE](https://scrapfly.io/docs/screenshot-api/error/ERR::SCREENSHOT::INVALID_CONTENT_TYPE) will be returned.

 With that in mind, now you can explore the API specification to see all features that are available through URL parameters:

   

 [`url`](#api_param_url) 

  required 

    

 

Target URL to capture. [Must be URL-encoded](https://scrapfly.io/web-scraping-tools/urlencode)

`https://example.com/page`

 

 

 [`key`](#api_param_key) 

  required 

    

 

API Key to authenticate the call

`scp-live-xxx...`

 

 

 [`format`](#api_param_format) 

 default: jpg 

    

 

Format of the screenshot image

`jpg` `png` `webp` `gif`

 

 

 [`capture`](#api_param_capture) 

 default: viewport 

    

 

Area to capture: viewport, full page, or CSS selector/XPath for specific element

`viewport` `fullpage` `#header` `//div/img`

 

**Capture modes:**

- `viewport` - Visible screen area only
- `fullpage` - Entire page including scrolled content
- `vertical` - Vertical section of page
- **CSS Selector:** `#header`, `.product-image`
- **XPath:** `//div/img[1]`
 
 When using selectors, only the matching element is captured. Useful for extracting specific page components.

 

  

 [`resolution`](#api_param_resolution) 

 default: 1920x1080 

    

 

Screen resolution (width x height)

`1920x1080` `375x812`

 

**Common resolutions:**

- `1920x1080` - Desktop Full HD (default)
- `1440x900` - Desktop standard
- `375x812` - iPhone X/11/12
- `320x860` - Mobile portrait
- `768x1024` - iPad portrait
 
 Use mobile resolutions to capture responsive/mobile versions of websites.

 

  

 [`country`](#api_param_country) 

  popular default: us 

See the [proxy docs](https://scrapfly.io/docs/scrape-api/proxy#geo).

 

Proxy country location (ISO 3166 alpha-2). Supports multiple, weighted, or exclusions

`us` `us,ca,mx` `-gb`

 

**Country selection modes:**

- **Single country:** `country=us`
- **Multiple countries:** `country=us,ca,mx` (random selection)
- **Exclusions:** `country=-gb` (exclude UK)
- **Weighted:** `country=us:10,gb:5`
 
 Uses ISO 3166-1 alpha-2 country codes. See [available countries by proxy pool](https://scrapfly.io/docs/scrape-api/proxy#geo).

 

  

 [`timeout`](#api_param_timeout) 

 default: 60000 

See the [timeout docs](https://scrapfly.io/docs/scrape-api/understand-timeout).

 

Maximum time allowed in milliseconds (min: 60000, max: 120000)

`60000` `120000`

 

 

  Rendering Options 

 [`rendering_wait`](#api_param_rendering_wait) 

 default: 1000 

See the [JavaScript rendering docs](https://scrapfly.io/docs/scrape-api/javascript-rendering).

 

Delay in milliseconds to wait after page load

`2000` `5000`

 

 

 [`wait_for_selector`](#api_param_wait_for_selector) 

  popular default: null 

See the [JavaScript rendering docs](https://scrapfly.io/docs/scrape-api/javascript-rendering).

 

Wait until CSS selector or XPath is visible before capturing

`body` `#content` `//button`

 

 

 [`options`](#api_param_options) 

 default: null 

    

 

Screenshot flags: `dark_mode`, `block_banners`, `print_media_format`

`block_banners` `dark_mode`

 

**Available options:**

- `dark_mode` - Enable dark theme rendering
- `block_banners` - Remove cookie banners and overlays
- `print_media_format` - Render page in print mode
 
**Combining options:** Use comma-separated values: `options=block_banners,dark_mode`.

 Options can significantly improve screenshot quality by removing distracting elements.

 

  

 [`auto_scroll`](#api_param_auto_scroll) 

 default: false 

    

 

Auto-scroll to bottom to trigger lazy-loaded content

`true` `false`

 

 

 [`js`](#api_param_js) 

 default: null 

See the [JavaScript rendering docs](https://scrapfly.io/docs/scrape-api/javascript-rendering).

 

JavaScript to execute (base64 encoded, max 16KB). [Encode here ](https://scrapfly.io/web-scraping-tools/base64)

`ZG9jdW1lbnQuYm9keS5zdHls...`
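The encoding step is straightforward in any language; a Python sketch (the one-line JavaScript snippet is just an illustration):

```python
import base64

js_code = "document.body.style.background = 'white';"
# The API caps the js payload at 16KB
assert len(js_code.encode("utf-8")) <= 16 * 1024

# Base64-encode the script; the resulting string is passed as the `js` parameter
encoded = base64.b64encode(js_code.encode("utf-8")).decode("ascii")
print(encoded)
```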

 

 

  Caching Options 

 [`cache`](#api_param_cache) 

 default: false 

See the [cache docs](https://scrapfly.io/docs/scrape-api/cache).

 

Enable caching for repeated screenshots of same URL

`true` `false`

 

 

 [`cache_ttl`](#api_param_cache_ttl) 

 default: 86400 

See the [cache TTL docs](https://scrapfly.io/docs/scrape-api/cache#ttl_eviction).

 

Cache time-to-live in seconds

`60` `3600` `86400`

 

 

 [`cache_clear`](#api_param_cache_clear) 

 default: false 

See the [cache TTL docs](https://scrapfly.io/docs/scrape-api/cache#ttl_eviction).

 

Force cache refresh on this request

`true` `false`

 

 

  Accessibility Testing 

 [`vision_deficiency`](#api_param_vision_deficiency) 

  popular default: none 

See the [accessibility docs](https://scrapfly.io/docs/screenshot-api/accessibility).

 

Simulate vision deficiency for accessibility testing (WCAG compliance)

`deuteranopia` `protanopia` `tritanopia`

 

**Available vision deficiency types:**

- `none` - Normal vision (default)
- `deuteranopia` - Red-green color blindness (green-blind), affects ~6% of males
- `protanopia` - Red-green color blindness (red-blind), affects ~2% of males
- `tritanopia` - Blue-yellow color blindness, affects ~0.01%
- `achromatopsia` - Complete color blindness (monochromacy), affects ~0.003%
- `blurredVision` - Blurred/unfocused vision, affects ~2.2B globally
 
 Use this for WCAG 2.2, Section 508, ADA, and European Accessibility Act compliance testing. See [Accessibility Testing Guide](https://scrapfly.io/docs/screenshot-api/accessibility).

 

  

 



## Using HEAD Requests

 The Screenshot API also supports `HEAD` requests for operations that do not need an immediate data stream. This approach can **save significant amounts of bandwidth** and increase capture speed, as no response body is returned, just a URL to the screenshot.

 Scrapfly stores all of your screenshots on its servers so you can download them later in your integrations by storing the screenshot storage URL from the `X-Scrapfly-Screenshot-Url` header.

 ```
$ curl -G \
    --head \
    --url "https://api.scrapfly.io/screenshot" \
    --data-urlencode "key=" \
    --data-urlencode "url=https://web-scraping.dev/product/1"

HTTP/2 200
content-type: image/jpeg
date: Tue, 18 Feb 2025 07:05:29 GMT
vary: Accept-Encoding
x-request-id: 20a07b29-a1d1-4e50-9f64-a39766aee488
x-scrapfly-account-concurrent-usage: 1
x-scrapfly-account-remaining-concurrent-usage: 9
x-scrapfly-api-cost: 60
x-scrapfly-project-concurrent-usage: 1
x-scrapfly-project-remaining-concurrent-usage: 9
x-scrapfly-remaining-api-credit: 526904
x-scrapfly-response-time: 8.510000
x-scrapfly-screenshot-url: https://api.scrapfly.io/scrape/screenshot/01JMBY09ETZSH1ZB6NAHFR8WJP/main
# ^^^ screenshot url available in the headers. Attach `?key=YOUR-SCRAPFLY-KEY` to retrieve the image at any time.
x-scrapfly-upstream-http-code: 200
x-scrapfly-upstream-url: https://web-scraping.dev/product/1
content-length: 340684
```
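The flow above can be sketched in Python: send the HEAD request, keep only the storage URL, and attach your key when you actually need the image. The helper name is ours; the sample header value is the one from the response above.

```python
def retrieval_url(headers: dict, api_key: str) -> str:
    """Build an authenticated retrieval URL from the
    X-Scrapfly-Screenshot-Url header of a HEAD response."""
    h = {k.lower(): v for k, v in headers.items()}
    return f"{h['x-scrapfly-screenshot-url']}?key={api_key}"

# Header value taken from the HEAD response shown above
head_response_headers = {
    "X-Scrapfly-Screenshot-Url":
        "https://api.scrapfly.io/scrape/screenshot/01JMBY09ETZSH1ZB6NAHFR8WJP/main",
}
print(retrieval_url(head_response_headers, "YOUR-SCRAPFLY-KEY"))
```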

 

   

 

 **Command explanation:**

- **`curl -G`**: 
    - `curl` is a command-line tool for transferring data with URLs.
    - `-G` specifies that the request should be a GET request and appends the data specified with `--data-urlencode` as query parameters.
- **`--head`**: 
    - `--head` makes a HEAD request instead of a GET request. This retrieves headers only, without the response body.
- **URL**: 
    - The URL of the API endpoint being accessed: `https://api.scrapfly.io/screenshot`.
- **`--data-urlencode "key=__API_KEY__"`**: 
    - `--data-urlencode` encodes data as a URL parameter.
    - `"key=__API_KEY__"` is the API key used for authentication.
- **`--data-urlencode "url=https://web-scraping.dev/product/1"`**: 
    - `--data-urlencode` encodes data as a URL parameter.
    - `"url=https://web-scraping.dev/product/1"` is the URL of the web page to be screenshotted.
  

 

 Note that screenshot **storage duration depends on your plan's [log retention policy](https://scrapfly.io/docs/monitoring)**, which varies from 1 to 4 weeks.

> All Scrapfly storage URLs still **require authentication**, which can be done by attaching the `?key` parameter with your API key, e.g. `https://api.scrapfly.io/screenshot/01JMBY09ETZSH1ZB6NAHFR8WJP/main?key=YOUR-SCRAPFLY-KEY`

## Related Errors

 All related errors are listed below. You can see the full description and an example error response in the [Errors section](https://scrapfly.io/docs/screenshot-api/errors).

- [ERR::SCREENSHOT::INVALID\_CONTENT\_TYPE](https://scrapfly.io/docs/screenshot-api/error/ERR::SCREENSHOT::INVALID_CONTENT_TYPE "Only content type text/html is supported for screenshot") - Only content type text/html is supported for screenshot
- [ERR::SCREENSHOT::UNABLE\_TO\_TAKE\_SCREENSHOT](https://scrapfly.io/docs/screenshot-api/error/ERR::SCREENSHOT::UNABLE_TO_TAKE_SCREENSHOT "For some reason we were unable to take the screenshot") - For some reason we were unable to take the screenshot
 
## FAQ

### Why are some images missing in my screenshot?

 Some images are loaded dynamically and render slowly. Try setting the `rendering_wait` parameter to a few seconds (e.g. `3000`), or wait for elements to load explicitly using the `wait_for_selector` parameter.

 Some images only load when scrolled into the viewport, so you can try increasing the `resolution` parameter to values bigger than the default `1920x1080`. You can also use `auto_scroll` to have Scrapfly scroll to the very bottom of the page to force image loading.

 Finally, some images can be blocked from loading by modals and banners. Use the `block_banners` flag in the `options` parameter to close any pop-ups, modals or banners, e.g. `options=block_banners`.