// PRODUCT

Screenshot API

Render any URL to a pixel-perfect image. Viewport control, wait conditions, CSS injection, and full anti-bot bypass in one API call.

Real browsers on every capture. Anti-bot bypass built in.

  • Clean, ad-free screenshots. Block cookie banners, ads, and overlays with built-in page modification options before the shutter fires.
  • Capture on your terms. Full page, above-fold, specific element by CSS or XPath selector, custom viewport and DPR, PNG/JPEG/WebP/GIF/PDF output.
1,000 free credits. No credit card required.

55k+

developers using Scrapfly APIs

98%

capture success on protected targets

5

output formats: PNG, JPEG, WebP, GIF, PDF

1s

typical capture time from browser pool


CAPABILITIES

Every Capture Scenario Covered

Viewport, wait conditions, page modifications, schedules, anti-bot bypass. All composable on one endpoint.

Capture Pipeline

Every request flows through a composable capture pipeline. Anti-bot bypass, ad removal, JS scenarios, and format optimization activate only when you set the relevant parameter. Unused layers add zero latency.

  • Real Browser (Scrapium): full Chromium render, JS execution, lazy-load forced via auto_scroll
  • Ad-block + Banner Dismiss: the block_banners preset removes cookie popups, overlays, and ads before the shutter fires
  • JS Scenario: click, fill, scroll, wait_for_selector, wait_until - executed before capture
  • Capture: full_page, viewport, or a CSS/XPath element selector - custom width, height, DPR
  • Optimize: PNG lossless, JPEG/WebP quality-tunable, PDF - server-side compression
  • Deliver: image bytes in the response or a cached URL via log_url, configurable TTL up to 7 days
5 formats PNG, JPEG, WebP, GIF, PDF
Any viewport desktop, tablet, mobile
Full page auto scroll-to-bottom
0 credits on failed capture

View Screenshot API docs →

Wait Conditions

Capture the page at exactly the right moment. Wait for a CSS selector to appear, for network activity to go idle, or hold for a fixed rendering delay in milliseconds before the shutter fires.

Selector wait_for_selector
Network idle wait_until
Fixed ms rendering_wait
networkidle
domcontentloaded
wait_for_selector
rendering_wait

Page Modifications Before Capture

Inject custom CSS to hide cookie banners, ads, and overlays before capture. Use the options parameter with built-in presets like block_banners and dark_mode, or supply your own stylesheet. Run any JS scenario step - click, fill a form field, scroll to position - before the shutter fires.

Block banners cookie popups, ads
Dark mode built-in preset
Custom CSS hide any element
JS scenario click, fill, scroll
block_banners
dark_mode
custom CSS inject
click
fill
scroll
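
The options presets and custom CSS ride along as plain query parameters. A minimal sketch of the GET request they produce, using only the Python standard library (the comma-joined serialization of multiple options is an assumption; the SDKs handle this for you):

```python
from urllib.parse import urlencode

# Build the screenshot request URL by hand to see the shape of the call.
# Parameter names follow the examples on this page; the exact
# serialization of multiple options is an assumption.
BASE = "https://api.scrapfly.io/screenshot"

params = {
    "key": "API KEY",
    "url": "https://web-scraping.dev/login?cookies",
    # built-in presets stack: strip banners, then force dark mode
    "options": ",".join(["block_banners", "dark_mode"]),
}
request_url = f"{BASE}?{urlencode(params)}"
print(request_url)
```

The sketch shows there is no hidden state - every page modification is one more parameter on the same endpoint.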

Anti-Bot Bypass

Add asp=true and the same bypass engine powering the Web Scraping API handles Cloudflare, DataDome, Akamai, and more. TLS fingerprint, JS challenges, and behavioral signals are solved server-side before the screenshot is taken. No extra code on your end.

JA3/JA4 TLS fingerprint
HTTP/2 SETTINGS frame
Behavioral mouse + scroll
Free retries challenges don't cost

View full bypass catalog →

Viewport and Resolution Control

Set any width and height for the browser viewport before capture. Use resolution to pass custom dimensions. Set dpr for retina (2x/3x) output. Combine with auto_scroll=true to force lazy-loaded content into the frame. Preset device sizes cover desktop, tablet, and mobile without pixel math.

Desktop 1920 x 1080 default
Tablet 768 x 1024
Mobile 375 x 812
2x / 3x DPR retina output
resolution param
dpr param
auto_scroll

View viewport docs →
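
The resolution parameter is a plain "WIDTHxHEIGHT" string. A hypothetical helper (not part of the SDK) to validate one before sending, useful when viewport sizes come from user input or config:

```python
# Hypothetical validation helper - not part of the Scrapfly SDK.
def parse_resolution(value: str) -> tuple[int, int]:
    """Split a 'WIDTHxHEIGHT' string into positive integers."""
    width, height = value.lower().split("x")
    w, h = int(width), int(height)
    if w <= 0 or h <= 0:
        raise ValueError(f"invalid resolution: {value!r}")
    return w, h

print(parse_resolution("375x812"))    # mobile preset -> (375, 812)
print(parse_resolution("1920x1080"))  # desktop preset -> (1920, 1080)
```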

Output Formats

Five formats via the format parameter. PNG is lossless and the default. JPEG and WebP accept a quality value (1-100) for size tuning. PDF renders the full page as a printable document. All formats are returned as binary in the response body.

PNG lossless default
JPEG/WebP quality tunable
PDF from URL
png
jpeg
webp
pdf
gif

Scheduled Captures and Visual Diff

Schedule recurring captures on any cron interval via the dashboard or API. Each run is stored and compared to the previous one. Pixel-level diff highlighting shows exactly what changed on the page. Suited for visual regression testing, competitive page monitoring, and AI vision training datasets.

Cron any interval
Pixel diff per-run comparison
Alert on change
Stored history retained
Visual regression testing
Competitive monitoring
AI vision training data

View monitoring docs →
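
Under the hood, "pixel-level diff" is conceptually a per-pixel comparison between the stored run and the new one. A toy illustration of the idea (not Scrapfly's implementation):

```python
# Conceptual sketch of a pixel diff between two capture runs.
# Real image diffing decodes PNG/JPEG data; here frames are plain
# lists of (r, g, b) tuples to keep the idea visible.

def changed_fraction(prev, curr):
    """Return the fraction of pixels that differ between two frames."""
    if len(prev) != len(curr):
        raise ValueError("frames must share dimensions")
    changed = sum(1 for a, b in zip(prev, curr) if a != b)
    return changed / len(prev)

# A 2x2 capture where one pixel changed between runs:
run_1 = [(255, 255, 255)] * 4
run_2 = [(255, 255, 255)] * 3 + [(200, 30, 30)]
print(changed_fraction(run_1, run_2))  # 0.25
```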

Server-Side Cache

Cache captured images on Scrapfly infrastructure with a configurable TTL. Repeat requests return the stored image at 0 credits. Max TTL is 7 days.

0 credits on hit
7 days max TTL

Request Observability

Every capture returns a log_url. Inspect the full request timeline, rendered HTML, final URL after redirects, and load timings in the dashboard.

log_url per capture
Timeline full trace

Screenshot + Data in One Call

The Screenshot API is focused on image capture. For structured data from the same page in one call, the Web Scraping API supports screenshots[name]=fullpage alongside HTML extraction - data and image in a single request. For session-based workflows (login flows, multi-step navigation) before a capture, the Cloud Browser gives you a persistent Chromium session you drive directly.
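
For the combined call, the screenshots[name]=fullpage parameter hangs off the Web Scraping API rather than this endpoint. A stdlib sketch of the request shape (the endpoint path and bracket-key serialization are assumptions for illustration; "main" is an arbitrary name you choose for the returned screenshot):

```python
from urllib.parse import urlencode

# Combined scrape + screenshot request against the Web Scraping API.
# Endpoint path and parameter serialization are illustrative.
params = {
    "key": "API KEY",
    "url": "https://web-scraping.dev/product/1",
    "asp": "true",                    # anti-bot bypass for the scrape
    "screenshots[main]": "fullpage",  # image named "main" alongside HTML
}
scrape_url = "https://api.scrapfly.io/scrape?" + urlencode(params)
print(scrape_url)
```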

Pay Only for Delivered Images

Failed captures don't consume credits. Timeouts, upstream errors, and bypass failures are free. You pay for the image, not the attempt. Use cost_budget to cap the maximum credit spend per request.

0 credits on fail
cost_budget per-request cap

View billing docs →

Developer Tools

Test selector targeting before writing a single line of capture code. Verify the CSS or XPath expression you plan to pass to the capture parameter against a live page, then copy it straight into your API call.

Browse all developer tools →


CODE

Capture Any Page in Two Lines

Pick a capture style, pick a language. Real examples, real endpoints.

Default PNG capture of a page. All options optional.

from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/login?cookies',
        # use one of many modifiers to modify the page
        options=["block_banners"],
    )
)
client.save_screenshot(api_response, "sa_options")
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/reviews',
        // use one of many modifiers to modify the page
        options: ['block_banners']
    })
);

console.log(api_result.image);
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/login?cookies \
options==block_banners

Wait for a selector, network idle, or custom JS before capturing.

from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/reviews',
        # wait for specific element to appear
        wait_for_selector=".review",
        # or for a specific time
        rendering_wait=3000,  # 3 seconds
    )
)
client.save_screenshot(api_response, "sa_flow")
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/reviews',
        // wait for specific element to appear
        wait_for_selector: ".review",
        // or for a specific time
        rendering_wait: 3000,  // 3 seconds
    })
);

console.log(api_result.image);
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/reviews \
wait_for_selector==.review \
rendering_wait==3000

PNG, JPEG, WebP, GIF, or PDF output. Configurable quality and compression.

from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        # directly capture in your file type
        format="jpg",
        # jpg, png, webp, gif, or pdf
    )
)
client.save_screenshot(api_response, "sa_format")
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        // directly capture in your file type
        format: "jpg", 
        // jpg, png, webp, gif, or pdf
    })
);

console.log(api_result.image);
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
format==jpg

Retina (2x/3x DPR), custom viewport dimensions, device pixel ratios.

from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        # set viewport resolution: desktop, mobile, tablet
        # resolution="1920x1080",  # desktop (default)
        # resolution="375x812",    # mobile
        resolution="1024x768",     # tablet
    )
)
client.save_screenshot(api_response, "sa_resolution")
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        // set viewport: desktop, mobile, or tablet
        // resolution: "1920x1080",  // desktop (default)
        // resolution: "375x812",    // mobile
        resolution: "1024x768",     // tablet
    })
);

console.log(api_result.image);
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
resolution==1920x1080

Full-page, viewport-only, or specific CSS-selector element capture.

from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        # use an XPath or CSS selector to choose what to screenshot
        capture='#reviews',
        # force scrolling to the bottom of the page to load all areas
        auto_scroll=True,
    )
)
client.save_screenshot(api_response, "sa_areas")
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        // use an XPath or CSS selector to choose what to screenshot
        capture: '#reviews',
        // force scrolling to the bottom of the page to load all areas
        auto_scroll: true,
    })
);

console.log(api_result.image);
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
capture==#reviews \
auto_scroll==true

Server-side cache with TTL. Repeat captures are free and instant.

from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        # enable cache
        cache=True,
        # optionally set expiration
        cache_ttl=3600, # 1 hour
        # or clear cache any time
        # cache_clear=True,
    )
)
client.save_screenshot(api_response, "sa_cache")
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        // enable cache
        cache: true,
        // optionally set expiration
        cache_ttl: 3600, // 1 hour
        // or clear cache any time
        // cache_clear: true,
    })
);

console.log(api_result.image);
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
cache==true \
cache_ttl==3600

Anti-bot bypass runs automatically on every capture - the same engine behind the Web Scraping API's asp=true flag. See the bypass catalog.

from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        # auto anti-bot bypass, no extra configuration needed
    )
)
client.save_screenshot(api_response, "sa_bypass")
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        // auto anti-bot bypass, no extra configuration needed
    })
);

console.log(api_result.image);
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1

LEARN

Docs, Tools, and Ready-Made Examples

Everything you need to go from first capture to production screenshot pipeline.

API Reference

Every parameter, every response field, with runnable cURL examples for the screenshot endpoint.

Developer Docs →

Academy

Interactive courses on web scraping, anti-bot bypass, and browser automation.

Start learning →

Open-Source Examples

Production-ready screenshot scripts on GitHub. Copy, paste, and customize for your target.

Explore repo →

Developer Tools

Selector tester, cURL-to-Python converter, antibot detector, and more.

Browse tools →

// INTEGRATIONS

Seamlessly integrate with frameworks & platforms

Plug Scrapfly into your favorite tools, or build custom workflows with our first-class SDKs.


FAQ

Frequently Asked Questions

What is the Scrapfly Screenshot API?

A hosted service that renders any URL in a real browser and returns the image. One API call delivers a PNG, JPEG, WebP, GIF, or PDF with no browser infrastructure to manage on your end. Anti-bot bypass, proxy rotation, and page modification are all available as parameters.

How does anti-bot bypass work for screenshots?

The same bypass engine used in the Web Scraping API is available here. Set asp=true and the engine selects the right TLS fingerprint, browser profile, and proxy location for the target. Cloudflare, DataDome, Akamai, and PerimeterX challenges are solved server-side before the screenshot is taken.

Can I capture only part of a page?

Yes. Use the capture parameter with a CSS selector or XPath expression and the API crops to that element. Combine with auto_scroll=true to force lazy content into view before capture.

How do I remove cookie banners and ads from screenshots?

Pass options=["block_banners"] to activate the built-in banner blocker. For more control, inject custom CSS via the custom_css parameter to hide any element by selector before the shutter fires.

What output formats are supported?

PNG (lossless, default), JPEG, WebP, GIF, and PDF are all available via the format parameter. PNG and WebP are recommended for content with text; JPEG for photographic pages where file size matters.

Can I schedule recurring captures for visual monitoring?

Yes. Create a scheduled job via the dashboard or API and set the capture on any cron interval. Each run is stored and compared to the previous one. Pixel-level diff highlighting shows exactly what changed, making it suitable for visual regression testing and competitive page monitoring.

What happens if a capture fails?

Failed captures don't consume credits. Timeouts, upstream errors, and bypass failures are all free retries. Every response includes a log_url that opens the full request timeline in the dashboard so you can diagnose exactly what happened.


// PRICING

Transparent, usage-based pricing

One plan covers the full Scrapfly platform. Pick a monthly credit budget; every API shares the same credit pool. No per-product lock-in, no surprise line items.

Free tier

1,000 free credits on signup. Roughly 65 screenshots, no card required.

Pay on success

Failed captures are always free. You only pay for delivered images.

No lock-in

Upgrade, downgrade, or cancel anytime. No contract.

Need the full data pipeline? We unbundle the stack.

The Screenshot API is focused on image capture. For structured data from the same pages, explore Web Scraping API for HTML and JSON with anti-bot bypass, Extraction API for AI-powered structured output, Browser API for hosted Playwright and Puppeteer, Scrapium for stealth Chromium you drive directly, or Curlium for raw HTTP with perfect TLS fingerprints.

Get Free API Key
1,000 free credits. No card.