# The Best Screenshot API

 Capture any web page with Scrapfly's screenshot API. Pixel-perfect images through real Chromium, with viewport control, wait conditions, CSS injection, and full anti-bot bypass in one call.

##  Real browsers on every capture. Anti-bot bypass built in. 

- **Clean, ad-free screenshots.** Block cookie banners, ads, and overlays with built-in page modification options before the shutter fires.
- **Capture on your terms.** Full page, above-fold, specific element by CSS or XPath selector, custom viewport and DPR, PNG/JPEG/WebP/PDF output.
 
 [ Get Free API Key ](https://scrapfly.io/register) [ Developer Docs ](https://scrapfly.io/docs/screenshot-api/getting-started) 

 1,000 free credits. No credit card required. 

 






 

 

---

- **55k+** developers using Scrapfly APIs
- **98%** capture success on protected targets
- **5** output formats: PNG, JPEG, WebP, GIF, PDF
- **1s** typical capture time from browser pool

 



 

 

 

---

## Every Capture Scenario Covered

Viewport, wait conditions, page modifications, schedules, anti-bot bypass. All composable on one endpoint.

 

 ### Capture Pipeline

Every request flows through a composable capture pipeline. Anti-bot bypass, ad removal, JS scenarios, and format optimization activate only when you set the relevant parameter; unused layers add zero latency. A sketch composing several layers follows the lists below.

- **Real Browser (Scrapium)**: full Chromium render, JS execution, lazy-load forced via `auto_scroll`
- **Ad-block + Banner Dismiss**: the `block_banners` preset removes cookie popups, overlays, and ads before the shutter fires
- **JS Scenario**: click, fill, scroll, `wait_for_selector`, `wait_until`, executed before capture
- **Capture**: `full_page`, viewport, or CSS/XPath element selector, with custom width, height, and DPR
- **Optimize**: lossless PNG, quality-tunable JPEG/WebP, PDF, with server-side compression
- **Deliver**: image bytes in the response or a cached URL via `log_url`, with configurable TTL up to 7 days

 

 

- **5 formats**: PNG, JPEG, WebP, GIF, PDF
- **Any viewport**: desktop, tablet, mobile
- **Full page**: automatic scroll-to-bottom
- **0 credits** on failed captures
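To make the composability concrete, here is a minimal sketch that stacks four layers in one request using the same Python SDK calls shown in the code section below. `capture="fullpage"` is an assumed value for full-page mode; the other parameters appear verbatim in the examples further down.

```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/reviews',
        options=["block_banners"],    # ad-block + banner dismiss layer
        wait_for_selector=".review",  # wait condition before the shutter fires
        capture="fullpage",           # capture layer; "fullpage" value is an assumption
        format="webp",                # optimize layer
    )
)
client.save_screenshot(api_response, "sa_pipeline")
```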

 

[View Screenshot API docs →](https://scrapfly.io/docs/screenshot-api/getting-started)

 



 

 

 ### Wait Conditions

Capture the page at exactly the right moment. Wait for a CSS selector to appear, for network activity to go idle, or hold for a fixed rendering delay in milliseconds before the shutter fires.

- **Selector**: `wait_for_selector`
- **Network idle**: `wait_until` (`networkidle` or `domcontentloaded`)
- **Fixed delay**: `rendering_wait` in milliseconds

 

 



 

 ### Page Modifications Before Capture

Inject custom CSS to hide cookie banners, ads, and overlays before capture. Use the `options` parameter with built-in presets like `block_banners` and `dark_mode`, or supply your own stylesheet via `custom_css`. Run any JS scenario step - click, fill a form field, scroll to position - before the shutter fires. A minimal custom-CSS sketch follows the list below.

- **Block banners**: `block_banners` removes cookie popups and ads
- **Dark mode**: `dark_mode` built-in preset
- **Custom CSS**: hide any element via `custom_css`
- **JS scenario**: click, fill, scroll steps before capture
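A minimal sketch of the custom-stylesheet path. The selectors are illustrative, and the `custom_css` keyword is assumed to mirror the HTTP parameter of the same name mentioned in the FAQ below.

```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        options=["dark_mode"],  # built-in preset
        # illustrative selectors; custom_css keyword assumed to mirror the HTTP param
        custom_css=".promo-bar, .newsletter-modal { display: none !important; }",
    )
)
client.save_screenshot(api_response, "sa_custom_css")
```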

 

 



 

 

 ### Anti-Bot Bypass

Add `asp=true` and the same bypass engine powering the Web Scraping API handles Cloudflare, DataDome, Akamai, and more. TLS fingerprint, JS challenges, and behavioral signals are solved server-side before the screenshot is taken. No extra code on your end.

- **JA3/JA4**: TLS fingerprint
- **HTTP/2**: SETTINGS frame
- **Behavioral**: mouse + scroll signals
- **Free retries**: failed challenges don't cost credits

 

 [Cloudflare](https://scrapfly.io/bypass/cloudflare) 

 [DataDome](https://scrapfly.io/bypass/datadome) 

 [Akamai](https://scrapfly.io/bypass/akamai) 

 [PerimeterX](https://scrapfly.io/bypass/perimeterx) 

 [Kasada](https://scrapfly.io/bypass/kasada) 

 [Imperva](https://scrapfly.io/bypass/incapsula) 

 [F5](https://scrapfly.io/bypass/f5) 

 [AWS WAF](https://scrapfly.io/bypass/aws-waf) 

 

[View full bypass catalog →](https://scrapfly.io/bypass)

 



 

 

 ### Viewport and Resolution Control

Set any width and height for the browser viewport before capture. Use `resolution` to pass custom dimensions and `dpr` for retina (2x/3x) output. Combine with `auto_scroll=true` to force lazy-loaded content into the frame. Preset device sizes cover desktop, tablet, and mobile without pixel math. A retina mobile capture is sketched after the list below.

- **Desktop**: 1280 × 720 (default)
- **Tablet**: 768 × 1024
- **Mobile**: 375 × 812
- **2x / 3x DPR**: retina output via the `dpr` parameter
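A retina mobile capture, as a sketch: `dpr` is documented as an HTTP parameter above, and the SDK keyword name is assumed to match.

```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        resolution="375x812",  # mobile preset
        dpr=3,                 # 3x retina; SDK keyword assumed to mirror the HTTP param
        auto_scroll=True,      # force lazy-loaded content into the frame
    )
)
client.save_screenshot(api_response, "sa_retina")
```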

 

[View viewport docs →](https://scrapfly.io/docs/screenshot-api/getting-started)

 



 

 ### Output Formats

Five formats via the `format` parameter. PNG is lossless and the default. JPEG and WebP accept a `quality` value (1-100) for size tuning. PDF renders the full page as a printable document. All formats are returned as binary in the response body. A quality-tuning sketch follows the list below.

- **PNG**: lossless default
- **JPEG / WebP**: quality tunable (1-100)
- **PDF**: printable document from any URL
- **GIF**: also available via `format=gif`
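A quality-tuning sketch: the 1-100 range is documented above, while the `quality` SDK keyword is an assumption mirroring the HTTP parameter.

```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        format="webp",
        quality=80,  # 1-100; keyword assumed to mirror the HTTP parameter
    )
)
client.save_screenshot(api_response, "sa_webp_q80")
```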

 

 



 

 

 ### Scheduled Captures and Visual Diff

Schedule recurring captures on any cron interval via the dashboard or API. Each run is stored and compared to the previous one. Pixel-level diff highlighting shows exactly what changed on the page. Suited for visual regression testing, competitive page monitoring, and AI vision training datasets.

- **Cron**: any interval
- **Pixel diff**: per-run comparison
- **Alert**: on change
- **Stored**: history retained

Use cases: visual regression testing, competitive monitoring, AI vision training data.

 

[View monitoring docs →](https://scrapfly.io/docs/screenshot-api/monitoring)

 



 

 

 ### Server-Side Cache

Cache captured images on Scrapfly infrastructure with a configurable TTL. Repeat requests return the stored image at 0 credits. Max TTL is 7 days.

- **0 credits** on cache hit
- **7 days** max TTL

 

 



 

 ### Request Observability

Every capture returns a `log_url`. Inspect the full request timeline, rendered HTML, final URL after redirects, and load timings in the dashboard.

- **log_url**: returned per capture
- **Timeline**: full request trace

 

 



 

 ### Screenshot + Data in One Call

The Screenshot API is focused on image capture. For structured data from the same page in one call, the Web Scraping API supports `screenshots[name]=fullpage` alongside HTML extraction - data and image in a single request, as sketched below. For session-based workflows (login flows, multi-step navigation) before a capture, the Cloud Browser gives you a persistent Chromium session you drive directly.

 [Web Scraping API](https://scrapfly.io/products/web-scraping-api) 

 [Cloud Browser](https://scrapfly.io/products/cloud-browser-api) 
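A sketch of the combined call through the Web Scraping API's Python SDK. The `screenshots` mapping mirrors the `screenshots[name]=fullpage` HTTP parameter above; the exact SDK signature is an assumption.

```
from scrapfly import ScrapeConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/product/1',
        render_js=True,  # screenshots require a browser render
        screenshots={"main": "fullpage"},  # mirrors screenshots[main]=fullpage
    )
)
print(api_response.content)  # rendered HTML; the capture is delivered alongside
```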

 

 



 

 

 ### Pay Only for Delivered Images

Failed captures don't consume credits. Timeouts, upstream errors, and bypass failures are free. You pay for the image, not the attempt. Use `cost_budget` to cap the maximum credit spend per request, as sketched below.

- **0 credits** on failed captures
- **cost_budget**: per-request spend cap
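A budget-capped capture, sketched with the assumption that the SDK exposes `cost_budget` under the same name as the HTTP parameter.

```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        cost_budget=30,  # abort rather than exceed 30 credits; keyword assumed
    )
)
client.save_screenshot(api_response, "sa_budget")
```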

 

[View billing docs →](https://scrapfly.io/docs/screenshot-api/billing)

 



 

 ### Developer Tools

Test selector targeting before writing a single line of capture code. Verify the CSS or XPath expression you plan to pass to the `capture` parameter against a live page, then copy it straight into your API call.

 [Selector tester](https://scrapfly.io/web-scraping-tools/css-xpath-tester) 

 [JA3 fingerprint](https://scrapfly.io/web-scraping-tools/ja3-fingerprint) 

 [Device fingerprint](https://scrapfly.io/web-scraping-tools/device-fingerprint) 

 [All tools](https://scrapfly.io/web-scraping-tools) 

 

[Browse all developer tools →](https://scrapfly.io/web-scraping-tools)

 



 

 

 

---

## Capture Any Page in Two Lines

Pick a capture style, pick a language. Real examples, real endpoints.

 

 [ Basic Capture ](#sa-strat-options) [ Wait Conditions ](#sa-strat-flow) [ Format + Quality ](#sa-strat-format) [ Viewport + DPR ](#sa-strat-resolution) [ Element Areas ](#sa-strat-areas) [ Caching ](#sa-strat-cache) [ Anti-Bot Bypass ](#sa-strat-bypass) 

Default PNG capture of a page. All options optional.

     Python TypeScript HTTP / cURL  

    

 ```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/login?cookies',
        # use one of many modifiers to modify the page
        options=["block_banners"],
    )
)
client.save_screenshot(api_response, "sa_options")
```

 ```
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/reviews',
        // use one of many modifiers to modify the page
        options: ['block_banners']
    })
);

console.log(api_result.image);
```

 ```
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/login?cookies \
options==block_banners
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

Wait for a selector, network idle, or custom JS before capturing.

     Python TypeScript HTTP / cURL  

    

 ```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/reviews',
        # wait for specific element to appear
        wait_for_selector=".review",
        # or for a specific time
        rendering_wait=3000,  # 3 seconds
    )
)
client.save_screenshot(api_response, "sa_flow")
```

 ```
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/reviews',
        // wait for specific element to appear
        wait_for_selector: ".review",
        // or for a specific time
        rendering_wait: 3000,  // 3 seconds
    })
);

console.log(api_result.image);
```

 ```
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/reviews \
wait_for_selector==.review \
rendering_wait==3000
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

PNG, JPEG, WebP, GIF, or PDF output. Configurable quality and compression.

     Python TypeScript HTTP / cURL  

    

 ```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        # directly capture in your file type
        format="jpg",
        # jpg, png, webp, gif etc.
    )
)
client.save_screenshot(api_response, "sa_format")
```

 ```
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        // directly capture in your file type
        format: "jpg", 
        // jpg, png, webp, gif etc.
    })
);

console.log(api_result.image);
```

 ```
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
format==jpg
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

Retina output (2x/3x DPR) and custom viewport dimensions.

     Python TypeScript HTTP / cURL  

    

 ```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        # set viewport resolution: desktop, mobile, tablet
        # resolution="1920x1080",  # desktop (default)
        # resolution="375x812",    # mobile
        resolution="1024x768",     # tablet
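        # dpr=2,                   # optional 2x/3x retina output (dpr keyword assumed to mirror the HTTP param)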
    )
)
client.save_screenshot(api_response, "sa_resolution")
```

 ```
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        // set viewport: desktop, mobile, or tablet
        // resolution: "1920x1080",  // desktop (default)
        // resolution: "375x812",    // mobile
        resolution: "1024x768",     // tablet
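        // dpr: 2,                  // optional 2x/3x retina output (dpr key assumed to mirror the HTTP param)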
    })
);

console.log(api_result.image);
```

 ```
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
resolution==1920x1080
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

Full-page, viewport-only, or specific CSS-selector element capture.

     Python TypeScript HTTP / cURL  

    

 ```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        # use XPath or CSS selectors to tell what to screenshot
        capture='#reviews',
        # force scrolling to the bottom of the page to load all areas
        auto_scroll=True,
    )
)
client.save_screenshot(api_response, "sa_areas")
```

 ```
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        // use XPath or CSS selectors to tell what to screenshot
        capture: '#reviews',
        // force scrolling to the bottom of the page to load all areas
        auto_scroll: true,
    })
);

console.log(api_result.image);
```

 ```
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
capture==#reviews \
auto_scroll==true
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

Server-side cache with TTL. Repeat captures are free and instant.

     Python TypeScript HTTP / cURL  

    

 ```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        # enable cache
        cache=True,
        # optionally set expiration
        cache_ttl=3600, # 1 hour
        # or clear cache any time
        # cache_clear=True,
    )
)
client.save_screenshot(api_response, "sa_cache")
```

 ```
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        // enable cache
        cache: true,
        // optionally set expiration
        cache_ttl: 3600, // 1 hour
        // or clear cache any time
        // cache_clear: true,
    })
);

console.log(api_result.image);
```

 ```
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
cache==true \
cache_ttl==3600
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

The same `asp=true` flag bypasses protection before the capture. See the [bypass catalog](https://scrapfly.io/bypass).

     Python TypeScript HTTP / cURL  

    

 ```
from scrapfly import ScreenshotConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")

api_response = client.screenshot(
    ScreenshotConfig(
        url='https://web-scraping.dev/product/1',
        # enable the anti-bot bypass engine; no extra configuration needed
        asp=True,
    )
)
client.save_screenshot(api_response, "sa_bypass")
```

 ```
import { 
    ScrapflyClient, ScreenshotConfig,
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });

let api_result = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        // enable the anti-bot bypass engine; no extra configuration needed
        asp: true,
    })
);

console.log(api_result.image);
```

 ```
http https://api.scrapfly.io/screenshot \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
asp==true
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

 

 

---

## Docs, Tools, and Ready-Made Examples

Everything you need to go from first capture to production screenshot pipeline.

 

 ### API Reference

Every parameter, every response field, with runnable cURL examples for the screenshot endpoint.

 [ Developer Docs → ](https://scrapfly.io/docs/screenshot-api/getting-started) 



 

 ### Academy

Interactive courses on web scraping, anti-bot bypass, and browser automation.

 [ Start learning → ](https://scrapfly.io/academy) 



 

 ### Open-Source Examples

Production-ready screenshot scripts on GitHub. Copy, paste, and customize for your target.

 [ Explore repo → ](https://github.com/scrapfly/scrapfly-scrapers) 



 

 ### Developer Tools

Selector tester, cURL-to-Python converter, anti-bot detector, and more.

 [ Browse tools → ](https://scrapfly.io/web-scraping-tools) 



 

 

 

---

## Seamlessly integrate with frameworks & platforms

Plug Scrapfly into your favorite tools, or build custom workflows with our first-class SDKs.

 ### No-code automation

 [  Zapier ](https://scrapfly.io/integration/zapier) [  Make ](https://scrapfly.io/integration/make) [  n8n ](https://scrapfly.io/integration/n8n) 

 

### LLM & RAG frameworks

 [  LlamaIndex ](https://scrapfly.io/integration/llamaindex) [  LangChain ](https://scrapfly.io/integration/langchain) [  CrewAI ](https://scrapfly.io/integration/crewai) 

 

### First-class SDKs

- [Python](https://scrapfly.io/docs/sdk/python): `pip install scrapfly-sdk`
- [TypeScript](https://scrapfly.io/docs/sdk/typescript): Node, Deno, Bun
- [Go](https://scrapfly.io/docs/sdk/golang): `go get scrapfly-sdk`
- [Rust](https://scrapfly.io/docs/sdk/rust): `cargo add scrapfly-sdk`
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy): full-feature extension

 

 

 [ See all integrations  ](https://scrapfly.io/integration) 

 

---

## Frequently Asked Questions

 

  ### What is the Scrapfly Screenshot API?

 A hosted service that renders any URL in a real browser and returns the image. One API call delivers a PNG, JPEG, WebP, GIF, or PDF with no browser infrastructure to manage on your end. Anti-bot bypass, proxy rotation, and page modification are all available as parameters.

 

   ### How does anti-bot bypass work for screenshots?

 The same bypass engine used in the Web Scraping API is available here. Set `asp=true` and the engine selects the right TLS fingerprint, browser profile, and proxy location for the target. Cloudflare, DataDome, Akamai, and PerimeterX challenges are solved server-side before the screenshot is taken.

 

   ### Can I capture only part of a page?

 Yes. Use the `capture` parameter with a CSS selector or XPath expression and the API crops to that element. Combine with `auto_scroll=true` to force lazy content into view before capture.

 

   ### How do I remove cookie banners and ads from screenshots?

 Pass `options=["block_banners"]` to activate the built-in banner blocker. For more control, inject custom CSS via the `custom_css` parameter to hide any element by selector before the shutter fires.

 

   ### What output formats are supported?

 PNG (lossless, default), JPEG, WebP, GIF, and PDF are all available via the `format` parameter. PNG and WebP are recommended for content with text; JPEG for photographic pages where file size matters.

 

   ### Can I schedule recurring captures for visual monitoring?

 Yes. Create a scheduled job via the dashboard or API and set the capture on any cron interval. Each run is stored and compared to the previous one. Pixel-level diff highlighting shows exactly what changed, making it suitable for visual regression testing and competitive page monitoring.

 

   ### What happens if a capture fails?

 Failed captures don't consume credits. Timeouts, upstream errors, and bypass failures are all free retries. Every response includes a `log_url` that opens the full request timeline in the dashboard so you can diagnose exactly what happened.

 

  

 

  ---

## Transparent, usage-based pricing

One plan covers the full Scrapfly platform. Pick a monthly credit budget; every API shares the same credit pool. No per-product lock-in, no surprise line items.

 

- **Free tier**: 1,000 free credits on signup. Roughly 65 screenshots, no card required.
- **Pay on success**: failed captures are always free. You only pay for delivered images.
- **No lock-in**: upgrade, downgrade, or cancel anytime. No contract.

 

 

 

 [ See pricing  ](https://scrapfly.io/pricing) [ Start free ](https://scrapfly.io/register) 

 

 

### Need the full data pipeline? We unbundle the stack.

 The Screenshot API is focused on image capture. For structured data from the same pages, explore [Web Scraping API](https://scrapfly.io/products/web-scraping-api) for HTML and JSON with [anti-bot bypass](https://scrapfly.io/bypass), [Extraction API](https://scrapfly.io/products/extraction-api) for AI-powered structured output, [Browser API](https://scrapfly.io/products/cloud-browser-api) for hosted Playwright and Puppeteer, [Scrapium](https://scrapfly.io/scrapium) for stealth Chromium you drive directly, or [Curlium](https://scrapfly.io/curlium) for raw HTTP with perfect TLS fingerprints.

 

 [Get Free API Key](https://scrapfly.io/register) 1,000 free credits. No card.