 # The Best Proxy Saver

 Cut your proxy bill with Scrapfly Proxy Saver. Plug your existing Bright Data, Oxylabs, Webshare, SmartProxy, or DataImpulse account into Scrapfly and get anti-bot bypass, fingerprint management, and automatic routing on top. No code changes needed.

- **Works with 5 providers out of the box.** Same `asp=true` path, just set `proxy_pool=` to your provider.
- **Pay per GB, not per request.** Bandwidth is renegotiated in bulk, so it's often cheaper than going direct with most vendors.
 
 [ Get Free API Key ](https://scrapfly.io/register) [ Developer Docs ](https://scrapfly.io/docs/proxy-saver/getting-started) 

 1,000 free credits. No credit card required. 

 






 

 

---

**5** major proxy providers supported out of the box

**$0.20** per GB - bandwidth billing, not per request

**190+** countries for residential geo-targeting

**55k+** developers using the Scrapfly platform

 



 

 

 

---

## Your Proxies, Supercharged

Bring your existing provider account. Scrapfly adds the intelligence layer on top.

 

 ### Request Pipeline

Every request you route through Proxy Saver passes through a composable stack. Your credentials stay with your provider. Scrapfly layers cache deduplication, fingerprint coherence, and smart retry on top - transparent to your code. Set `proxy_pool=` to your provider name; everything else stays identical.

1. **Your Request** - same SDK, same endpoint; just add the `proxy_pool=` param.
2. **Cache / Dedupe** - identical in-flight requests are collapsed; cached responses are served instantly, no bandwidth consumed.
3. **Fingerprint Layer** - JA3/JA4 TLS and HTTP/2 SETTINGS kept coherent across the proxy egress; behavioral signals aligned.
4. **Your Proxy** - your Bright Data, Oxylabs, Webshare, SmartProxy, or DataImpulse credentials, unchanged.
5. **Target** - sees a fingerprint-coherent client from a clean IP in your provider pool.
6. **Response** - HTML plus a `log_url` debug panel and cost headers; retries on soft-fail are free.

 

 

 

  **Cache** dedupe layer 

  **JA3/JA4** TLS coherence 

  **Retries** free on soft-fail 

  **log\_url** every request 
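The layered stack above can be pictured as plain function composition - a toy sketch, not the Scrapfly implementation: each layer wraps the next, so a cache hit short-circuits before any proxy bandwidth is spent.

```python
from typing import Callable, Dict

Handler = Callable[[str], str]

def with_cache(nxt: Handler, cache: Dict[str, str]) -> Handler:
    # Cache hits short-circuit here - no proxy bandwidth is spent.
    def handler(url: str) -> str:
        if url not in cache:
            cache[url] = nxt(url)
        return cache[url]
    return handler

def with_retry(nxt: Handler, attempts: int = 3) -> Handler:
    # Soft failures are retried before the error surfaces.
    def handler(url: str) -> str:
        last = None
        for _ in range(attempts):
            try:
                return nxt(url)
            except ConnectionError as exc:
                last = exc
        raise last
    return handler

calls = []

def upstream(url: str) -> str:
    # Stand-in for the request that actually crosses your provider.
    calls.append(url)
    return "<html>...</html>"

cache: Dict[str, str] = {}
pipeline = with_cache(with_retry(upstream), cache)

pipeline("https://httpbin.dev/html")
pipeline("https://httpbin.dev/html")  # second call is a cache hit
```

After both calls, `upstream` has run exactly once - the second response came from the cache layer.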

 

**Supported providers**

- [Bright Data](https://scrapfly.io/products/proxy-saver/brightdata)
- [Oxylabs](https://scrapfly.io/products/proxy-saver/oxylabs)
- [Webshare](https://scrapfly.io/products/proxy-saver/webshare)
- [SmartProxy](https://scrapfly.io/products/proxy-saver/smartproxy)
- [DataImpulse](https://scrapfly.io/products/proxy-saver/dataimpulse)
- Any SOCKS5/HTTP proxy - e.g. IPRoyal, SOAX, Nimble, ProxyEmpire

 

[View Proxy Saver docs →](https://scrapfly.io/docs/proxy-saver/getting-started)

 

 

 



 

 

 ### Bandwidth Savings

Proxy Saver reduces the bandwidth your provider bills you for. Identical concurrent requests are collapsed into a single upstream call - the response is shared across all callers. Cached responses replay without hitting the wire at all. Failed attempts that never reached the target are not billed. The result is fewer GB consumed on your provider contract.

- **Dedupe** - identical requests collapsed
- **Cache** - zero upstream GB on cache hit
- **No-bill** - failed bypass attempts free

 

Request cache · In-flight dedupe · Session stickiness · Smart retry
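The in-flight dedupe idea can be modeled in a few lines - a toy sketch with invented class and method names, not the Scrapfly API: concurrent callers for the same URL share one upstream future.

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class Deduper:
    """Collapse identical concurrent fetches into a single upstream call."""

    def __init__(self, fetch):
        self._fetch = fetch
        self._inflight = {}          # url -> Future shared by all callers
        self._lock = threading.Lock()
        self._pool = ThreadPoolExecutor(max_workers=4)

    def get(self, url):
        with self._lock:
            fut = self._inflight.get(url)
            if fut is None:
                fut = self._pool.submit(self._run, url)
                self._inflight[url] = fut
        return fut.result()          # every caller waits on the same future

    def _run(self, url):
        try:
            return self._fetch(url)
        finally:
            # Flight finished: a later request for this URL fetches fresh.
            with self._lock:
                self._inflight.pop(url, None)
```

Three simultaneous `get()` calls for the same URL produce one fetch; all three callers receive the same body.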

 

[View billing docs →](https://scrapfly.io/docs/proxy-saver/billing)

 



 

 ### Smart Routing

Scrapfly profiles each target and picks the best proxy tier for the job - datacenter where it works, residential when required. If a route fails, it escalates automatically without you touching any code. You choose the provider; Scrapfly chooses the session configuration.

  **Datacenter** default, lowest cost 

  **Residential** auto-escalate 

  **190+** countries 

 

[View routing docs →](https://scrapfly.io/docs/proxy-saver/optimizations)

 



 

 

 ### TLS / JA3 / JA4 Coherence

Anti-bot stacks correlate TLS fingerprints against the proxy exit ASN. Proxy Saver overlays real browser TLS and HTTP/2 fingerprints on your egress so the client signature matches the proxy identity end to end. Pick a specific browser profile or let Scrapfly select the best match per target automatically.

- **JA3** - TLS client fingerprint
- **JA4** - next-gen fingerprint
- **HTTP/2** - SETTINGS frame
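For intuition, here is the public JA3 recipe (generic, not Scrapfly-specific code): five ClientHello fields - TLS version, cipher suites, extensions, elliptic curves, and point formats - are dash-joined, comma-separated, and MD5-hashed. The numeric values below are illustrative, not a real browser profile.

```python
import hashlib

def ja3_digest(version, ciphers, extensions, curves, point_formats):
    # JA3 string: "SSLVersion,Ciphers,Extensions,EllipticCurves,PointFormats"
    parts = [str(version)] + [
        "-".join(str(v) for v in field)
        for field in (ciphers, extensions, curves, point_formats)
    ]
    return hashlib.md5(",".join(parts).encode()).hexdigest()

# Illustrative ClientHello summary; any single change flips the digest,
# which is why a proxy swap under a mismatched client stack is detectable.
fp = ja3_digest(771, [4865, 4866, 4867], [0, 23, 65281], [29, 23, 24], [0])
print(fp)  # 32-char hex digest
```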

 

[JA3 checker](https://scrapfly.io/web-scraping-tools/ja3-fingerprint)

[HTTP/2 checker](https://scrapfly.io/web-scraping-tools/http2-fingerprint)

 

[View fingerprint docs →](https://scrapfly.io/docs/proxy-saver/fingerprints)

 



 

 ### Anti-Bot Bypass on Top

Keep your existing proxy and get Scrapfly's ASP stack at the same time. Enable `asp=true` and the API handles TLS fingerprinting, JavaScript challenges, puzzle captchas, and behavioral signals - regardless of which proxy provider you use underneath. Failed challenge retries do not consume credits.

  **Fingerprint** TLS + HTTP/2 

  **Challenges** JS + puzzle 

  **Retries** free on fail 

  **Behavioral** mouse + scroll 

 

`asp=true` · Cloudflare · DataDome · Akamai · Imperva · PerimeterX

 

[View ASP docs →](https://scrapfly.io/docs/scrape-api/anti-scraping-protection)

 



 

 

 ### Full Observability

Every request gets a `log_url` with the same full debug panel available on the Web Scraping API. Inspect request and response headers, cookies, rendered HTML, and HAR waterfall. Debug proxy issues in production without guesswork - see exactly what your provider returned and where it failed.

**log_url** - every request

**HAR** - waterfall

**1-click** - replay

 

 



 

 ### Session Modes

Choose rotating sessions (new IP per request, maximum pool spread) or sticky sessions (pin to the same exit IP across a sequence of requests). Session stickiness is coordinated at the Scrapfly edge - no extra proxy-side session management required.

**Rotating** - new IP per request

**Sticky** - pinned session
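Conceptually the two modes differ only in how an exit IP is chosen per request - a toy model with invented names, not how the Scrapfly edge actually coordinates sessions:

```python
import random

class ExitPool:
    def __init__(self, ips, seed=None):
        self._ips = list(ips)
        self._rng = random.Random(seed)
        self._sessions = {}   # session name -> pinned exit IP

    def rotating(self):
        # Fresh draw every request: maximum pool spread.
        return self._rng.choice(self._ips)

    def sticky(self, session):
        # First draw is pinned; every later request reuses it.
        if session not in self._sessions:
            self._sessions[session] = self._rng.choice(self._ips)
        return self._sessions[session]

pool = ExitPool(["203.0.113.7", "198.51.100.4", "192.0.2.9"], seed=7)
assert pool.sticky("checkout") == pool.sticky("checkout")  # same exit IP
```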

 

 



 

 ### Zero Code Changes

Proxy Saver works on the same Web Scraping API endpoint. Set `proxy_pool=` to your provider name and everything else stays identical. No new SDK, no integration work, no migration - typically under five minutes to connect.

**Same** endpoint

**Same** SDK

 

 



 

 

 ### No Provider? Use the Scrapfly Residential Pool

No existing proxy contract needed. Use Scrapfly's managed residential pool directly via `proxy_pool="public_residential_pool"`. 190+ countries, auto-rotation, session stickiness. No separate provider dashboard, no per-vendor auth - just the same Scrapfly API key.

  **190+** countries 

  **Auto** rotation 

  **Sticky** session option 

  **ISP IPs** residential exits 

 

Geo-targeted · IP cooling · No proxy mgmt · `country` param

 

 



 

 

## See Also

 

- [Web Scraping API](https://scrapfly.io/products/web-scraping-api) - **Managed proxy alternative.** Don't have a proxy contract? The WSA includes Scrapfly's residential pool, anti-bot bypass, and JS rendering in one call.

- [Unblocker](https://scrapfly.io/products/unblocker) - **Pure bypass layer.** Headless bypass without custom proxy credentials. Point Unblocker at a target and get a clean response - no proxy account required.

- [Proxy Saver + Bright Data](https://scrapfly.io/products/proxy-saver/brightdata) - Layer Scrapfly's ASP stack on top of your Bright Data residential pool without changing your contract.

- [Proxy Saver + Oxylabs](https://scrapfly.io/products/proxy-saver/oxylabs) - Connect your Oxylabs account and get fingerprint coherence, caching, and retry logic on top.

 

 

---

## Bring Your Proxies or Use Ours

Plug in your existing provider or use the Scrapfly residential pool.

 

### Bring Your Own Proxy

Point Scrapfly at your existing Bright Data, Oxylabs, Webshare, SmartProxy, or DataImpulse account.


     

```python
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse
client = ScrapflyClient(key="API KEY")

# Bring your own proxy from Bright Data, Oxylabs, Webshare, etc.
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/html',
        # use your own proxy URL
        proxy_pool="custom",
        # asp=True still applies anti-bot bypass on top
        asp=True
    )
)
print(api_response.result)
```

```typescript
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
// Bring your own proxy - plug in Bright Data, Oxylabs, Webshare, etc.
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://httpbin.dev/html',
        proxy_pool: 'custom',
        asp: true
    })
);
console.log(api_result.result);
```

```shell
# Bring your own proxy from Bright Data, Oxylabs, Webshare, etc.
http "https://api.scrapfly.io/scrape" \
  key==$SCRAPFLY_KEY \
  url==https://httpbin.dev/html \
  proxy_pool==custom \
  asp==true
```

```go
package main

import (
	"fmt"
	"github.com/scrapfly/go-scrapfly"
)

func main() {
	client, _ := scrapfly.New("API KEY")
	// Bring your own proxy - route through your provider
	result, _ := client.Scrape(&scrapfly.ScrapeConfig{
		URL:       "https://httpbin.dev/html",
		ProxyPool: "custom",
		// ASP bypass still applies on top
		ASP: true,
	})
	fmt.Println(result.Result.Content)
}
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) [ Go SDK docs → ](https://scrapfly.io/docs/sdk/golang) 

 

### Scrapfly Residential

190+ country codes, managed pool, session stickiness. Same code as WSA, just add `proxy_pool`.


    

```python
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse
client = ScrapflyClient(key="API KEY")

# Use Scrapfly's residential pool - pay per GB, not per request
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/html',
        proxy_pool="public_residential_pool",
        # geo-target at country level
        country="US",
        asp=True
    )
)
print(api_response.result)
```

```typescript
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
// Residential pool - pay per GB, geo-target by country
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://httpbin.dev/html',
        proxy_pool: 'public_residential_pool',
        country: 'US',
        asp: true
    })
);
console.log(api_result.result);
```

```shell
# Use Scrapfly's residential pool - pay per GB, not per request
http "https://api.scrapfly.io/scrape" \
  key==$SCRAPFLY_KEY \
  url==https://httpbin.dev/html \
  proxy_pool==public_residential_pool \
  country==US \
  asp==true
```

 

 

 [ Python SDK docs → ](https://scrapfly.io/docs/sdk/python) [ TypeScript SDK docs → ](https://scrapfly.io/docs/sdk/typescript) [ HTTP API docs → ](https://scrapfly.io/docs) 

 

 

 

---

## Docs, Tools, and Setup Guides

Everything you need to connect your proxy account and start saving bandwidth.

 

 ### Proxy Saver Docs

Getting started, billing, fingerprints, optimization guides.

 [ View Proxy Saver docs → ](https://scrapfly.io/docs/proxy-saver/getting-started) 



 

 ### Academy

Interactive courses on web scraping, anti-bot, and data extraction.

 [ View academy → ](https://scrapfly.io/academy) 



 

 ### Open-Source Scrapers

40+ production-ready scrapers on GitHub. Copy, paste, customize.

 [ View scrapers repo → ](https://github.com/scrapfly/scrapfly-scrapers) 



 

 ### Developer Tools

JA3 checker, TLS fingerprint, HTTP/2 fingerprint, selector tester.

 [ View dev tools → ](https://scrapfly.io/web-scraping-tools) 



 

 

 

---

## Seamlessly Integrate with Frameworks & Platforms

Plug Scrapfly into your favorite tools, or build custom workflows with our first-class SDKs.

 ### No-code automation

 [  Zapier ](https://scrapfly.io/integration/zapier) [  Make ](https://scrapfly.io/integration/make) [  n8n ](https://scrapfly.io/integration/n8n) 

 

### LLM & RAG frameworks

 [  LlamaIndex ](https://scrapfly.io/integration/llamaindex) [  LangChain ](https://scrapfly.io/integration/langchain) [  CrewAI ](https://scrapfly.io/integration/crewai) 

 

### First-class SDKs

- [Python](https://scrapfly.io/docs/sdk/python) - `pip install scrapfly-sdk`
- [TypeScript](https://scrapfly.io/docs/sdk/typescript) - Node, Deno, Bun
- [Go](https://scrapfly.io/docs/sdk/golang) - `go get scrapfly-sdk`
- [Rust](https://scrapfly.io/docs/sdk/rust) - `cargo add scrapfly-sdk`
- [Scrapy](https://scrapfly.io/docs/sdk/scrapy) - full-feature extension

 

 

 [ See all integrations  ](https://scrapfly.io/integration) 

 

---

## Frequently Asked Questions

 

  ### What is Proxy Saver?

 Proxy Saver is Scrapfly's bring-your-own-proxy layer. You plug in your existing proxy provider account - Bright Data, Oxylabs, Webshare, SmartProxy, or DataImpulse - and Scrapfly adds anti-bot bypass, fingerprint management, and automatic route optimization. You keep your provider relationship and gain Scrapfly's intelligence layer on top of it.

 

   ### How does bandwidth billing work?

 Proxy Saver charges $0.20 per GB of traffic routed through the service. Fingerprint impersonation adds $0.10 per GB. This is billed on top of whatever your proxy provider charges you directly. Because Scrapfly renegotiates bandwidth in bulk with providers, the effective cost is often lower than going direct - especially for residential traffic.
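In code, the arithmetic is just two rates (the Scrapfly side only - your provider's own per-GB price is billed separately):

```python
def proxy_saver_cost(gb, fingerprint=False):
    """Scrapfly fee in USD: $0.20/GB, +$0.10/GB with fingerprint impersonation."""
    rate = 0.20 + (0.10 if fingerprint else 0.0)
    return round(gb * rate, 2)

print(proxy_saver_cost(50))        # 10.0 -> 50 GB of plain traffic
print(proxy_saver_cost(50, True))  # 15.0 -> 50 GB with fingerprinting
```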

 

   ### Do I need to change my code to use Proxy Saver?

 No. Proxy Saver works on the same Scrapfly Web Scraping API endpoint. You set `proxy_pool=` to your provider name and everything else stays the same. The same SDK, the same parameters, the same response format. No migration, no new integration work, typically under five minutes to connect.

 

   ### Which proxy providers are supported?

 Bright Data, Oxylabs, Webshare, SmartProxy, and DataImpulse are supported out of the box. If you use a different provider, see the [Proxy Saver docs](https://scrapfly.io/docs/proxy-saver/getting-started) for details on custom proxy URL configuration.

 

   ### Can I use Proxy Saver without an existing proxy provider?

 Yes. If you don't have an existing proxy contract, you can use Scrapfly's managed residential pool directly via `proxy_pool="public_residential_pool"`. It covers 190+ countries with automatic rotation and session stickiness. You pay the same per-GB rate.

 

   ### What is fingerprint impersonation and why does it matter?

 TLS and HTTP/2 fingerprints identify the client software making a request - many anti-bot systems flag traffic with mismatched fingerprints even when the proxy IP is clean. Fingerprint impersonation replaces your client's fingerprint with one from a real browser profile, making requests look indistinguishable from organic user traffic. Enable it with the fingerprint option in the Proxy Saver dashboard.

 

   ### How does automatic routing work?

 Scrapfly profiles target sites and builds a routing decision tree based on which proxy type reliably bypasses each target's defenses at the lowest cost. When you make a request, the router picks the cheapest option that meets the bypass threshold. If a cheaper route starts failing, it automatically escalates to a more capable one. You get cost efficiency without manually tuning proxy selection per target.
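The selection logic sketches out as a cheapest-first search - the tiers, prices, and success rates below are invented for illustration, not Scrapfly's real routing table:

```python
TIERS = [
    # (tier, cost in $/GB, observed bypass success rate for this target)
    ("datacenter", 0.20, 0.42),
    ("residential", 0.80, 0.97),
    ("residential+fingerprint", 0.90, 0.99),
]

def pick_route(tiers, threshold=0.90):
    # Cheapest tier that clears the bypass threshold wins; if none does,
    # escalate to the most capable tier available.
    viable = [t for t in sorted(tiers, key=lambda t: t[1]) if t[2] >= threshold]
    return viable[0] if viable else max(tiers, key=lambda t: t[2])

print(pick_route(TIERS)[0])  # residential
```

With these sample numbers the datacenter tier fails the threshold, so the router escalates to residential without touching caller code.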

 

  

 

  ---

## Bandwidth-Based Billing, No Per-Request Markup


Pay for data transferred, not for the number of requests. $0.20 / GB standard, +$0.10 / GB with fingerprint impersonation.

 

  **Free tier** - 1,000 free credits on signup. Enough to try all proxy modes, no card required.

 

 

  **Pay on success** - Only billed on successful proxy responses. Failures and upstream 4xx are free.

 

 

  **No lock-in** - Upgrade, downgrade, or cancel anytime. No contract.

 

 

 

 [ See pricing  ](https://scrapfly.io/pricing) [ Start free ](https://scrapfly.io/register)