# Brightdata to Scrapfly Migration Guide

 Complete parameter mapping and code examples for migrating from Brightdata Web Unlocker to Scrapfly: simpler authentication and more features in a unified API.

 [Start For Free - 1,000 Credits](https://scrapfly.io/register) [Back to Brightdata Alternative Page](https://scrapfly.io/compare/brightdata-alternative)

 

 

 

##  [Key Architectural Differences](#key-differences) 

 Brightdata and Scrapfly have different architectures. Understanding these differences will help you migrate more effectively.

 | Aspect | Brightdata Web Unlocker | Scrapfly |
|---|---|---|
| Authentication | Customer ID + Zone + Password + SSL cert | Single API key |
| Access Methods | Proxy-based or REST API | REST API with SDKs |
| JavaScript Rendering | Requires Browser API (separate product) | Built-in with `render_js` |
| Browser Automation | Requires Browser API (separate product) | JS Scenarios in same API |
| Data Extraction | Not available | AI-powered Extraction API |
| Multi-page Crawling | Not available | Crawler API included |

 

 

 

 

##  [Complete Parameter Mapping](#parameter-mapping) 

 Brightdata Web Unlocker uses different parameter names and request formats. This table shows how to map them to Scrapfly.

 | Brightdata Parameter | Scrapfly Parameter | Notes |
|---|---|---|
| `Authorization: Bearer [API_KEY]` | [`key`](https://scrapfly.io/docs/scrape-api/getting-started#api_param_key) | API authentication. Scrapfly uses query param or SDK |
| `zone` | N/A | Scrapfly doesn't use zones. Single API key covers all features |
| `url` | [`url`](https://scrapfly.io/docs/scrape-api/getting-started#api_param_url) | Target URL to scrape (same) |
| `format: raw` | [`format`](https://scrapfly.io/docs/scrape-api/getting-started#api_param_format) | Response format: `raw`, `markdown`, `text`, `clean_html` |
| `data_format: markdown` | [`format=markdown`](https://scrapfly.io/docs/scrape-api/getting-started#api_param_format) | Convert HTML to markdown |
| `data_format: screenshot` | [`screenshots`](https://scrapfly.io/docs/screenshot-api/getting-started) | Use Screenshot API or `screenshots` parameter |
| `-country-[code]` (proxy username) | [`country`](https://scrapfly.io/docs/scrape-api/proxy#country) | 2-letter ISO country code (e.g., "us", "gb") |
| `-ua-mobile` (proxy username) | [`os=android`](https://scrapfly.io/docs/scrape-api/javascript-rendering#os) | Mobile user agent targeting |
| `x-unblock-expect` header | [`wait_for_selector`](https://scrapfly.io/docs/scrape-api/javascript-rendering#wait-for-selector) | Wait for element before returning response |
| `body` | [`body`](https://scrapfly.io/docs/scrape-api/getting-started#api_param_body) | Request body for POST requests |
| Premium domains | [`asp=True`](https://scrapfly.io/docs/scrape-api/anti-scraping-protection) | Scrapfly ASP handles all protected sites without separate tiers |
| `-session-[id]` (proxy username) | [`session`](https://scrapfly.io/docs/scrape-api/session) | Session management for persistent cookies/IP |
| N/A (requires Browser API) | [`render_js`](https://scrapfly.io/docs/scrape-api/javascript-rendering#api_param_render_js) | JavaScript rendering included in Scrapfly |
| N/A (requires Browser API) | [`js_scenario`](https://scrapfly.io/docs/scrape-api/javascript-rendering#js_scenario) | Browser automation: clicks, forms, scrolls (**Scrapfly exclusive**) |
| N/A | [`cache`](https://scrapfly.io/docs/scrape-api/cache) | Response caching (**Scrapfly exclusive**) |
| N/A | [`auto_scroll`](https://scrapfly.io/docs/scrape-api/javascript-rendering#auto-scroll) | Auto-scroll for lazy content (**Scrapfly exclusive**) |
| N/A | [`extraction_model`](https://scrapfly.io/docs/extraction-api/getting-started) | AI-powered data extraction (**Scrapfly exclusive**) |
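
As a rough sketch of the table above, a small helper (hypothetical, not part of either SDK) can translate the flags Brightdata embeds in a proxy username, such as `-country-us` or `-session-abc123`, into the equivalent Scrapfly query parameters:

```python
import re

def brightdata_flags_to_scrapfly(proxy_user: str) -> dict:
    """Translate flags embedded in a Brightdata proxy username
    (-country-XX, -session-ID, -ua-mobile) into Scrapfly parameters.
    Illustrative helper only; adapt to the flags you actually use."""
    params = {"asp": "true"}  # ASP replaces Brightdata's premium-domain tiers
    if m := re.search(r"-country-([a-z]{2})", proxy_user):
        params["country"] = m.group(1)   # 2-letter ISO country code
    if m := re.search(r"-session-(\w+)", proxy_user):
        params["session"] = m.group(1)   # persistent cookies/IP
    if "-ua-mobile" in proxy_user:
        params["os"] = "android"         # closest mobile-UA equivalent
    return params

print(brightdata_flags_to_scrapfly(
    "brd-customer-CUSTOMER_ID-zone-ZONE-country-us-session-abc123"
))
# {'asp': 'true', 'country': 'us', 'session': 'abc123'}
```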

 

 

 

 

##  [Migration Code Examples](#code-examples) 

 Side-by-side code examples showing how to migrate from Brightdata Web Unlocker to Scrapfly.


###  Brightdata (Python, REST API)

 ```
import requests

API_KEY = "YOUR_BRIGHTDATA_API_KEY"
ZONE = "your_zone_name"

response = requests.post(
    "https://api.brightdata.com/request",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}"
    },
    json={
        "zone": ZONE,
        "url": "https://example.com",
        "format": "raw"
    }
)

print(response.text)
```

 

 

###  Scrapfly (Python SDK)

 ```
from scrapfly import ScrapflyClient, ScrapeConfig

# No zones, no complex auth
client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

result = client.scrape(ScrapeConfig(
    url="https://example.com",
    asp=True,        # Anti-bot bypass
    render_js=True   # JS rendering included
))

print(result.content)
```

 

 

 

### With Geo-targeting and Wait for Element

 ```
# Brightdata with geo and expect element
response = requests.post(
    "https://api.brightdata.com/request",
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}"
    },
    json={
        "zone": ZONE,
        "url": "https://example.com",
        "format": "raw",
        "headers": {
            "x-unblock-expect": '{"element": ".product-price"}'
        }
    }
)
# Note: Country targeting requires proxy-based access
```

 

 

 ```
# Scrapfly with geo and wait_for_selector
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    asp=True,
    render_js=True,
    country="us",                        # Simple geo-targeting
    wait_for_selector=".product-price"   # Wait for element
))

print(result.content)
```

 

 

 

 

###  Brightdata (JavaScript, REST API)

 ```
const axios = require('axios');

const API_KEY = 'YOUR_BRIGHTDATA_API_KEY';
const ZONE = 'your_zone_name';

const response = await axios.post(
    'https://api.brightdata.com/request',
    {
        zone: ZONE,
        url: 'https://example.com',
        format: 'raw'
    },
    {
        headers: {
            'Content-Type': 'application/json',
            'Authorization': `Bearer ${API_KEY}`
        }
    }
);

console.log(response.data);
```

 

 

###  Scrapfly (JavaScript SDK)

 ```
const { ScrapflyClient } = require('scrapfly-sdk');

// Single API key, no zones
const client = new ScrapflyClient({
    key: 'YOUR_SCRAPFLY_API_KEY'
});

const result = await client.scrape({
    url: 'https://example.com',
    asp: true,       // Anti-bot bypass
    render_js: true  // JS rendering included
});

console.log(result.result.content);
```

 

 

 

 

###  Brightdata (cURL, REST API)

 ```
curl -X POST "https://api.brightdata.com/request" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "zone": "your_zone_name",
    "url": "https://example.com",
    "format": "raw"
  }'
```

 

 

###  Scrapfly (cURL)

 ```
curl "https://api.scrapfly.io/scrape\
?key=YOUR_SCRAPFLY_API_KEY\
&url=https%3A%2F%2Fexample.com\
&asp=true\
&render_js=true"
```

 

 

 

### Brightdata Proxy-based vs Scrapfly REST

 ```
# Brightdata proxy-based access (complex auth)
curl "https://example.com" \
  --proxy brd.superproxy.io:33335 \
  --proxy-user brd-customer-CUSTOMER_ID-zone-ZONE:PASSWORD \
  -k
```

 

 

 ```
# Scrapfly REST API (simple auth)
curl "https://api.scrapfly.io/scrape\
?key=YOUR_API_KEY\
&url=https%3A%2F%2Fexample.com\
&asp=true"
```

 

 

 

 

 

 

 

 

##  [🤖 AI Migration Assistant](#ai-migration) 

 Use Claude or ChatGPT to automatically convert your Brightdata code to Scrapfly. Copy this prompt and paste it along with your existing code.

####  Copy This Prompt

 ```
I'm migrating from Brightdata Web Unlocker to Scrapfly. Here's my current code using Brightdata's API.
Please convert it to use Scrapfly's Python SDK (or JavaScript SDK if my code is in JavaScript).

Key differences:
- Brightdata uses zones + API key auth; Scrapfly uses single API key
- Brightdata REST API endpoint: api.brightdata.com/request
- Scrapfly REST API endpoint: api.scrapfly.io/scrape

Parameter mappings:
- zone (Brightdata) → Not needed (Scrapfly uses single API key)
- format: raw → format (values: raw, markdown, text, clean_html)
- data_format: markdown → format=markdown
- data_format: screenshot → Use Screenshot API
- -country-[code] (proxy username) → country parameter
- -session-[id] (proxy username) → session parameter
- -ua-mobile → os=android
- x-unblock-expect header → wait_for_selector
- Premium domains → asp=True (handles all protected sites)

Scrapfly exclusive features:
- render_js=True: JavaScript rendering (Brightdata requires separate Browser API)
- js_scenario: Browser automation (clicks, forms, scrolls)
- asp=True: Anti-Scraping Protection
- session: Session management
- cache: Response caching
- extraction_model: AI data extraction

Scrapfly SDK Docs: https://scrapfly.io/docs/sdk/python?view=markdown
Scrapfly API Docs: https://scrapfly.io/docs/scrape-api/getting-started?view=markdown

My current Brightdata code:
[PASTE YOUR CODE HERE]
```

 

 **How to Use:**

1. Copy the prompt above
2. Open [Claude](https://claude.ai) or [ChatGPT](https://chat.openai.com)
3. Paste the prompt and replace `[PASTE YOUR CODE HERE]` with your Brightdata code
4. Review the generated Scrapfly code and test it with your free 1,000 credits
 
 **Developer Tools:** Use our [cURL to Python converter](https://scrapfly.io/web-scraping-tools/curl-python) and [selector tester](https://scrapfly.io/web-scraping-tools/css-xpath-tester) to speed up development.

 

 

 

 

 

##  [Scrapfly Exclusive Features](#exclusive-features) 

 Features available in Scrapfly that aren't available in Brightdata Web Unlocker.

#### [JS Scenarios](https://scrapfly.io/products/web-scraping-api)

Automate browser interactions: clicks, form fills, scrolls, and conditional logic. Brightdata requires their separate Browser API for similar functionality.

#### [Extraction API](https://scrapfly.io/products/extraction-api)

AI-powered data extraction with pre-built models for products, articles, jobs, and more. Use LLM prompts for custom extraction without CSS selectors.

#### [Crawler API](https://scrapfly.io/products/crawler-api)

Automated multi-page crawling with intelligent link discovery, sitemap support, and per-URL extraction rules. Not available in Brightdata Web Unlocker.

#### [Smart Caching](https://scrapfly.io/products/web-scraping-api)

Cache responses to reduce costs and improve response times. Set custom TTL and clear cache on demand.

#### [Auto Scroll](https://scrapfly.io/products/web-scraping-api)

Automatically scroll pages to trigger lazy-loaded content. Essential for infinite scroll pages like social media feeds.

#### [Proxy Saver](https://scrapfly.io/products/proxy-saver)

Bandwidth optimization that reduces residential proxy costs by up to 50%. Blocks junk traffic, stubs images, and caches responses.
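
To show how these features combine in a single REST call, here is a sketch that builds a request URL with caching and auto-scroll enabled. Parameter names follow the Scrapfly docs linked above; treat the `cache_ttl` value as an arbitrary example to verify against the cache documentation.

```python
from urllib.parse import urlencode

SCRAPFLY_ENDPOINT = "https://api.scrapfly.io/scrape"

params = {
    "key": "YOUR_SCRAPFLY_API_KEY",
    "url": "https://example.com/feed",
    "asp": "true",
    "render_js": "true",
    "auto_scroll": "true",   # scroll to trigger lazy-loaded content
    "cache": "true",         # serve repeat requests from cache
    "cache_ttl": "3600",     # example TTL in seconds
}

# urlencode handles percent-encoding of the target URL for us
request_url = f"{SCRAPFLY_ENDPOINT}?{urlencode(params)}"
print(request_url)
```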

 

 

 

##  [Frequently Asked Questions](#faq) 

  #### Do I need to set up zones in Scrapfly?

No. Scrapfly uses a single API key for all features. There are no zones, zone passwords, or SSL certificates to manage:

 ```
# Brightdata requires zone setup
"zone": "your_zone_name"
"Authorization: Bearer API_KEY"

# Scrapfly uses single API key
client = ScrapflyClient(key="YOUR_API_KEY")
```

 

 

   #### How do I handle Brightdata's premium domains?

Brightdata charges extra for 60+ "premium domains" like Walmart, Target, and Costco. Scrapfly uses variable credit costs based on actual complexity, but there's no separate premium tier:

 ```
# Scrapfly handles all protected sites with asp=True
result = client.scrape(ScrapeConfig(
    url="https://www.walmart.com/product",
    asp=True  # Handles "premium" sites without extra tier
))
```

 

 

   #### How do I add JavaScript rendering?

Brightdata's Web Unlocker doesn't include JavaScript rendering. You need their separate Browser API. Scrapfly includes JS rendering in the core API:

 ```
# Scrapfly with JS rendering
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    render_js=True,       # JS rendering included
    rendering_wait=3000   # Optional wait time
))
```

 

[Learn more about JS rendering](https://scrapfly.io/docs/scrape-api/javascript-rendering)

 

   #### How do I migrate browser automation?

If you're using Brightdata's Browser API for automation, Scrapfly's JS Scenarios provide similar functionality in the same API:

 ```
# Scrapfly JS Scenario for clicking and filling forms
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    render_js=True,
    js_scenario=[
        {"click": {"selector": "#load-more"}},
        {"wait": 2000},
        {"fill": {"selector": "#search", "value": "query"}},
        {"click": {"selector": "#submit"}}
    ]
))
```

 

[Learn more about JS Scenarios](https://scrapfly.io/docs/scrape-api/javascript-rendering#js_scenario)

 

   #### How do I test my migration?

1. **Sign up for free:** Get 1,000 API credits with no credit card required
2. **Run parallel testing:** Keep Brightdata running while testing Scrapfly
3. **Compare results:** Verify that Scrapfly returns the same data
4. **Gradual migration:** Switch traffic gradually (e.g., 10% → 50% → 100%)
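
For step 3, a simple similarity check can flag pages where the two providers return substantially different content during parallel testing. This is an illustrative heuristic only; tune the threshold to your targets.

```python
from difflib import SequenceMatcher

def responses_match(brightdata_html: str, scrapfly_html: str,
                    threshold: float = 0.9) -> bool:
    """Return True when the two response bodies are at least
    `threshold` similar. Crude heuristic for comparing results."""
    ratio = SequenceMatcher(None, brightdata_html, scrapfly_html).ratio()
    return ratio >= threshold

# identical content passes; unrelated content is flagged
assert responses_match("<html>same</html>", "<html>same</html>")
assert not responses_match("<html>abc</html>",
                           "<html>completely different page</html>")
```

In practice you would fetch the same URL through both APIs and log mismatches for manual review before shifting more traffic to Scrapfly.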
 
 

  

 

 

 

## Start Your Migration Today

Test Scrapfly on your targets with 1,000 free API credits. No credit card required.

- 1,000 free API credits
- Full API access
- Migration support
- Same-day response from our team
 
 [Start For Free](https://scrapfly.io/register) Need help with migration? [Contact our team](mailto:support@scrapfly.io)