Brightdata to Scrapfly Migration Guide
Complete parameter mapping and code examples for migrating from Brightdata Web Unlocker to Scrapfly. Simplified authentication and more features in a unified API.
Key Architectural Differences
Brightdata and Scrapfly have different architectures. Understanding these differences will help you migrate more effectively.
| Aspect | Brightdata Web Unlocker | Scrapfly |
|---|---|---|
| Authentication | Customer ID + Zone + Password + SSL cert | Single API key |
| Access Methods | Proxy-based or REST API | REST API with SDKs |
| JavaScript Rendering | Requires Browser API (separate product) | Built-in with render_js |
| Browser Automation | Requires Browser API (separate product) | JS Scenarios in same API |
| Data Extraction | Not available | AI-powered Extraction API |
| Multi-page Crawling | Not available | Crawler API included |
Complete Parameter Mapping
Brightdata Web Unlocker uses different parameter names and request formats. This table shows how to map them to Scrapfly.
| Brightdata Parameter | Scrapfly Parameter | Notes |
|---|---|---|
| Authorization: Bearer [API_KEY] | key | API authentication. Scrapfly uses a query param or the SDK |
| zone | N/A | Scrapfly doesn't use zones. A single API key covers all features |
| url | url | Target URL to scrape (same) |
| format: raw | format | Response format: raw, markdown, text, clean_html |
| data_format: markdown | format=markdown | Convert HTML to markdown |
| data_format: screenshot | screenshots | Use the Screenshot API or the screenshots parameter |
| -country-[code] (proxy username) | country | 2-letter ISO country code (e.g., "us", "gb") |
| -ua-mobile (proxy username) | os=android | Mobile user agent targeting |
| x-unblock-expect header | wait_for_selector | Wait for an element before returning the response |
| body | body | Request body for POST requests |
| Premium domains | asp=True | Scrapfly ASP handles all protected sites without separate tiers |
| -session-[id] (proxy username) | session | Session management for persistent cookies/IP |
| N/A (requires Browser API) | render_js | JavaScript rendering included in Scrapfly |
| N/A (requires Browser API) | js_scenario | Browser automation: clicks, forms, scrolls (Scrapfly exclusive) |
| N/A | session | Session management for persistent cookies (Scrapfly exclusive) |
| N/A | cache | Response caching (Scrapfly exclusive) |
| N/A | auto_scroll | Auto-scroll for lazy-loaded content (Scrapfly exclusive) |
| N/A | extraction_model | AI-powered data extraction (Scrapfly exclusive) |
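For example, a single request that combines several of the mapped parameters above might look like the following in the Python SDK. This is a minimal sketch based only on the parameters listed in the table; the target URL and the ".product-price" selector are placeholders.
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

# Sketch only: combines country, session, os, and wait_for_selector
# from the mapping table above. URL and selector are placeholders.
result = client.scrape(ScrapeConfig(
    url="https://example.com/products",
    asp=True,                           # replaces Brightdata "premium domains" handling
    render_js=True,                     # no separate Browser API needed
    country="us",                       # replaces -country-us in the proxy username
    session="my-session-1",             # replaces -session-[id] in the proxy username
    os="android",                       # replaces -ua-mobile (value taken from the table)
    wait_for_selector=".product-price"  # replaces the x-unblock-expect header
))
print(result.content)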
Migration Code Examples
Side-by-side code examples showing how to migrate from Brightdata Web Unlocker to Scrapfly.
Python: Brightdata (REST API)
import requests
API_KEY = "YOUR_BRIGHTDATA_API_KEY"
ZONE = "your_zone_name"
response = requests.post(
"https://api.brightdata.com/request",
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {API_KEY}"
},
json={
"zone": ZONE,
"url": "https://example.com",
"format": "raw"
}
)
print(response.text)
Python: Scrapfly
from scrapfly import ScrapflyClient, ScrapeConfig
# No zones, no complex auth
client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")
result = client.scrape(ScrapeConfig(
url="https://example.com",
asp=True, # Anti-bot bypass
render_js=True # JS rendering included
))
print(result.content)
With Geo-targeting and Wait for Element
# Brightdata with geo and expect element
response = requests.post(
"https://api.brightdata.com/request",
headers={
"Content-Type": "application/json",
"Authorization": f"Bearer {API_KEY}"
},
json={
"zone": ZONE,
"url": "https://example.com",
"format": "raw",
"headers": {
"x-unblock-expect": '{"element": ".product-price"}'
}
}
)
# Note: Country targeting requires proxy-based access
# Scrapfly with geo and wait_for_selector
result = client.scrape(ScrapeConfig(
url="https://example.com",
asp=True,
render_js=True,
country="us", # Simple geo-targeting
wait_for_selector=".product-price" # Wait for element
))
print(result.content)
JavaScript: Brightdata (REST API)
const axios = require('axios');
const API_KEY = 'YOUR_BRIGHTDATA_API_KEY';
const ZONE = 'your_zone_name';
const response = await axios.post(
'https://api.brightdata.com/request',
{
zone: ZONE,
url: 'https://example.com',
format: 'raw'
},
{
headers: {
'Content-Type': 'application/json',
'Authorization': `Bearer ${API_KEY}`
}
}
);
console.log(response.data);
JavaScript: Scrapfly
const { ScrapflyClient } = require('scrapfly-sdk');
// Single API key, no zones
const client = new ScrapflyClient({
key: 'YOUR_SCRAPFLY_API_KEY'
});
const result = await client.scrape({
url: 'https://example.com',
asp: true, // Anti-bot bypass
render_js: true // JS rendering included
});
console.log(result.result.content);
cURL: Brightdata (REST API)
curl -X POST "https://api.brightdata.com/request" \
-H "Content-Type: application/json" \
-H "Authorization: Bearer YOUR_API_KEY" \
-d '{
"zone": "your_zone_name",
"url": "https://example.com",
"format": "raw"
}'
cURL: Scrapfly
curl "https://api.scrapfly.io/scrape\
?key=YOUR_SCRAPFLY_API_KEY\
&url=https%3A%2F%2Fexample.com\
&asp=true\
&render_js=true"
Brightdata Proxy-based vs Scrapfly REST
# Brightdata proxy-based access (complex auth)
curl "https://example.com" \
--proxy brd.superproxy.io:33335 \
--proxy-user brd-customer-CUSTOMER_ID-zone-ZONE:PASSWORD \
-k
# Scrapfly REST API (simple auth)
curl "https://api.scrapfly.io/scrape\
?key=YOUR_API_KEY\
&url=https%3A%2F%2Fexample.com\
&asp=true"
🤖 AI Migration Assistant
Use Claude or ChatGPT to automatically convert your Brightdata code to Scrapfly. Copy this prompt and paste it along with your existing code.
Copy This Prompt
I'm migrating from Brightdata Web Unlocker to Scrapfly. Here's my current code using Brightdata's API.
Please convert it to use Scrapfly's Python SDK (or JavaScript SDK if my code is in JavaScript).
Key differences:
- Brightdata uses zones + API key auth; Scrapfly uses single API key
- Brightdata REST API endpoint: api.brightdata.com/request
- Scrapfly REST API endpoint: api.scrapfly.io/scrape
Parameter mappings:
- zone (Brightdata) → Not needed (Scrapfly uses single API key)
- format: raw → format (values: raw, markdown, text, clean_html)
- data_format: markdown → format=markdown
- data_format: screenshot → Use Screenshot API
- -country-[code] (proxy username) → country parameter
- -session-[id] (proxy username) → session parameter
- -ua-mobile → os=android
- x-unblock-expect header → wait_for_selector
- Premium domains → asp=True (handles all protected sites)
Scrapfly exclusive features:
- render_js=True: JavaScript rendering (Brightdata requires separate Browser API)
- js_scenario: Browser automation (clicks, forms, scrolls)
- asp=True: Anti-Scraping Protection
- session: Session management
- cache: Response caching
- extraction_model: AI data extraction
Scrapfly SDK Docs: https://scrapfly.io/docs/sdk/python?view=markdown
Scrapfly API Docs: https://scrapfly.io/docs/scrape-api/getting-started?view=markdown
My current Brightdata code:
[PASTE YOUR CODE HERE]
- Copy the prompt above
- Open Claude or ChatGPT
- Paste the prompt and replace [PASTE YOUR CODE HERE] with your Brightdata code
- Review the generated Scrapfly code and test it with your free 1,000 credits
Developer Tools: Use our cURL to Python converter and selector tester to speed up development.
Scrapfly Exclusive Features
Features available in Scrapfly that aren't available in Brightdata Web Unlocker.
JS Scenarios
Automate browser interactions: clicks, form fills, scrolls, and conditional logic. Brightdata requires their separate Browser API for similar functionality.
Extraction API
AI-powered data extraction with pre-built models for products, articles, jobs, and more. Use LLM prompts for custom extraction without CSS selectors.
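As a rough illustration, the extraction_model parameter from the mapping table can be passed directly in a scrape call. This is a sketch only; "product" is one of the pre-built models named above, and the exact shape of the extracted payload in the response may differ.
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

# Sketch: request AI-powered extraction alongside the scrape.
# "product" is a pre-built model; the URL is a placeholder.
result = client.scrape(ScrapeConfig(
    url="https://example.com/some-product",
    asp=True,
    render_js=True,
    extraction_model="product"  # articles, jobs, etc. work the same way
))
# The structured data is returned with the scrape result; inspect the
# response to see the extracted fields for your target page.
print(result.content)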
Crawler API
Automated multi-page crawling with intelligent link discovery, sitemap support, and per-URL extraction rules. Not available in Brightdata Web Unlocker.
Smart Caching
Cache responses to reduce costs and improve response times. Set custom TTL and clear cache on demand.
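A minimal sketch of enabling caching in the Python SDK, assuming the cache parameter from the mapping table; the cache_ttl argument shown here for the custom TTL is an assumption and should be verified against the Scrapfly docs.
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

# Sketch: cache the response so repeated scrapes of the same URL within
# the TTL are served from cache instead of being re-fetched.
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    asp=True,
    cache=True,      # enable response caching (from the mapping table)
    cache_ttl=3600   # assumed TTL parameter, in seconds; check the docs
))
print(result.content)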
Auto Scroll
Automatically scroll pages to trigger lazy-loaded content. Essential for infinite scroll pages like social media feeds.
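A short sketch of combining auto-scroll with JS rendering, based on the auto_scroll parameter from the mapping table; the target URL is a placeholder.
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

# Sketch: render the page and scroll down so lazy-loaded content
# (e.g. infinite-scroll feeds) is present in the returned HTML.
result = client.scrape(ScrapeConfig(
    url="https://example.com/feed",  # placeholder infinite-scroll page
    asp=True,
    render_js=True,   # auto_scroll requires JS rendering
    auto_scroll=True  # from the mapping table
))
print(result.content)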
Proxy Saver
Bandwidth optimization that reduces residential proxy costs by up to 50%. Blocks junk traffic, stubs images, and caches responses.
Frequently Asked Questions
Do I need to set up zones in Scrapfly?
No. Scrapfly uses a single API key for all features. There are no zones, zone passwords, or SSL certificates to manage:
# Brightdata requires zone setup
"zone": "your_zone_name"
"Authorization: Bearer API_KEY"
# Scrapfly uses single API key
client = ScrapflyClient(key="YOUR_API_KEY")
How do I handle Brightdata's premium domains?
Brightdata charges extra for 60+ "premium domains" like Walmart, Target, and Costco. Scrapfly uses variable credit costs based on actual complexity, but there's no separate premium tier:
# Scrapfly handles all protected sites with asp=True
result = client.scrape(ScrapeConfig(
url="https://www.walmart.com/product",
asp=True # Handles "premium" sites without extra tier
))
How do I add JavaScript rendering?
Brightdata's Web Unlocker doesn't include JavaScript rendering. You need their separate Browser API. Scrapfly includes JS rendering in the core API:
# Scrapfly with JS rendering
result = client.scrape(ScrapeConfig(
url="https://example.com",
render_js=True, # JS rendering included
rendering_wait=3000 # Optional wait time
))
How do I migrate browser automation?
If you're using Brightdata's Browser API for automation, Scrapfly's JS Scenarios provide similar functionality in the same API:
# Scrapfly JS Scenario for clicking and filling forms
result = client.scrape(ScrapeConfig(
url="https://example.com",
render_js=True,
js_scenario=[
{"click": {"selector": "#load-more"}},
{"wait": 2000},
{"fill": {"selector": "#search", "value": "query"}},
{"click": {"selector": "#submit"}}
]
))
How do I test my migration?
- Sign up for free: Get 1,000 API credits with no credit card required
- Run parallel testing: Keep Brightdata running while testing Scrapfly (see the comparison sketch after this list)
- Compare results: Verify that Scrapfly returns the same data
- Gradual migration: Switch traffic gradually (e.g., 10% → 50% → 100%)
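To support the parallel-testing step, a rough comparison harness might look like this. It is a sketch only: the Brightdata call mirrors the REST example earlier in this guide, the target URL is a placeholder, and the comparison (payload size) is deliberately simplistic.
import requests
from scrapfly import ScrapflyClient, ScrapeConfig

URL = "https://example.com"  # placeholder target

# Brightdata Web Unlocker (as in the REST example above)
bd = requests.post(
    "https://api.brightdata.com/request",
    headers={"Authorization": "Bearer YOUR_BRIGHTDATA_API_KEY"},
    json={"zone": "your_zone_name", "url": URL, "format": "raw"},
)

# Scrapfly
client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")
sf = client.scrape(ScrapeConfig(url=URL, asp=True, render_js=True))

# Simplistic sanity check: compare payload sizes; in practice, compare
# the extracted fields you actually care about.
print("Brightdata bytes:", len(bd.text))
print("Scrapfly bytes:  ", len(sf.content))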
Start Your Migration Today
Test Scrapfly on your targets with 1,000 free API credits. No credit card required.
- 1,000 free API credits
- Full API access
- Migration support
- Same-day response from our team
Need help with migration? Contact our team