Crawlbase to Scrapfly Migration Guide
Complete parameter mapping and code examples for migrating from Crawlbase to Scrapfly. Most teams complete migration in under 2 hours with zero downtime.
Complete Parameter Mapping
Crawlbase and Scrapfly use different parameter names. This table shows exact mappings for all features.
| Crawlbase Parameter | Scrapfly Parameter | Notes |
|---|---|---|
| token | key | API authentication key |
| url | url | Target URL to scrape (same) |
| &javascript=true (JS token) | render_js=true | Enable JavaScript rendering. Crawlbase requires a separate JS token; Scrapfly uses a single key with a parameter. |
| &premium=true | proxy_pool=public_residential_pool | Use the residential proxy pool |
| country | country | 2-letter ISO country code (e.g., "us", "gb") |
| page_wait (milliseconds) | rendering_wait (milliseconds) | Wait time after page load (both in ms) |
| ajax_wait | wait_for_selector | Wait for AJAX. Scrapfly offers more precise selector-based waiting. |
| css_click_selector | js_scenario | Click elements. Scrapfly offers full browser automation (click, fill, scroll). |
| scroll | auto_scroll=true | Scroll the page to load lazy content |
| scroll_interval | js_scenario | Custom scroll timing via JS Scenario |
| device | os | Operating system for the browser fingerprint |
| format=json | (default behavior) | Scrapfly always returns a structured JSON response with the content |
| format=html | format=raw | Get raw HTML content |
| get_cookies | (included by default) | Cookies are included in the response metadata |
| set_cookies | cookies | Send custom cookies with the request |
| store_session | session | Session name for persistent cookies/state |
| user_agent | headers[User-Agent] | Custom User-Agent header |
| autoparse | extraction_model | Auto-parse structured data. Use Scrapfly's Extraction API with models like product and article. |
| scraper | extraction_template | Predefined scraper templates. Use Scrapfly's Extraction Templates. |
| screenshot=true | screenshots[main]=fullpage | Capture a page screenshot (or use the Screenshot API) |
| async=true | webhook_name | Async processing with a webhook callback |
| cookies_session | session | Session-based cookie persistence (same as store_session) |
| N/A | asp | Anti-Scraping Protection for bypassing anti-bot systems (Scrapfly exclusive) |
| N/A | cache | Enable response caching (Scrapfly exclusive) |
| N/A | cache_ttl | Cache time-to-live in seconds (Scrapfly exclusive) |
| N/A | extraction_model | AI-powered structured data extraction (Scrapfly exclusive) |
| N/A | tags | Custom tags for request tracking and analytics (Scrapfly exclusive) |
| N/A | correlation_id | Custom ID for request tracking (Scrapfly exclusive) |
| N/A | webhook_name | Async webhook notifications (Scrapfly exclusive) |
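Two of the trickier mappings above, ajax_wait and scroll, are worth a concrete sketch. The snippet below assumes the Python SDK exposes wait_for_selector and auto_scroll under the same names as the HTTP API; the URL and selector are placeholders:

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

# Crawlbase: ajax_wait=true + scroll=true
# Scrapfly: wait for a concrete element instead of a blind AJAX wait,
# and auto-scroll to trigger lazy-loaded content
result = client.scrape(ScrapeConfig(
    url="https://example.com/listing",   # placeholder URL
    render_js=True,                      # waiting/scrolling requires JS rendering
    wait_for_selector=".product-grid",   # hypothetical selector
    auto_scroll=True,
))
print(result.content)
```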
Token Simplification
Crawlbase requires separate tokens for different features. Scrapfly uses a single API key with parameters.
| Crawlbase Token Type | Scrapfly Equivalent |
|---|---|
| Normal Token (static pages) | Single API key with default params |
| JavaScript Token (dynamic pages) | Single API key + render_js=true |
| Premium Token + &premium=true | Single API key + proxy_pool=public_residential_pool |
One Scrapfly API key handles all use cases. No token management complexity.
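As a minimal sketch (placeholder URLs), here is how the three token types collapse into one key:

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")  # one key for everything

# Normal Token equivalent: static page, default parameters
static = client.scrape(ScrapeConfig(url="https://example.com/static"))

# JavaScript Token equivalent: just add render_js=True
dynamic = client.scrape(ScrapeConfig(url="https://example.com/app", render_js=True))

# Premium Token equivalent: switch the proxy pool per request
premium = client.scrape(ScrapeConfig(
    url="https://example.com/protected",
    proxy_pool="public_residential_pool",
))
```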
Migration Code Examples
Side-by-side code examples showing how to migrate from Crawlbase to Scrapfly, in Python, JavaScript, and cURL.
Crawlbase (Python)

```python
import requests

# Crawlbase requires separate tokens!
JS_TOKEN = 'YOUR_CRAWLBASE_JS_TOKEN'

url = 'https://example.com'
params = {
    'token': JS_TOKEN,
    'url': url,
    'country': 'us',
    'page_wait': 5000,
    'premium': 'true'
}

response = requests.get(
    'https://api.crawlbase.com/',
    params=params
)
print(response.text)
```
Scrapfly (Python)

```python
from scrapfly import ScrapflyClient, ScrapeConfig

# One key for all features
client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

result = client.scrape(ScrapeConfig(
    url="https://example.com",
    render_js=True,
    asp=True,  # Anti-bot bypass
    country="us",
    proxy_pool="public_residential_pool",
    rendering_wait=5000
))
print(result.content)
```
Crawlbase (JavaScript)

```javascript
// ES module: top-level await is available
import axios from 'axios';

const JS_TOKEN = 'YOUR_CRAWLBASE_JS_TOKEN';
const url = 'https://example.com';

const response = await axios.get('https://api.crawlbase.com/', {
  params: {
    token: JS_TOKEN,
    url: url,
    country: 'us',
    page_wait: 5000,
    premium: 'true'
  }
});
console.log(response.data);
```
Scrapfly (JavaScript)

```javascript
// ES module: top-level await is available
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({
  key: 'YOUR_SCRAPFLY_API_KEY'
});

const result = await client.scrape(new ScrapeConfig({
  url: 'https://example.com',
  render_js: true,
  asp: true,
  country: 'us',
  proxy_pool: 'public_residential_pool',
  rendering_wait: 5000
}));
console.log(result.result.content);
```
Crawlbase (cURL)

```bash
curl "https://api.crawlbase.com/\
?token=YOUR_JS_TOKEN\
&url=https%3A%2F%2Fexample.com\
&country=us\
&page_wait=5000\
&premium=true"
```
Scrapfly (cURL)

```bash
curl "https://api.scrapfly.io/scrape\
?key=YOUR_SCRAPFLY_API_KEY\
&url=https%3A%2F%2Fexample.com\
&render_js=true\
&asp=true\
&country=us\
&proxy_pool=public_residential_pool\
&rendering_wait=5000"
```
🤖 AI Migration Assistant
Use Claude or ChatGPT to automatically convert your Crawlbase code to Scrapfly. Copy this prompt and paste it along with your existing code.
Copy This Prompt
I'm migrating from Crawlbase to Scrapfly. Here's my current code using Crawlbase's API.
Please convert it to use Scrapfly's Python SDK (or JavaScript SDK if my code is in JavaScript).
Key parameter mappings:
- token → key (Note: Scrapfly uses a single key for all features)
- JS Token usage → key + render_js=True
- url → url (same)
- javascript=true → render_js=True
- premium=true → proxy_pool="public_residential_pool"
- country → country (same)
- page_wait → rendering_wait (both in milliseconds)
- ajax_wait → wait_for_selector (CSS selector)
- css_click_selector → js_scenario (use click action)
- scroll=true → auto_scroll=True
- device → os
- store_session → session
- cookies_session → session
- user_agent → headers["User-Agent"]
- set_cookies → cookies
- autoparse → extraction_model (use Scrapfly Extraction API)
- scraper → extraction_template
- screenshot=true → screenshots={"main": "fullpage"}
- async=true → webhook_name (configure webhook in dashboard)
Additional Scrapfly features to consider:
- asp=True for anti-bot bypass (exclusive to Scrapfly)
- cache=True for response caching
- extraction_model for AI data extraction
Scrapfly SDK Docs (markdown for LLM): https://scrapfly.io/docs/sdk/python?view=markdown
Scrapfly API Docs (markdown for LLM): https://scrapfly.io/docs/scrape-api/getting-started?view=markdown
My current Crawlbase code:
[PASTE YOUR CODE HERE]
- Copy the prompt above
- Open Claude or ChatGPT
- Paste the prompt and replace [PASTE YOUR CODE HERE] with your Crawlbase code
- Review the generated Scrapfly code and test it with your free 1,000 credits
Developer Tools: Use our cURL to Python converter and selector tester to speed up development.
Common Migration Scenarios
Token Consolidation
Crawlbase requires separate Normal/JS/Premium tokens. Scrapfly uses one API key with feature parameters.
3 tokens → 1 key + params
Anti-Bot Bypass
Crawlbase has limited anti-bot capabilities. Scrapfly offers asp=True for industry-leading bypass.
N/A → asp=True
Geotargeting
Both use 2-letter ISO country codes. Same parameter name country.
country=us → country=us
Wait Timing
Both use milliseconds. Crawlbase's page_wait maps to Scrapfly's rendering_wait.
page_wait=5000 → rendering_wait=5000
Click Actions
Crawlbase's css_click_selector maps to Scrapfly's powerful JS Scenario for full browser automation.
css_click_selector → js_scenario
Session Management
Crawlbase's store_session maps to Scrapfly's session for persistent cookies.
store_session → session
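A minimal sketch of the session mapping (placeholder URLs, arbitrary session name): requests that share a session name reuse cookies and state.

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

# First request establishes the session (e.g., receives login cookies)
client.scrape(ScrapeConfig(
    url="https://example.com/login",  # placeholder URL
    session="my-session",             # arbitrary session name
))

# Subsequent requests with the same session name reuse its cookies
result = client.scrape(ScrapeConfig(
    url="https://example.com/account",
    session="my-session",
))
print(result.content)
```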
Scrapfly Exclusive Features
Features available in Scrapfly that aren't available in Crawlbase.
Anti-Scraping Protection (ASP)
Industry-leading anti-bot bypass technology. Works on Cloudflare, PerimeterX, DataDome, and more. Crawlbase has no equivalent.
JS Scenarios
Full browser automation: clicks, form fills, scrolls, conditional logic. Far more powerful than Crawlbase's css_click_selector.
Extraction API
AI-powered data extraction with pre-built models for products, articles, jobs, and more. No CSS selectors required.
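A minimal sketch using the HTTP API directly (placeholder URL; "product" is one of the pre-built models mentioned above):

```python
import requests

# AI extraction: no CSS selectors, just name the model
resp = requests.get(
    "https://api.scrapfly.io/scrape",
    params={
        "key": "YOUR_SCRAPFLY_API_KEY",
        "url": "https://example.com/product/123",  # placeholder URL
        "extraction_model": "product",
    },
)
print(resp.json())  # structured product data arrives in the JSON response
```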
Smart Caching
Cache responses to reduce costs and improve response times. Set custom TTL and clear cache on demand.
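A sketch of enabling the cache with a one-hour TTL, assuming the Python SDK exposes cache and cache_ttl under the same names as the HTTP API:

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

config = ScrapeConfig(
    url="https://example.com/catalog",  # placeholder URL
    cache=True,      # serve a cached copy when one is available
    cache_ttl=3600,  # time-to-live in seconds (1 hour)
)

first = client.scrape(config)   # hits the target site
second = client.scrape(config)  # may be answered from cache, saving credits
```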
Crawler API
Automated multi-page crawling with intelligent link discovery, sitemap support, and per-URL extraction rules.
Webhooks
Async processing with delivery guarantees. Get notified when scrapes complete without polling.
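A sketch via the HTTP API (the webhook itself must first be created in the Scrapfly dashboard; the name below is hypothetical):

```python
import requests

# Async processing: Scrapfly delivers the result to your webhook, no polling
resp = requests.get(
    "https://api.scrapfly.io/scrape",
    params={
        "key": "YOUR_SCRAPFLY_API_KEY",
        "url": "https://example.com/slow-page",  # placeholder URL
        "webhook_name": "my-webhook",            # hypothetical, set up in dashboard
    },
)
print(resp.status_code)
```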
Frequently Asked Questions
I have separate Normal and JS tokens. Do I need multiple Scrapfly keys?
No! Scrapfly uses a single API key for all features. Just add render_js=True when you need JavaScript rendering:
```python
# Crawlbase: Different tokens
NORMAL_TOKEN = "abc123"  # For static pages
JS_TOKEN = "xyz789"      # For dynamic pages

# Scrapfly: One key, parameters control features
SCRAPFLY_KEY = "your_key"
# Static: default
# Dynamic: render_js=True
```
How do I handle Crawlbase's premium proxies?
Use proxy_pool="public_residential_pool" in Scrapfly:
```python
# Crawlbase
premium = "true"

# Scrapfly
proxy_pool = "public_residential_pool"
```
What about Crawlbase's css_click_selector feature?
Scrapfly's JS Scenario is far more powerful:
```python
# Crawlbase: Simple click only
css_click_selector = "#button"

# Scrapfly: Full browser automation
js_scenario = [
    {"click": {"selector": "#button"}},
    {"wait": 1000},
    {"fill": {"selector": "#search", "value": "query"}},
    {"click": {"selector": "#submit"}}
]
```
Does Scrapfly have anti-bot bypass like Crawlbase?
Scrapfly's anti-bot bypass (ASP) is significantly more advanced. Crawlbase has limited anti-bot capabilities compared to Scrapfly's industry-leading technology that works on Cloudflare, PerimeterX, DataDome, and more.
```python
# Scrapfly ASP
asp = True  # Enables anti-bot bypass
```
How do I test my migration?
- Sign up for free: Get 1,000 API credits with no credit card required
- Run parallel testing: Keep Crawlbase running while testing Scrapfly (see the sketch after this list)
- Compare results: Verify that Scrapfly returns the same (or better) data
- Gradual migration: Switch traffic gradually (e.g., 10% → 50% → 100%)
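For the parallel-testing step, a throwaway comparison script can help. This sketch reuses the examples above and only compares payload lengths; a real check would diff the extracted data:

```python
import requests
from scrapfly import ScrapflyClient, ScrapeConfig

URL = "https://example.com"  # placeholder target

# Existing Crawlbase call
crawlbase = requests.get(
    "https://api.crawlbase.com/",
    params={"token": "YOUR_CRAWLBASE_JS_TOKEN", "url": URL},
)

# Candidate Scrapfly call
client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")
scrapfly = client.scrape(ScrapeConfig(url=URL, render_js=True, asp=True))

# Crude sanity check before diffing content properly
print("Crawlbase length:", len(crawlbase.text))
print("Scrapfly length: ", len(scrapfly.content))
```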
Start Your Migration Today
Test Scrapfly on your targets with 1,000 free API credits. No credit card required.
- 1,000 free API credits
- Full API access
- Migration support
- Same-day response from our team
Need help with migration? Contact our team