Scrape.do to Scrapfly Migration Guide
Complete parameter mapping and code examples for migrating from Scrape.do to Scrapfly. Most teams complete migration in under 2 hours with zero downtime.
Complete Parameter Mapping
Scrape.do and Scrapfly use different parameter names. This table shows exact mappings for all features.
| Scrape.do Parameter | Scrapfly Parameter | Notes |
|---|---|---|
| `token` | `key` | API authentication key |
| `url` | `url` | Target URL to scrape (same) |
| `render` | `render_js` | Enable JavaScript rendering (`render=true` becomes `render_js=true`) |
| `geoCode` | `country` | 2-letter ISO country code |
| `customWait` | `rendering_wait` | Wait time before returning response |
| `waitSelector` | `wait_for_selector` | Wait for CSS selector to appear |
| `playWithBrowser` | `js_scenario` | Browser interaction scenarios (Scrapfly's JS Scenarios are more powerful) |
| `extraHeaders` | `headers` | Pass headers directly |
| `output` | `format` | Response format (markdown, text, clean_html, json) |
| N/A (no equivalent) | `asp` | Anti-bot bypass (Scrapfly exclusive) |
| N/A | `session` | Session name for persistent cookies/state (Scrapfly exclusive) |
| N/A | `extraction_prompt` | AI-powered data extraction (Scrapfly exclusive) |
| N/A | `cache` | Enable response caching (Scrapfly exclusive) |
| N/A | `auto_scroll` | Automatically scroll page to load lazy content (Scrapfly exclusive) |
| N/A | `tags` | Custom tags for request tracking (Scrapfly exclusive) |
| N/A | `webhook` | Webhook name for async notifications (Scrapfly exclusive) |
Migration Code Examples
Side-by-side examples in Python, JavaScript, and cURL showing how to migrate from Scrape.do to Scrapfly.
Scrape.do (Python)

```python
import requests

url = "https://api.scrape.do"
params = {
    "token": "YOUR_SCRAPEDO_TOKEN",
    "url": "https://web-scraping.dev/products",
    "render": "true",
    "super": "true",
    "geoCode": "us"
}
response = requests.get(url, params=params)
print(response.text)
```
Scrapfly (Python)

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_KEY")
result = client.scrape(ScrapeConfig(
    url="https://web-scraping.dev/products",
    render_js=True,
    asp=True,  # Anti-bot bypass
    country="us",
    proxy_pool="public_residential_pool"
))
print(result.content)
```
Scrape.do (JavaScript)

```javascript
const axios = require('axios');

const apiUrl = 'https://api.scrape.do';
const params = {
  token: 'YOUR_SCRAPEDO_TOKEN',
  url: 'https://web-scraping.dev/products',
  render: 'true',
  super: 'true',
  geoCode: 'us'
};

axios.get(apiUrl, { params })
  .then(response => {
    console.log(response.data);
  });
```
Scrapfly (JavaScript)

```javascript
const { ScrapflyClient } = require('scrapfly-sdk');

const client = new ScrapflyClient({ key: 'YOUR_SCRAPFLY_KEY' });

// Wrap in an async function: top-level await is not available in CommonJS modules
(async () => {
  const result = await client.scrape({
    url: 'https://web-scraping.dev/products',
    render_js: true,
    asp: true, // Anti-bot bypass
    country: 'us',
    proxy_pool: 'public_residential_pool'
  });
  console.log(result.result.content);
})();
```
Scrape.do
curl "https://api.scrape.do\
?token=YOUR_SCRAPEDO_TOKEN\
&url=https%3A%2F%2Fexample.com\
&render=true\
&super=true\
&geoCode=us"
Scrapfly (cURL)

```shell
curl "https://api.scrapfly.io/scrape\
?key=YOUR_SCRAPFLY_KEY\
&url=https%3A%2F%2Fexample.com\
&render_js=true\
&asp=true\
&country=us\
&proxy_pool=public_residential_pool"
```
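If you build the request URL yourself instead of using an SDK, Python's standard `urllib.parse.urlencode` produces the percent-encoding shown in the cURL examples above automatically, so you never hand-escape the target URL:

```python
from urllib.parse import urlencode

# Build the Scrapfly request URL from the cURL example above;
# urlencode() percent-encodes the target URL (https%3A%2F%2Fexample.com).
params = {
    "key": "YOUR_SCRAPFLY_KEY",
    "url": "https://example.com",
    "render_js": "true",
    "asp": "true",
    "country": "us",
    "proxy_pool": "public_residential_pool",
}
request_url = "https://api.scrapfly.io/scrape?" + urlencode(params)
print(request_url)
```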
AI Migration Assistant
Use Claude or ChatGPT to automatically convert your Scrape.do code to Scrapfly. Copy this prompt and paste it along with your existing code.
Copy This Prompt
```
I'm migrating from Scrape.do to Scrapfly. Here's my current code using Scrape.do's API.
Please convert it to use Scrapfly's Python SDK (or JavaScript SDK if my code is in JavaScript).

Key parameter mappings:
- token → key
- render=true → render_js=True
- super=true → asp=True (Scrapfly's ASP is much more capable)
- geoCode → country (2-letter ISO code)
- waitUntil → rendering_wait
- sessionId → session
- customHeaders → headers (pass directly)
- output → format
- playWithBrowser → js_scenario (Scrapfly's JS Scenarios)

Important additions for Scrapfly:
- Add asp=True for anti-bot bypass (Scrapfly's key feature)
- Use proxy_pool="public_residential_pool" for residential proxies
- Use wait_for_selector for waiting on specific elements

Scrapfly SDK Docs (markdown for LLM): https://scrapfly.io/docs/sdk/python?view=markdown
Scrapfly API Docs (markdown for LLM): https://scrapfly.io/docs/scrape-api/getting-started?view=markdown

My current Scrape.do code:
[PASTE YOUR CODE HERE]
```
- Copy the prompt above
- Open Claude or ChatGPT
- Paste the prompt and replace [PASTE YOUR CODE HERE] with your Scrape.do code
- Review the generated Scrapfly code and test it with your free 1,000 credits
Developer Tools: Use our cURL to Python converter and selector tester to speed up development.
Scrapfly Exclusive Features
Features available in Scrapfly that aren't available in Scrape.do.
Anti-Scraping Protection (ASP)
Industry-leading anti-bot bypass for Cloudflare, DataDome, PerimeterX, and more. 98% success rate on protected sites with default settings.
Extraction API
AI-powered data extraction with pre-built models for products, articles, jobs, and more. Use LLM prompts for custom extraction without CSS selectors.
Proxy Saver
Bandwidth optimization that cuts residential proxy costs by 50%. Blocks junk traffic, stubs images/CSS, and caches responses.
Official SDKs
First-class SDKs for Python, JavaScript/TypeScript, Go, and Scrapy. No manual HTTP client setup required.
Advanced JS Scenarios
More powerful browser interactions than Scrape.do's playWithBrowser: clicks, form fills, scrolls, and conditional logic.
Crawler API
Automated multi-page crawling with intelligent link discovery, sitemap support, and per-URL extraction rules.
Frequently Asked Questions
How do I handle Scrape.do's super=true parameter?
Scrape.do's "super" mode uses premium proxies for anti-bot bypass. In Scrapfly, anti-bot bypass and proxy selection are separate concerns:
```python
# Scrape.do
super = "true"

# Scrapfly
asp = True  # Anti-bot bypass (much more capable)
proxy_pool = "public_residential_pool"  # For residential proxies
```
Scrapfly's ASP technology goes beyond proxy rotation — it matches real browser TLS fingerprints, HTTP/2 signatures, and JavaScript environments for much higher success rates.
How do I migrate Scrape.do's playWithBrowser interactions?
Scrapfly's JS Scenarios are more powerful than Scrape.do's playWithBrowser:
```python
from scrapfly import ScrapflyClient, ScrapeConfig, ScrapeApiResponse

client = ScrapflyClient(key="YOUR_SCRAPFLY_KEY")

# Scrapfly JS Scenario example - login form automation
result: ScrapeApiResponse = client.scrape(ScrapeConfig(
    url="https://web-scraping.dev/login",
    render_js=True,
    screenshots={"test": "fullpage"},
    js_scenario=[
        {"fill": {"selector": "input[name='username']", "clear": True, "value": "user123"}},
        {"fill": {"selector": "input[name='password']", "clear": True, "value": "password"}},
        {"click": {"selector": "form > button[type='submit']"}},
        {"wait_for_navigation": {"timeout": 5000}}
    ],
    headers={"cookie": "cookiesAccepted=true"}
))
```
JS Scenarios support clicks, form fills, scrolls, keyboard input, navigation waits, screenshots, and more.
Does Scrapfly support Scrape.do's geoCode targeting?
Yes. Scrapfly uses the country parameter with the same 2-letter ISO codes:
```python
# Scrape.do
geoCode = "us"

# Scrapfly
country = "us"  # Same 2-letter ISO code
```
Scrapfly supports 50+ countries with both residential and datacenter proxies included in all plans.
How do I test my migration?
- Sign up for free: Get 1,000 API credits with no credit card required
- Run parallel testing: Keep Scrape.do running while testing Scrapfly
- Compare results: Verify that Scrapfly returns the same data (likely with higher success rate)
- Gradual migration: Switch traffic gradually (e.g., 10% → 50% → 100%)
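The gradual rollout in the last step can be as simple as a weighted random choice at request time. This is a generic sketch, not part of either SDK; `pick_provider` is a hypothetical name, and you would raise the share as confidence in the Scrapfly results grows:

```python
import random

# Hypothetical traffic splitter for gradual migration: routes a configurable
# share of requests to Scrapfly while the rest stay on Scrape.do.
def pick_provider(scrapfly_share: float, rng: random.Random) -> str:
    """Return 'scrapfly' for roughly `scrapfly_share` of calls, else 'scrapedo'."""
    return "scrapfly" if rng.random() < scrapfly_share else "scrapedo"

rng = random.Random(42)  # seeded so the split is reproducible
sample = [pick_provider(0.10, rng) for _ in range(1000)]
print(sample.count("scrapfly"))  # roughly 100 of 1000 requests at the 10% stage
```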
Start Your Migration Today
Test Scrapfly on your targets with 1,000 free API credits. No credit card required.
- 1,000 free API credits
- Full API access
- Official SDKs included
- Same-day response from our team
Need help with migration? Contact our team