Scrapingdog to Scrapfly Migration Guide
Complete parameter mapping and code examples for migrating from Scrapingdog to Scrapfly. Most teams complete migration in under 2 hours with zero downtime.
Complete Parameter Mapping
Scrapingdog and Scrapfly use different parameter names. This table shows exact mappings for all features.
| Scrapingdog Parameter | Scrapfly Parameter | Notes |
|---|---|---|
| api_key | key | API authentication key |
| url | url | Target URL to scrape (same) |
| dynamic | render_js | Enable JavaScript rendering (dynamic=true becomes render_js=true) |
| premium | proxy_pool | Use public_residential_pool for residential proxies |
| country | country | 2-letter ISO country code (same parameter name) |
| wait | rendering_wait | Wait time in milliseconds before returning the response |
| session_number | session | Session name for persistent cookies/state |
| custom_headers + headers | headers | Pass headers directly (no boolean flag needed) |
| markdown | format | Use format=markdown for markdown output |
| ai_query | extraction_prompt | AI extraction with natural language prompts |
| ai_extract_rules | extraction_template | Use Scrapfly's Extraction API for structured rules |
| N/A (no equivalent) | asp | Scrapfly exclusive: Anti-Scraping Protection for bypassing anti-bot systems |
| N/A | wait_for_selector | Wait for a CSS selector to appear (Scrapfly exclusive) |
| N/A | cache | Enable response caching (Scrapfly exclusive) |
| N/A | cache_ttl | Cache time-to-live in seconds (Scrapfly exclusive) |
| N/A | auto_scroll | Automatically scroll the page to load lazy content (Scrapfly exclusive) |
| N/A | tags | Custom tags for request tracking (Scrapfly exclusive) |
| N/A | webhook | Webhook name for async notifications (Scrapfly exclusive) |
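To see the mapping applied in code, here is a minimal, hypothetical Python helper (not part of either SDK) that translates the most common Scrapingdog query parameters into Scrapfly ScrapeConfig keyword arguments:

```python
# Hypothetical helper for illustration only -- not part of either SDK.
def scrapingdog_to_scrapfly(params: dict) -> dict:
    """Translate a Scrapingdog params dict into Scrapfly ScrapeConfig kwargs."""
    config = {"url": params["url"]}
    if params.get("dynamic") == "true":
        config["render_js"] = True
    if params.get("premium") == "true":
        config["proxy_pool"] = "public_residential_pool"
    if "country" in params:
        config["country"] = params["country"]
    if "wait" in params:
        config["rendering_wait"] = int(params["wait"])
    if "session_number" in params:
        config["session"] = str(params["session_number"])
    return config

# The Scrapingdog request from the examples below translates to:
print(scrapingdog_to_scrapfly({
    "url": "https://example.com",
    "dynamic": "true",
    "premium": "true",
    "country": "us",
    "wait": 5000,
}))
# {'url': 'https://example.com', 'render_js': True,
#  'proxy_pool': 'public_residential_pool', 'country': 'us',
#  'rendering_wait': 5000}
```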
Migration Code Examples
Side-by-side examples in Python, JavaScript, and cURL showing how to migrate from Scrapingdog to Scrapfly.
Scrapingdog (Python)

```python
import requests

url = "https://api.scrapingdog.com/scrape"
params = {
    "api_key": "YOUR_SCRAPINGDOG_KEY",
    "url": "https://example.com",
    "dynamic": "true",
    "premium": "true",
    "country": "us",
    "wait": 5000
}
response = requests.get(url, params=params)
print(response.text)
```
Scrapfly (Python)

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_KEY")
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    render_js=True,
    asp=True,  # Anti-bot bypass
    proxy_pool="public_residential_pool",
    country="us",
    rendering_wait=5000
))
print(result.content)
```
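If your Scrapingdog calls also use session_number or custom_headers, the same ScrapeConfig accepts the mapped parameters directly. A short sketch with illustrative values:

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_KEY")
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    session="my-session-123",        # was session_number=123
    headers={"X-Custom": "value"},   # was custom_headers=true + headers
))
print(result.content)
```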
Scrapingdog (JavaScript)

```javascript
const axios = require('axios');

const apiUrl = 'https://api.scrapingdog.com/scrape';
const params = {
    api_key: 'YOUR_SCRAPINGDOG_KEY',
    url: 'https://example.com',
    dynamic: 'true',
    premium: 'true',
    country: 'us',
    wait: 5000
};

axios.get(apiUrl, { params })
    .then(response => {
        console.log(response.data);
    });
```
Scrapfly (JavaScript)

```javascript
// ESM module: top-level await requires ESM ("type": "module" or an .mjs file)
import { ScrapflyClient } from 'scrapfly-sdk';

const client = new ScrapflyClient({
    key: 'YOUR_SCRAPFLY_KEY'
});
const result = await client.scrape({
    url: 'https://example.com',
    render_js: true,
    asp: true, // Anti-bot bypass
    proxy_pool: 'public_residential_pool',
    country: 'us',
    rendering_wait: 5000
});
console.log(result.result.content);
```
Scrapingdog
curl "https://api.scrapingdog.com/scrape\
?api_key=YOUR_SCRAPINGDOG_KEY\
&url=https%3A%2F%2Fexample.com\
&dynamic=true\
&premium=true\
&country=us\
&wait=5000"
Scrapfly (cURL)

```bash
curl "https://api.scrapfly.io/scrape\
?key=YOUR_SCRAPFLY_KEY\
&url=https%3A%2F%2Fexample.com\
&render_js=true\
&asp=true\
&proxy_pool=public_residential_pool\
&country=us\
&rendering_wait=5000"
```
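One mapping the side-by-side examples above don't show is Scrapingdog's markdown=true, which becomes format=markdown on Scrapfly. A minimal Python sketch, assuming the SDK passes the API's format parameter through (on the raw API this is simply &format=markdown):

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_KEY")
# Scrapingdog: markdown=true  ->  Scrapfly: format="markdown"
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    format="markdown",
))
print(result.content)  # page content returned as markdown
```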
AI Migration Assistant
Use Claude or ChatGPT to automatically convert your Scrapingdog code to Scrapfly. Copy this prompt and paste it along with your existing code.
Copy This Prompt
```text
I'm migrating from Scrapingdog to Scrapfly. Here's my current code using Scrapingdog's API.
Please convert it to use Scrapfly's Python SDK (or JavaScript SDK if my code is in JavaScript).

Key parameter mappings:
- api_key → key
- dynamic=true → render_js=True
- premium=true → proxy_pool="public_residential_pool"
- country → country (same name)
- wait → rendering_wait
- session_number → session
- custom_headers + headers → headers (pass directly)
- markdown=true → format="markdown"
- ai_query → extraction_prompt (use Scrapfly Extraction API)
- ai_extract_rules → extraction_template (use Scrapfly Extraction API)

Important additions for Scrapfly:
- Add asp=True for anti-bot bypass (Scrapfly's key feature, no Scrapingdog equivalent)
- Use wait_for_selector instead of just wait for dynamic content

Scrapfly SDK Docs (markdown for LLM): https://scrapfly.io/docs/sdk/python?view=markdown
Scrapfly API Docs (markdown for LLM): https://scrapfly.io/docs/scrape-api/getting-started?view=markdown

My current Scrapingdog code:
[PASTE YOUR CODE HERE]
```
- Copy the prompt above
- Open Claude or ChatGPT
- Paste the prompt, replacing [PASTE YOUR CODE HERE] with your Scrapingdog code
- Review the generated Scrapfly code and test it with your free 1,000 credits
Developer Tools: Use our cURL to Python converter and selector tester to speed up development.
Scrapfly Exclusive Features
Features available in Scrapfly that aren't available in Scrapingdog.
Anti-Scraping Protection (ASP)
Industry-leading anti-bot bypass for Cloudflare, DataDome, PerimeterX, and more. 98% success rate on protected sites with default settings.
Official SDKs
First-class SDKs for Python, JavaScript/TypeScript, Go, and Scrapy. No manual HTTP client setup required.
JS Scenarios
Automate browser interactions: clicks, form fills, scrolls, and conditional logic. Execute complex workflows without writing browser automation code.
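As a rough illustration, a scenario is passed as a list of steps. The exact step schema is documented in Scrapfly's JS scenario docs; the selectors and values below are placeholders:

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_KEY")
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    render_js=True,  # scenarios run in the headless browser
    js_scenario=[
        {"fill": {"selector": "#search", "value": "laptop"}},
        {"click": {"selector": "button[type=submit]"}},
        {"wait_for_selector": {"selector": ".results"}},
    ],
))
print(result.content)
```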
Extraction API
AI-powered data extraction with pre-built models for products, articles, jobs, and more. Use LLM prompts for custom extraction without CSS selectors.
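For example, a Scrapingdog ai_query call might migrate to the following sketch (the URL and prompt are illustrative):

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_KEY")
# Replaces Scrapingdog's ai_query: describe the data you want in plain language
result = client.scrape(ScrapeConfig(
    url="https://example.com/product",
    extraction_prompt="Extract the product name, price, and availability",
))
# The extracted data is returned alongside the scraped page; see the
# Extraction API docs for the exact response field.
print(result.content)
```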
Proxy Saver
Bandwidth optimization that cuts residential proxy costs by 50%. Blocks junk traffic, stubs images/CSS, and caches responses.
Crawler API
Automated multi-page crawling with intelligent link discovery, sitemap support, and per-URL extraction rules.
Frequently Asked Questions
How do I handle Scrapingdog's dynamic=true and premium=true?
In Scrapfly, these are separate parameters:
```python
# Scrapingdog
dynamic = "true"
premium = "true"

# Scrapfly
render_js = True  # For JavaScript rendering
proxy_pool = "public_residential_pool"  # For residential proxies
asp = True  # For anti-bot bypass (recommended!)
```
The key addition is asp=True, which enables Scrapfly's Anti-Scraping Protection for much higher success rates on protected sites.
What if I'm using Scrapingdog's ai_query or ai_extract_rules?
Scrapfly's Extraction API is more powerful:
- extraction_prompt: Similar to ai_query, but more capable
- extraction_template: Similar to ai_extract_rules, but with more features
- extraction_model: Pre-built models for products, articles, jobs, etc. (see the sketch after this list)
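For instance, a pre-built model can replace hand-written extraction rules entirely. A sketch, assuming the product model name from the Extraction API docs:

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_KEY")
result = client.scrape(ScrapeConfig(
    url="https://example.com/product",
    extraction_model="product",  # pre-built model; no CSS selectors needed
))
print(result.content)
```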
Scrapingdog doesn't have SDKs. How do I use Scrapfly's SDKs?
Install the SDK for your language:
```bash
# Python
pip install scrapfly-sdk

# JavaScript/TypeScript
npm install scrapfly-sdk

# Go
go get github.com/scrapfly/scrapfly-go
```
SDKs handle authentication, retries, and error handling automatically. View SDK documentation
How do I test my migration?
- Sign up for free: Get 1,000 API credits with no credit card required
- Run parallel testing: Keep Scrapingdog running while testing Scrapfly (see the sketch after this list)
- Compare results: Verify that Scrapfly returns the same data (likely with higher success rate)
- Gradual migration: Switch traffic gradually (e.g., 10% → 50% → 100%)
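A minimal sketch of the parallel-testing step, assuming both API keys are active (the comparison logic is deliberately crude; swap in your own diff):

```python
import requests
from scrapfly import ScrapflyClient, ScrapeConfig

TARGET = "https://example.com"

# Old provider
dog = requests.get("https://api.scrapingdog.com/scrape", params={
    "api_key": "YOUR_SCRAPINGDOG_KEY",
    "url": TARGET,
})

# New provider
fly = ScrapflyClient(key="YOUR_SCRAPFLY_KEY").scrape(ScrapeConfig(url=TARGET))

# Crude comparison: HTTP status and payload size
print("scrapingdog:", dog.status_code, len(dog.text))
print("scrapfly:   ", len(fly.content))
```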
What about Scrapingdog's dedicated APIs (Google, Amazon, LinkedIn)?
Scrapfly takes a unified approach: one API works for all websites. Instead of learning different APIs for each target:
- ASP technology: Handles anti-bot protection on any site
- Extraction API: Extracts structured data from any page (products, articles, jobs, etc.)
- Higher success rates: 98% vs 39% on protected sites
This means less code to maintain and consistent behavior across all targets.
Start Your Migration Today
Test Scrapfly on your targets with 1,000 free API credits. No credit card required.
- 1,000 free API credits
- Full API access
- Official SDKs included
- Same-day response from our team
Need help with migration? Contact our team