ScraperAPI to Scrapfly Migration Guide
Complete parameter mapping and code examples for migrating from ScraperAPI to Scrapfly. Most teams complete migration in under 2 hours with zero downtime.
Complete Parameter Mapping
ScraperAPI and Scrapfly use different parameter names. This table shows exact mappings for all features.
| ScraperAPI Parameter | Scrapfly Parameter | Notes |
|---|---|---|
| api_key | key | API authentication key |
| url | url | Target URL to scrape (same) |
| render=true | render_js=true | Enable JavaScript rendering with a headless browser |
| ultra_premium=true | asp=true | Anti-bot bypass (Anti-Scraping Protection) |
| premium=true | proxy_pool=public_residential_pool | Use residential proxy pool |
| country_code | country | 2-letter ISO country code (e.g., "us", "gb") |
| session_number | session | Session name for persistent cookies/state (use a named string) |
| keep_headers=true | headers[Name]=Value | Custom HTTP headers (passed directly) |
| device_type=desktop | (default behavior) | Desktop is the default in Scrapfly |
| screenshot=true | screenshots[main]=fullpage | Capture a screenshot (or use the Screenshot API) |
| autoparse=true | extraction_model=... | Use Scrapfly's AI-powered Extraction API |
| output_format=markdown | format=markdown | Response format: raw, markdown, text, clean_html |
| binary_target=true | format=raw | Return raw binary content (images, PDFs) |
| wait (seconds) | rendering_wait (milliseconds) | Wait time after page load (multiply by 1000) |
| N/A | wait_for_selector | Wait for a CSS selector to appear (Scrapfly exclusive) |
| N/A | js_scenario | Browser automation: clicks, form fills, scrolls (Scrapfly exclusive) |
| N/A | cache | Enable response caching (Scrapfly exclusive) |
| N/A | cache_ttl | Cache time-to-live in seconds (Scrapfly exclusive) |
| N/A | auto_scroll | Automatically scroll the page to load lazy content (Scrapfly exclusive) |
| N/A | tags | Custom tags for request tracking and analytics (Scrapfly exclusive) |
| N/A | correlation_id | Custom ID for request tracking (Scrapfly exclusive) |
| N/A | webhook_name | Async webhook notifications (Scrapfly exclusive) |
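The mappings above can be expressed as a small translation helper when porting an existing codebase. This is an illustrative sketch, not an official utility; the function name and the behavior for unmapped keys are our own assumptions.

```python
# Illustrative sketch: translate a ScraperAPI params dict into
# Scrapfly Scrape API query params, following the mapping table above.
# Not an official Scrapfly utility; names here are hypothetical.

RENAMES = {
    "api_key": "key",
    "url": "url",
    "country_code": "country",
    "render": "render_js",
    "ultra_premium": "asp",
    "output_format": "format",
}

def to_scrapfly_params(scraperapi_params: dict) -> dict:
    out = {}
    for name, value in scraperapi_params.items():
        if name in RENAMES:
            out[RENAMES[name]] = value
        elif name == "premium":
            out["proxy_pool"] = "public_residential_pool"
        elif name == "session_number":
            # Scrapfly uses named session strings, not numeric IDs
            out["session"] = f"session_{value}"
        elif name == "wait":
            # ScraperAPI uses seconds; Scrapfly expects milliseconds
            out["rendering_wait"] = int(float(value) * 1000)
        elif name == "binary_target":
            out["format"] = "raw"
        # device_type=desktop is Scrapfly's default, so it is dropped
    return out
```

For example, `to_scrapfly_params({"api_key": "K", "render": "true", "wait": 5})` yields `{"key": "K", "render_js": "true", "rendering_wait": 5000}`.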
Migration Code Examples
Side-by-side code examples showing how to migrate from ScraperAPI to Scrapfly. Select your language below.
ScraperAPI
```python
import requests

API_KEY = 'YOUR_SCRAPERAPI_KEY'
url = 'https://example.com'

response = requests.get(
    'http://api.scraperapi.com',
    params={
        'api_key': API_KEY,
        'url': url,
        'render': 'true',
        'ultra_premium': 'true',
        'country_code': 'us',
        'session_number': 123
    }
)
print(response.text)
```
Scrapfly
```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    render_js=True,
    asp=True,
    country="us",
    session="my_session_123"
))
print(result.content)
```
ScraperAPI
```javascript
const axios = require('axios');

const API_KEY = 'YOUR_SCRAPERAPI_KEY';
const url = 'https://example.com';

const response = await axios.get(
    'http://api.scraperapi.com', {
        params: {
            api_key: API_KEY,
            url: url,
            render: 'true',
            ultra_premium: 'true',
            country_code: 'us',
            session_number: 123
        }
    });
console.log(response.data);
```
Scrapfly
```javascript
const { ScrapflyClient } = require('scrapfly-sdk');

const client = new ScrapflyClient({
    key: 'YOUR_SCRAPFLY_API_KEY'
});
const result = await client.scrape({
    url: 'https://example.com',
    render_js: true,
    asp: true,
    country: 'us',
    session: 'my_session_123'
});
console.log(result.result.content);
```
ScraperAPI
```bash
curl "http://api.scraperapi.com\
?api_key=YOUR_SCRAPERAPI_KEY\
&url=https%3A%2F%2Fexample.com\
&render=true\
&ultra_premium=true\
&country_code=us"
```
Scrapfly
```bash
curl "https://api.scrapfly.io/scrape\
?key=YOUR_SCRAPFLY_API_KEY\
&url=https%3A%2F%2Fexample.com\
&render_js=true\
&asp=true\
&country=us"
```
🤖 AI Migration Assistant
Use Claude or ChatGPT to automatically convert your ScraperAPI code to Scrapfly. Copy this prompt and paste it along with your existing code.
Copy This Prompt
I'm migrating from ScraperAPI to Scrapfly. Here's my current code using ScraperAPI's API.
Please convert it to use Scrapfly's Python SDK (or JavaScript SDK if my code is in JavaScript).
Key parameter mappings:
- api_key → key
- render=true → render_js=True
- ultra_premium=true → asp=True
- premium=true → proxy_pool="public_residential_pool"
- country_code → country
- session_number → session (use a named string like "session_123")
- keep_headers → headers (pass directly)
- autoparse=true → extraction_model="..." (see Extraction API docs)
- output_format → format (values: raw, markdown, text, clean_html)
- screenshot=true → screenshots={"main": "fullpage"}
- wait (seconds) → rendering_wait (milliseconds, multiply by 1000)
Scrapfly SDK Docs (markdown for LLM): https://scrapfly.io/docs/sdk/python?view=markdown
Scrapfly API Docs (markdown for LLM): https://scrapfly.io/docs/scrape-api/getting-started?view=markdown
My current ScraperAPI code:
[PASTE YOUR CODE HERE]
- Copy the prompt above
- Open Claude or ChatGPT
- Paste the prompt and replace [PASTE YOUR CODE HERE] with your ScraperAPI code
- Review the generated Scrapfly code and test it with your free 1,000 credits
Developer Tools: Use our cURL to Python converter and selector tester to speed up development.
Common Migration Scenarios
Geotargeting
Both use 2-letter ISO country codes. Just change country_code to country.
country_code=us → country=us
Data Extraction
ScraperAPI's autoparse maps to Scrapfly's Extraction API with AI models or LLM prompts.
autoparse=true → extraction_model="product"
Screenshots
Scrapfly offers more options via the Screenshot API: fullpage, viewport, CSS selector captures.
screenshot=true → screenshots[main]=fullpage
Wait Conditions
Scrapfly offers wait_for_selector to wait for specific elements (not available in ScraperAPI).
N/A → wait_for_selector=".product-price"
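As a sketch of how this looks against the raw Scrape API endpoint (parameter names are taken from the mapping table above; the API key and target URL are placeholders):

```python
from urllib.parse import urlencode

# Sketch: build a Scrape API request URL that waits for a specific
# element before returning. The key and URL below are placeholders.
params = {
    "key": "YOUR_SCRAPFLY_API_KEY",
    "url": "https://example.com/product",
    "render_js": "true",                   # selector waits need JS rendering
    "wait_for_selector": ".product-price", # no ScraperAPI equivalent
}
request_url = "https://api.scrapfly.io/scrape?" + urlencode(params)
print(request_url)
```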
Session Management
ScraperAPI uses a numeric session_number; Scrapfly uses named session strings.
session_number=123 → session="session_123"
Scrapfly Exclusive Features
Features available in Scrapfly that aren't available in ScraperAPI.
JS Scenarios
Automate browser interactions: clicks, form fills, scrolls, and conditional logic. Execute complex workflows without writing browser automation code.
Extraction API
AI-powered data extraction with pre-built models for products, articles, jobs, and more. Use LLM prompts for custom extraction without CSS selectors.
Smart Caching
Cache responses to reduce costs and improve response times. Set custom TTL and clear cache on demand.
Crawler API
Automated multi-page crawling with intelligent link discovery, sitemap support, and per-URL extraction rules.
Auto Scroll
Automatically scroll pages to trigger lazy-loaded content. Essential for infinite scroll pages like social media feeds.
Webhooks
Async processing with delivery guarantees. Get notified when scrapes complete without polling.
Frequently Asked Questions
How do I handle ScraperAPI's ultra_premium=true?
In Scrapfly, use asp=True (Anti-Scraping Protection):

```python
# ScraperAPI
ultra_premium = "true"

# Scrapfly
asp = True  # Enables anti-bot bypass
```
What if I'm using ScraperAPI's Structured Data APIs?
Scrapfly's Extraction API is more flexible:
- AI Models: Pre-built models for products, articles, jobs, real estate, and more
- LLM Prompts: Natural language instructions for custom extraction
- Templates: CSS/XPath selectors for precise extraction
Does ScraperAPI use seconds for wait while Scrapfly uses milliseconds?
Yes, multiply ScraperAPI's wait value by 1000:

```python
# ScraperAPI: wait=5 (5 seconds)
# Scrapfly: rendering_wait=5000 (5000 milliseconds)
```
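The conversion is a one-liner; here it is as a helper you could drop into a migration script (the function name is ours, not part of either API):

```python
def to_rendering_wait(wait_seconds: float) -> int:
    """Convert ScraperAPI's wait (seconds) to Scrapfly's rendering_wait (ms)."""
    return int(wait_seconds * 1000)

print(to_rendering_wait(5))    # 5000
print(to_rendering_wait(2.5))  # 2500
```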
How do I convert session_number to session?
ScraperAPI uses numeric IDs; Scrapfly uses named strings:

```python
# ScraperAPI
session_number = 12345

# Scrapfly - use a named string
session = "session_12345"
```
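If you carry numeric session IDs through your codebase, a tiny adapter keeps the change mechanical (the helper and naming scheme are illustrative, not required by Scrapfly):

```python
def to_session_name(session_number: int) -> str:
    """Derive a Scrapfly session name from a ScraperAPI numeric session ID."""
    return f"session_{session_number}"

print(to_session_name(12345))  # session_12345
```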
How do I test my migration?
- Sign up for free: Get 1,000 API credits with no credit card required
- Run parallel testing: Keep ScraperAPI running while testing Scrapfly
- Compare results: Verify that Scrapfly returns the same data
- Gradual migration: Switch traffic gradually (e.g., 10% → 50% → 100%)
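One way to implement the gradual switch is a deterministic per-URL split, so each URL stays on the same provider for the whole rollout stage and results remain comparable. A sketch under those assumptions (the helper name and hashing scheme are ours):

```python
import hashlib

def use_scrapfly(url: str, rollout_percent: int) -> bool:
    """Route a stable rollout_percent share of URLs to Scrapfly.

    Hashing the URL (instead of random choice) keeps routing
    deterministic: a given URL always lands in the same bucket,
    so it stays on one provider during a rollout stage.
    """
    digest = hashlib.sha256(url.encode()).digest()
    bucket = digest[0] % 100  # stable bucket in [0, 100)
    return bucket < rollout_percent

# At 100% every URL goes to Scrapfly; at 0% every URL stays on ScraperAPI.
print(use_scrapfly("https://example.com/item/1", 100))  # True
```

Bump rollout_percent from 10 to 50 to 100 as you verify results at each stage.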
Start Your Migration Today
Test Scrapfly on your targets with 1,000 free API credits. No credit card required.
- 1,000 free API credits
- Full API access
- Migration support
- Same-day response from our team
Need help with migration? Contact our team