Decodo to Scrapfly Migration Guide
Complete parameter mapping and code examples for migrating from Decodo Web Scraping API to Scrapfly. Most teams complete migration in under 2 hours with zero downtime.
Complete Parameter Mapping
Decodo and Scrapfly use different parameter names. This table shows exact mappings for all features.
| Decodo Parameter | Scrapfly Parameter | Notes |
|---|---|---|
| url | url | Target URL to scrape (same) |
| target | N/A | Decodo uses target templates (google_search, amazon_product). Scrapfly uses direct URLs instead |
| headless=html | render_js=true | Enable JavaScript rendering with a headless browser |
| geo | country | 2-letter ISO country code (e.g., "us", "gb"). Scrapfly supports 50+ countries on all plans |
| domain | N/A | Decodo uses domain for TLD selection. Include the TLD in the URL directly with Scrapfly |
| locale | lang | Accept-Language header for language preference |
| headers | headers | Custom HTTP headers (JSON format) |
| cookies | cookies | Custom cookies (via headers or the cookies parameter) |
| force_cookies | cookies | Force-forward cookies. In Scrapfly, use the cookies parameter directly |
| force_headers | headers | Force-forward headers. In Scrapfly, use the headers parameter directly |
| device_type | os | Device type: "desktop", "mobile", "android", "ios" |
| parse=true | extraction_template | Use Scrapfly's Extraction API for structured data output |
| session_id | session | Session name for persistent cookies/state |
| http_method=POST | method=POST | HTTP method for the request |
| payload | body | Request body for POST requests |
| successful_status_codes | N/A | Scrapfly handles status codes automatically. Use error handling in your code if needed |
| markdown=true | format=markdown | Output the response as Markdown |
| xhr=true | debug=true | Capture XHR/fetch requests (use debug mode) |
| N/A | asp=true | Anti-Scraping Protection for protected sites (Scrapfly exclusive) |
| N/A | proxy_pool | Choose a datacenter or residential proxy pool (Scrapfly exclusive) |
| N/A | cache | Enable response caching (Scrapfly exclusive) |
| N/A | cache_ttl | Cache time-to-live in seconds (Scrapfly exclusive) |
| N/A | auto_scroll | Automatically scroll the page to load lazy content (Scrapfly exclusive) |
| N/A | rendering_wait | Wait time in milliseconds after page load (Scrapfly exclusive) |
| N/A | wait_for_selector | Wait for a CSS selector before returning (Scrapfly exclusive) |
| N/A | webhook | Webhook for async notifications (Scrapfly exclusive) |
Migration Code Examples
Side-by-side code examples showing how to migrate from Decodo to Scrapfly. Select your language below.
Decodo
```python
import requests

# Decodo Web Scraping API
url = "https://scraper-api.decodo.com/v2/scrape"
headers = {
    "Authorization": "Basic YOUR_DECODO_API_KEY",
    "Content-Type": "application/json"
}
payload = {
    "url": "https://example.com",
    "headless": "html",
    "geo": "United States",
    "device_type": "desktop"
}
response = requests.post(url, json=payload, headers=headers)
print(response.text)
```
Scrapfly
```python
from scrapfly import ScrapflyClient, ScrapeConfig

# Scrapfly Web Scraping API
client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    render_js=True,
    asp=True,  # Anti-bot bypass
    country="us",
    os="desktop"
))
print(result.content)
```
Decodo
```javascript
const axios = require('axios');

// Decodo Web Scraping API
const response = await axios.post(
  'https://scraper-api.decodo.com/v2/scrape',
  {
    url: 'https://example.com',
    headless: 'html',
    geo: 'United States',
    device_type: 'desktop'
  },
  {
    headers: {
      'Authorization': 'Basic YOUR_DECODO_API_KEY',
      'Content-Type': 'application/json'
    }
  }
);
console.log(response.data);
```
Scrapfly
```javascript
const { ScrapflyClient } = require('scrapfly-sdk');

// Scrapfly Web Scraping API
const client = new ScrapflyClient({
  key: 'YOUR_SCRAPFLY_API_KEY'
});
const result = await client.scrape({
  url: 'https://example.com',
  render_js: true,
  asp: true, // Anti-bot bypass
  country: 'us',
  os: 'desktop'
});
console.log(result.result.content);
```
Decodo
```shell
curl -X POST \
  "https://scraper-api.decodo.com/v2/scrape" \
  -H "Authorization: Basic YOUR_DECODO_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "headless": "html",
    "geo": "United States",
    "device_type": "desktop"
  }'
```
Scrapfly
```shell
curl "https://api.scrapfly.io/scrape\
?key=YOUR_SCRAPFLY_API_KEY\
&url=https%3A%2F%2Fexample.com\
&render_js=true\
&asp=true\
&country=us\
&os=desktop"
```
🤖 AI Migration Assistant
Use Claude or ChatGPT to automatically convert your Decodo code to Scrapfly. Copy this prompt and paste it along with your existing code.
Copy This Prompt
```text
I'm migrating from Decodo Web Scraping API to Scrapfly. Here's my current code using Decodo's API.
Please convert it to use Scrapfly's Python SDK (or JavaScript SDK if my code is in JavaScript).

Key parameter mappings:
- url → url (same)
- headless=html → render_js=true
- geo → country (use 2-letter ISO code like "us" instead of "United States")
- device_type → os
- session_id → session
- http_method → method
- payload → body
- headers → headers
- cookies → cookies
- force_cookies → cookies (use directly)
- force_headers → headers (use directly)
- locale → lang
- markdown=true → format="markdown"
- parse=true → extraction_template (use Scrapfly Extraction API)

Important Scrapfly additions:
- Add asp=true for protected sites (Anti-Scraping Protection)
- Add proxy_pool="public_residential_pool" for residential proxies
- Scrapfly uses direct URLs instead of target templates

Scrapfly SDK Docs (markdown for LLM): https://scrapfly.io/docs/sdk/python?view=markdown
Scrapfly API Docs (markdown for LLM): https://scrapfly.io/docs/scrape-api/getting-started?view=markdown

My current Decodo code:

[PASTE YOUR CODE HERE]
```
- Copy the prompt above
- Open Claude or ChatGPT
- Paste the prompt and replace [PASTE YOUR CODE HERE] with your Decodo code
- Review the generated Scrapfly code and test it with your free 1,000 credits
Developer Tools: Use our cURL to Python converter and selector tester to speed up development.
Migrating Target Templates
Decodo uses target templates (like google_search and amazon_product) for structured scraping. Scrapfly achieves the same functionality with direct URLs plus the Extraction API.
Google Search
Decodo: target: "google_search"
Scrapfly:
```python
result = client.scrape(ScrapeConfig(
    url="https://www.google.com/search?q=your+query",
    asp=True,
    country="us"
))
```
Use Extraction API with extraction_model="search_engine_results" for structured output.
Amazon Product
Decodo: target: "amazon_product"
Scrapfly:
```python
result = client.scrape(ScrapeConfig(
    url="https://www.amazon.com/dp/B08N5WRWNW",
    asp=True,
    country="us"
))
```
Use Extraction API with extraction_model="product" for structured output.
Scrapfly Exclusive Features
Features available in Scrapfly that aren't available in Decodo.
Anti-Scraping Protection (ASP)
Industry-leading anti-bot bypass for Cloudflare, DataDome, PerimeterX, and more. Enable with asp=true.
JS Scenarios
Automate browser interactions: clicks, form fills, scrolls, and conditional logic without writing browser automation code.
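As a rough illustration of the idea, a JS Scenario is an ordered list of browser actions passed alongside JavaScript rendering. The action names below are illustrative assumptions; check Scrapfly's JS Scenario documentation for the exact action schema before using this in production.

```python
# Hypothetical sketch of a JS Scenario: fill a search box, submit,
# then wait for results to render. Action names are illustrative --
# consult Scrapfly's JS Scenario docs for the exact schema.
js_scenario = [
    {"fill": {"selector": "input[name='q']", "value": "laptops"}},
    {"click": {"selector": "button[type='submit']"}},
    {"wait_for_selector": {"selector": ".results", "timeout": 5000}},
]

# The scenario would then accompany a rendered scrape, e.g.:
# result = client.scrape(ScrapeConfig(
#     url="https://example.com",
#     render_js=True,
#     js_scenario=js_scenario,
# ))
```

Because the scenario is plain data, it can be versioned and reused across targets without maintaining browser-automation code.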
Extraction API
AI-powered data extraction with pre-built models for products, articles, jobs, and more. Use LLM prompts for custom extraction.
Smart Caching
Cache responses to reduce costs and improve response times. Set custom TTL and clear cache on demand.
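For instance, caching can be enabled straight from the HTTP API query string using the cache and cache_ttl parameters listed in the mapping table; this sketch only builds the request URL (the key value is a placeholder).

```python
from urllib.parse import urlencode

# Sketch: enable Scrapfly's response cache via query parameters.
params = {
    "key": "YOUR_SCRAPFLY_API_KEY",  # placeholder
    "url": "https://example.com",
    "cache": "true",
    "cache_ttl": "3600",  # serve cached copies for up to 1 hour
}
request_url = "https://api.scrapfly.io/scrape?" + urlencode(params)
```

Repeated requests for the same URL within the TTL are then served from cache, which cuts both latency and credit spend.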
Crawler API
Automated multi-page crawling with intelligent link discovery, sitemap support, and per-URL extraction rules.
Proxy Saver
Reduce bandwidth costs by up to 50% by blocking junk traffic, stubbing images/CSS, and caching responses.
Frequently Asked Questions
How do I handle Decodo's headless=html parameter?
In Scrapfly, use render_js=true to enable JavaScript rendering:
```python
# Decodo
headless = "html"

# Scrapfly
render_js = True
```
Unlike Decodo's Core plan, Scrapfly includes JS rendering on all plans.
What about Decodo's geo-targeting?
Decodo uses full country names (e.g., "United States"), while Scrapfly uses 2-letter ISO codes:
```python
# Decodo
geo = "United States"

# Scrapfly
country = "us"  # ISO 3166-1 alpha-2 code
```
Scrapfly supports 50+ countries on all plans. Decodo's Core plan is limited to 8 countries.
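If your Decodo code stores full country names, a small lookup table can translate them during migration. This helper is a hypothetical convenience for your own codebase, not part of either SDK; extend the table with whatever names your code actually sends.

```python
# Minimal name-to-ISO lookup for migrating Decodo geo values.
# Extend with the country names your existing code uses.
GEO_TO_ISO = {
    "United States": "us",
    "United Kingdom": "gb",
    "Germany": "de",
    "France": "fr",
}

def geo_to_country(geo: str) -> str:
    """Translate a Decodo full country name to a Scrapfly ISO code."""
    return GEO_TO_ISO[geo.strip()]
```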
How do I migrate Decodo's parsing functionality?
Decodo's parse=true returns structured JSON for certain targets. In Scrapfly, use the Extraction API:
```python
# Scrapfly Extraction API
result = client.scrape(ScrapeConfig(
    url="https://www.amazon.com/dp/B08N5WRWNW",
    asp=True,
    extraction_model="product"  # Auto-extract product data
))
```
Available models: product, article, job_posting, search_engine_results, and more.
What if I'm using Decodo's session management?
Decodo uses session_id; Scrapfly uses session. The functionality is the same:

```python
# Decodo
session_id = "my-session-123"

# Scrapfly
session = "my-session-123"
```
How do I test my migration?
- Sign up for free: Get 1,000 API credits with no credit card required
- Run parallel testing: Keep Decodo running while testing Scrapfly
- Compare results: Verify that Scrapfly returns the same data
- Gradual migration: Switch traffic gradually (e.g., 10% → 50% → 100%)
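The gradual-migration step can be implemented with a deterministic hash-based router; this is an illustrative sketch (the function and names are ours, not part of the Scrapfly SDK).

```python
import hashlib

def use_scrapfly(job_id: str, rollout_pct: int) -> bool:
    """Deterministically route rollout_pct% of jobs to Scrapfly.

    Hashing the job ID keeps each job on the same provider across
    runs, so parallel-testing comparisons stay stable as you raise
    the percentage from 10 to 50 to 100.
    """
    bucket = hashlib.sha256(job_id.encode()).digest()[0] % 100  # 0-99
    return bucket < rollout_pct
```

Raising rollout_pct only moves jobs from Decodo to Scrapfly, never back and forth, which keeps the comparison step clean.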
Start Your Migration Today
Test Scrapfly on your targets with 1,000 free API credits. No credit card required.
- 1,000 free API credits
- Full API access
- Migration support
- Same-day response from our team
Need help with migration? Contact our team