# ScrapeOps to Scrapfly Migration Guide

Complete parameter mapping and code examples for migrating from ScrapeOps to Scrapfly. Most teams complete migration in under 2 hours with zero downtime.
## Complete Parameter Mapping

ScrapeOps and Scrapfly use different parameter names. This table shows exact mappings for all features.

| ScrapeOps Parameter | Scrapfly Parameter | Notes |
|---|---|---|
| `api_key` | `key` | API authentication key |
| `url` | `url` | Target URL to scrape (same) |
| `render_js` | `render_js` | Enable JavaScript rendering (same parameter name) |
| `residential` | `proxy_pool` | Use `public_residential_pool` for residential proxies |
| `country` | `country` | 2-letter ISO country code (same parameter name) |
| `wait` | `rendering_wait` | Wait time in milliseconds before returning the response |
| `keep_headers` + `headers` | `headers` | Pass headers directly (no boolean flag needed) |
| `session_number` | `session` | Session name for persistent cookies/state |
| `bypass` | `asp` | Anti-bot bypass; Scrapfly's ASP is far more capable |
| `wait_for` | `wait_for_selector` | Wait for a CSS selector to appear |
| N/A | `cache` | Enable response caching (Scrapfly exclusive) |
| N/A | `cache_ttl` | Cache time-to-live in seconds (Scrapfly exclusive) |
| N/A | `auto_scroll` | Automatically scroll the page to load lazy content (Scrapfly exclusive) |
| N/A | `tags` | Custom tags for request tracking (Scrapfly exclusive) |
| N/A | `webhook` | Webhook name for async notifications (Scrapfly exclusive) |
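The table above can be expressed as a small translation helper, useful if you want to migrate an existing configuration dict programmatically. This is an illustrative sketch, not part of either SDK; the function name `scrapeops_to_scrapfly` and the `RENAMES` map are our own.

```python
# Illustrative helper mirroring the mapping table above.
# Not part of either SDK; names here are hypothetical.

RENAMES = {
    "api_key": "key",
    "wait": "rendering_wait",
    "session_number": "session",
    "wait_for": "wait_for_selector",
}

def scrapeops_to_scrapfly(params: dict) -> dict:
    """Translate a ScrapeOps query-param dict into Scrapfly-style params."""
    out = {}
    for name, value in params.items():
        if name == "residential":
            # The boolean flag becomes a named proxy pool
            if str(value).lower() == "true":
                out["proxy_pool"] = "public_residential_pool"
        elif name == "bypass":
            # Every ScrapeOps bypass level maps to Scrapfly's single asp flag
            out["asp"] = True
        elif name == "keep_headers":
            # Scrapfly takes headers directly; the boolean flag is dropped
            continue
        else:
            out[RENAMES.get(name, name)] = value
    return out

print(scrapeops_to_scrapfly({
    "api_key": "YOUR_SCRAPEOPS_KEY",
    "url": "https://web-scraping.dev/products",
    "residential": "true",
    "wait": 5000,
    "bypass": "generic_level_3",
}))
```

Parameters with no entry in the table (such as `url` or `country`) pass through unchanged.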
## Migration Code Examples

Side-by-side code examples showing how to migrate from ScrapeOps to Scrapfly, in Python, JavaScript, and cURL.
### ScrapeOps (Python)

```python
import requests

url = "https://proxy.scrapeops.io/v1/"
params = {
    "api_key": "YOUR_SCRAPEOPS_KEY",
    "url": "https://web-scraping.dev/products",
    "render_js": "true",
    "residential": "true",
    "country": "us",
    "wait": 5000,
    "bypass": "generic_level_3"
}

response = requests.get(url, params=params)
print(response.text)
```
### Scrapfly (Python)

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_KEY")
result = client.scrape(ScrapeConfig(
    url="https://web-scraping.dev/products",
    render_js=True,
    asp=True,  # Anti-bot bypass
    proxy_pool="public_residential_pool",
    country="us",
    rendering_wait=5000
))
print(result.content)
```
### ScrapeOps (JavaScript)

```javascript
const axios = require('axios');

const apiUrl = 'https://proxy.scrapeops.io/v1/';
const params = {
    api_key: 'YOUR_SCRAPEOPS_KEY',
    url: 'https://web-scraping.dev/products',
    render_js: 'true',
    residential: 'true',
    country: 'us',
    wait: 5000,
    bypass: 'generic_level_3'
};

axios.get(apiUrl, { params })
    .then(response => {
        console.log(response.data);
    });
```
### Scrapfly (JavaScript)

```javascript
const { ScrapflyClient } = require('scrapfly-sdk');

const client = new ScrapflyClient({ key: 'YOUR_SCRAPFLY_KEY' });

// CommonJS has no top-level await, so wrap the call in an async function
(async () => {
    const result = await client.scrape({
        url: 'https://web-scraping.dev/products',
        render_js: true,
        asp: true, // Anti-bot bypass
        proxy_pool: 'public_residential_pool',
        country: 'us',
        rendering_wait: 5000
    });
    console.log(result.result.content);
})();
```
### ScrapeOps (cURL)

```shell
curl "https://proxy.scrapeops.io/v1/\
?api_key=YOUR_SCRAPEOPS_KEY\
&url=https%3A%2F%2Fexample.com\
&render_js=true\
&residential=true\
&country=us\
&wait=5000\
&bypass=generic_level_3"
```
### Scrapfly (cURL)

```shell
curl "https://api.scrapfly.io/scrape\
?key=YOUR_SCRAPFLY_KEY\
&url=https%3A%2F%2Fexample.com\
&render_js=true\
&asp=true\
&proxy_pool=public_residential_pool\
&country=us\
&rendering_wait=5000"
```
## AI Migration Assistant

Use Claude or ChatGPT to automatically convert your ScrapeOps code to Scrapfly. Copy this prompt and paste it along with your existing code.
### Copy This Prompt

```
I'm migrating from ScrapeOps to Scrapfly. Here's my current code using ScrapeOps' API.
Please convert it to use Scrapfly's Python SDK (or JavaScript SDK if my code is in JavaScript).

Key parameter mappings:
- api_key → key
- render_js=true → render_js=True (same name)
- residential=true → proxy_pool="public_residential_pool"
- country → country (same name)
- wait → rendering_wait
- session_number → session
- keep_headers + headers → headers (pass directly)
- bypass → asp=True (Scrapfly's ASP is much more capable)

Important additions for Scrapfly:
- Add asp=True for anti-bot bypass (Scrapfly's key feature)
- Use wait_for_selector instead of just wait for dynamic content

Scrapfly SDK Docs (markdown for LLM): https://scrapfly.io/docs/sdk/python?view=markdown
Scrapfly API Docs (markdown for LLM): https://scrapfly.io/docs/scrape-api/getting-started?view=markdown

My current ScrapeOps code:
[PASTE YOUR CODE HERE]
```
- Copy the prompt above
- Open Claude or ChatGPT
- Paste the prompt and replace [PASTE YOUR CODE HERE] with your ScrapeOps code
- Review the generated Scrapfly code and test it with your free 1,000 credits
Developer Tools: Use our cURL to Python converter and selector tester to speed up development.
## Scrapfly Exclusive Features

Features available in Scrapfly that aren't available in ScrapeOps.
### Anti-Scraping Protection (ASP)

Industry-leading anti-bot bypass for Cloudflare, DataDome, PerimeterX, and more. 98% success rate on protected sites with default settings.

### Extraction API

AI-powered data extraction with pre-built models for products, articles, jobs, and more. Use LLM prompts for custom extraction without CSS selectors.

### Proxy Saver

Bandwidth optimization that cuts residential proxy costs by 50%. Blocks junk traffic, stubs images/CSS, and caches responses.

### JS Scenarios

Automate browser interactions: clicks, form fills, scrolls, and conditional logic. Execute complex workflows without writing browser automation code.

### Official SDKs

First-class SDKs for Python, JavaScript/TypeScript, Go, and Scrapy. No manual HTTP client setup required.

### Crawler API

Automated multi-page crawling with intelligent link discovery, sitemap support, and per-URL extraction rules.
## Frequently Asked Questions
### How do I handle ScrapeOps' bypass parameter?

ScrapeOps uses bypass=generic_level_1/2/3 for different anti-bot bypass levels. In Scrapfly, you simply use:

```python
# ScrapeOps
bypass = "generic_level_3"

# Scrapfly
asp = True  # One parameter handles all anti-bot bypass
# ASP automatically adapts to the target's protection level
```

Scrapfly's ASP technology automatically detects the protection system and applies the appropriate bypass strategy. No manual level selection needed.
### How do I migrate from ScrapeOps' residential=true?

In Scrapfly, proxy selection uses the proxy_pool parameter:

```python
# ScrapeOps
residential = "true"

# Scrapfly
proxy_pool = "public_residential_pool"  # Residential proxies
# Or omit for datacenter proxies (default)
```

All proxies are included in your API credits, with no separate proxy fees. Learn more about proxy options.
### Does Scrapfly have proxy monitoring like ScrapeOps?

Scrapfly provides comprehensive request monitoring and analytics through its dashboard:

- Real-time request logs with full details (status, timing, cost)
- Success rate tracking per target domain
- Cost analysis and usage breakdown
- No charge for failed requests

Unlike ScrapeOps' proxy comparison approach, Scrapfly eliminates the need to compare providers: our full-stack technology handles proxy selection automatically.
### How do I test my migration?
- Sign up for free: Get 1,000 API credits with no credit card required
- Run parallel testing: Keep ScrapeOps running while testing Scrapfly
- Compare results: Verify that Scrapfly returns the same data (likely with higher success rate)
- Gradual migration: Switch traffic gradually (e.g., 10% → 50% → 100%)
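The gradual-switch step above can be sketched as deterministic, hash-based routing, so the same URL always takes the same path during rollout and results stay comparable between runs. This is an illustrative pattern, not part of either SDK; the function names are hypothetical.

```python
# Sketch of a gradual rollout: route a stable percentage of URLs to
# Scrapfly while the rest stay on ScrapeOps. Names are hypothetical.
import hashlib

def routes_to_scrapfly(url: str, rollout_percent: int) -> bool:
    """Derive a stable bucket in [0, 100) from the URL; compare to rollout %."""
    bucket = int(hashlib.sha256(url.encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent

def scrape(url: str, rollout_percent: int) -> str:
    if routes_to_scrapfly(url, rollout_percent):
        return f"scrapfly: {url}"   # call the Scrapfly client here
    return f"scrapeops: {url}"      # keep the existing ScrapeOps path
```

Raising `rollout_percent` from 10 to 50 to 100 only moves URLs in one direction (onto Scrapfly), which keeps the comparison clean as you ramp up.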
## Start Your Migration Today
Test Scrapfly on your targets with 1,000 free API credits. No credit card required.
- 1,000 free API credits
- Full API access
- Official SDKs included
- Same-day response from our team
Need help with migration? Contact our team