
Scrapingdog to Scrapfly Migration Guide

Complete parameter mapping and code examples for migrating from Scrapingdog to Scrapfly. Most teams complete migration in under 2 hours with zero downtime.

Complete Parameter Mapping

Scrapingdog and Scrapfly use different parameter names. This table shows exact mappings for all features.

| Scrapingdog Parameter | Scrapfly Parameter | Notes |
|---|---|---|
| api_key | key | API authentication key |
| url | url | Target URL to scrape (same) |
| dynamic | render_js | Enable JavaScript rendering (dynamic=true becomes render_js=true) |
| premium | proxy_pool | Use public_residential_pool for residential proxies |
| country | country | 2-letter ISO country code (same parameter name) |
| wait | rendering_wait | Wait time in milliseconds before returning the response |
| session_number | session | Session name for persistent cookies/state |
| custom_headers + headers | headers | Pass headers directly (no boolean flag needed) |
| markdown | format | Use format=markdown for markdown output |
| ai_query | extraction_prompt | AI extraction with natural language prompts |
| ai_extract_rules | extraction_template | Use Scrapfly's Extraction API for structured rules |
| N/A (no equivalent) | asp | Scrapfly exclusive: Anti-Scraping Protection for bypassing anti-bot systems |
| N/A | wait_for_selector | Wait for a CSS selector to appear (Scrapfly exclusive) |
| N/A | cache | Enable response caching (Scrapfly exclusive) |
| N/A | cache_ttl | Cache time-to-live in seconds (Scrapfly exclusive) |
| N/A | auto_scroll | Automatically scroll the page to load lazy content (Scrapfly exclusive) |
| N/A | tags | Custom tags for request tracking (Scrapfly exclusive) |
| N/A | webhook | Webhook name for async notifications (Scrapfly exclusive) |
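Most of the mappings above are mechanical renames, so they can be captured in a small translation helper. The sketch below is illustrative only, not part of either SDK: the function name, the RENAMES dict, and the simplified handling of boolean strings are our own.

```python
# Hypothetical helper: translate a Scrapingdog params dict into
# Scrapfly parameters, following the mapping table above.

RENAMES = {
    "api_key": "key",
    "url": "url",
    "country": "country",
    "wait": "rendering_wait",
    "session_number": "session",
    "headers": "headers",
}

def to_scrapfly_params(sd_params):
    """Translate Scrapingdog query params to their Scrapfly equivalents."""
    out = {}
    for name, value in sd_params.items():
        if name == "dynamic":
            # dynamic=true -> render_js=True
            out["render_js"] = str(value).lower() == "true"
        elif name == "premium":
            # premium=true -> residential proxy pool
            if str(value).lower() == "true":
                out["proxy_pool"] = "public_residential_pool"
        elif name == "markdown":
            # markdown=true -> format=markdown
            if str(value).lower() == "true":
                out["format"] = "markdown"
        elif name in RENAMES:
            out[RENAMES[name]] = value
    return out

params = to_scrapfly_params({
    "api_key": "YOUR_SCRAPINGDOG_KEY",
    "url": "https://example.com",
    "dynamic": "true",
    "premium": "true",
    "wait": 5000,
})
```

A helper like this is handy for auditing a large codebase: run every existing Scrapingdog call through it and review the output before switching endpoints.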

Migration Code Examples

Side-by-side code examples in Python, JavaScript, and cURL showing how to migrate from Scrapingdog to Scrapfly.

Scrapingdog (Python)
import requests

url = "https://api.scrapingdog.com/scrape"

params = {
    "api_key": "YOUR_SCRAPINGDOG_KEY",
    "url": "https://example.com",
    "dynamic": "true",
    "premium": "true",
    "country": "us",
    "wait": 5000
}

response = requests.get(url, params=params)

print(response.text)
Scrapfly (Python)
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_KEY")

result = client.scrape(ScrapeConfig(
    url="https://example.com",
    render_js=True,
    asp=True,  # Anti-bot bypass
    proxy_pool="public_residential_pool",
    country="us",
    rendering_wait=5000
))

print(result.content)
Scrapingdog (JavaScript)
const axios = require('axios');

const apiUrl = 'https://api.scrapingdog.com/scrape';
const params = {
  api_key: 'YOUR_SCRAPINGDOG_KEY',
  url: 'https://example.com',
  dynamic: 'true',
  premium: 'true',
  country: 'us',
  wait: 5000
};

axios.get(apiUrl, { params })
  .then(response => {
    console.log(response.data);
  });
Scrapfly (JavaScript)
const { ScrapflyClient, ScrapeConfig } = require('scrapfly-sdk');

const client = new ScrapflyClient({
    key: 'YOUR_SCRAPFLY_KEY'
});

// Wrap in an async function: top-level await is not
// available in CommonJS modules.
(async () => {
    const result = await client.scrape(new ScrapeConfig({
        url: 'https://example.com',
        render_js: true,
        asp: true,  // Anti-bot bypass
        proxy_pool: 'public_residential_pool',
        country: 'us',
        rendering_wait: 5000
    }));

    console.log(result.result.content);
})();
Scrapingdog (cURL)
curl "https://api.scrapingdog.com/scrape\
?api_key=YOUR_SCRAPINGDOG_KEY\
&url=https%3A%2F%2Fexample.com\
&dynamic=true\
&premium=true\
&country=us\
&wait=5000"
Scrapfly (cURL)
curl "https://api.scrapfly.io/scrape\
?key=YOUR_SCRAPFLY_KEY\
&url=https%3A%2F%2Fexample.com\
&render_js=true\
&asp=true\
&proxy_pool=public_residential_pool\
&country=us\
&rendering_wait=5000"
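If you prefer to keep calling the HTTP API directly rather than adopting an SDK, the cURL request above is just a GET with URL-encoded query parameters. A minimal Python sketch using only the standard library; the endpoint and parameter names are taken from the cURL example above:

```python
from urllib.parse import urlencode

# Build the same Scrapfly request URL as the cURL example above.
params = {
    "key": "YOUR_SCRAPFLY_KEY",
    "url": "https://example.com",
    "render_js": "true",
    "asp": "true",
    "proxy_pool": "public_residential_pool",
    "country": "us",
    "rendering_wait": "5000",
}

# urlencode percent-encodes the target URL automatically,
# e.g. https://example.com -> https%3A%2F%2Fexample.com
request_url = "https://api.scrapfly.io/scrape?" + urlencode(params)
```

The resulting URL can be passed to any HTTP client; the official SDKs call this same endpoint but add authentication handling, retries, and error handling on top.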

AI Migration Assistant

Use Claude or ChatGPT to automatically convert your Scrapingdog code to Scrapfly. Copy this prompt and paste it along with your existing code.

Copy This Prompt

I'm migrating from Scrapingdog to Scrapfly. Here's my current code using Scrapingdog's API.
Please convert it to use Scrapfly's Python SDK (or JavaScript SDK if my code is in JavaScript).

Key parameter mappings:
- api_key → key
- dynamic=true → render_js=True
- premium=true → proxy_pool="public_residential_pool"
- country → country (same name)
- wait → rendering_wait
- session_number → session
- custom_headers + headers → headers (pass directly)
- markdown=true → format="markdown"
- ai_query → extraction_prompt (use Scrapfly Extraction API)
- ai_extract_rules → extraction_template (use Scrapfly Extraction API)

Important additions for Scrapfly:
- Add asp=True for anti-bot bypass (Scrapfly's key feature, no Scrapingdog equivalent)
- Use wait_for_selector instead of just wait for dynamic content

Scrapfly SDK Docs (markdown for LLM): https://scrapfly.io/docs/sdk/python?view=markdown
Scrapfly API Docs (markdown for LLM): https://scrapfly.io/docs/scrape-api/getting-started?view=markdown

My current Scrapingdog code:
[PASTE YOUR CODE HERE]
How to Use:
  1. Copy the prompt above
  2. Open Claude or ChatGPT
  3. Paste the prompt and replace [PASTE YOUR CODE HERE] with your Scrapingdog code
  4. Review the generated Scrapfly code and test it with your free 1,000 credits

Developer Tools: Use our cURL to Python converter and selector tester to speed up development.

Frequently Asked Questions

How do I handle Scrapingdog's dynamic=true and premium=true?

In Scrapfly, these are separate parameters:

# Scrapingdog
dynamic = "true"
premium = "true"

# Scrapfly
render_js = True  # For JavaScript rendering
proxy_pool = "public_residential_pool"  # For residential proxies
asp = True  # For anti-bot bypass (recommended!)

The key addition is asp=True, which enables Scrapfly's Anti-Scraping Protection for much higher success rates on protected sites.

What if I'm using Scrapingdog's ai_query or ai_extract_rules?

Scrapfly's Extraction API is more powerful:

  • extraction_prompt: Similar to ai_query, but more capable
  • extraction_template: Similar to ai_extract_rules, but with more features
  • extraction_model: Pre-built models for products, articles, jobs, etc.

Learn more about the Extraction API

Scrapingdog doesn't have SDKs. How do I use Scrapfly's SDKs?

Install the SDK for your language:

# Python
pip install scrapfly-sdk

# JavaScript/TypeScript
npm install scrapfly-sdk

# Go
go get github.com/scrapfly/scrapfly-go

SDKs handle authentication, retries, and error handling automatically. View SDK documentation
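The SDKs retry transient failures for you; if you are migrating incrementally and still calling the HTTP API yourself, you will need something equivalent. A minimal retry sketch, where the backoff policy and the set of retryable status codes are illustrative choices of ours, not Scrapfly defaults:

```python
import time

def with_retries(fetch, max_attempts=3, backoff_s=1.0,
                 retryable=(429, 500, 502, 503)):
    """Call fetch() until it returns a non-retryable status or attempts run out.

    fetch must return an object with a .status_code attribute (e.g. a
    requests.Response). Raises RuntimeError after the last failed attempt.
    """
    for attempt in range(1, max_attempts + 1):
        response = fetch()
        if response.status_code not in retryable:
            return response
        if attempt < max_attempts:
            time.sleep(backoff_s * attempt)  # linear backoff between attempts
    raise RuntimeError(
        f"gave up after {max_attempts} attempts "
        f"(last status {response.status_code})"
    )
```

Usage would look like `with_retries(lambda: requests.get(request_url))`; once you switch to an SDK, this wrapper can simply be deleted.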

How do I test my migration?

  1. Sign up for free: Get 1,000 API credits with no credit card required
  2. Run parallel testing: Keep Scrapingdog running while testing Scrapfly
  3. Compare results: Verify that Scrapfly returns the same data (likely with higher success rate)
  4. Gradual migration: Switch traffic gradually (e.g., 10% → 50% → 100%)
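The gradual switch in step 4 can be as simple as deterministic bucketing: hash each URL into a bucket so the same URL always hits the same provider, and ramp the percentage over time. The bucketing scheme below is our own sketch, not part of either API:

```python
import zlib

def pick_provider(url, scrapfly_percent):
    """Deterministically route a URL to a provider.

    The same URL always lands in the same bucket, so results stay
    comparable while you ramp scrapfly_percent from 10 to 100.
    """
    bucket = zlib.crc32(url.encode("utf-8")) % 100
    return "scrapfly" if bucket < scrapfly_percent else "scrapingdog"

# Ramp from 10% to 50% to 100% by changing one number:
provider = pick_provider("https://example.com/page/1", scrapfly_percent=10)
```

Because routing is a pure function of the URL, you can log the provider alongside each response and compare success rates per target during the parallel-testing phase.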

What about Scrapingdog's dedicated APIs (Google, Amazon, LinkedIn)?

Scrapfly takes a unified approach: one API works for all websites. Instead of learning different APIs for each target:

  • ASP technology: Handles anti-bot protection on any site
  • Extraction API: Extracts structured data from any page (products, articles, jobs, etc.)
  • Higher success rates: 98% vs 39% on protected sites

This means less code to maintain and consistent behavior across all targets.

Start Your Migration Today

Test Scrapfly on your targets with 1,000 free API credits. No credit card required.

  • 1,000 free API credits
  • Full API access
  • Official SDKs included
  • Same-day response from our team
Start For Free

Need help with migration? Contact our team