Firecrawl to Scrapfly Migration Guide
Complete parameter mapping and code examples for migrating from Firecrawl to Scrapfly. Get the same LLM-ready output with more reliable anti-bot bypass. Most teams complete migration in under 2 hours.
Complete Parameter Mapping
Firecrawl and Scrapfly have similar capabilities with different parameter names. This table shows exact mappings for all features.
| Firecrawl Parameter | Scrapfly Parameter | Notes |
|---|---|---|
| `api_key` | `key` | API authentication key |
| `url` | `url` | Target URL to scrape (same) |
| `formats: ["markdown"]` | `format=markdown` | Clean markdown output for LLMs |
| `formats: ["html"]` | `format=clean_html` | Clean HTML without noise |
| `formats: ["rawHtml"]` | `format=raw` | Raw HTML with no modifications |
| `formats: ["screenshot"]` | `screenshots` | Capture page screenshots |
| `formats: ["json"]` + schema | `extraction_template` | Structured data extraction (use Extraction API) |
| Default JS rendering (always on) | `render_js=true` | Scrapfly: explicitly enable JS rendering |
| `stealth` (Stealth Mode) | `asp=true` | Anti-bot bypass (ASP is more reliable) |
| `location.country` | `country` | Two-letter ISO country code (e.g., "us", "gb") |
| `location.languages` | `lang` | Accept-Language header preference |
| `actions` | `js_scenario` | Browser automation (clicks, fills, waits) |
| `waitFor` | `wait_for_selector` | Wait for a CSS selector before scraping |
| `timeout` | `timeout` | Request timeout in milliseconds |
| `maxAge` | `cache=true` + `cache_ttl` | Response caching with TTL |
| `onlyMainContent` | `format_options=only_content` | Extract main content only (for markdown/text) |
| `includeTags` / `excludeTags` | Extraction templates | Use the Extraction API for selective content |
| `headers` | `headers` | Custom HTTP headers |
| `mobile` | `os=android` | Mobile device emulation; use `os=android` or `os=ios` |
| `proxy` (stealth/basic) | `asp` + `proxy_pool` | Firecrawl's stealth proxy → `asp=true`; basic → default datacenter pool |
| `blockAds` | N/A (default behavior) | Scrapfly auto-optimizes by default; use `format_options` for content filtering |
| N/A | `proxy_pool` | Choose proxy type (datacenter/residential); Scrapfly exclusive |
| N/A | Proxy Saver | Bandwidth optimization; Scrapfly exclusive |
| N/A | `session` | Persistent sessions; Scrapfly exclusive |
| N/A | `auto_scroll` | Auto-scroll for lazy content; Scrapfly exclusive |
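The Scrapfly-exclusive parameters in the table can be combined in a single request. A minimal sketch of the keyword arguments involved; the names follow the mapping table, but verify exact SDK signatures against the Scrapfly SDK docs before relying on them:

```python
# Sketch: combining Scrapfly-exclusive parameters from the table above.
# The dict mirrors ScrapeConfig keyword arguments; confirm the names
# against the current SDK documentation before use.
config_params = {
    "url": "https://example.com/feed",
    "render_js": True,           # explicit JS rendering (Firecrawl's default)
    "asp": True,                 # anti-bot bypass (replaces Stealth Mode)
    "session": "my-session-1",   # persistent session across requests
    "auto_scroll": True,         # trigger lazy-loaded content
    "proxy_pool": "public_residential_pool",  # choose proxy type
}

# With the SDK installed:
# from scrapfly import ScrapflyClient, ScrapeConfig
# client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")
# result = client.scrape(ScrapeConfig(**config_params))
```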
Migration Code Examples
Side-by-side code examples showing how to migrate from Firecrawl to Scrapfly. Select your language below.
Firecrawl

```python
from firecrawl import Firecrawl

app = Firecrawl(api_key="fc-YOUR_API_KEY")

# Basic scrape with markdown
result = app.scrape(
    "https://example.com",
    formats=["markdown", "html"]
)

print(result["markdown"])
print(result["html"])
```
Scrapfly

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

# Basic scrape with markdown
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    render_js=True,
    format="markdown"
))

print(result.content)
```
Advanced Example: Protected Site with Location

```python
# Firecrawl with stealth and location
result = app.scrape(
    "https://protected-site.com",
    formats=["markdown"],
    location={
        "country": "US",
        "languages": ["en"]
    }
)
```

```python
# Scrapfly with ASP and location
result = client.scrape(ScrapeConfig(
    url="https://protected-site.com",
    render_js=True,
    asp=True,  # More reliable than stealth
    format="markdown",
    country="us",
    lang=["en"]
))
```
Firecrawl

```javascript
import Firecrawl from '@mendable/firecrawl-js';

const app = new Firecrawl({
  apiKey: 'fc-YOUR_API_KEY'
});

const result = await app.scrapeUrl(
  'https://example.com',
  { formats: ['markdown'] }
);

console.log(result.markdown);
```
Scrapfly

```javascript
import { ScrapflyClient } from 'scrapfly-sdk';

const client = new ScrapflyClient({
  key: 'YOUR_SCRAPFLY_API_KEY'
});

const result = await client.scrape({
  url: 'https://example.com',
  render_js: true,
  format: 'markdown'
});

console.log(result.result.content);
```
Firecrawl

```shell
curl -X POST \
  https://api.firecrawl.dev/v1/scrape \
  -H "Authorization: Bearer fc-YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://example.com",
    "formats": ["markdown"]
  }'
```
Scrapfly

```shell
curl "https://api.scrapfly.io/scrape\
?key=YOUR_SCRAPFLY_API_KEY\
&url=https%3A%2F%2Fexample.com\
&render_js=true\
&format=markdown"
```
Migrating JSON Extraction
Firecrawl's JSON mode with schemas maps to Scrapfly's Extraction API. Here's how to migrate structured data extraction.
Firecrawl JSON Mode

```python
from firecrawl import Firecrawl
from pydantic import BaseModel

app = Firecrawl(api_key="fc-YOUR_API_KEY")

class ProductInfo(BaseModel):
    name: str
    price: float
    description: str

result = app.scrape(
    'https://example.com/product',
    formats=[{
        "type": "json",
        "schema": ProductInfo.model_json_schema()
    }]
)

print(result["json"])
```
Scrapfly Extraction API

```python
from scrapfly import ScrapflyClient, ScrapeConfig

client = ScrapflyClient(key="YOUR_SCRAPFLY_API_KEY")

# Option 1: LLM prompt extraction
result = client.scrape(ScrapeConfig(
    url='https://example.com/product',
    render_js=True,
    asp=True,
    extraction_prompt="Extract product name, price, description"
))

# Option 2: Auto-extraction for products
result = client.scrape(ScrapeConfig(
    url='https://example.com/product',
    render_js=True,
    asp=True,
    extraction_model="product"
))
```
- No schema required: Use natural language prompts instead of defining JSON schemas
- Auto-detection: Pre-built models for products, articles, jobs, real estate, and more
- Template extraction: Define reusable extraction rules with CSS/XPath selectors
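The template approach is not shown elsewhere in this guide, so here is a rough sketch. The template shape, field names, and the base64-encoded "ephemeral" convention are all assumptions to verify against the Extraction API documentation:

```python
import base64
import json

# Sketch: a reusable extraction template built from CSS selectors (the
# third approach listed above). The field names and template shape are
# illustrative -- verify the exact schema in the Extraction API docs.
template = {
    "source": "html",
    "selectors": [
        {"name": "name", "query": "h1.product-title", "type": "css"},
        {"name": "price", "query": ".price", "type": "css"},
        {"name": "description", "query": ".product-description", "type": "css"},
    ],
}

# Templates can typically be passed inline; one convention is an
# "ephemeral" template encoded as base64 JSON (treat this as an assumption):
ephemeral = "ephemeral:" + base64.urlsafe_b64encode(
    json.dumps(template).encode()
).decode()
```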
See the Extraction API documentation for full details.
Migrating Browser Actions
Firecrawl's "actions" feature maps to Scrapfly's "js_scenario" for browser automation.
Firecrawl Actions

```python
result = app.scrape(
    url="https://example.com/login",
    formats=["markdown"],
    actions=[
        {"type": "write", "text": "user@example.com"},
        {"type": "press", "key": "Tab"},
        {"type": "write", "text": "password123"},
        {"type": "click", "selector": 'button[type="submit"]'},
        {"type": "wait", "milliseconds": 2000},
        {"type": "screenshot", "fullPage": True}
    ]
)
```
Scrapfly JS Scenario

```python
result = client.scrape(ScrapeConfig(
    url="https://example.com/login",
    render_js=True,
    asp=True,
    format="markdown",
    js_scenario=[
        {"fill": {"selector": "input[type='email']",
                  "value": "user@example.com"}},
        {"fill": {"selector": "input[type='password']",
                  "value": "password123"}},
        {"click": {"selector": 'button[type="submit"]'}},
        {"wait": 2000}
    ],
    screenshots={"fullpage": "login_result"}
))
```
- `click`: Click elements
- `fill`: Fill form inputs
- `wait`: Wait for milliseconds
- `wait_for_selector`: Wait for an element
- `scroll`: Scroll the page or an element
- `execute`: Run custom JavaScript
- `wait_for_navigation`: Wait for page load
See JS Scenario documentation for full reference.
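The step types not used in the login example might be combined as below. This is a sketch only; the exact payload shapes are assumptions to confirm in the JS Scenario documentation:

```python
# Sketch: js_scenario steps beyond click/fill/wait. The payload shapes
# here are illustrative -- check the JS Scenario docs before use.
js_scenario = [
    {"wait_for_selector": {"selector": ".results", "timeout": 5000}},
    {"scroll": {"selector": "footer"}},                # scroll to an element
    {"execute": {"script": "window.scrollTo(0, 0)"}},  # run custom JavaScript
    {"wait_for_navigation": {"timeout": 10000}},       # wait for page load
]

# Passed alongside render_js=True, e.g.:
# ScrapeConfig(url="https://example.com", render_js=True, js_scenario=js_scenario)
```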
🤖 AI Migration Assistant
Use Claude or ChatGPT to automatically convert your Firecrawl code to Scrapfly. Copy this prompt and paste it along with your existing code.
Copy This Prompt
I'm migrating from Firecrawl to Scrapfly. Here's my current code using Firecrawl's API.
Please convert it to use Scrapfly's Python SDK (or JavaScript SDK if my code is in JavaScript).
Key parameter mappings:
- `api_key` → `key`
- `formats: ["markdown"]` → `format="markdown"`
- `formats: ["html"]` → `format="clean_html"`
- Firecrawl enables JS rendering by default → use `render_js=True` in Scrapfly
- Stealth mode → `asp=True` (more reliable anti-bot bypass)
- `location.country` → `country` (lowercase, e.g., "us")
- `location.languages` → `lang`
- `actions` array → `js_scenario` (different syntax, see docs)
- `waitFor` → `wait_for_selector`
- `timeout` → `timeout` (same)
- `maxAge` → `cache=True` + `cache_ttl` (seconds)
- `onlyMainContent` → `format_options=["only_content"]`
- `mobile` → `os="android"` or `os="ios"`
- `proxy` (stealth) → `asp=True`
- `proxy` (basic) → `proxy_pool="public_datacenter_pool"` (default)
- `blockAds` → N/A (Scrapfly auto-optimizes)
- JSON extraction with schema → use `extraction_prompt` or `extraction_model`
Scrapfly SDK Docs (markdown for LLM): https://scrapfly.io/docs/sdk/python?view=markdown
Scrapfly API Docs (markdown for LLM): https://scrapfly.io/docs/scrape-api/getting-started?view=markdown
Extraction API Docs: https://scrapfly.io/docs/extraction-api/getting-started?view=markdown
My current Firecrawl code:
[PASTE YOUR CODE HERE]
- Copy the prompt above
- Open Claude or ChatGPT
- Paste the prompt and replace `[PASTE YOUR CODE HERE]` with your Firecrawl code
- Review the generated Scrapfly code and test it with your free 1,000 credits
Developer Tools: Use our cURL to Python converter and selector tester to speed up development.
Scrapfly Exclusive Features
Features available in Scrapfly that enhance your AI scraping workflow beyond Firecrawl's capabilities.
Advanced Anti-Bot Bypass (ASP)
Full technology ownership means 98% success on protected sites. When anti-bot systems update, we restore bypasses in days, not weeks. More reliable than Firecrawl's Stealth Mode.
MCP Cloud for AI Agents
Native Model Context Protocol integration for Claude, LangChain, and AI agents. No SDK needed - connect your AI directly to web scraping.
Proxy Saver (Bandwidth Optimization)
Reduce residential proxy bandwidth costs by 50%. Blocks junk traffic, stubs images/CSS, and caches responses, saving ~$1,500 per million requests.
Crawler API
Automated multi-page crawling with intelligent link discovery, sitemap support, and per-URL extraction rules.
Auto Scroll
Automatically scroll pages to trigger lazy-loaded content. Essential for infinite scroll pages like social media feeds.
Webhooks
Async processing with delivery guarantees. Get notified when scrapes complete without polling.
Frequently Asked Questions
How do I get markdown output like Firecrawl?
Use the `format` parameter:

```python
# Firecrawl
formats=["markdown"]

# Scrapfly
format="markdown"
```

Scrapfly also supports `format="text"` for plain text and `format="clean_html"` for structured HTML without noise.
Firecrawl renders JavaScript by default. How do I enable it in Scrapfly?
Scrapfly gives you explicit control. Add `render_js=True` to enable JavaScript rendering:

```python
result = client.scrape(ScrapeConfig(
    url="https://example.com",
    render_js=True,  # Enable JS rendering
    asp=True         # Enable anti-bot bypass
))
```
This gives you cost control. Simple pages without JavaScript are cheaper to scrape.
How do I replicate Firecrawl's JSON extraction?
Scrapfly's Extraction API offers three approaches:
- LLM prompt: `extraction_prompt="Extract product name, price, description"`
- Auto-extraction: `extraction_model="product"` (or `article`, `job`, etc.)
- Template: Define CSS/XPath selectors in a reusable template
Unlike Firecrawl's schema-based approach, LLM prompts work on any page structure without predefined schemas.
How do I migrate Firecrawl's caching (maxAge)?
Scrapfly uses the `cache` and `cache_ttl` parameters:

```python
# Firecrawl (maxAge in milliseconds)
maxAge=600000  # 10 minutes

# Scrapfly (cache_ttl in seconds)
cache=True
cache_ttl=600  # 10 minutes
```
See caching documentation for more options.
How do I test my migration?
- Sign up for free: Get 1,000 API credits with no credit card required
- Run parallel testing: Keep Firecrawl running while testing Scrapfly on the same URLs
- Compare results: Verify that Scrapfly returns the same data quality
- Test protected sites: Try Scrapfly's ASP on sites where Firecrawl's Stealth Mode struggles
- Gradual migration: Switch traffic gradually (e.g., 10% → 50% → 100%)
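For the "compare results" step, a cheap first-pass check can catch gross mismatches before a field-by-field inspection. A sketch; the helper and its threshold are illustrative and not part of either SDK:

```python
# Sketch: rough parity check between Firecrawl and Scrapfly markdown
# output during parallel testing. A length-based comparison is only a
# first-pass sanity check, not a substitute for inspecting the content.
def outputs_roughly_match(firecrawl_md: str, scrapfly_md: str,
                          tolerance: float = 0.2) -> bool:
    """True when the two outputs are within `tolerance` relative length."""
    longer = max(len(firecrawl_md), len(scrapfly_md), 1)
    shorter = min(len(firecrawl_md), len(scrapfly_md))
    return (longer - shorter) / longer <= tolerance

print(outputs_roughly_match("# Title\nBody text", "# Title\nBody text!"))  # True
print(outputs_roughly_match("# Title", "x" * 500))                         # False
```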
Start Your Migration Today
Get reliable anti-bot bypass with LLM-ready output. Test Scrapfly with 1,000 free API credits.
- 1,000 free API credits
- Full API access
- Same markdown output as Firecrawl
- 98% success on protected sites
Need help with migration? Contact our team