Web Scraping API
Unlock Reliable & Scalable Scraping
Achieve the highest scraping success rates with a battle-tested API that effortlessly scales with your needs!
- Overcome scraping challenges with advanced anti-blocking techniques.
- Utilize cloud-based browsers for scraping complex, dynamic pages effortlessly.
- Leverage AI and LLMs to extract data automatically and with precision.
- Seamlessly integrate with popular platforms and our Python and TypeScript SDKs for smooth workflows.
What Can It Do?
- Automatically Bypass Scraper Blocking: Seamlessly bypass anti-bot challenges and solve JS-based obstacles for uninterrupted scraping.
- Customize Any Part of the Request: Control every aspect of your requests, from headers and cookies to data payloads and even the operating system fingerprint.
- Automatic Proxy Rotation (130M+ Proxies From 120+ Countries): Scrapfly rotates proxies automatically, giving you access to a colossal IP pool of either residential or datacenter proxies for complete coverage.
- Use Sessions for Persistent Proxies: Maintain persistent sessions for both residential and datacenter proxies to ensure uninterrupted data scraping.
- Use Real Web Browsers: Scrape JavaScript-powered websites and load all elements automatically using real web browsers with thousands of configurable fingerprints.
- Send Real Browser Commands: Execute real-time browser commands to interact with dynamic web content. Fill in forms, click buttons, and scroll to reach the desired pages.
- Access Browser Data: Directly access browser data to capture background requests, hidden data, and delayed elements.
- Switch Sessions Between Browser and Non-Browser Requests: Seamlessly switch between browser and non-browser sessions for flexible scraping.
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

client = ScrapflyClient(key="API KEY")
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/html',
        # bypass anti-scraping protection
        asp=True
    )
)
print(api_response.result)
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        // bypass anti-scraping protection
        asp: true,
    })
);
console.log(api_result.result);
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://httpbin.dev/html \
asp==true
import os

from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

client = ScrapflyClient(key=os.environ["SCRAPFLY_KEY"])
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/post',
        # change method to POST, GET, PUT etc.
        method="POST",
        # POST data
        data={
            "key": "value"
        },
        # send custom headers
        headers={
            "Authorization": "Basic ABC",
            "Content-Type": "application/json",
        }
    )
)
print(api_response.result)
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://httpbin.dev/post',
        // change method to POST, GET, PUT etc.
        method: "POST",
        // POST data
        data: {
            "key": "value"
        },
        // send custom headers
        headers: {
            "Authorization": "Basic ABC",
            "Content-Type": "application/json",
        }
    })
);
console.log(api_result.result);
http POST https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://httpbin.dev/post \
headers[myheader]==myvalue \
key=value
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

client = ScrapflyClient(key="API KEY")
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://httpbin.dev/html',
        # choose proxy countries
        country="US,CA",
        # residential or datacenter proxies
        proxy_pool="public_residential_pool"
    )
)
print(api_response.result)
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        // choose proxy countries
        country: "US,CA",
        // residential or datacenter proxies
        proxy_pool: "public_residential_pool"
    })
);
console.log(api_result.result);
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://httpbin.dev/html \
country=="US,CA" \
proxy_pool=="public_residential_pool"
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

client = ScrapflyClient(key="API KEY")
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/product/1',
        # add unique identifier to start a session
        session="mysession123",
    )
)
# resume session
api_response2: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/product/1',
        session="mysession123",
        # sessions can be shared between browser and http requests
        # render_js=True,  # enable browser for this session
    )
)
print(api_response2.result)
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        // add unique identifier to start a session
        session: "mysession123",
    })
);
// resume session
let api_result2 = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        session: "mysession123",
        // sessions can be shared between browser and http requests
        // render_js: true, // enable browser for this session
    })
);
console.log(JSON.stringify(api_result2.result));
# start session
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
session==mysession123
# resume session
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/product/1 \
session==mysession123
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

client = ScrapflyClient(key="API KEY")
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/reviews',
        # enable the use of cloud browsers
        render_js=True,
        # wait for specific element to appear
        wait_for_selector=".review",
        # or wait a set amount of time
        rendering_wait=3_000,  # 3 seconds
    )
)
print(api_response.result)
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/reviews',
        // enable the use of cloud browsers
        render_js: true,
        // wait for specific element to appear
        wait_for_selector: ".review",
        // or wait a set amount of time
        rendering_wait: 3_000, // 3 seconds
    })
);
console.log(JSON.stringify(api_result.result));
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/reviews \
render_js==true \
wait_for_selector==.review \
rendering_wait==3000
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

client = ScrapflyClient(key="API KEY")
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/login',
        # enable browsers for this request
        render_js=True,
        # describe your control flow
        js_scenario=[
            {"fill": {"selector": "input[name=username]", "value": "user123"}},
            {"fill": {"selector": "input[name=password]", "value": "password"}},
            {"click": {"selector": "button[type='submit']"}},
            {"wait_for_navigation": {"timeout": 5000}}
        ]
    )
)
print(api_response.result)
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/login',
        // enable browsers for this request
        render_js: true,
        // describe your control flow
        js_scenario: [
            {"fill": {"selector": "input[name=username]", "value": "user123"}},
            {"fill": {"selector": "input[name=password]", "value": "password"}},
            {"click": {"selector": "button[type='submit']"}},
            {"wait_for_navigation": {"timeout": 5000}}
        ]
    })
);
console.log(JSON.stringify(api_result.result));
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/login \
render_js==true \
js_scenario==Ww0KCXsiZmlsbCI6IHsic2VsZWN0b3IiOiAiaW5wdXRbbmFtZT11c2VybmFtZV0iLCAidmFsdWUiOiJ1c2VyMTIzIn19LA0KCXsiZmlsbCI6IHsic2VsZWN0b3IiOiAiaW5wdXRbbmFtZT1wYXNzd29yZF0iLCAidmFsdWUiOiJwYXNzd29yZCJ9fSwNCgl7ImNsaWNrIjogeyJzZWxlY3RvciI6ICJidXR0b25bdHlwZT0nc3VibWl0J10ifX0sDQoJeyJ3YWl0X2Zvcl9uYXZpZ2F0aW9uIjogeyJ0aW1lb3V0IjogNTAwMH19DQpd
# note: js scenario has to be base64 encoded
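For reference, here is a minimal Python sketch of producing that base64 value for the js_scenario parameter; the URL-safe alphabet used below is an assumption (plain base64 plus URL-encoding works as well):

import base64
import json

# the same login scenario as in the SDK examples above
scenario = [
    {"fill": {"selector": "input[name=username]", "value": "user123"}},
    {"fill": {"selector": "input[name=password]", "value": "password"}},
    {"click": {"selector": "button[type='submit']"}},
    {"wait_for_navigation": {"timeout": 5000}},
]
# serialize to JSON, then base64-encode for use as the js_scenario query parameter
encoded = base64.urlsafe_b64encode(json.dumps(scenario).encode()).decode()
print(encoded)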
from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

client = ScrapflyClient(key="API KEY")
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/reviews',
        render_js=True,
        rendering_wait=3_000,
    )
)
# see the browser_data field
print(api_response.result['browser_data'])
import {
    ScrapflyClient, ScrapeConfig
} from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: "API KEY" });
let api_result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/reviews',
        render_js: true,
        rendering_wait: 3_000,
    })
);
// see the browser_data field
console.log(JSON.stringify(api_result.result.browser_data));
http https://api.scrapfly.io/scrape \
key==$SCRAPFLY_KEY \
url==https://web-scraping.dev/reviews \
render_js==true | jq .result.browser_data
We've Got Your Industry Covered!
Real Estate
Scrape property listings, agent info, sale history to enhance your real estate decisions.
eCommerce
Scrape products, reviews and more to enhance your eCommerce and brand awareness.
Travel
Scrape hotel listings, reviews, prices, and more to enhance your hospitality business.
Social Media
Scrape profiles, posts, comments to enhance your social media presence and find leads.
SERP & SEO
Scrape search engine results, keywords, and more to enhance your SEO strategy.
Market Research
Scrape company profiles, reviews, and more to enhance your market research.
Lead Generation
Scrape online profiles and contact details to enhance your lead generation.
Financial Services
Scrape the latest stock, shipping and financial data to enhance your finance datasets.
Developer-First Experience
We built Scrapfly for ourselves in 2017 and opened it to the public in 2020. Since then, we have focused on delivering the best developer experience possible.
Master Web Data with our Docs and Tools
Access a complete ecosystem of documentation, tools, and resources designed to accelerate your data journey and help you get the most out of Scrapfly.
- Learn with Scrapfly Academy: Learn everything about data retrieval and web scraping with our interactive courses.
- Explore Open-Source Scrapfly Scrapers: Explore our open-source repository of powerful, ready-to-use scrapers covering over 40 of the most popular targets.
- Develop with Scrapfly Tools: Streamline your web data development with our web tools designed to enhance every step of the process.
- Stay Up-To-Date with our Newsletter and Blog: Stay updated with the latest trends and insights in web data with our monthly newsletter and weekly blog posts.
Seamlessly Integrate with Frameworks & Platforms
Easily integrate Scrapfly with your favorite tools and platforms, or customize workflows with our Python and TypeScript SDKs.
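For example, getting started with the Python SDK takes a single install plus the imports used throughout this page (the PyPI package name scrapfly-sdk shown below is an assumption in this sketch):

# install the Python SDK first (assumed package name):
#   pip install scrapfly-sdk
from scrapfly import ScrapeConfig, ScrapflyClient

# the client wraps the Web Scraping API used in all the examples above
client = ScrapflyClient(key="API KEY")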
Powerful Web UI
One-stop shop to configure, control and observe all of your Scrapfly activity.
- Experiment with Web API Player: Use our Web API Player for easy testing, experimenting, and sharing of API calls for collaboration and seamless integration.
- Monitor, Debug & Replay: Use the real-time monitoring dashboard to review, debug, and replay API activity, making debugging faster than ever.
- Manage Multiple Projects: Manage multiple projects with ease, complete with built-in testing environments for full control and flexibility.
- Attach Webhooks & Throttlers: Upgrade your API calls with webhooks for a true asynchronous architecture and throttlers to control your usage, as in the sketch below.
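For illustration only, attaching a webhook per request might look roughly like this with the Python SDK; the webhook parameter name and the dashboard-configured webhook name "my-webhook" are assumptions for this sketch, not confirmed API details:

from scrapfly import ScrapeConfig, ScrapflyClient

client = ScrapflyClient(key="API KEY")
# assumption: a webhook named "my-webhook" was created in the dashboard beforehand;
# the scrape result would then be delivered to it asynchronously
client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/product/1',
        webhook="my-webhook",
    )
)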
Predictable & Fair Pricing
| | Custom | Enterprise ($500/mo) | Startup ($250/mo) | Pro ($100/mo) | Discovery ($30/mo) |
|---|---|---|---|---|---|
| Included API Credits | ∞ | 5,500,000 | 2,500,000 | 1,000,000 | 200,000 |
| Extra API Credits | ∞ per 10k | $1.20 per 10k | $2.00 per 10k | $3.50 per 10k | ✖ |
| Concurrent Requests | ∞ | 100 | 50 | 20 | 5 |
| Log Retention | ∞ | 4 weeks | 3 weeks | 2 weeks | 1 week |
| Anti Scraping Protection | ✓ | ✓ | ✓ | ✓ | ✓ |
| Residential Proxies | ✓ | ✓ | ✓ | ✓ | ✓ |
| Geo Targeting | ✓ | ✓ | ✓ | ✓ | ✓ |
| JavaScript Rendering | ✓ | ✓ | ✓ | ✓ | ✓ |
| Support | Premium Support | Premium Support | Standard Support | Standard Support | Basic Support |
* Price may vary with tax
What Do Our Users Say?
"Scrapfly’s Web Scraping API has completely transformed our data collection process. The automatic proxy rotation and anti-bot bypass are game-changers. We no longer have to worry about scraping blocks, and the setup was incredibly easy. Within minutes, we had a reliable scraping system pulling the data we needed. I highly recommend this API for any serious developer!"
John M. – Senior Data Engineer
"We’ve tried multiple scraping tools, but Scrapfly’s Web Scraping API stands out for its reliability and speed. With the cloud browser functionality, we were able to scrape dynamic content from JavaScript-heavy websites seamlessly. The real-time data collection helped us make faster, more informed decisions, and the 99.99% uptime is just unmatched."
Samantha C. – CTO
"Scalability was a major concern for us as our data scraping needs grew. Scrapfly’s Web Scraping API not only handled our increased requests but did so without a hitch. The proxy rotation across 120+ countries ensured we could access data from any region, and their comprehensive documentation made implementation a breeze. It's the most robust API we’ve used."
Alex T. – Founder
Frequently Asked Questions
What is a Web Scraping API?
A Web Scraping API is a service that abstracts away the complexities and challenges of web scraping and data extraction. This allows developers to focus on creating software rather than dealing with issues like scraper blocking and other data access challenges.
How can I access Web Scraping API?
The Web Scraping HTTP API can be accessed with any HTTP client, such as curl or httpie, or with any HTTP client library in any programming language. For first-class support, we offer Python and TypeScript SDKs.
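For example, a raw HTTP call without the SDK could look like the following sketch using Python's requests library, mirroring the httpie examples above:

import requests

# same query parameters as the httpie examples above
response = requests.get(
    "https://api.scrapfly.io/scrape",
    params={
        "key": "API KEY",                   # your Scrapfly API key
        "url": "https://httpbin.dev/html",  # target page to scrape
        "asp": "true",                      # bypass anti-scraping protection
    },
)
print(response.json())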
Is web scraping legal?
Yes, web scraping is generally legal in most places around the world. For more, see our in-depth web scraping laws article.
Does Web Scraping API use AI for web scraping?
Yes, Scrapfly incorporates AI and machine learning algorithms to successfully retrieve web pages. Scrapfly cloud browsers are configured with real browser fingerprints, ensuring all collected data is retrieved as it appears on the web.
How long does it take to get results from the Web Scraping API?
Scrape duration varies from one second up to 160 seconds, as Scrapfly provides an execution budget for running your own browser actions with each request. The duration depends entirely on the feature set used, though Scrapfly always keeps a browser pool ready to perform scrape requests without any warmup.
How do I debug my web scrapers?
The Web Scraping API returns detailed error messages in case of misconfiguration or scrape failure. Failed scrape requests are very rare and are not charged for. For additional debugging options, a debug parameter can be used, which collects additional page data. Each request is logged and stored in the Web Scraping API dashboard, allowing for easy inspection and replay of scrape commands.
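As a minimal sketch with the Python SDK (assuming the debug flag is exposed as a ScrapeConfig parameter, matching the debug parameter described above):

from scrapfly import ScrapeConfig, ScrapflyClient, ScrapeApiResponse

client = ScrapflyClient(key="API KEY")
api_response: ScrapeApiResponse = client.scrape(
    ScrapeConfig(
        url='https://web-scraping.dev/product/1',
        # assumption: debug=True collects additional page data for
        # inspection in the monitoring dashboard
        debug=True,
    )
)
print(api_response.result)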