Web Scraping with Scrapfly and Typescript

The Scrapfly Typescript SDK is powerful yet intuitive. On this onboarding page we'll take a look at how to install it, how to use it, and some examples.

To start, take a look at our introduction and overview video:

If you're not ready to code yet check out Scrapfly's Visual API Player or the no-code Zapier integration.

SDK Setup

The source code of the Typescript SDK is available on Github and the scrapfly-sdk package is available for all major Javascript and Typescript runtimes:

Deno is a modern and secure runtime for JavaScript and TypeScript that uses V8 and is built in Rust. It's incredibly easy to use and runs Typescript natively as well as being backwards compatible with NodeJS. This makes Deno a great option for web-scraping related development.

To setup Scrapfly SDK with Deno, first install the SDK through jsr.io package index:
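Assuming the SDK is published on jsr.io under the @scrapfly scope (check jsr.io for the exact package name), the install command looks like this:

```shell
deno add jsr:@scrapfly/scrapfly-sdk
```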

Try out the following code snippet for Web Scraping API to get started:
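Here's a minimal sketch, assuming the jsr import specifier shown above and that you replace the placeholder with your API key:

```typescript
import { ScrapflyClient, ScrapeConfig } from 'jsr:@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({ url: 'https://web-scraping.dev/product/1' }),
);
console.log(result.result.content); // the scraped page HTML
```

Run it with `deno run --allow-net scrape.ts`.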

Bun is a modern runtime for JavaScript and TypeScript designed as a drop-in replacement for NodeJS. It's incredibly easy to use and runs Typescript natively, which makes it a great option for web-scraping related development.

To setup Scrapfly SDK with Bun, first install the SDK through jsr.io package index:
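Assuming the same jsr.io package name as in the Deno tab, Bun can add it through the jsr CLI:

```shell
bunx jsr add @scrapfly/scrapfly-sdk
```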

Try out the following code snippet for Web Scraping API to get started:
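The scrape code itself is the same sketch as in the Deno tab, just with the bare import specifier:

```typescript
import { ScrapflyClient, ScrapeConfig } from '@scrapfly/scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({ url: 'https://web-scraping.dev/product/1' }),
);
console.log(result.result.content); // the scraped page HTML
```

Run it with `bun run scrape.ts`.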

NodeJS is the classic Javascript server runtime and is supported by the SDK through both CommonJS and ESM modules.

To setup Scrapfly SDK with Node, first install the SDK through NPM package index:
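The package is published on NPM as scrapfly-sdk:

```shell
npm install scrapfly-sdk
```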

Try out the following code snippet for Web Scraping API to get started:
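Here's the same minimal sketch as an ESM module (a CommonJS require('scrapfly-sdk') should also work, per the note above):

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({ url: 'https://web-scraping.dev/product/1' }),
);
console.log(result.result.content); // the scraped page HTML
```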

Serverless platforms like Cloudflare Workers, AWS Lambda etc. are also supported by Scrapfly SDK.

Most serverless platforms can run full NodeJS, Python or other runtimes though there are a few exceptions and differences in runtime implementations.

For the best experience see our recommended use through Denoflare 👇

All SDK examples can be found on SDK's Github repository:
github.com/scrapfly/typescript-scrapfly/tree/main/examples

Web Scraping API

In this section, we'll walk through the most important web scraping features step by step. After completing this walkthrough you should be proficient enough to scrape just about any website with Scrapfly, so let's dive in!

First Scrape

To start, let's take a look at a basic scrape of this simple product page: web-scraping.dev/product/1.

We'll scrape the page, see some optional parameters and then extract the product details using CSS selectors.
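Here's a minimal sketch of that scrape. The response fields and the cheerio-style selector attribute reflect my reading of the SDK, and the CSS class names are illustrative, so inspect the page and the SDK docs to confirm them:

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });

// ask the Scrapfly API to scrape the product page for us
const result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        // optional parameters go here, e.g. render_js: true
    }),
);

// parse product details from the HTML using CSS selectors
const $ = result.selector; // cheerio-style API (assumed)
console.log({
    name: $('h3.product-title').text(),
    price: $('span.product-price').first().text(),
    description: $('p.product-description').text(),
});
```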


Above, we first requested the Scrapfly API to scrape the product page for us. Then we used the selector attribute to parse the product details using CSS selectors.

This example is quite simple, but what if we need more complex request configurations? Next, let's take a look at the available scraping request options.

Request Customization

All SDK requests are configured through ScrapeConfig object attributes. Most attributes mirror API parameters. For more information see the request customization documentation.

Here's a quick demo example:
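A sketch of a few commonly used ScrapeConfig attributes; the names mirror the API parameters, so verify the full list in the request customization docs:

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        // outgoing request details:
        method: 'GET',
        headers: { 'X-Example-Header': 'hello' },
        // Scrapfly-specific features:
        country: 'US',     // proxy country
        render_js: false,  // plain HTTP request, no browser
    }),
);
console.log(result.result.content.length);
```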

Using ScrapeConfig we can not only configure the outgoing scrape requests but also enable Scrapfly-specific features.

Developer Features

There are a few important developer features that can be enabled to make the onboarding process a bit easier.

The debug parameter can be enabled to produce more details in the web log output and the cache parameters are great for exploring the APIs while onboarding:
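For example, a sketch with both enabled (cache_ttl is my assumption for the cache lifetime parameter name; check the API docs):

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        debug: true,      // richer web log output and screenshots in the dashboard
        cache: true,      // serve repeated requests from Scrapfly's cache
        cache_ttl: 3600,  // cache lifetime in seconds (parameter name assumed)
    }),
);
```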

By enabling debug we can see that the monitoring dashboard produces more details and even captures screenshots for reviewing!


See Your Monitoring Dashboard

The next feature set allows us to supercharge our scrapers with web browsers, so let's take a look.

Using Web Browsers

Scrapfly can scrape using real web browsers, enabled through the render_js parameter. When enabled, instead of making a plain HTTP request Scrapfly will:

  1. Start a real web browser
  2. Load the page
  3. Optionally wait for the page to load through the rendering_wait or wait_for_selector options
  4. Optionally execute custom Javascript code through the js or javascript_scenario options
  5. Return the rendered page content along with browser data such as captured background requests and local database contents

This makes Scrapfly scrapers incredibly powerful and customizable! Let's take a look at some examples.

To illustrate this let's take a look at this example page web-scraping.dev/reviews which requires javascript to load:

(comparison: the reviews page with js disabled vs js enabled)

To scrape this we can use Scrapfly's web browsers, and we can approach this in two ways:

Rendering Javascript

The first approach is to simply wait for the page to load and scrape the content:
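A sketch: enable browser rendering and wait for the review elements to appear (the .review selector is illustrative):

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/reviews',
        render_js: true,              // use a real web browser
        wait_for_selector: '.review', // wait until the reviews are rendered
        // rendering_wait: 3000,      // or wait a fixed number of milliseconds
    }),
);
console.log(result.result.content); // fully rendered HTML
```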

This approach is quite simple as we get exactly what we see in our own web browser, making the development process easier.

XHR Capture

The second approach is to capture the background requests that generate this data on load directly:
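A sketch of reading the captured background requests; the exact shape of browser_data is an assumption here, so log the object and adapt the path to what you see:

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/reviews',
        render_js: true,
    }),
);

// captured XHR/fetch calls are returned alongside the rendered HTML
const browserData: any = result.result.browser_data;
for (const call of browserData?.xhr_call ?? []) {
    if (call.url.includes('/api/reviews')) {
        console.log(JSON.parse(call.response.body)); // direct JSON review data
    }
}
```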

The advantage of this approach is that we can capture direct JSON data and we don't need to parse anything! Though it is a bit more complex and requires some web development knowledge.

Browser Control

Finally, we can fully control the entire browser. For example, we can use Javascript Scenarios to enter username and password and click the login button to authenticate on web-scraping.dev/login:

  1. Go to web-scraping.dev/login
  2. Wait for the page to load
  3. Enter the username into the Username input
  4. Enter the password into the Password input
  5. Click the login button
  6. Wait for the page to load

Here's how that looks visually:

To achieve this using javascript scenarios, all we have to do is describe these steps as a JSON template:
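A sketch of such a scenario; the option name (js_scenario here), the action schema, the input selectors and the demo credentials are all assumptions to adapt from the Javascript Scenario docs and the login page itself:

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/login',
        render_js: true,
        js_scenario: [
            { wait_for_selector: { selector: 'input[name=username]' } },
            { fill: { selector: 'input[name=username]', value: 'user123' } },
            { fill: { selector: 'input[name=password]', value: 'password' } },
            { click: { selector: 'button[type=submit]' } },
            { wait_for_navigation: {} },
        ],
    }),
);
console.log(result.result.content); // HTML of the page after logging in
```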

Javascript scenarios really simplify the browser automation process though we can take this even further!

Javascript Execution

For more experienced web developers, full javascript execution access is available through the js parameter. For example, let's execute some javascript parsing code using the querySelector() method:
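A sketch of inline javascript execution; where the evaluation result surfaces in the response, and whether the SDK encodes the script for you, are assumptions, so inspect the response and the js parameter docs:

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        render_js: true,
        // executed in the page context after it loads
        js: 'document.querySelector("h3.product-title")?.textContent',
    }),
);
// the evaluation result comes back with the browser data (field name assumed)
console.log((result.result as any).browser_data?.javascript_evaluation_result);
```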

Here the browser executed the requested snippet of javascript and returned the results.


With custom request options and cloud browsers you're really in control of every web scraping step! Next, let's see the features that allow access to any web page without being blocked: proxies and ASP.

Bypass Blocking

Scraper blocking can be very difficult to understand, so Scrapfly provides a single setting that simplifies the bypass. The Anti Scraping Protection (asp) parameter will automatically configure requests and bypass most anti-scraping protection systems:
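Enabling it is a single ScrapeConfig flag:

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        asp: true,          // Anti Scraping Protection bypass
        // render_js: true, // ASP can be combined with browser rendering as needed
    }),
);
```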

While ASP can bypass most anti-scraping protection systems like Cloudflare, Datadome etc. some blocking techniques are based on geographic location or proxy type.

Proxy Country

All Scrapfly requests go through a proxy, drawn from millions of IPs available in over 50 countries. Some websites, however, are only available in specific regions or simply block connections from some countries less often.

For that, the country parameter can be used to define which country's proxies are used.
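For example, a sketch that routes the request through US proxies; httpbin.dev/ip is used here as a simple stand-in for an IP analysis endpoint:

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({
        url: 'https://httpbin.dev/ip', // stand-in for an IP analysis endpoint
        country: 'US',                 // only use proxies located in the United States
    }),
);
console.log(result.result.content); // the reported IP should resolve to a US location
```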

Here we can see which proxy country Scrapfly used when we query Scrapfly's IP analysis API tool.

Proxy Type

Further, Scrapfly offers two types of IPs: datacenter and residential. For targets that are harder to reach, residential proxies can perform much better. By setting the proxy_pool parameter to the residential pool type we can switch to these stronger proxies:
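A sketch; the exact pool name string is an assumption, so check the proxy documentation for the available values:

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const result = await client.scrape(
    new ScrapeConfig({
        url: 'https://web-scraping.dev/product/1',
        proxy_pool: 'public_residential_pool', // pool name assumed - see proxy docs
    }),
);
```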

See Your Proxy Dashboard

Concurrency Helper

The Typescript SDK is asynchronous, so each API call can run concurrently and be batched using native tools like Promise.all(). However, there's an additional concurrency helper that can simplify scrape batching.

The concurrentScrape() method is an asynchronous generator that takes multiple scrape configurations and yields results as they complete.

See this example implementation:
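A sketch of the helper; the exact signature (a list of configs plus an optional concurrency limit) is assumed from the SDK examples:

```typescript
import { ScrapflyClient, ScrapeConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });

// one ScrapeConfig per target page
const configs = [1, 2, 3, 4, 5].map(
    (i) => new ScrapeConfig({ url: `https://web-scraping.dev/product/${i}` }),
);

// concurrentScrape is an async generator yielding results as they complete
// (error handling omitted for brevity)
for await (const result of client.concurrentScrape(configs, 5)) {
    console.log(result.result.content.length);
}
```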

Here we used the asynchronous generator to scrape multiple pages concurrently. We can either set the concurrency parameter to a desired limit (here we used 5) or, if omitted, your account's max concurrency limit will be used.


This covers the core functionality of Scrapfly's Web Scraping API, though there are many more features available. For more, see the full API specification.

If you're having any issues see the FAQ and Troubleshoot pages.

Extraction API

Now that we know how to scrape data using Scrapfly's Web Scraping API, we can start parsing it for information. For that, Scrapfly's Extraction API is an ideal choice.

The Extraction API offers 3 ways to parse data: LLM prompts, Auto AI extraction and custom extraction rules. All of these are available through the extract() method and ExtractionConfig object of the Typescript SDK. Let's take a look at some examples.

LLM Prompts

The Extraction API allows prompting any text content with LLM prompts. The prompts can be used to summarize content, answer questions about it, or generate structured data like JSON or CSV.

As an example, see this freeform prompt used with the Typescript SDK:
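A sketch of prompting scraped HTML; the ExtractionConfig field names (body, content_type, extraction_prompt) mirror the Extraction API parameters, so verify them against the API docs:

```typescript
import { ScrapflyClient, ScrapeConfig, ExtractionConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });

// scrape the page first to get its HTML
const scrape = await client.scrape(
    new ScrapeConfig({ url: 'https://web-scraping.dev/product/1' }),
);

// then ask a freeform question about the content
const extraction = await client.extract(
    new ExtractionConfig({
        body: scrape.result.content,
        content_type: 'text/html',
        extraction_prompt: 'What is the product price? Reply with the number only.',
    }),
);
console.log(extraction); // inspect the returned object for the extracted answer
```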

LLMs are great for freeform or creative questions but for extracting known data types like products, reviews etc. there's a better option - AI Auto Extraction. Let's take a look at that next.

Auto Extraction

Scrapfly's Extraction API also includes a number of predefined models that can be used to automatically extract common objects like products, reviews, articles etc. without the need to write custom extraction rules.

The predefined models are available through the extraction_model parameter of the ExtractionConfig object. For example, let's use the product model:
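A sketch using the product model on scraped HTML (same ExtractionConfig caveats as in the LLM prompt example):

```typescript
import { ScrapflyClient, ScrapeConfig, ExtractionConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const scrape = await client.scrape(
    new ScrapeConfig({ url: 'https://web-scraping.dev/product/1' }),
);

const extraction = await client.extract(
    new ExtractionConfig({
        body: scrape.result.content,
        content_type: 'text/html',
        extraction_model: 'product', // predefined auto-extraction model
    }),
);
console.log(extraction); // structured product data
```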

For all available types see the Auto Extract Models documentation.

Auto Extraction is powerful but can be limiting for unique niche scenarios where manual extraction is a better fit. For that, let's take a look at Extraction Templates next, which let you define your own extraction rules through a JSON schema.

Extraction Templates

For more specific data extraction, the Scrapfly Extraction API allows you to define custom extraction rules.

This is done through a JSON schema which defines how data is selected (through XPath or CSS selectors) and how it is processed (through pre-defined processors and formatters).

This is a great tool for developers who are already familiar with data parsing in web scraping. See this example:
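A sketch of such a template; the schema field names and the way the template is attached to ExtractionConfig (an ephemeral template parameter here) are assumptions to check against the Templates documentation, and the review-date selector is illustrative:

```typescript
import { ScrapflyClient, ScrapeConfig, ExtractionConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const scrape = await client.scrape(
    new ScrapeConfig({ url: 'https://web-scraping.dev/product/1' }),
);

// select review dates with CSS and re-format them with a datetime formatter
const template = {
    source: 'html',
    selectors: [
        {
            name: 'review_dates',
            type: 'css',
            query: '[data-testid="review-date"]::text',
            multiple: true,
            formatters: [{ name: 'datetime', args: { format: '%Y-%m-%d' } }],
        },
    ],
};

const extraction = await client.extract(
    new ExtractionConfig({
        body: scrape.result.content,
        content_type: 'text/html',
        extraction_ephemeral_template: template, // parameter name assumed
    }),
);
console.log(extraction);
```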

For all available selectors, formatters and extractors see Templates documentation.

Above, we define a template that selects review dates using CSS selectors and then re-formats them into a new date format using datetime formatters.


With this we can now scrape any page and extract any data we need! To wrap up, let's take a look at another data capture format: the Screenshot API.

Screenshot API

While it's possible to capture screenshots using the Web Scraping API, Scrapfly also includes a dedicated Screenshot API that significantly streamlines the screenshot capture process.

The Screenshot API can be accessed through the SDK's screenshot() method and configured through the ScreenshotConfig configuration object. Here's a basic example:
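A minimal sketch; the field holding the binary image on the result object is an assumption, so inspect the returned object:

```typescript
import { writeFileSync } from 'node:fs';
import { ScrapflyClient, ScreenshotConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const screenshot = await client.screenshot(
    new ScreenshotConfig({ url: 'https://web-scraping.dev/product/1' }),
);
// save the returned image bytes to disk (field name assumed)
writeFileSync('product.png', screenshot.image);
```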

The Screenshot API also inherits many features from the Web Scraping API, such as cache and webhook, which are fully functional.

Here all we did was provide a URL to capture and the API returned a screenshot.

Resolution

Next, we can heavily customize how the screenshot is captured. For example, we can change the viewport size from the default 1920x1080 to another resolution like 540x1200 to simulate mobile views:
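A sketch of a mobile-like viewport:

```typescript
import { ScrapflyClient, ScreenshotConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const screenshot = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        resolution: '540x1200', // viewport size instead of the default 1920x1080
    }),
);
```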

Further, we can tell Scrapfly to capture the entire page rather than just the viewport.

Full Page

Using the capture parameter we can tell Scrapfly to capture the fullpage, which captures everything visible on the page:
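A sketch:

```typescript
import { ScrapflyClient, ScreenshotConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const screenshot = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        capture: 'fullpage', // capture the whole page, not just the viewport
    }),
);
```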

Here, by setting the capture parameter to fullpage, we've captured the entire page. Though if the page requires scrolling to load more content, we can capture that as well using another parameter.

Auto Scroll

Just like with the Web Scraping API, we can force an automatic scroll on the page to load dynamic elements that appear on scrolling. In this example, we're capturing a screenshot of web-scraping.dev/testimonials, which loads new testimonial entries when the user scrolls the page:
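A sketch, assuming the parameter is named auto_scroll as in the Web Scraping API:

```typescript
import { ScrapflyClient, ScreenshotConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const screenshot = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/testimonials',
        capture: 'fullpage',
        auto_scroll: true, // scroll to the bottom first so lazy content loads
    }),
);
```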

Here the page was automatically scrolled to the very bottom, loading all of the testimonials before the screenshot was captured.

Next, we can capture only specific areas of the page. Let's take a look at how.

Capture Areas

To capture specific areas we can use XPath or CSS selectors to define what to capture. For this, the capture parameter is set to a selector for the element to capture.

For example, we can capture only the reviews section of web-scraping.dev/product/1 page:
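A sketch; the #reviews selector is illustrative, so inspect the page for the element you actually want:

```typescript
import { ScrapflyClient, ScreenshotConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const screenshot = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/product/1',
        capture: '#reviews', // CSS selector of the area to capture
    }),
);
```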

Here, using a CSS selector, we restricted the capture to only the areas that are relevant to us.

Finally, for more capture configurations we can use screenshot options. Let's take a look at that next.

Capture Options

Capture options can apply various page modifications to capture the page in a specific way. For example, using the block_banners option we can block cookie banners, and using dark_mode we can apply a dark theme to the captured page.

In this example we capture the web-scraping.dev/login?cookies page and disable the cookie popup while also applying a dark theme.
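A sketch; the option names and how they're passed (an options list here) are assumptions to verify against the Screenshot API docs:

```typescript
import { ScrapflyClient, ScreenshotConfig } from 'scrapfly-sdk';

const client = new ScrapflyClient({ key: 'YOUR SCRAPFLY KEY' });
const screenshot = await client.screenshot(
    new ScreenshotConfig({
        url: 'https://web-scraping.dev/login?cookies',
        options: ['block_banners', 'dark_mode'], // hide cookie popups + dark theme
    }),
);
```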


What's next?

This concludes our onboarding tutorial, though Scrapfly has many more features and options available. To explore them, see the getting started pages and API specification of each API, as all of these features are available in every Scrapfly SDK and package!

For more on web scraping techniques and educational material, see the Scrapfly Web Scraping Academy.

Summary