# How to Scrape AutoScout24: API, Anti-Bot Bypass, and Python Guide

 by [Ziad Shamndy](https://scrapfly.io/blog/author/ziad) May 13, 2026 19 min read [\#python](https://scrapfly.io/blog/tag/python) [\#requests](https://scrapfly.io/blog/tag/requests) [\#scrapeguide](https://scrapfly.io/blog/tag/scrapeguide) 


To scrape AutoScout24, you need to handle Akamai Bot Manager and work around an official API that's limited to listing creation. With over 30 million monthly users and 2 million vehicle listings across 19 countries, AutoScout24 is Europe's largest car marketplace and one of the richest sources of automotive pricing, specs, and inventory data.

In this guide, you'll learn how to scrape AutoScout24 in 2026 with Python using `__NEXT_DATA__` JSON extraction. We'll cover the official API situation, anti-bot bypass, search and detail page scraping, pagination, and Scrapfly as a managed alternative. Let's get started.



## Key Takeaways

AutoScout24's official API is limited to listing creation for dealers, so scraping is the practical path to car pricing, specs, and inventory data across Europe's largest marketplace.

- AutoScout24 runs on Next.js, so every page embeds a `__NEXT_DATA__` script tag with all listing data as structured JSON. This approach is more stable than CSS selectors, whose class names Next.js re-hashes on each deploy.
- The official Listing Creation API is dealer-facing and write-only. There's no endpoint for searching, browsing, or extracting vehicle data.
- Akamai Bot Manager protects the site with TLS fingerprinting, JavaScript challenges, and behavior tracking. A few requests may succeed, but production-scale scraping triggers detection.
- AutoScout24 operates country-specific domains (.de, .it, .nl, .at, .be, .fr, .es, .ch), and listings can be geo-restricted. Residential proxies matched to the target country are essential.
- For projects that need Akamai bypass and proxy rotation handled upstream, Scrapfly's [Anti-Scraping Protection](https://scrapfly.io/docs/scrape-api/anti-scraping-protection) returns full page HTML ready for `__NEXT_DATA__` parsing.

## Why Scrape AutoScout24?

AutoScout24 covers 19 countries from Germany and Italy to Switzerland and Spain. Car dealers use AutoScout24 data to watch prices and inventory across markets. Manufacturers and resellers track competitor pricing. Researchers follow availability by body type, fuel type, and region.

Each listing has rich data: title, price, mileage, registration year, technical specs, seller details, and high-resolution images. You can compare a Volkswagen Golf's price in Germany versus Italy versus Spain from one source.

## Does AutoScout24 Have a Public API?

AutoScout24 does have an official API, the Listing Creation API. But this API lets dealers create and manage vehicle listings, not extract car data.

### What the Official Listing Creation API Does

The official API serves registered dealers and listing management platforms. The API handles the listing lifecycle: creating a new listing, uploading images, publishing, updating, and removing listings. AutoScout24 documents the API through an OpenAPI/Swagger spec.

The Listing Creation API is write-only. There's no search endpoint, no browse endpoint, and no way to query vehicle data. You can push listings into AutoScout24, but you can't pull data out.

### Why the Official API Doesn't Help for Data Extraction

If you need car prices, specs, or seller details from AutoScout24, the official API won't help. The API has no read endpoints for vehicle listings. There's no way to search for cars, filter by make or model, or retrieve pricing data through official channels.

This gap between what the API offers and what data users need is why scraping and third-party tools exist.

### Third-Party AutoScout24 APIs and Scraping Tools

Several categories of tools fill the official API gap:

- **No-code scraping tools**: pre-built scrapers you configure through a web interface. You set up the search URL, choose parameters, and export results as CSV or JSON
- **Dedicated automotive data APIs**: REST APIs that maintain their own AutoScout24 parsers and offer search, filters, and daily data exports
- **Managed scraping APIs**: services that handle anti-bot bypass, proxy rotation, and JS rendering, returning raw page data for you to parse

The tradeoff is clear. Managed tools are faster to start but charge per result. Custom scraping gives full control but needs anti-bot handling and proxy setup.

## How Does AutoScout24 Block Scrapers?

AutoScout24 uses Akamai Bot Manager, one of the most aggressive anti-bot systems. A single request with proper headers may return 200. But Akamai blocks once patterns emerge: repeated requests from one IP, datacenter IPs, or missing browser fingerprints.

### Akamai Bot Manager Protection

Akamai Bot Manager combines multiple detection methods:

- **TLS fingerprinting (JA3/JA4)**: Akamai checks the TLS handshake to tell real browsers from HTTP libraries like `requests`
- **HTTP/2 fingerprinting**: the order and values of HTTP/2 settings frames reveal the client type
- **JavaScript challenges**: Akamai injects JavaScript that collects browser data and generates validation cookies (`_abck`, `ak_bmsc`)
- **Behavior tracking**: request timing, navigation patterns, and mouse movement data feed into Akamai's bot scoring model

When blocked, you'll see a 403 Forbidden status, an "Access Denied" message, or an empty response body. Here's what a basic request looks like:

```python
import requests

response = requests.get(
    "https://www.autoscout24.com/lst?atype=C&cy=D&sort=standard&ustate=N%2CU",
    headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) Chrome/125.0.0.0"}
)
print(f"Status: {response.status_code}")
# First request often returns 200, but try 50 pages and you'll hit 403
```



A single request often succeeds because Akamai's heavier checks (JS challenges, behavior tracking) kick in after repeated requests. Don't mistake that first 200 for reliable access at scale.

For the full technical deep-dive on Akamai's detection, see our [Akamai bypass guide](https://scrapfly.io/blog/posts/how-to-bypass-akamai-anti-scraping).

### Geo-Restrictions and Country-Specific Domains

AutoScout24 runs country-specific domains: `.de`, `.it`, `.nl`, `.at`, `.be`, `.fr`, `.es`, `.ch`, and `.com`. Some listings only show from specific countries, and IP geolocation affects search results. Scraping `.de` needs a German IP. Scraping `.it` needs an Italian proxy.
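That domain-to-country mapping can be sketched as a small helper. This is illustrative only: `build_search_url` and the `AUTOSCOUT24_DOMAINS` dict are hypothetical names, and per-country search paths and query parameters may differ from the `.com` pattern shown elsewhere in this guide.

```python
# Hypothetical helper: map ISO country codes to AutoScout24 domains
# and build a localized search URL (the proxy should match `country`).
from urllib.parse import urlencode

AUTOSCOUT24_DOMAINS = {
    "de": "www.autoscout24.de",
    "it": "www.autoscout24.it",
    "nl": "www.autoscout24.nl",
    "at": "www.autoscout24.at",
    "be": "www.autoscout24.be",
    "fr": "www.autoscout24.fr",
    "es": "www.autoscout24.es",
    "ch": "www.autoscout24.ch",
}

def build_search_url(country: str, params: dict) -> str:
    """Build a country-specific search URL; unknown codes fall back to .com."""
    domain = AUTOSCOUT24_DOMAINS.get(country, "www.autoscout24.com")
    return f"https://{domain}/lst?{urlencode(params)}"

print(build_search_url("de", {"atype": "C", "page": 1}))
# https://www.autoscout24.de/lst?atype=C&page=1
```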

## How to Scrape AutoScout24 Car Listings with Python

To scrape AutoScout24 listings, extract the `__NEXT_DATA__` JSON from each page instead of parsing HTML with CSS selectors. AutoScout24 runs on Next.js, so every page carries a `__NEXT_DATA__` script tag with all page data as JSON. Next.js re-hashes class names on each deploy, but the JSON structure stays stable.

### Project Setup and Dependencies

You need Python 3.7+, `requests` for HTTP calls, and `beautifulsoup4` to find the script tag. Install them:

```bash
$ pip install requests beautifulsoup4
```



Then set up the session with realistic headers and retry logic:

```python
import requests
import json
import random
import time
from bs4 import BeautifulSoup
from requests.adapters import HTTPAdapter
from urllib3.util.retry import Retry

# Updated Chrome user agents
user_agents = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/124.0.0.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36",
]

def create_session():
    """Create a requests session with retry logic and realistic headers."""
    session = requests.Session()
    # Add automatic retries for server errors
    retry_strategy = Retry(
        total=3,
        backoff_factor=1,
        status_forcelist=[429, 500, 502, 503, 504],
    )
    adapter = HTTPAdapter(max_retries=retry_strategy)
    session.mount("http://", adapter)
    session.mount("https://", adapter)
    # Set realistic browser headers
    session.headers.update({
        "User-Agent": random.choice(user_agents),
        "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8",
        "Accept-Language": "en-US,en;q=0.5",
        "Accept-Encoding": "gzip, deflate, br",
        "Connection": "keep-alive",
        "Upgrade-Insecure-Requests": "1",
    })
    return session

def make_request(session, url):
    """Fetch a page with delay and 403 detection."""
    time.sleep(random.uniform(1, 3))
    try:
        response = session.get(url, timeout=15)
        if response.status_code == 403:
            print(f"  Blocked (403) on {url}")
            return None
        if response.status_code == 200:
            return response
        print(f"  Error: status {response.status_code}")
        return None
    except Exception as e:
        print(f"  Request failed: {e}")
        return None

session = create_session()
```



`create_session()` builds a `requests.Session` with automatic retries for server errors and realistic browser headers. `make_request()` adds a random 1-3 second delay before each request and checks for 403 blocks.

For more on building resilient scraping setups, see our guide to [scraping without getting blocked](https://scrapfly.io/blog/posts/how-to-scrape-without-getting-blocked-tutorial).

### Understanding AutoScout24's Page Architecture

AutoScout24 uses Next.js for server-side rendering. Every page includes a `<script id="__NEXT_DATA__">` tag with the full page data as JSON. On search pages, the JSON payload is around 255 KB. On detail pages, around 134 KB.



The listings page combines a filter sidebar (make, body type, registration, fuel type, price, location, mileage) with a result feed showing the offer count, price, year, mileage, and engine power per listing.

Earlier versions of this guide used CSS class selectors like `ListItem_title__ndA4s` and `Price_price__APlgs` to parse listings. Next.js hashes those class names on each deploy, so CSS-based scrapers break regularly. The `__NEXT_DATA__` approach pulls structured JSON directly, keeping the scraper stable.

AutoScout24 also uses `data-testid` attributes on some elements (like `VehicleDetails-mileage_road` and `VehicleDetails-calendar`). These are more stable than CSS classes because they exist for QA purposes. The `data-testid` attributes are a good fallback if `__NEXT_DATA__` ever changes.
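As a hedged sketch of that fallback, the standard library's `HTMLParser` can pull text by `data-testid` without depending on hashed class names. The `TestIdExtractor` class and the inline HTML snippet are illustrative, not AutoScout24's actual markup; only the two test-id names come from the article.

```python
# Fallback sketch: collect text content keyed by data-testid attributes
# using only the standard library (no bs4 dependency).
from html.parser import HTMLParser

class TestIdExtractor(HTMLParser):
    """Map each data-testid attribute to the text inside its element."""
    def __init__(self):
        super().__init__()
        self._current = None   # test-id of the element we're currently inside
        self.values = {}       # data-testid -> text content

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "data-testid" in attrs:
            self._current = attrs["data-testid"]

    def handle_data(self, data):
        if self._current and data.strip():
            self.values[self._current] = data.strip()
            self._current = None

sample = ('<span data-testid="VehicleDetails-mileage_road">157,762 km</span>'
          '<span data-testid="VehicleDetails-calendar">06/2003</span>')
parser = TestIdExtractor()
parser.feed(sample)
print(parser.values["VehicleDetails-mileage_road"])  # 157,762 km
```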

The JSON paths on search pages:

- `data['props']['pageProps']['listings']`: array of 20 listing objects per page
- `data['props']['pageProps']['numberOfResults']`: total result count
- `data['props']['pageProps']['numberOfPages']`: max page number (capped at 200)

Each listing object has nested data for vehicle, price, seller, location, images, and tracking fields. AutoScout24 renders the same data on the frontend, but here you get it pre-structured.
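A minimal sketch of those three paths, applied to a mocked payload with the same shape as `__NEXT_DATA__` (the nested values here are illustrative, not a real API response):

```python
# Mocked __NEXT_DATA__ payload mirroring the documented search-page shape
mock_next_data = {
    "props": {
        "pageProps": {
            "listings": [{"vehicle": {"make": "Opel", "model": "Astra"}}],
            "numberOfResults": 771404,
            "numberOfPages": 200,
        }
    }
}

# Defensive navigation: .get() with defaults avoids KeyError on odd pages
page_props = mock_next_data.get("props", {}).get("pageProps", {})
listings = page_props.get("listings", [])
print(len(listings), page_props.get("numberOfResults"), page_props.get("numberOfPages"))
# 1 771404 200
```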

### Extracting Listing Data from Search Pages

Search pages follow the URL pattern `https://www.autoscout24.com/lst?atype=C&cy=D&sort=standard&ustate=N%2CU&page=1`. Here's how to extract listings from `__NEXT_DATA__`:

```python
def extract_next_data(html_content):
    """Parse __NEXT_DATA__ JSON from an AutoScout24 page."""
    soup = BeautifulSoup(html_content, "html.parser")
    script_tag = soup.find("script", id="__NEXT_DATA__")
    if not script_tag:
        print("  __NEXT_DATA__ script tag not found")
        return None
    return json.loads(script_tag.string)

def scrape_search_page(session, url):
    """Scrape car listings from a single AutoScout24 search page."""
    response = make_request(session, url)
    if not response:
        return []

    data = extract_next_data(response.text)
    if not data:
        return []

    page_props = data["props"]["pageProps"]
    raw_listings = page_props.get("listings", [])
    total_results = page_props.get("numberOfResults", 0)
    total_pages = page_props.get("numberOfPages", 0)
    print(f"  Found {len(raw_listings)} listings (total: {total_results}, pages: {total_pages})")

    listings = []
    for item in raw_listings:
        vehicle = item.get("vehicle", {})
        price = item.get("price", {})
        seller = item.get("seller", {})
        location = item.get("location", {})
        tracking = item.get("tracking", {})
        # Get vehicle detail chips (mileage, transmission, date, fuel, power)
        details = item.get("vehicleDetails", [])
        detail_map = {d.get("ariaLabel", ""): d.get("data", "") for d in details}

        listing = {
            "title": f"{vehicle.get('make', '')} {vehicle.get('model', '')} {vehicle.get('modelVersionInput', '')}".strip(),
            "price": price.get("priceFormatted", "N/A"),
            "price_raw": tracking.get("price"),
            "mileage": vehicle.get("mileageInKm", "N/A"),
            "mileage_raw": tracking.get("mileage"),
            "first_registration": detail_map.get("First registration", "N/A"),
            "fuel": vehicle.get("fuel", "N/A"),
            "transmission": vehicle.get("transmission", "N/A"),
            "power": detail_map.get("Power", "N/A"),
            "seller_name": seller.get("companyName") or seller.get("contactName", "Private"),
            "city": location.get("city", "N/A"),
            "country": location.get("countryCode", "N/A"),
            "url": "https://www.autoscout24.com" + item.get("url", ""),
        }
        listings.append(listing)

    return listings

# Scrape the first page
url = "https://www.autoscout24.com/lst?atype=C&cy=D&sort=standard&ustate=N%2CU&page=1"
listings = scrape_search_page(session, url)

# Print a sample
for car in listings[:3]:
    print(f"  {car['title']} | {car['price']} | {car['mileage']} | {car['fuel']}")
```



`extract_next_data()` uses BeautifulSoup to find the `__NEXT_DATA__` script tag and parse the JSON. `scrape_search_page()` fetches a search URL, extracts the JSON, and maps each listing into a flat dictionary. The `tracking` object gives raw integer values for price and mileage, which help with sorting and filtering.

**Example Output**

```text
  Found 20 listings (total: 771404, pages: 200)
  Opel Astra Cabrio*Allwetter* | € 600 | 157,762 km | Gasoline
  Kia Stonic Dream Team Edition | € 13,490 | 31,436 km | Gasoline
  BMW X5 xDrive 30d M Sport | € 74,980 | 23,900 km | Diesel
```
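Because the `tracking` object exposes raw integers, sorting and filtering need no string parsing. A quick illustration (the listing data is hardcoded from the sample output, not fetched live):

```python
# Sorting by the raw integer price instead of the formatted "€ 13,490" string
cars = [
    {"title": "Kia Stonic", "price": "€ 13,490", "price_raw": 13490},
    {"title": "Opel Astra", "price": "€ 600", "price_raw": 600},
    {"title": "BMW X5", "price": "€ 74,980", "price_raw": 74980},
]
cheapest = sorted(cars, key=lambda c: c["price_raw"])
print(cheapest[0]["title"])  # Opel Astra
```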



### Handling Pagination

AutoScout24 caps search results at 200 pages with 20 listings each, so 4,000 listings per query. To get more, narrow your search filters (make, model, year, price range) and run multiple queries. Here's the pagination loop:

```python
def scrape_all_pages(session, base_url, max_pages=5):
    """Scrape multiple pages of AutoScout24 search results."""
    all_listings = []

    for page in range(1, max_pages + 1):
        # Add page parameter to URL
        separator = "&" if "?" in base_url else "?"
        page_url = f"{base_url}{separator}page={page}"
        print(f"Scraping page {page}...")

        page_listings = scrape_search_page(session, page_url)
        if not page_listings:
            print(f"  No listings on page {page}, stopping.")
            break

        all_listings.extend(page_listings)
        print(f"  Total collected: {len(all_listings)} listings")

    return all_listings

# Example: scrape first 3 pages
base_url = "https://www.autoscout24.com/lst?atype=C&cy=D&sort=standard&ustate=N%2CU"
all_cars = scrape_all_pages(session, base_url, max_pages=3)
print(f"\nScraped {len(all_cars)} total listings")
```



The pagination function loops through pages and appends results to one list. `make_request()` adds a random delay on each page to avoid triggering Akamai's rate limits. If a page returns nothing (or gets blocked), the loop stops early.
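One optional follow-up not covered above: persisting the collected listings with the standard library's `csv` module. `save_listings_csv` is a hypothetical helper; its field names simply mirror the dicts built by `scrape_search_page()`.

```python
# Hypothetical helper: write a list of listing dicts to CSV with the stdlib
import csv

def save_listings_csv(listings: list, path: str) -> int:
    """Write listings to CSV; header comes from the first dict's keys."""
    if not listings:
        return 0
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=list(listings[0].keys()))
        writer.writeheader()
        writer.writerows(listings)
    return len(listings)

rows = [
    {"title": "Opel Astra", "price": "€ 600"},
    {"title": "Kia Stonic", "price": "€ 13,490"},
]
print(save_listings_csv(rows, "autoscout24_listings.csv"))  # 2
```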

## How to Scrape Individual AutoScout24 Car Details

Each vehicle detail page has the same `__NEXT_DATA__` JSON pattern with the full spec sheet, all images, seller contact info, and pricing as structured data. Detail pages have more fields than search page cards.

### Extracting Vehicle Specifications and Pricing

Detail page URLs follow the pattern `https://www.autoscout24.com/offers/{slug}`. The JSON lives at `data['props']['pageProps']['listingDetails']` with full specs, pricing (including raw integers), equipment lists, and high-res image URLs (1280x960 vs 250x188 on search).

```python
def scrape_car_details(session, url):
    """Scrape detailed information from a single AutoScout24 car page."""
    response = make_request(session, url)
    if not response:
        return None

    data = extract_next_data(response.text)
    if not data:
        return None

    listing = data["props"]["pageProps"].get("listingDetails", {})
    vehicle = listing.get("vehicle", {})
    prices = listing.get("prices", {})
    public_price = prices.get("public", {})
    seller = listing.get("seller", {})
    images = listing.get("images", [])

    # Fuel fields on detail pages are dicts with "raw" and "formatted" keys
    fuel_cat = vehicle.get("fuelCategory", {})
    fuel_name = fuel_cat.get("formatted", "N/A") if isinstance(fuel_cat, dict) else fuel_cat

    car = {
        "title": f"{vehicle.get('make', '')} {vehicle.get('model', '')} {vehicle.get('modelVersionInput', '')}".strip(),
        "price": public_price.get("price", "N/A"),
        "price_raw": public_price.get("priceRaw"),
        "negotiable": public_price.get("negotiable", False),
        "mileage": vehicle.get("mileageInKm", "N/A"),
        "mileage_raw": vehicle.get("mileageInKmRaw"),
        "first_registration": vehicle.get("firstRegistrationDate", "N/A"),
        "first_registration_raw": vehicle.get("firstRegistrationDateRaw"),
        "body_type": vehicle.get("bodyType"),
        "body_color": vehicle.get("bodyColor"),
        "paint_type": vehicle.get("paintType"),
        "fuel": fuel_name,
        "transmission": vehicle.get("transmissionType"),
        "drivetrain": vehicle.get("driveTrain"),
        "power_kw": vehicle.get("rawPowerInKw"),
        "power_hp": vehicle.get("rawPowerInHp"),
        "displacement_ccm": vehicle.get("rawDisplacementInCCM"),
        "cylinders": vehicle.get("cylinders"),
        "number_of_seats": vehicle.get("numberOfSeats"),
        "number_of_doors": vehicle.get("numberOfDoors"),
        "co2_emission": vehicle.get("co2emissionInGramPerKmWithFallback"),
        "description": listing.get("description", ""),
        "seller_name": seller.get("companyName") or seller.get("contactName", "Private"),
        "seller_type": seller.get("type"),
        "seller_id": seller.get("id"),
        "images": images[:5],  # First 5 high-res image URLs
        "url": url,
    }
    return car

# Example: scrape a car detail page
detail_url = listings[0]["url"]
car = scrape_car_details(session, detail_url)
if car:
    print(json.dumps(car, indent=2, ensure_ascii=False))
```



`scrape_car_details()` extracts 20+ fields from a single detail page. The raw integer values for price, mileage, and engine power save you from parsing formatted strings. Note that fuel fields on detail pages are dicts with `"raw"` and `"formatted"` keys, so the code extracts `fuelCategory["formatted"]` for a clean string like `"Gasoline"`.

**Example Output**

```json
{
  "title": "Opel Astra Cabrio*Allwetter*",
  "price": "€ 600",
  "price_raw": 600,
  "negotiable": false,
  "mileage": "157,762 km",
  "mileage_raw": 157762,
  "first_registration": "06/2003",
  "body_type": "Convertible",
  "body_color": "Silver",
  "fuel": "Gasoline",
  "transmission": "Manual",
  "power_kw": 108,
  "power_hp": 147,
  "displacement_ccm": 2198,
  "seller_name": "Autohaus Panke",
  "seller_type": "Dealer",
  "url": "https://www.autoscout24.com/offers/opel-astra-..."
}
```
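Since detail-page fields arrive either as plain values or as `{"raw": ..., "formatted": ...}` dicts, a small normalizer keeps the extraction code tidy. `formatted_or_value` is a hypothetical helper, sketched under the field shapes described above:

```python
# Normalize fields that may be a plain value, a raw/formatted dict, or missing
def formatted_or_value(field, default="N/A"):
    """Return the "formatted" entry of a dict field, or the value itself."""
    if isinstance(field, dict):
        return field.get("formatted", default)
    return field if field is not None else default

print(formatted_or_value({"raw": "B", "formatted": "Gasoline"}))  # Gasoline
print(formatted_or_value("Diesel"))  # Diesel
print(formatted_or_value(None))      # N/A
```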



### Extracting Seller Information

Seller data on detail pages includes dealer name, phone numbers, address, and a link to the dealer's profile. The `seller` object contains:

- `companyName`: the dealership name (for dealer listings)
- `contactName`: the contact person's name
- `phones`: an array of phone objects with `formattedNumber` and `callTo` fields
- `type`: either `"Dealer"` or `"Private"`
- `links.infoPage`: URL to the dealer's full profile

For private sellers, `companyName` is empty and only `contactName` plus a limited phone object are available. Be mindful of data sensitivity when collecting private seller details.
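The seller fields above can be flattened the same way as the listing data. This is a hedged sketch: `flatten_seller` is a hypothetical helper, and the sample dict only mirrors the documented keys; real payloads may carry more fields.

```python
# Hypothetical helper: flatten the seller object into a simple record
def flatten_seller(seller: dict) -> dict:
    phones = seller.get("phones", [])
    return {
        # Dealers have companyName; private sellers fall back to contactName
        "name": seller.get("companyName") or seller.get("contactName", "Private"),
        "type": seller.get("type", "N/A"),
        "phones": [p.get("formattedNumber") for p in phones],
        "profile_url": seller.get("links", {}).get("infoPage"),
    }

dealer = {
    "companyName": "Autohaus Panke",
    "type": "Dealer",
    "phones": [{"formattedNumber": "+49 30 1234567", "callTo": "+49301234567"}],
    "links": {"infoPage": "https://www.autoscout24.com/dealers/autohaus-panke"},
}
print(flatten_seller(dealer)["name"])  # Autohaus Panke
```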

## How to Handle AutoScout24's Anti-Bot Protection

Even with `__NEXT_DATA__` extraction, your requests will hit Akamai's bot detection at scale. Here are the AutoScout24-specific strategies that work.

### Session Management and Cookie Handling

Akamai sets `_abck` and `ak_bmsc` cookies after the initial JavaScript challenge. A pure `requests`-based approach skips JS execution, so these cookies never get set. For light scraping (under 20 pages), you may not need the cookies. For larger jobs, you need a strategy.

One approach uses [curl\_cffi](https://github.com/lexiforest/curl_cffi) to match a real browser's TLS fingerprint. `curl_cffi` sends the same JA3 fingerprint as Chrome, which stops Akamai from flagging the connection at the TLS level. Combined with session cookie persistence:

```python
# Conceptual example: cookie patterns may change
import random
import time

from curl_cffi import requests as curl_requests

# Create a session that impersonates Chrome's TLS fingerprint
session = curl_requests.Session(impersonate="chrome")

# First request passes Akamai's TLS check and collects cookies
response = session.get("https://www.autoscout24.com/lst?atype=C&cy=D")
print(f"Status: {response.status_code}")

# Subsequent requests reuse the session cookies
# Akamai cookies (_abck, ak_bmsc) persist across requests
for page in range(2, 6):
    time.sleep(random.uniform(2, 4))
    response = session.get(f"https://www.autoscout24.com/lst?atype=C&cy=D&page={page}")
    print(f"Page {page}: {response.status_code}")
    # Refresh the session if cookies expire (typically after ~30 minutes)
```



`curl_cffi` matches Chrome's TLS fingerprint at the connection level, which is the first check Akamai runs. Without TLS matching, Akamai spots a Python HTTP library before even reading the request headers. For a walkthrough of TLS fingerprinting and JA3/JA4 signatures, see our [TLS blocking guide](https://scrapfly.io/blog/posts/how-to-avoid-web-scraping-blocking-tls).

### Proxy Rotation and Geo-Targeting for European Markets

Residential proxies are key for scraping AutoScout24 at scale. Akamai's IP reputation database flags datacenter addresses right away. Match the proxy country to the target domain:

```python
# Conceptual example: proxy configuration for European markets
proxy_pools = {
    "de": "http://user:pass@de-residential-proxy:port",  # autoscout24.de
    "it": "http://user:pass@it-residential-proxy:port",  # autoscout24.it
    "nl": "http://user:pass@nl-residential-proxy:port",  # autoscout24.nl
    "at": "http://user:pass@at-residential-proxy:port",  # autoscout24.at
    "be": "http://user:pass@be-residential-proxy:port",  # autoscout24.be
    "fr": "http://user:pass@fr-residential-proxy:port",  # autoscout24.fr
    "es": "http://user:pass@es-residential-proxy:port",  # autoscout24.es
}

def scrape_country(country_code, search_url):
    """Scrape AutoScout24 with a country-matched proxy."""
    proxy = proxy_pools.get(country_code)
    if not proxy:
        print(f"No proxy configured for {country_code}")
        return None
    proxies = {"http": proxy, "https": proxy}
    session = create_session()
    # Rotate IP per page, not per request (maintain session consistency)
    time.sleep(random.uniform(2, 5))
    response = session.get(search_url, proxies=proxies, timeout=15)
    return response
```



Rotate the IP between pages, not between individual requests on the same page. Akamai tracks session consistency, so switching IPs mid-session raises a red flag. Keep the same IP for a full page load, then rotate.

For more on proxy strategies, see our [proxy detection bypass](https://scrapfly.io/blog/posts/bypass-proxy-detection-with-browser-fingerprint-impersonation) guide. Also check our guide on [hiding your IP while scraping](https://scrapfly.io/blog/posts/how-to-hide-your-ip-address-while-scraping).

## How to Scrape AutoScout24 with Scrapfly



For teams that want to skip anti-bot engineering and go straight to data extraction, Scrapfly handles AutoScout24's [Akamai protection](https://scrapfly.io/bypass/akamai) automatically.

Key features for AutoScout24 scraping:

- [Anti-Scraping Protection (ASP)](https://scrapfly.io/docs/scrape-api/anti-scraping-protection) bypasses Akamai Bot Manager without manual cookie or TLS handling
- Rotating residential proxies across 100+ countries to target `.de`, `.it`, `.nl`, or any AutoScout24 domain
- Built-in JavaScript rendering through headless browsers
- [Python SDK](https://scrapfly.io/docs/sdk/python) for quick integration

Here's how to scrape AutoScout24 with Scrapfly:

```python
from scrapfly import ScrapflyClient, ScrapeConfig, ScrapeApiResponse
import json
from bs4 import BeautifulSoup

scrapfly = ScrapflyClient(key="YOUR-SCRAPFLY-KEY")

# Scrape car listings
result: ScrapeApiResponse = scrapfly.scrape(ScrapeConfig(
    url="https://www.autoscout24.com/lst?atype=C&cy=D&sort=standard&ustate=N%2CU",
    tags=["autoscout24", "car-listings"],
    asp=True,          # Turn on Anti-Scraping Protection for Akamai bypass
    render_js=True,    # Render JavaScript for full page data
    country="DE",      # Use a German proxy for .de listings
))
# Parse __NEXT_DATA__ from the response
soup = BeautifulSoup(result.content, "html.parser")
data = json.loads(soup.find("script", id="__NEXT_DATA__").string)
listings = data["props"]["pageProps"]["listings"]
print(f"Found {len(listings)} listings via Scrapfly")

# Scrape individual car details
car_result: ScrapeApiResponse = scrapfly.scrape(ScrapeConfig(
    url="https://www.autoscout24.com/offers/opel-astra-cabrio-allwetter-gasoline-silver-ef373324-f78f-46e8-9cbc-641ad9af197d",
    tags=["autoscout24", "car-details"],
    asp=True,
    render_js=True,
    country="DE",
))
print(f"Detail page status: {car_result.status_code}")
```



`asp=True` turns on Anti-Scraping Protection, which handles Akamai's TLS fingerprinting, JS challenges, and cookie management. `country="DE"` routes the request through a German residential proxy for geo-restricted listings.

Self-hosted scraping stays viable for teams with existing proxy setups. Scrapfly is one managed option for teams that want speed over full control.






## FAQ

**Does AutoScout24 use Akamai bot protection?**

Yes. AutoScout24 uses Akamai Bot Manager to detect and block automated access. You'll get 403 responses when scraping at scale. The anti-bot section earlier in this article covers bypass strategies.

**How much does it cost to access AutoScout24 data through an API?**

The official Listing Creation API is free for registered dealers but only supports publishing listings, not data extraction. Third-party scraping tools charge per result. Building your own scraper is free, but it needs a proxy setup. Check each provider's pricing page for current rates.

**Can I scrape AutoScout24 without writing Python code?**

Yes. Several no-code scraping tools offer pre-built AutoScout24 scrapers through a web interface. You configure the search URL, set parameters, and export results in CSV or JSON format.









## Conclusion

AutoScout24 is one of the richest automotive data sources in Europe. The gap between the official API (listing creation for dealers) and what users need (data extraction) means scraping is the practical path. `__NEXT_DATA__` JSON extraction stays stable across deploys and gives you structured data without fragile CSS selectors.

You have three paths for getting AutoScout24 data. The official API works only for dealers publishing listings. DIY scraping with `__NEXT_DATA__` plus anti-bot handling gives full control. Managed tools offer faster setup with per-result costs. For teams that want Akamai bypass without building proxy setups, [Scrapfly's Anti-Scraping Protection](https://scrapfly.io) handles it automatically.

**Legal Disclaimer and Precautions**

This tutorial covers popular web scraping techniques for education. Interacting with public servers requires diligence and respect:

- Do not scrape at rates that could damage the website.
- Do not scrape data that's not available publicly.
- Do not store PII of EU citizens protected by GDPR.
- Do not repurpose *entire* public datasets which can be illegal in some countries.

Scrapfly does not offer legal advice, but these are good general rules to follow. For anything beyond these basics, consult a lawyer.



 
