// WORK WITH US

Working at Scrapfly

Small team. Big impact.
Remote-first. No meetings.

We're building the future of web scraping and data extraction — at more than a billion requests per month. We look for talented people who are passionate about hard technical problems and helping developers worldwide access the data they need.

Apply by email with your LinkedIn, GitHub, and the position you're interested in: job[at]scrapfly.io

1B+

requests per month

UTC±3

EU-aligned timezone

100%

remote, bootstrapped

0

standing meetings


// WHY US

Why Work With Us

Remote-first, meeting-free, and optimized for engineers who want to ship, not manage process.

Remote-First Culture

Work from anywhere in the EU timezone (UTC±3). Slack for async chat, Notion for knowledge, GitHub for code. We trust you to manage your own time — no standups, no status theatre.

async-first
UTC±3
no standups

No-Meeting Policy

Focus on what matters. No unnecessary meetings, no agile ceremonies, no performative status updates. Clear tickets, direct communication, working code.

zero standups
written specs
deep work

Developer Resources

Pre-configured Linux remote workstation (8-core CPU, 32 GB RAM, 240 GB SSD) reachable via our internal VPN. Works with JetBrains, VS Code, and Cursor out of the box.

8-core / 32 GB
JetBrains
VS Code / Cursor
internal VPN

Contractor Model

Paid in USD with 20 paid leave days per year. You'll need a registered company (freelance, umbrella company, or similar) for invoicing — one that accepts international USD payments. Due to banking restrictions, we cannot work with contractors from certain countries; our remote policy lists the full exclusion set.


// TECH STACK

What You'll Work With

Polyglot by design — the right tool for each job, not whatever was trending when we started.

Infrastructure

  • Kubernetes (k8s) with Helm on Google Kubernetes Engine (GKE)
  • Docker for containerization
  • Terraform for infrastructure as code
  • Traefik for load balancing
  • k3d/k3s for local development

Databases & Messaging

  • MongoDB for log storage and analytics
  • MariaDB with ProxySQL for application data
  • Redis for rate limiting, distributed locks, and caching
  • ClickHouse for analytics
  • Google Cloud Storage for file storage
  • RabbitMQ for messaging and task distribution

Development Stack

  • PHP for the dashboard and user interface
  • Python for the scraping engine and test automation
  • Golang for the API gateway and proxy/network layer
  • C++ for Chromium browser automation
  • Linux-based development environment

Monitoring & Tools

  • Sentry for error tracking
  • Slack for team communication
  • Stripe for payment processing

// HIRING

Our Recruitment Process

Three stages. Fast turnaround. No trick questions, no whiteboard arithmetic.

STEP 01

Initial Interview

Technical candidates meet with our CTO, non-technical candidates with our CEO, followed by a Q&A with team members.

STEP 02

Competency Assessment

A real-world problem or take-home exercise that demonstrates your skills and your approach to open-ended work.

STEP 03

Final Decision

Team review and final decision on your application — usually within a week.


// OPEN POSITIONS

Roles We're Hiring For

Apply by email with your LinkedIn, GitHub, and the role you want: job[at]scrapfly.io

Automation Developer / Advocate

Build, maintain, and improve integrations with automation platforms (n8n, Zapier, Make.com, CrewAI, LangChain, and more), and showcase Scrapfly's capabilities through real-world automation use cases, tutorials, and demos. This role sits at the intersection of development and advocacy: you'll be hands-on with code and a visible guide helping our community succeed with automation.

See details

Key Responsibilities

  • Design, maintain, and improve integrations for platforms such as n8n, Zapier, Make.com, LangChain, CrewAI, MCP
  • Debug and resolve issues in integrations to ensure smooth developer experience
  • Collaborate with our platform team to address automation-driven use cases
  • Create clear documentation, tutorials, and showcase projects that illustrate Scrapfly in action
  • Publish content (guides, blog posts, videos, talks) to engage the automation community
  • Act as a developer advocate — listening to community feedback, bringing insights back into the product, and helping shape our automation strategy
  • Contribute to open-source SDKs and client libraries (mainly Python and JavaScript)

Qualifications

  • 2+ years of professional software development experience (Python and JavaScript)
  • Proven experience building integrations, plugins, or SDKs for third-party platforms
  • Strong communication skills: able to write clear docs, tutorials, or technical blog posts
  • Already familiar with the automation landscape and comfortable working across its major platforms
  • Past experience automating business processes in companies
  • Enjoy building integrations and have a strong eye for developer experience
  • Comfortable switching between writing code and writing content
  • Engage with developer communities, help others succeed, and make complex tools feel simple
  • Thrive in a small team where you have ownership and impact from day one

Nice to Have

  • Previous experience as a Developer Advocate or in developer-facing roles
  • Contributions to open-source automation platforms (e.g., n8n nodes, Zapier apps, LangChain tools)
  • Familiarity with APIs, webhooks, and event-driven architectures
  • Experience creating educational content (articles, videos, workshops, talks)
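As a sketch of the webhook and event-driven patterns listed above, here is a minimal dispatcher in Python. The event names and handlers are invented for illustration; they are not part of any Scrapfly API.

```python
import json

# Hypothetical event-driven webhook dispatcher: map event names to handlers,
# parse the incoming JSON body, and route it. Event names are made up.
HANDLERS = {}

def on(event: str):
    """Register a handler for an event name."""
    def register(fn):
        HANDLERS[event] = fn
        return fn
    return register

@on("scrape.completed")
def handle_completed(payload: dict) -> str:
    return f"scrape {payload['id']} done, {payload['pages']} pages"

def dispatch(raw_body: str) -> str:
    """Parse a webhook body and route it to the matching handler."""
    event = json.loads(raw_body)
    handler = HANDLERS.get(event.get("type"))
    if handler is None:
        return "ignored"  # unknown events are dropped, not errors
    return handler(event["data"])

body = json.dumps({"type": "scrape.completed",
                   "data": {"id": "job-42", "pages": 7}})
print(dispatch(body))  # scrape job-42 done, 7 pages
```

The registry-plus-decorator shape is the same one most automation platforms expose in their plugin SDKs: a trigger name bound to a handler function.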

View Full Job Description

Python Senior Engineer

Join a small team to architect, build, and maintain the core of our high-scale web scraping engine, handling over 1 billion monthly requests. You'll directly influence our technology decisions while tackling complex, exciting challenges in browser automation, advanced proxy management, distributed computing, and scalable microservices architecture.

See details

Key Responsibilities

  • Architect, develop, and optimize our Python-based distributed scraping engine
  • Introspect, profile, and identify code bottlenecks
  • Debug and resolve complex production issues that arise only under load or specific setups
  • Work on new features and product development

Qualifications

  • 7+ years of professional Python development in production environments
  • Deep mastery of Python internals: memory management, the GIL, asyncio vs blocking operations, and uvloop/libuv optimizations
  • Expertise in concurrent programming, including multiprocessing and threading for CPU-bound and I/O-bound workloads
  • Extensive experience building and interfacing with C libraries via CFFI
  • Proficiency in accelerating Python code with Cython
  • Solid competence in C or C++
  • Strong familiarity with Unix systems
  • In-depth expertise in HTTP protocols, networking fundamentals, and asynchronous programming
  • Strong written and verbal communication, with a knack for clear, concise documentation
  • Proven ability to dive deep into complex problems — autonomously debugging and delivering robust solutions
  • Skilled at exploring and comprehending unfamiliar codebases, driving continuous code quality improvements

Bonus Skills

  • Golang knowledge (all networking parts that communicate with the scrape engine are written in Go)
  • Previous experience with web scraping at high scale in real production workloads
  • Kubernetes internals
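To illustrate the asyncio-vs-blocking distinction called out in the qualifications, here is a small self-contained sketch (not Scrapfly code) that keeps a blocking call from stalling the event loop:

```python
import asyncio
import time

def blocking_fetch(url: str) -> str:
    """Stand-in for a blocking I/O call, e.g. a synchronous HTTP client."""
    time.sleep(0.1)  # simulated network latency
    return f"body of {url}"

async def fetch_all(urls: list[str]) -> list[str]:
    # Calling blocking_fetch directly inside a coroutine would block the
    # event loop; offload it to the default thread-pool executor so the
    # calls overlap and the loop stays responsive.
    loop = asyncio.get_running_loop()
    tasks = [loop.run_in_executor(None, blocking_fetch, u) for u in urls]
    return await asyncio.gather(*tasks)

urls = [f"https://example.com/{i}" for i in range(5)]
start = time.monotonic()
results = asyncio.run(fetch_all(urls))
elapsed = time.monotonic() - start
print(len(results), elapsed)  # 5 calls overlap: roughly 0.1s, not 0.5s
```

Swapping in uvloop (`uvloop.install()` before `asyncio.run`) replaces the default event loop with a libuv-based one, which is the kind of optimization the role works with.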

View Full Job Description

Technical Support Specialist

You will be instrumental in maintaining and improving our industry-leading web scraping capabilities. Your work will involve tackling complex extraction challenges, optimizing HTML parsing techniques, and enhancing our fingerprinting and HTTP handling strategies.

See details

Key Responsibilities

  • Technical Client Support: Guide developers through bug resolution and configuration adjustments
  • Web Scraping & Extraction: Develop and optimize Python-based solutions
  • Feature Development & Maintenance: Enhance key scraping features
  • CDP Browser Maintenance: Improve our browser automation stack
  • Blocked Target Analysis: Investigate and resolve blocking issues

Qualifications

  • 3+ years in Python development with web scraping focus
  • Expertise in HTML parsing, HTTP mechanics, and bot protection
  • Experience with CDP and browser automation tools
  • Familiarity with Kubernetes and Linux environments
  • Excellent written communication skills

View Full Job Description

Important Note

Due to banking restrictions, we cannot work with contractors from certain countries. Please check our remote policy for the complete list of excluded countries.

Don't see your role?

We're always open to exceptional engineers and contributors. Email us with your CV, GitHub, and what you'd want to build — we'll respond whether or not there's a formal opening.