Data Engineer - Web Scraping Specialist

Abu Dhabi, United Arab Emirates

Creed Media

With over 2,000 successful campaigns, we are trusted by global brands and well-known labels worldwide.


About Creed Media

We are Creed Media, a digital marketing agency working with global clients and artists to create culturally relevant and data-driven campaigns. We're currently building a new product within the company — a real-world platform that will be used by businesses and end-users around the globe.

As part of this initiative, we're expanding our team and looking for a Data Engineer specializing in web scraping to help power one of our product's most critical data systems. We focus on talent, not location — whether you're in Abu Dhabi, elsewhere in the MENA region, or anywhere else in the world, we offer flexible work arrangements and UAE residency sponsorship for the right candidates.

The Opportunity

We're looking for a Data Engineer - Web Scraping Specialist (D3) to own and scale our data acquisition infrastructure. At the D3 level, you'll design scalable data solutions, optimize our data platform, and enable data-driven decisions across the company.

The foundation is already in place — we've built custom scrapers using NestJS, proxy services, and data pipelines — and now we need a data engineer who can own, maintain, and scale these systems to handle serious growth. You'll build complex data pipelines that power our entire product ecosystem.

What You'll Do

As a Data Engineer (D3) specializing in web scraping, you'll own significant parts of our data infrastructure, building complex acquisition pipelines and enabling analytics at scale.

Core D3 Responsibilities:

  • Design and implement complex data pipelines for web scraping and data acquisition

  • Optimize data platform performance — ensuring our scraping infrastructure scales smoothly

  • Build real-time streaming solutions for continuous data collection and processing

  • Implement data governance frameworks for scraped data quality and compliance

  • Partner with product and engineering teams to prioritize data acquisition needs

  • Lead data infrastructure projects that power business decisions

Specialized Scraping Focus:

  • Own and maintain existing scraper infrastructure built in NestJS

  • Proactively monitor scraping targets and adapt to platform changes, rate limits, and shifting API behavior

  • Implement proxy rotation strategies and countermeasures to anti-bot systems for operational continuity

  • Design robust, resilient systems ready to handle millions of scraping targets

  • Ensure data freshness and quality across all acquisition pipelines
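To make the proxy-rotation responsibility above concrete, here is a minimal sketch of a round-robin proxy pool with failure eviction. All names and thresholds are illustrative assumptions, not part of Creed Media's actual codebase:

```typescript
// Minimal round-robin proxy pool with failure-based eviction (illustrative sketch).
interface Proxy {
  url: string;
  failures: number;
}

class ProxyPool {
  private index = 0;

  constructor(
    private proxies: Proxy[],
    private maxFailures = 3, // evict a proxy after this many consecutive failures
  ) {}

  // Return the next proxy in the pool, cycling round-robin.
  next(): Proxy {
    if (this.proxies.length === 0) throw new Error("proxy pool exhausted");
    const proxy = this.proxies[this.index % this.proxies.length];
    this.index++;
    return proxy;
  }

  // Record a failed request; drop the proxy once it crosses the threshold.
  reportFailure(proxy: Proxy): void {
    proxy.failures++;
    if (proxy.failures >= this.maxFailures) {
      this.proxies = this.proxies.filter((p) => p.url !== proxy.url);
    }
  }

  // A successful request resets the failure count.
  reportSuccess(proxy: Proxy): void {
    proxy.failures = 0;
  }

  get size(): number {
    return this.proxies.length;
  }
}
```

In a real scraper the selected proxy would be handed to the browser automation layer (for example, via a launch argument), and eviction would typically feed back into monitoring so exhausted pools trigger an alert.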

Our Tech Stack

Data Infrastructure:

  • Backend: NestJS (TypeScript) for scraping services

  • Data Processing: Real-time streaming and batch processing

  • Storage: Data warehousing and optimization

  • Monitoring: Pipeline observability and alerting
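As one illustration of the observability and alerting item above, a common pipeline check is data freshness: flag any scrape target whose last successful run is older than a threshold. The shapes and names here are hypothetical:

```typescript
// Report scrape targets whose data has gone stale (hypothetical shapes).
interface TargetStatus {
  target: string;
  lastSuccessAt: Date;
}

// Return the targets whose last successful scrape is older than maxAgeMs.
function staleTargets(
  statuses: TargetStatus[],
  maxAgeMs: number,
  now: Date = new Date(),
): string[] {
  return statuses
    .filter((s) => now.getTime() - s.lastSuccessAt.getTime() > maxAgeMs)
    .map((s) => s.target);
}
```

In practice the output of a check like this would feed the alerting layer, paging an engineer whenever the stale list is non-empty.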

Scraping Technology:

  • Automation: Puppeteer, Playwright, Cheerio

  • Infrastructure: Docker containerization, proxy services

  • Anti-detection: Advanced evasion and rotation strategies
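Rate limits come up repeatedly in this role, and a standard building block for handling them is exponential backoff with jitter. A minimal sketch, with illustrative defaults and an injectable random source:

```typescript
// Exponential backoff with "full jitter" for rate-limited requests (illustrative sketch).
function backoffDelayMs(
  attempt: number, // 0-based retry attempt
  baseMs = 1_000, // delay budget before the first retry
  capMs = 60_000, // never budget more than this per retry
  jitter: () => number = Math.random, // injectable for deterministic testing
): number {
  // Double the delay budget each attempt, capped at capMs.
  const exp = Math.min(capMs, baseMs * 2 ** attempt);
  // Full jitter: wait a random duration in [0, exp) to desynchronize workers.
  return Math.floor(jitter() * exp);
}
```

Jitter matters at scale: if many scraper workers retry on a fixed schedule, they hammer the target in synchronized bursts and re-trigger the same rate limit.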

What We're Looking For

D3 Technical Requirements:

  • Advanced data pipeline design — you can architect scalable data acquisition systems

  • Strong JavaScript/TypeScript skills, particularly for browser automation and data processing

  • Distributed computing experience with tools like Spark or similar frameworks

  • Data modeling and architecture expertise for large-scale scraping operations

  • Query optimization and performance tuning capabilities

Scraping Specialization:

  • Expert-level experience with Puppeteer, Playwright, or Cheerio for web scraping

  • Debugging and reverse-engineering sites using Chrome DevTools

  • Experience implementing and managing proxy services and countermeasures to anti-bot systems

  • Complex scraping workflows at scale — handling millions of targets

  • Containerization experience (Docker) and microservices architecture

Data Engineering Leadership:

  • Experience leading data infrastructure projects

  • Ability to optimize platforms for 10x growth

  • Strong collaboration with data scientists, analysts, and product teams

  • Focus on data quality, governance, and compliance

Your Growth Timeline

After 1 Month:

  • Fully onboarded to current scraping systems and data infrastructure

  • Understanding of platforms we scrape and how data flows through our product

  • Begin performing regular maintenance and optimization on scraper fleet

After 2 Months:

  • Taking full ownership of scraping infrastructure and data pipelines

  • Implementing structural improvements — optimizations and scaling features

  • Defining proxy handling strategies and anti-bot evasion at scale

  • Building data governance frameworks for quality and compliance

After 3-5 Months:

  • Leading complex data projects — robust systems handling millions of targets

  • Optimizing platform performance, with measurable gains in speed and cost efficiency

  • Architecting new data solutions for additional platforms and use cases

  • Recognized as data acquisition expert — teams rely on your infrastructure

D4 Growth Path:

  • Architect data platform components across multiple domains

  • Lead multi-quarter data initiatives

  • Influence data strategy company-wide

  • Expand expertise beyond scraping into broader data engineering

Compensation & Benefits

D3 Salary Range: AED 13,000–19,000 monthly, based on experience and performance

Career Growth:

  • Clear path to D4 level (typically 2-3 years)

  • Opportunities to architect data platform components

  • Growth into company-wide data strategy influence

  • Expansion into machine learning pipelines and advanced analytics

Benefits:

  • Flexible work arrangements — Remote, hybrid, or on-site in Abu Dhabi

  • UAE residency sponsorship available for qualified candidates

  • Travel opportunities — Paid collaboration visits to our global offices

  • Professional development — Growth budget and learning opportunities

  • Competitive time off and flexible working hours

  • Creative environment — Work with global brands and innovative campaigns

Interview Process

We believe in a transparent, respectful hiring process designed to evaluate D3-level data engineering capabilities:

  1. Application Review — Share your background and data engineering experience

  2. Intro Call — 30-minute chat with our Recruitment Manager about motivations and career goals

  3. Technical Challenge — Real-world data acquisition and pipeline problem (take-home, reasonable scope)

  4. Technical Deep Dive — Review your challenge, discuss data architecture, and explore scraping strategies

  5. Team & Collaboration — Meet the engineering team, discuss cross-functional work and data partnerships

  6. Leadership Alignment — Final conversation about D3 expectations and growth path to D4

Timeline: Typically 1-2 weeks from application to offer

Focus Areas: Data modeling and architecture, distributed systems, performance optimization, and real-world problem solving

Ready to Build Scalable Data Solutions?

If you're excited about designing data infrastructure that powers real business decisions and user experiences, we'd love to hear from you.

Apply now or reach out with questions — we're happy to discuss how this role fits your data engineering career goals.

Creed Media is committed to building a diverse, inclusive team and welcomes applications from all qualified candidates regardless of background or location.

