SDE - Infrastructure Engineer (Remote)

Phoenix, Arizona, United States of America

Bungee Tech


Company Description:

ClearDemand is the leader in AI-driven price and promotion optimization for retailers. Our platform transforms pricing from a challenge to a competitive advantage, helping retailers make smarter, data-backed decisions across the entire pricing lifecycle. By integrating competitive intelligence, pricing rules, and demand modeling, we enable retailers to maximize profit, drive growth, and enhance customer loyalty — all while maintaining pricing compliance and brand integrity. With ClearDemand, retailers stay ahead of the market, automate complex pricing decisions, and unlock new opportunities for growth.


Why This Role Matters:

Data is the foundation of our business, and your work will ensure that we continue to deliver high-quality competitive intelligence at scale. Web platforms are constantly evolving, deploying sophisticated anti-bot measures—your job is to stay ahead of them. If you thrive on solving complex technical challenges and enjoy working with real-world data at an immense scale, this role is for you.


We seek a Software Development Engineer with expertise in cloud infrastructure, Big Data, and web crawling technologies. This role bridges site reliability engineering with scalable data extraction, ensuring our infrastructure remains robust and capable of handling high-volume data collection. You will design resilient systems, optimize automation pipelines, and tackle challenges posed by advanced bot-detection mechanisms.



Key Responsibilities:
  • Architect, deploy, and manage scalable cloud environments (AWS/GCP/DO) that support distributed data processing and handle terabyte-scale datasets and billions of records efficiently.
  • Automate infrastructure provisioning, monitoring, and disaster recovery using tools like Terraform, Kubernetes, and Prometheus.
  • Optimize CI/CD pipelines to ensure seamless deployment of web scraping workflows and infrastructure updates.
  • Develop and maintain stealthy web scrapers using Puppeteer, Playwright, and headless Chromium browsers (a short illustrative sketch follows this list).
  • Reverse-engineer bot-detection mechanisms (e.g., TLS fingerprinting, CAPTCHA challenges) and implement evasion strategies.
  • Monitor system health, troubleshoot bottlenecks, and ensure 99.99% uptime for data collection and processing pipelines.
  • Implement security best practices for cloud infrastructure, including intrusion detection, data encryption, and compliance audits.
  • Partner with data collection, ML, and SaaS teams to align infrastructure scalability with evolving data needs.
  • Research emerging anti-bot technologies such as Kasada, PerimeterX, Akamai, and Cloudflare to stay ahead of detection trends.
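
For illustration only, here is a minimal sketch of the kind of headless-browser collection task described above, using Playwright's Python API. The target URL, selectors, and browser settings are placeholders invented for the example, not details of our pipeline.

# Illustrative sketch: a hypothetical product-page fetch with Playwright.
from playwright.sync_api import sync_playwright

def fetch_listing(url: str) -> dict:
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        # A realistic user agent, viewport, and locale reduce obvious headless signals.
        context = browser.new_context(
            user_agent=(
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
                "AppleWebKit/537.36 (KHTML, like Gecko) "
                "Chrome/124.0.0.0 Safari/537.36"
            ),
            viewport={"width": 1366, "height": 768},
            locale="en-US",
        )
        page = context.new_page()
        page.goto(url, wait_until="networkidle")
        record = {
            "title": page.locator("h1").first.inner_text(),
            "price": page.locator("[data-testid='price']").first.inner_text(),
        }
        browser.close()
        return record

if __name__ == "__main__":
    print(fetch_listing("https://example.com/product/123"))

In production this kind of script would typically run inside containerized workers with proxy rotation and per-site configuration; those pieces are omitted here for brevity.
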
Required Skills:
  • 4–6 years of experience in site reliability engineering and cloud infrastructure management.
  • Proficiency in Python and JavaScript for scripting and automation.
  • Hands-on experience with Puppeteer/Playwright, headless browsers, and anti-bot evasion techniques.
  • Knowledge of networking protocols, TLS fingerprinting, and CAPTCHA-solving frameworks.
  • Experience with monitoring and observability tools such as Grafana, Prometheus, and Elasticsearch, plus familiarity with monitoring and optimizing resource utilization in distributed systems.
  • Experience with data lake architectures and optimizing storage using formats such as Parquet, Avro, or ORC (a short illustrative sketch follows this list).
  • Strong proficiency in cloud platforms (AWS, GCP, or Azure) and containerization/orchestration (Docker, Kubernetes).
  • Deep understanding of infrastructure-as-code tools (Terraform, Ansible).
  • Deep experience in designing resilient data systems with a focus on fault tolerance, data replication, and disaster recovery strategies in distributed environments.
  • Experience implementing observability frameworks, distributed tracing, and real-time monitoring tools.
  • Excellent problem-solving abilities, with a collaborative mindset and strong communication skills.
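
As a small, self-contained illustration of the columnar-storage point above, the sketch below writes a few hypothetical price records to Parquet with pyarrow. The schema, values, and file path are invented for the example.

# Illustrative sketch: hypothetical scraped records written to Parquet.
import pyarrow as pa
import pyarrow.parquet as pq

records = [
    {"sku": "A-100", "price": 19.99, "retailer": "example.com"},
    {"sku": "A-101", "price": 24.50, "retailer": "example.com"},
]

# Columnar formats such as Parquet compress well and let readers prune columns
# and row groups, which is what keeps terabyte-scale scans affordable.
table = pa.Table.from_pylist(records)
pq.write_table(table, "prices.parquet", compression="snappy")

print(pq.read_table("prices.parquet"))
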
Why Join Us:
At ClearDemand, you’ll be at the forefront of innovation in the data engineering space, working with cutting-edge technologies and a talented team. If you're passionate about building scalable systems, handling large-scale distributed data, and solving complex data challenges, we’d love to have you on board.

Category: Engineering Jobs

Tags: Ansible Architecture Avro AWS Azure Big Data CI/CD Distributed Systems Docker Elasticsearch Engineering GCP Grafana JavaScript Kubernetes Machine Learning Parquet Pipelines Playwright Python Research Security Terraform

Perks/benefits: Startup environment

Regions: Remote/Anywhere, North America
Country: United States
