Sr. Data Engineer - Hybrid CDMX

Mexico City, CDMX, Mexico

Nearshore Cyber

About the Role

Our client is looking for a Staff Data Engineer to be the steward of our data layer, ensuring that our AI and ML models have clean, structured, and high-quality data. This is an opportunity for a high-performing engineer to take ownership of our data platform: designing and building scalable ingestion, transformation, and storage solutions for a fast-growing AI-driven sales intelligence product.

You'll build and optimize data pipelines that ingest, transform, and correlate structured and unstructured data from multiple sources (CRM, public datasets, web scraping). You'll work closely with ML and AI teams to ensure that our models are powered by the right data at the right time.

Why This Role?

  • High ownership: You'll be responsible for designing, maintaining, and evolving our data platform.
  • Be the expert: You'll shape how data is structured, transformed, and optimized for ML models.
  • Direct impact: Your work will power AI-driven sales recommendations for enterprise users.

Responsibilities

  • Own and maintain scalable data pipelines using Python, SQL, Airflow, and Spark (Databricks).
  • Develop data ingestion strategies using APIs, Airbyte, and web scraping.
  • Transform and clean data for ML models using Databricks (or Spark-based systems).
  • Optimize storage layers using a Medallion architecture (Bronze/Silver/Gold) approach.
  • Ensure data quality, governance, and observability across all pipelines.
  • Collaborate with ML, AI, and backend teams to integrate data into AI models.
  • Continuously refine and improve how data is structured, stored, and served.

What We're Looking For

  • 5+ years of experience in data engineering with strong Python & SQL expertise.
  • Hands-on experience with Airflow, ETL pipelines, and Spark (Databricks preferred).
  • Experience integrating structured & unstructured data from APIs, CRMs, and web sources.
  • Ability to own and scale data infrastructure in a fast-growing AI-driven company.
  • Strong problem-solving skills and a desire to improve how data is structured for ML.

Bonus Points

  • Exposure to Golang for API development (not required, but helpful).
  • Experience with MLOps (feature stores, model data versioning, SageMaker, ClearML).
  • Familiarity with Terraform, Kubernetes, or data pipeline automation.
  • Experience in database design to support customer-facing access patterns.