Senior Data Engineer
Mexico City, Mexico City, Mexico
Arkham Technologies
Arkham transforms operations of enterprises in the Americas through exceptional Data & AI software.
About Arkham
Arkham is a Data & AI Platform—a suite of powerful tools designed to help you unify your data and use the best Machine Learning and Generative AI models to solve your most complex operational challenges.
Today, industry leaders like Circle K, Mexico Infrastructure Partners, and Televisa Editorial rely on our platform to simplify access to data and insights, automate complex processes, and optimize operations. With our platform and implementation service, our customers save time, reduce costs, and build a strong foundation for lasting Data and AI transformation.
About the Role
We are looking for a Senior Data Engineer to own our high-performance Data Platform based on the Lakehouse architecture. In this role, you will work with cutting-edge technologies such as Apache Spark, Trino, and Delta Lake, ensuring data governance and interoperability across platforms. You'll play a key role in shaping our data infrastructure, working across the entire data lifecycle—from ingestion to transformation and activation.
Requirements
Key Responsibilities
- Lead the next phase of our Data Platform – Develop and enhance Arkham’s Data Platform, following Lakehouse architecture principles and ensuring data governance.
- Data Ingestion Pipelines – Design and implement pipelines to extract data from structured, semi-structured, and unstructured sources.
- Data Pipeline Orchestration – Create, monitor, and optimize multiple data extraction and transformation pipelines.
- Data Catalog Integration – Ensure interoperability between data catalogs and various query engines.
- Cluster Management & Observability – Oversee cluster performance and implement observability solutions to maintain optimal execution of data pipelines.
- End-to-End Data Lifecycle Management – Maintain high data quality and usability across integration, transformation, and activation stages.
Qualifications
- Experience: 5+ years in data engineering, data architecture, or a related field.
- Technical Expertise: Proficiency in Apache Spark, Delta Lake, and Trino.
- Programming Skills: Strong experience with Python for scripting and automation.
- Cloud Knowledge: Hands-on experience with AWS services, including Glue, S3, and EMR.
- Big Data: Understanding of distributed data systems and query engines.
- Problem-Solving: Excellent analytical and debugging skills.