Sr. Data Platform Engineer
Mexico City - Paseo
Marsh McLennan
Marsh McLennan is the world’s leading professional services firm in risk, strategy and people. We bring together experts from across our four global businesses — Marsh, Guy Carpenter, Mercer and Oliver Wyman — to help make organizations more...
Company: Marsh
Description:
What can you expect?
At MercerTech, the DnAi Platform Engineering team is seeking a Sr. Data Platform Engineer with strong experience in ETL, AWS, and PySpark. This key role involves leading the design, implementation, and operation of data platforms in hybrid environments (both on-prem and cloud), using modern technologies aligned with MMC’s architecture and security standards.
You’ll work with cross-functional teams to define robust data solutions, optimize pipelines, and enable a scalable infrastructure that supports data-driven decision-making across the organization.
We will count on you to:
- Design and implement a robust data platform using AWS, PySpark, and ETL tools.
- Manage and integrate cloud and SaaS solutions such as Databricks, Snowflake, and IDMC.
- Optimize the technology stack (PySpark, NiFi, etc.) to maximize performance and scalability.
- Lead the creation of efficient, high-quality data pipelines, providing technical guidance to development teams.
- Identify the best AWS tools and services to meet project-specific requirements.
- Collaborate with business, analytics, architecture, security, and infrastructure teams.
- Support platform modernization and the adoption of new technologies.
What you need to have:
- Degree in Engineering, Computer Science, or a related field.
- Proven experience in data engineering with strong expertise in ETL, AWS, and PySpark.
- Hands-on knowledge of tools such as Databricks, Snowflake, IDMC, and AWS Glue.
- Ability to design data architectures based on business and technical requirements.
- Strong technical leadership and effective communication skills with diverse teams.
What makes you stand out:
- Solid experience with complex ETL pipelines.
- Successful track record of projects deployed on AWS.
- Proficiency in PySpark for processing large-scale datasets.
- Familiarity with data governance and quality tools such as IDMC.
Why join our team:
- We help you be your best through professional development opportunities, interesting work and supportive leaders.
- We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have an impact on colleagues, clients and communities.
- Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being.