Sr. Data Engineer II

Remote in Mexico

TrueML

TrueML makes financial technology that prioritizes customer experience and revolutionizes how consumers seek financial health.



Why TrueML?

TrueML is a mission-driven financial software company that aims to create better customer experiences for distressed borrowers. Consumers today want personal, digital-first experiences that align with their lifestyles, especially when it comes to managing finances. TrueML's approach uses machine learning to engage each customer digitally and adjust strategies in real time in response to their interactions. The TrueML team includes inspired data scientists, financial services industry experts, and customer experience fanatics building technology that serves people in a way that recognizes their unique needs and preferences as human beings, striving to ensure nobody gets locked out of the financial system.
About the Role:
As a Senior Data Engineer II, you will play a pivotal role in designing, building, and maintaining our cutting-edge data LakeHouse platform. You will leverage open table formats like Apache Iceberg to create scalable, reliable data solutions that enable optimized query performance across a broad spectrum of analytical workloads and emerging data applications. In this role, you'll develop and operate robust data pipelines, integrating diverse source systems and implementing efficient data transformations for both batch and streaming data.

Work-Life Benefits

  • Unlimited PTO
  • Medical benefit contributions in accordance with local laws and employment agreement type

What you'll do:

  • Data LakeHouse: Design, build, and operate robust data lakehouse solutions using open table formats such as Apache Iceberg, delivering a scalable, reliable platform with optimized query performance for a wide range of analytical workloads and emerging data applications.
  • Pipeline and Transformation: Integrate with diverse source systems and construct scalable data pipelines. Implement efficient data transformation logic for both batch and streaming data, accommodating various data formats and structures.
  • Data Modeling: Analyze business requirements and profile source data to design, develop, and implement robust data models and curated data products that power reporting, analytics, and machine learning applications.
  • Data Infrastructure: Develop and manage a scalable AWS cloud infrastructure for the data platform, employing Infrastructure as Code (IaC) to reliably support diverse data workloads. Implement CI/CD pipelines for automated, consistent, and scalable infrastructure deployments across all environments, adhering to best practices and company standards.
  • Monitoring and Maintenance: Monitor data workloads for performance and errors, and troubleshoot issues to maintain high levels of data quality, freshness, and adherence to defined SLAs.
  • Collaboration: Collaborate closely with Data Services and Data Science colleagues to drive the evolution of our data platform, focusing on delivering solutions that empower data users and satisfy stakeholder needs throughout the organization.

A successful candidate will have:

  • Bachelor's degree in Computer Science, Engineering, or a related technical field (Master's degree is a plus).
  • 5+ years of hands-on engineering experience (software or data), including 3+ years in data-focused roles.
  • Experience implementing data lake and data warehousing platforms.
  • Strong Python and SQL skills applied to data engineering tasks.
  • Proficiency with the AWS data ecosystem, including services like S3, Glue Catalog, IAM, and Secrets Manager.
  • Experience with Terraform and Kubernetes.
  • Track record of successfully building and operationalizing data pipelines.
  • Experience working with diverse data stores, particularly relational databases.

You might also have:

  • Experience with Airflow, dbt, and Snowflake.
  • Experience with stream processing technologies, e.g., Flink or Spark Streaming.
  • Familiarity with Domain-Driven Design principles and event-driven architectures.
  • Certification in relevant technologies or methodologies.

This role is approved to hire only within the following LatAm countries: Mexico, Argentina, and the Dominican Republic.
We are a dynamic group of people who are subject matter experts with a passion for change. Our teams are crafting solutions to big problems every day. If you’re looking for an opportunity to do impactful work, join TrueML and make a difference.
Our Dedication to Diversity & Inclusion

TrueML is an equal-opportunity employer. We promote, value, and thrive with a diverse & inclusive team. Different perspectives contribute to better solutions, and this makes us stronger every day. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.