Senior Data Engineer (English Required)

Mexico - Remote

DaCodes



Work at DaCodes!

We are a team of experts in software development and high-impact digital transformation.

For over 10 years, we've built technology- and innovation-driven solutions thanks to our team of 220+ talented #DaCoders, including developers, architects, UX/UI designers, PMs, QA testers, and more. Our team integrates into client projects across LATAM and the United States, delivering outstanding results.

At DaCodes, you'll accelerate your professional growth by collaborating on diverse projects across various industries and sectors.

Working with us will keep you versatile and agile, with opportunities to use cutting-edge technologies and collaborate with top-level professionals.

Our DaCoders play a crucial role in our success and that of our clients. You'll be the expert contributing to our projects, working with disruptive startups and global brands. Does this sound interesting to you?

We’re looking for talent to join our team—let’s work together!

The ideal candidate brings a unique mix of technical experience, curiosity, a logical and analytical mindset, proactivity, ownership, and a passion for teamwork.

Requirements

We are looking for a Senior Data Engineer to join our team and help design, build, and optimize data pipelines for large-scale applications. The ideal candidate has strong experience in data architecture, ETL/ELT processes, cloud platforms, and distributed systems.

This role requires expertise in handling big data, real-time processing, and data lakes while ensuring scalability, performance, and security. The candidate should be comfortable working in a fast-paced, agile environment and collaborating with data scientists, analysts, and software engineers to deliver high-quality data solutions.

Required Qualifications

🔹 5+ years of experience in data engineering, data architecture, or backend development.
🔹 Strong expertise in SQL and NoSQL databases (PostgreSQL, MySQL, MongoDB, DynamoDB, etc.).
🔹 Cloud expertise with AWS (preferred), GCP, or Azure.
🔹 Proficiency in Python, Java, or Scala for data processing and pipeline development.
🔹 Experience with big data frameworks like Apache Spark, Hadoop, or Flink.
🔹 Hands-on experience with ETL/ELT processes and data pipeline orchestration tools (Apache Airflow, dbt, Luigi, or Prefect); see the sketch after this list.
🔹 Experience with message queues and streaming technologies (Kafka, Kinesis, Pub/Sub, or RabbitMQ).
🔹 Knowledge of containerization and orchestration tools (Docker, Kubernetes).
🔹 Strong problem-solving skills and the ability to optimize performance and scalability.
🔹 English proficiency (B2 or higher) to collaborate with international teams.
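
To make the orchestration requirement concrete, below is a minimal sketch of the kind of daily ETL pipeline this role builds, written against Apache Airflow 2.x. The DAG name, schedule, and task bodies are hypothetical placeholders, not a prescribed implementation:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    """Pull raw records from a source system (API, database, etc.)."""


def transform():
    """Clean and reshape the extracted records."""


def load():
    """Write the result to the warehouse or data lake."""


with DAG(
    dag_id="daily_sales_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+; earlier 2.x uses schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Run the three steps in sequence.
    extract_task >> transform_task >> load_task
```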

Nice-to-Have Skills (Preferred)

✅ Experience with data lakehouse architectures (Delta Lake, Iceberg, Hudi); a brief sketch follows this list.
✅ Familiarity with Machine Learning (ML) and AI-related data workflows.
✅ Experience with Infrastructure as Code (Terraform, CloudFormation) for managing data environments.
✅ Knowledge of data security and compliance regulations (GDPR, CCPA, HIPAA).
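
As a small illustration of the lakehouse tooling mentioned above, here is a sketch of writing and reading back a Delta Lake table with PySpark. It assumes the delta-spark package is installed; the table path and sample data are hypothetical:

```python
from pyspark.sql import SparkSession

# Assumes delta-spark is installed (pip install delta-spark) so the
# Delta extensions below are on the classpath.
spark = (
    SparkSession.builder.appName("lakehouse_sketch")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

events = spark.createDataFrame(
    [(1, "signup"), (2, "purchase")], ["user_id", "event"]
)

# ACID write to a Delta table (path is a hypothetical placeholder).
events.write.format("delta").mode("overwrite").save("/tmp/lake/events")

# Time travel: read the table as of an earlier version.
v0 = spark.read.format("delta").option("versionAsOf", 0).load("/tmp/lake/events")
v0.show()
```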

Key Responsibilities

Design, develop, and maintain scalable and efficient data pipelines for batch and real-time processing (a streaming sketch follows this list).
Build and optimize data lakes, warehouses, and analytics solutions on cloud platforms (AWS, GCP, or Azure).
Implement ETL/ELT workflows using tools such as Apache Airflow, dbt, or Prefect.
Ensure data integrity, consistency, and governance through proper architecture and best practices.
Integrate data from various sources (structured and unstructured), including APIs, streaming services, and databases.
Work with data scientists and analysts to ensure high availability and accessibility of data for analytics and machine learning models.
Monitor, troubleshoot, and improve the performance of data pipelines.
Implement security best practices for data access, encryption, and compliance.
Collaborate with software engineers to integrate data pipelines into applications and services.
Stay up to date with the latest trends in big data, cloud technologies, and data engineering best practices.
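
For the real-time side of these responsibilities, here is a minimal sketch of a streaming ingestion loop using the kafka-python client. The broker address, topic name, and message fields are hypothetical:

```python
import json

from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical broker and topic; messages are assumed to be JSON-encoded orders.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    order = message.value
    # In a real pipeline: validate, enrich, and write to the lake or warehouse here.
    print(order.get("order_id"), order.get("amount"))  # hypothetical fields
```

In production such a loop would typically run inside a consumer group with explicit offset commits and dead-letter handling, but the shape of the work is the same.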

Benefits

  • Integration with global brands and disruptive startups.
  • Remote work/Home office. *You will be informed in the first session if a position requires a hybrid or on-site format. Don't worry, most are remote!
  • Work schedule aligned with your assigned team/project (client's time zone).
  • Monday to Friday work week.
  • Statutory benefits (benefits required by law).
  • Official holidays according to your assigned team/project.
  • Vacation days. *Available after six months with the company.
  • Day off on your birthday.
  • Major medical insurance.
  • Life insurance.
  • Virtual integration events and interest groups.
  • Meetups with special guests from companies, IT professionals, and prestigious universities.
  • Constant feedback and performance tracking.
  • Access to courses and certifications.
  • Multicultural work teams.
  • English classes.
  • Opportunities across our different business lines.

Proudly certified as a Great Place to Work!

Apply now
