Semi-Senior / Senior Software Engineer - Data Engineer

LATAM (Remote)

Ekumen

We are an international engineering boutique providing advanced software development services and technology.



#poweringyouringenuity  🚀

Our mission is to bridge top-level technology companies with engineering talent from across the globe. With presence in LATAM, USA and Europe, we empower companies by providing remote engineering teams of all levels tailored to the needs of each project.

Our teams are passionate about technology and thrive on challenges. We value technical expertise and a willingness to learn new things: since every engagement is different, enthusiasm for picking up new languages, tools, and frameworks is part of our DNA. Our software engineering teams follow coding best practices to ensure the readability, reusability, and scalability of our systems' designs and implementations.

We are looking for a skilled and motivated Software Engineer with a strong focus on data engineering and big data processing to join our team. In this role, you will be instrumental in maintaining and enhancing a critical data ingestion pipeline, supporting the processing of large-scale datasets, and ensuring the performance and reliability of our data platform.

Your Role and Responsibilities:

  • Maintain, troubleshoot, and optimize our existing data ingestion pipeline, primarily developed in Python.

  • Design and implement robust ETL processes using modern big data frameworks such as Apache Beam or Apache Spark.

  • Develop and maintain backend systems written in Python and Java, with data storage on Google Cloud Spanner or similar technologies.

  • Write clean, well-documented, and testable code.

  • Participate in code reviews and contribute to the continuous improvement of our engineering practices.

What We're Looking For:

  • Proven experience as a Full Stack Engineer or Backend/Data Engineer with a strong focus on data-intensive applications.

  • Strong proficiency in Python, particularly for data pipelines and backend development.

  • Hands-on experience with Apache Beam, Apache Spark, or similar big data frameworks.

  • Experience building and maintaining ETL pipelines and working with large datasets.

  • Familiarity with observability tools and concepts (e.g., logging, metrics, tracing).

  • Solid understanding of cloud platforms (GCP, AWS, or Azure).

  • Excellent problem-solving and analytical skills.

  • Proficient English language skills (B2/C1 level), with comfort in regular 1:1 conversations with native English speakers.

  • Strong soft skills, including clear communication, collaboration, and a proactive attitude.

Nice to Have:

  • Experience with Java in backend systems.

  • Familiarity with SQL databases, ideally Google Cloud Spanner.

  • Exposure to frontend technologies such as Angular (not a core requirement).

Join us to be part of a dynamic community where your skills and contributions truly matter!



Category: Engineering Jobs

Tags: Angular AWS Azure Big Data Data pipelines Engineering ETL GCP Google Cloud Java Pipelines Python Spark SQL

Perks/benefits: Career development

Regions: Remote/Anywhere North America South America
