Senior Data Engineer (Portfolio Companies: Sitecore)

Colombo, WP, Sri Lanka

IFS




Company Description

About IGT1 Lanka 

IGT1 Lanka is a rapidly growing offshore technology and talent solutions company based in Port City Colombo. We are a fully owned subsidiary of IGT I Holdings Sweden AB, funded by three of the world’s leading private equity firms: EQT Group, Hg, and TA Associates. We’re also proud to be a sister company of IFS, Sri Lanka’s largest and most established technology company.

At IGT1 Lanka, we partner with global businesses to scale operations, accelerate innovation, and build world-class SaaS platforms through high-quality offshore delivery. Our people-first culture champions diversity, teamwork, and continuous learning, creating an environment where talent thrives. 

With a team of over 300 professionals and counting, we are always looking for passionate, skilled individuals who want to make a global impact while being part of something extraordinary. 

Through our offshore collaboration model, you'll be embedded within the team of one of our esteemed international clients, contributing directly to high-impact, enterprise-level initiatives. 

About Sitecore:

Sitecore delivers a composable digital experience platform that empowers the world’s smartest and largest brands to build lifelong relationships with their customers. A highly decorated industry leader, Sitecore is the leading company bringing together content, commerce, and data into one connected platform that delivers millions of digital experiences every day. Thousands of blue-chip companies including American Express, Porsche, Starbucks, L’Oréal, and Volvo Cars rely on Sitecore to provide more engaging, personalized experiences for their customers.

Job Description

About the position:

We are looking for a Senior Data Engineer to join our Data Platform group. In this position, you will work in a small, dynamic team to build data infrastructure and manage the overall data pipeline. You will be responsible for expanding and optimizing the data and data pipeline architecture, as well as optimizing data flow and the collection of data from cross-functional teams.


Responsibilities

  • Design, develop, and maintain a generic ingestion framework capable of processing various types of data (structured, semi-structured, unstructured) from customer sources.
  • Implement and optimize ETL (Extract, Transform, Load) pipelines to ensure data integrity, quality, and reliability as data flows into a centralized datastore such as Elasticsearch.
  • Ensure the ingestion framework is scalable, secure, efficient, and capable of handling large volumes of data in real-time or batch processes.
  • Continuously monitor and enhance the data ingestion process to improve performance, reduce latency, and handle new data sources and formats.
  • Develop automated testing and monitoring tools to ensure the framework operates smoothly and can quickly adapt to changes in data sources or requirements.
  • Provide documentation, support, and training to other team members and stakeholders on using the ingestion framework.
  • Implement large-scale near real-time streaming data processing pipelines.
  • Design, support, and continuously enhance the project code base, continuous integration pipeline, etc.
  • Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
  • Perform POCs, evaluate different technologies, and continuously improve the overall architecture.

Qualifications

  • Experience building and optimizing big data pipelines, architectures, and data sets.
  • Strong proficiency in Elasticsearch, its architecture, and optimal querying of data.
  • Strong analytical skills related to working with unstructured datasets.
  • Experience supporting and working with cross-functional teams in a dynamic environment.
  • Working knowledge of message queuing, stream processing, and highly scalable ‘big data’ systems.
  • One or more years of experience contributing to the architecture and design (architecture, design patterns, reliability, and scaling) of new and existing systems.
  • Candidates must have 2-4 years of experience in a Data Engineer role, with a Bachelor’s or Master’s degree in Computer Science, Information Systems, or an equivalent field, and should have knowledge of the following technologies/tools:
    • Experience working on big data processing systems such as Hadoop, Spark, Spark Streaming, or Flink Streaming.
    • Experience with SQL systems such as Snowflake or Redshift.
    • Direct, hands-on experience with two or more of these integration technologies: Java/Python, React, Golang, SQL, NoSQL (MongoDB), RESTful APIs.
    • Well versed in Agile, APIs, microservices, containerization, etc.
    • Experience with CI/CD pipelines running on GitHub, Jenkins, Docker, and EKS.
    • Knowledge of at least one distributed datastore, such as MongoDB, DynamoDB, or HBase.
    • Experience using batch scheduling frameworks such as Airflow (preferred), Luigi, or Azkaban is a plus.
    • Experience with AWS cloud services: EC2, S3, DynamoDB, Elasticsearch.

Additional Information

We believe that coming together as a community, in person, is important for innovation, connection, and fostering a sense of belonging. Our roles have the right balance of remote and in-office working to enable flexibility for managing your life, along with ensuring a real connection with your colleagues and the broader IFS community. #LI-Hybrid

