Senior Data Engineer (Portfolio Companies)
Colombo, Sri Lanka
IFS
Company Description
IFS is a billion-dollar revenue company with 6000+ employees on all continents. Our leading AI technology is the backbone of our award-winning enterprise software solutions, enabling our customers to be their best when it really matters–at the Moment of Service™. Our commitment to internal AI adoption has allowed us to stay at the forefront of technological advancements, ensuring our colleagues can unlock their creativity and productivity, and our solutions are always cutting-edge.
At IFS, we’re flexible, we’re innovative, and we’re focused not only on how we can engage with our customers but on how we can make a real change and have a worldwide impact. We help solve some of society’s greatest challenges, fostering a better future through our agility, collaboration, and trust.
We celebrate diversity and understand our responsibility to reflect the diverse world we work in. We are committed to promoting an inclusive workforce that fully represents the many different cultures, backgrounds, and viewpoints of our customers, our partners, and our communities. As a truly international company serving people from around the globe, we recognize that our success depends on the respect we have for those different points of view.
By joining our team, you will have the opportunity to be part of a global, diverse environment; you will be joining a winning team with a commitment to sustainability; and a company where we get things done so that you can make a positive impact on the world.
We’re looking for innovative and original thinkers to work in an environment where you can #MakeYourMoment so that we can help others make theirs. With the power of our AI-driven solutions, we empower our team to change the status quo and make a real difference.
If you want to change the status quo, we’ll help you make your moment. Join Team Purple. Join IFS.
About Sitecore:
Sitecore delivers a composable digital experience platform that empowers the world’s smartest and largest brands to build lifelong relationships with their customers. A highly decorated industry leader, Sitecore is the leading company bringing together content, commerce, and data into one connected platform that delivers millions of digital experiences every day. Thousands of blue-chip companies including American Express, Porsche, Starbucks, L’Oréal, and Volvo Cars rely on Sitecore to provide more engaging, personalized experiences for their customers.
What is Search?
Search is an AI-powered digital search platform that enables brands to process large volumes of customer and behavioral data and respond in real time with the most relevant content or products. It transforms the site search experience by suggesting and predicting relevant content and products as users type their search terms. This SaaS-native product runs on AWS-based cloud infrastructure and consists of highly scalable data pipelines, orchestration, microservices, analytics platforms, and rich user interfaces.
Job Description
Summary
We are looking for a Senior Data Engineer to join our Data Platform group. In this position, you will work in a small, dynamic team to build data infrastructure and manage the overall data pipeline. You will be responsible for expanding and optimizing data and data pipeline architecture, as well as optimizing dataflow and collection of data from cross functional teams.
Responsibilities
- Design, develop, and maintain a generic ingestion framework capable of processing various types of data (structured, semi-structured, unstructured) from customer sources.
- Implement and optimize ETL (Extract, Transform, Load) pipelines to ensure data integrity, quality, and reliability as data flows into a centralized datastore such as Elasticsearch.
- Ensure the ingestion framework is scalable, secure, efficient, and capable of handling large volumes of data in real-time or batch processes.
- Continuously monitor and enhance the data ingestion process to improve performance, reduce latency, and handle new data sources and formats.
- Develop automated testing and monitoring tools to ensure the framework operates smoothly and can quickly adapt to changes in data sources or requirements.
- Provide documentation, support, and training to other team members and stakeholders on using the ingestion framework.
- Implement large-scale near real-time streaming data processing pipelines.
- Design, support and continuously enhance the project code base, continuous integration pipeline, etc.
- Build analytics tools that utilize the data pipeline to provide actionable insights into key business performance metrics.
- Perform proofs of concept (POCs), evaluate different technologies, and continuously improve the overall architecture.
Qualifications
- Experience building and optimizing big data pipelines, architectures, and data sets.
- Strong proficiency in Elasticsearch, its architecture and optimal querying of data.
- Strong analytical skills related to working with unstructured datasets.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- Working knowledge of message queuing, stream processing, and highly scalable "big data" systems.
- One or more years of experience contributing to the architecture and design (architecture, design patterns, reliability, and scaling) of new and existing systems.
- Candidates must have 4 to 6 years of experience in a Data Engineer role, with a Bachelor's or Master's degree (preferred) in Computer Science, Information Systems, or an equivalent field. Candidates should have knowledge of the following technologies/tools:
- Experience working on Big Data processing systems like Hadoop, Spark, Spark Streaming, or Flink Streaming.
- Experience with SQL systems like Snowflake or Redshift.
- Direct, hands-on experience in two or more of these integration technologies: Java/Python, React, Golang, SQL, NoSQL (MongoDB), RESTful APIs.
- Well-versed in Agile, APIs, microservices, containerization, etc.
- Experience with CI/CD pipeline running on GitHub, Jenkins, Docker, EKS.
- Knowledge of at least one distributed datastore such as MongoDB, DynamoDB, or HBase.
- Experience using batch scheduling frameworks such as Airflow (preferred), Luigi, or Azkaban is a plus.
- Experience with AWS cloud services: EC2, S3, DynamoDB, Elasticsearch.
Additional Information
We believe that coming together as a community, in person, is important for innovation, connection and fostering a sense of belonging. Our roles have the right balance of remote and in-office working to enable flexibility for managing your life along with ensuring a real connection with your colleagues and the broader IFS community. #Li-Hybrid