Data Engineer
Bengaluru, KA, India
Egis Group
Egis is an end-to-end global engineering and operating firm. We’re working side by side with clients to create a sustainable future for communities everywhere.

Company Description
Egis is an international player active in architecture, consulting, construction engineering and mobility services. We create and operate intelligent infrastructures and buildings that respond to the climate emergency and contribute to more balanced, sustainable and resilient territorial development. Operating in 100 countries, Egis puts the expertise of its 19,500 employees at the service of its clients and develops cutting-edge innovations accessible to all projects. Through its wide range of activities, Egis is a key player in the collective organisation of society and the living environment of citizens all over the world.
With 3,500 employees across 8 countries in the Middle East, Egis has delivered over 700 complex development projects, stimulating economic growth and enhancing quality of life. Ranked among the top ten firms in the Middle East by Engineering News Record (ENR), Egis is committed to sustainable development. The Group’s operations in the Middle East are built on strategic acquisitions and a deep understanding of local market conditions. Egis’ long history of providing comprehensive engineering, consulting, and project management services makes it a trusted partner for regional governments, investors, and developers.
Job Description
Job Summary: The Data Engineer will be responsible for leveraging the data platform to create data products for the business. This role involves the development of data products and data pipelines, data transformation, data cleansing, data normalization, and deployment and support for various functions within EGIS.
Technologies: Azure Data Factory, Databricks, SQL, Azure DevOps, GitLab, Power BI
Key Responsibilities:
· Design and implement Extract, Transform, Load (ETL) processes to move data from various sources (on-premises, cloud, and third-party APIs) to Azure data platforms.
· Integrate data from diverse sources into Azure-based systems such as Azure Data Lake and Azure SQL Database.
· Use orchestration tools such as Azure Data Factory or Databricks to automate and schedule data pipelines and job workflows.
· Design efficient data models (e.g., star or snowflake schema) for use in analytical applications and reporting systems.
· Optimize SQL queries and scripts for performance, ensuring low-latency and efficient data processing.
· Continuously monitor the health of pipelines, jobs, and infrastructure, ensuring they are running efficiently and securely.
· Assist in building dashboards and reporting solutions using tools like Power BI, ensuring data is made available in a user-friendly format.
· Address and resolve data issues, including failures in ETL pipelines, system performance problems, and data inconsistencies.
· Ensure that all data engineering processes comply with industry security standards and best practices, including encryption, access controls, and data masking.
· Keep logs of pipeline performance, failures, and system updates to provide visibility for monitoring and troubleshooting.
Qualifications
· Bachelor's degree (B.E.) in Computer Science, Data Management, or a related field.
· Proficiency in Azure, Databricks, Azure Data Factory, SQL, Azure DevOps, GitLab, and data visualization tools like Power BI.
· Strong understanding of data management and automation.
· Excellent analytical, troubleshooting and problem-solving skills.
· Years of Experience: 6–7 years in a similar role.