Data Engineer
Remote, RO
NTT DATA Romania
Who we are
We are seeking a highly skilled and motivated Data Warehouse & Data Modeling Engineer to design, build, and optimize scalable data solutions using modern cloud technologies. This role involves end-to-end development of data models and data warehouses, with a strong focus on Azure and AWS ecosystems. The ideal candidate will have deep expertise in data modeling, pipeline development, and CI/CD automation, and will collaborate closely with cross-functional teams to deliver high-quality, performant data solutions.
What you’ll be doing
- Design and implement data models using star schema and related data modeling methodologies, following industry best practices (see the sketch after this list).
- Develop a data warehouse (DWH) from scratch using Azure components such as Azure Data Lake, Azure Data Vault, Azure Synapse SQL DB, and Azure Synapse Analytics, alongside AWS RDS and Oracle DB.
- Build, optimize, and maintain data pipelines in Azure Synapse and Kubernetes for data integration and ETL/ELT processes.
- Implement CI/CD pipelines to automate deployment and integration workflows in Azure and Kubernetes.
- Collaborate with cross-functional teams including data analysts, data scientists, and business stakeholders to define data requirements and ensure the scalability and performance of data solutions.
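To give candidates a flavor of the modeling work above, here is a minimal Python sketch that emits star-schema DDL from a small metadata spec. All table and column names (fact_policy_transaction, dim_date, dim_customer) are illustrative assumptions, not a real project schema.

```python
# Minimal sketch: generating star-schema DDL from a metadata spec.
# All table and column names are illustrative examples, not a real schema.

FACT = {
    "name": "fact_policy_transaction",
    "measures": {"premium_amount": "NUMERIC(18,2)", "claim_count": "INT"},
    "dimensions": ["dim_date", "dim_customer"],
}

DIMS = {
    "dim_date": {"date_key": "INT", "calendar_date": "DATE", "fiscal_year": "INT"},
    "dim_customer": {"customer_key": "INT", "customer_name": "VARCHAR(200)", "segment": "VARCHAR(50)"},
}

def dim_ddl(name: str, columns: dict) -> str:
    """Render CREATE TABLE DDL for a dimension; the first column is assumed to be the surrogate key."""
    cols = ",\n  ".join(f"{c} {t}" for c, t in columns.items())
    return f"CREATE TABLE {name} (\n  {cols},\n  PRIMARY KEY ({list(columns)[0]})\n);"

def fact_ddl(fact: dict) -> str:
    """Render the fact table: one foreign key per dimension, plus the measures."""
    keys = [f"{d}_key INT NOT NULL" for d in fact["dimensions"]]
    measures = [f"{m} {t}" for m, t in fact["measures"].items()]
    cols = ",\n  ".join(keys + measures)
    fks = ",\n  ".join(
        f"FOREIGN KEY ({d}_key) REFERENCES {d} ({list(DIMS[d])[0]})"
        for d in fact["dimensions"]
    )
    return f"CREATE TABLE {fact['name']} (\n  {cols},\n  {fks}\n);"

if __name__ == "__main__":
    for name, cols in DIMS.items():  # dimensions first, so fact FKs resolve
        print(dim_ddl(name, cols))
    print(fact_ddl(FACT))
```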
What you'll bring along
- Bachelor’s degree in Informatics or a similar field of study, or equivalent working experience, is required.
- 5-7 years of experience in a similar role.
- Deep understanding of model-driven data warehousing techniques (forward/reverse engineering).
- Ability to write Python libraries that optimize code generation.
- Understanding of dbt data modeling and release management.
- Proficient in designing data models (e.g., star schema, snowflake schema) with tools like Erwin and optimizing them for analytical purposes.
- AWS expertise with a focus on RDS, containerized workloads on EKS, and Landing Zone configuration and security.
- Understanding of Oracle PL/SQL and the ability to automate code-generating frameworks (a minimal generator sketch follows this list).
- Extensive hands-on experience with Azure Data Lake, Azure Data Vault, Azure Synapse SQL DB, Azure Synapse Analytics, Microsoft Fabric, and Azure Databricks.
- Proven ability to design, develop, and manage complex data pipelines using Azure Synapse.
- Experience in setting up and maintaining CI/CD pipelines for data-related projects using tools like Azure DevOps, GitHub Actions, or similar (see the validation-step sketch after this list).
- Proficient in ANSI SQL, T-SQL, PL/SQL, Python, or other relevant programming languages for data processing.
- Strong analytical and problem-solving skills with the ability to troubleshoot data issues and optimize processes.
- Excellent communication and teamwork skills to work effectively with technical and non-technical stakeholders.
- Experience with Agile development, technical design and specification, and writing user stories and tasks.
- Basic knowledge of insurance, asset management, or services processes and their related data.
- Ability to deliver high-quality results under time constraints and across concurrent tasks.
- Excellent command of both spoken and written English.
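To illustrate the code-generation skills listed above (Python libraries for code generation, automating PL/SQL frameworks), here is a minimal sketch that renders an Oracle MERGE statement from table metadata. The render_merge helper and every table and column name are hypothetical, introduced only for illustration.

```python
# Minimal sketch: rendering an Oracle MERGE statement from metadata.
# render_merge and all table/column names are hypothetical examples.

def render_merge(target: str, source: str, keys: list[str], columns: list[str]) -> str:
    """Build an upsert: match on the key columns, update the rest, insert when absent."""
    on_clause = " AND ".join(f"t.{k} = s.{k}" for k in keys)
    non_keys = [c for c in columns if c not in keys]
    update_set = ",\n      ".join(f"t.{c} = s.{c}" for c in non_keys)
    insert_cols = ", ".join(columns)
    insert_vals = ", ".join(f"s.{c}" for c in columns)
    return (
        f"MERGE INTO {target} t\n"
        f"USING {source} s\n"
        f"   ON ({on_clause})\n"
        f" WHEN MATCHED THEN UPDATE\n"
        f"  SET {update_set}\n"
        f" WHEN NOT MATCHED THEN\n"
        f"  INSERT ({insert_cols})\n"
        f"  VALUES ({insert_vals});"
    )

if __name__ == "__main__":
    print(render_merge(
        target="dwh.dim_customer",
        source="stg.customer",
        keys=["customer_key"],
        columns=["customer_key", "customer_name", "segment"],
    ))
```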
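On the CI/CD side, a data-project pipeline in Azure DevOps or GitHub Actions typically gates deployment behind automated checks. The sketch below shows one such step: a Python script that scans generated SQL for destructive statements and fails the build if any are found. The sql/ directory and the banned-statement list are assumptions for illustration, not a prescribed setup.

```python
# Minimal sketch of a pre-deployment validation step for generated SQL.
# The sql/ directory and the banned-statement list are illustrative assumptions.
import pathlib
import re
import sys

BANNED = re.compile(r"\b(DROP\s+TABLE|TRUNCATE\s+TABLE)\b", re.IGNORECASE)

def validate(path: pathlib.Path) -> list[str]:
    """Return a list of violations found in one SQL file."""
    text = path.read_text(encoding="utf-8")
    return [f"{path}: banned statement '{m.group(0)}'" for m in BANNED.finditer(text)]

if __name__ == "__main__":
    problems = [p for f in pathlib.Path("sql").glob("**/*.sql") for p in validate(f)]
    for p in problems:
        print(p, file=sys.stderr)
    sys.exit(1 if problems else 0)  # a non-zero exit code fails the CI stage
```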