SME Data Engineer
Poznan, PL
Applications have closed
We are looking for additional Data Engineers to join our team working on high-profile projects in the Central Government and Transport sectors.
Candidates must be able to demonstrate skills and experience in the following:
Understanding of tools / products / languages which can be used to collect, integrate, store, visualise and govern data and metadata, including:
- Databricks Notebooks
- Azure DevOps and Git source control mechanisms
- Azure Data Factory (including creation of data pipelines)
- Scala and Spark
- Design, development, testing and support of ETL application components for data collection and data integration, making data available to client stakeholders and technical interfaces (see the brief sketch after this list)
- Ability to model data requirements, data sources and data flows
- Understanding of the application of processes, such as data cleansing, that add value to data
- Definition and creation of metadata
- Understanding of the lifecycle of corporate data assets
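For illustration only, a minimal sketch of the kind of ETL component described above, assuming a Spark/Databricks environment written in Scala; the object name, paths and column names (e.g. journey_id) are hypothetical, not taken from the project:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object JourneyIngest {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder()
          .appName("journey-ingest")
          .getOrCreate()

        // Collect: read raw source data from a hypothetical landing path.
        val raw = spark.read
          .option("header", "true")
          .csv("/mnt/landing/journeys/")

        // Cleanse: trim identifiers, drop rows missing a key, deduplicate.
        val cleansed = raw
          .withColumn("journey_id", trim(col("journey_id")))
          .filter(col("journey_id").isNotNull)
          .dropDuplicates("journey_id")

        // Integrate/store: write a curated dataset for downstream consumers.
        cleansed.write
          .mode("overwrite")
          .parquet("/mnt/curated/journeys/")

        spark.stop()
      }
    }

In practice a component like this would typically be developed in Databricks Notebooks, version-controlled with Git via Azure DevOps, and orchestrated by an Azure Data Factory pipeline.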
The following are also desirable:
- Design of source-to-target data mappings using application components
- Understanding of data analysis and data profiling activities to identify data quality issues (see the sketch after this list)
- Data model design for ETL application components
- Proficiency in ETL tools such as Informatica, Ab Initio, Oracle ODI and Big Data / Open Source applications
- Design and development of BI reporting solutions
- Identification and management of reference data
- Knowledge of MSSQL/TSQL
- Knowledge of Python
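For the data profiling point above, a brief sketch in Scala with Spark, again using a hypothetical input path; it reports simple per-column null and distinct counts, which often surface data quality issues:

    import org.apache.spark.sql.SparkSession
    import org.apache.spark.sql.functions._

    object ProfileTable {
      def main(args: Array[String]): Unit = {
        val spark = SparkSession.builder().appName("profile-table").getOrCreate()

        val df = spark.read.parquet("/mnt/curated/journeys/")
        val total = df.count()

        // For each column, report null and distinct counts as basic
        // data quality signals.
        df.columns.foreach { c =>
          val nulls    = df.filter(col(c).isNull).count()
          val distinct = df.select(c).distinct().count()
          println(s"$c: $nulls nulls, $distinct distinct values (of $total rows)")
        }

        spark.stop()
      }
    }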