Data Engineer
Chippenham, Wiltshire, United Kingdom
Wincanton
Overview
As a Data Engineer at Wincanton, you will play a crucial role in designing, building, and maintaining robust data pipelines and infrastructure within our Microsoft Azure environment. Your expertise will ensure the availability, reliability, and scalability of our data platform, empowering the organisation with timely, accurate information so we can realise the full value of our data.
You will be responsible for designing, developing, and maintaining our data infrastructure, and for ensuring the smooth flow and transformation of data across the organisation. You will collaborate with the Data Science and Analytics teams to implement data models, create data integration pipelines, and optimise data workflows. You should have a strong background in data engineering, with a solid understanding of data management principles and techniques.
You will be involved in the operational management of the data landscape, providing support and advice to help design and develop data solutions for data modelling and warehousing, data integration, and analytics. You will work with data providers and various stakeholders to define requirements and create interfaces, and help integrate new data sources. You will also troubleshoot data feeds, uphold best practice, governance, and security in all data-led projects, and contribute to building ETL pipelines and data warehouse/data lake solutions.
You will understand and help address the problems of various big data platforms and technologies, and support research, analysis, and the implementation of technical approaches for solving challenging and complex development and integration problems. You will assist in developing logical data models and processes to transform, clean, and normalise raw data into high-quality datasets in line with analytical requirements. This will involve working closely with our Business Intelligence (BI) team to deliver data in line with their requirements.
How will you contribute?
- Support our Strategic Data Platform (SDP), ensuring it is robust, reliable and cost-effective in providing high-quality data.
- Design and maintain data integration processes, including the development and execution of scalable, efficient data pipelines for managing and transforming large datasets.
- Optimise data storage and retrieval processes to meet performance and scalability requirements.
- Ensure data quality and consistency by implementing data validation and cleansing techniques.
- Collaborate with stakeholders to define requirements and integrate new data sources.
- Monitor and troubleshoot data pipeline issues, identify and resolve bottlenecks, and implement performance optimisations.
- Collaborate with the Business Intelligence (BI) team to define and implement data models for analysis and reporting.
- Uphold best practices, governance, and security in data projects.
- Stay up to date with the latest trends and technologies in data engineering, and make recommendations for improvement.
What will you bring?
- Bachelor’s degree in Computer Science, Engineering, Mathematics, or a related field (or equivalent experience).
- Strong programming skills in Python, Java, or Scala.
- Proficiency in database management systems and excellent SQL skills at both a functional and non-functional level.
- Proven experience in designing and implementing data pipelines and ETL processes.
- Experience in data warehousing, semantic layer definitions, and scaled data consumption patterns.
- Knowledge of data integration and data modelling concepts.
- Familiarity with data integration, workflow applications, cloud data platforms, and storage technologies, ideally within Azure.
- Proficiency in Azure data services such as Azure Synapse Analytics, Azure SQL, Azure Data Factory, Azure Data Lake, Databricks, and Cosmos DB.
- Experience with Microsoft Fabric architecture: demonstrated ability to design, implement, and manage solutions using Microsoft Fabric, ensuring seamless integration and optimal performance within our data infrastructure.
- Experience in Agile development and an understanding of branch-based source control and DevOps concepts.
- Experience in gathering and analysing system requirements, and comfort leading technical requirements for data engineering projects.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration skills.
- Attention to detail and a commitment to delivering high-quality work.
- Azure Data Engineer Associate (DP-203) certification, or equivalent practical experience, preferred.
- Aptitude to learn new technologies and analytical methodologies relating to business intelligence and data analysis.
- Experience with data visualisation tools such as Power BI is a plus.
- Familiarity with machine learning and statistical analysis techniques is a plus.
What do we offer?
Our Culture & Benefits
In addition to generous remuneration, we really value our people and offer a friendly, safety-first working environment, along with other benefits including a company pension scheme, generous holidays, a cycle-to-work scheme, an online discount platform, onsite parking and much more.
Our people are at the core of our business and what makes Wincanton great. That’s why we also provide significant opportunities for career development and progression, as well as training enrichment and multi-skilling, in a dynamic working environment.
Be a part of our values: We’re Thoughtful, We’re Aiming High and We’re Prepared.
Our Commitment
We are committed to providing equality of opportunity for all employees. We strive for an environment where all colleagues feel included, supported and valued, whilst feeling they can be their whole selves within our workplaces. We are proud that our colleagues represent us and our successes and we welcome your application.
Find out more: Wincanton champions a diverse workforce