Data Engineer
Paris, Ile-de-France, France
As part of the ongoing transformation at Group level, technology adoption and implementation are key success factors, as is the transition of processes. As a member of the DDPO, the Data Engineer will build, deliver, and maintain products and data products (data pipelines, services, APIs…). They will work in close collaboration with data product managers to develop new features linked to these products, especially features linked to:
- Data ingestion, transformation, and exposure
- Data servicing (through APIs) or presentation (through apps / dashboards)
- Parallel computation on large volumes of data
The Data Engineer is responsible for:
- Building, delivering, maintaining, and documenting data products & features (data pipelines, data services, APIs…) following state-of-the-art patterns (medallion architecture, gitflow)
- Acting as an SME and representative of the Data Chapter
The Data Engineer needs to:
- Be problem-solving oriented with strong analytical thinking
- Be autonomous and rigorous in the way they approach technical challenges
- Advise on the architecture of end-to-end data flows
- Collaborate with various stakeholders (product owners, solution owners, data solution analysts, developers, technical leads, architects) to deliver data artefacts in a team spirit
- Commit and bring their skills to contribute to the success of the Group
Under the responsibility of the Head of Data Delivery & Engineering, your mission will be to:
- Build, deliver, maintain, and document data artefacts or features linked to the products you are assigned to, following the priorities set by the Data Product Manager you will be working with:
- Develop data pipelines leveraging ELT / ETL techniques to ingest, transform, and expose data for well-defined purposes, following state-of-the-art approaches (medallion architecture for data, gitflow for development, unit tests where applicable…); a minimal pipeline sketch follows this list
- Tackle key technical questions linked to data, such as parallelization, computation on large volumes of data, query optimization, etc.
- Develop, when relevant, tailor-made services / APIs to expose the data for various uses (BI, APIs, services, data science…)
- Enrich the SCOR ontology and document data artefacts (code documentation for data pipelines / services / APIs, contributions to data definitions, processes, etc.)
- Reuse components or assets (code, frameworks, data objects) where relevant to leverage the DDPO ecosystem as much as possible
- Act as an SME and representative of the Data Chapter:
- Contribute to the design of solutions (end-to-end data flows) in close collaboration with Architects, Data Modelers, Product Owners & Managers, and advise external stakeholders on best practices
- Contribute to the overall data community by sharing good practices, lessons learned, expertise on relevant technologies, etc.
- Perform peer reviews with other data engineers to ensure the consistency and quality of development
- Know the DDPO ecosystem well enough to seek advice from the appropriate experts (Data Modelers, ML Engineers, Architects, etc.) when relevant
- Carry out additional activities related to your day-to-day mission:
- Keep a technology watch on data platform solutions, especially on data engineering topics
- Participate in Scrum rituals (dailies, sprint planning, sprint reviews, retrospectives, etc.)
- Contribute to ICS
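To give a concrete flavor of the pipeline work described above, here is a minimal PySpark sketch of a medallion-style bronze-to-silver step. It assumes a Delta Lake-enabled Spark session (e.g. on Databricks); the paths, table names, and columns are hypothetical and purely illustrative.

```python
# Minimal medallion-style ELT step (bronze -> silver) in PySpark.
# Assumes a Delta Lake-enabled Spark session (e.g. Databricks);
# all paths and column names below are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze_to_silver_sketch").getOrCreate()

# Bronze layer: raw data exactly as ingested.
bronze = spark.read.format("delta").load("/lake/bronze/policies")

# Silver layer: deduplicated, typed, quality-checked data.
silver = (
    bronze
    .dropDuplicates(["policy_id"])                                   # drop ingestion duplicates
    .withColumn("premium", F.col("premium").cast("decimal(18,2)"))   # enforce types
    .withColumn("effective_date", F.to_date("effective_date"))
    .filter(F.col("policy_id").isNotNull())                          # basic data-quality gate
)

silver.write.format("delta").mode("overwrite").save("/lake/silver/policies")
```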
Required experience & competencies
- 5+ years of experience as a data engineer and a data-oriented mindset
- Proven experience in the development and maintenance of data pipelines
- Good development practices (gitflow, unit tests, documentation…)
- Proven experience in agile projects (Scrum and/or Kanban)
- Knowledge of the (re)insurance industry and / or financial services is a plus
- Awareness of data management and data privacy topics
Technical Skills:
- Strong command of Python and PySpark, with the ability to develop data pipelines on various platforms (Databricks, Palantir Foundry, …)
- Strong command of SQL (knowledge of ANSI-92 SQL, execution plan analysis)
- Good knowledge of parallelization and distributed programming techniques
- Good knowledge of data lake environments and concepts (Delta Lake, medallion architecture, blob storage vs. file shares…)
- Good knowledge of decision-support data modeling (Kimball, Inmon, Data Vault concepts…) and associated good practices (slowly changing dimensions, point-in-time tables, chasm / fan trap management, change data capture…); a short SCD2 sketch follows this list
- Knowledge of CI / CD pipelines (Azure DevOps, Jenkins, Artifactory, Azure Container Registry…)
- Knowledge of REST API development (Flask, FastAPI, Django…)
- Knowledge of containers (Docker Compose, Kubernetes) is a plus
- Knowledge of reporting tools is a plus (Tableau, Power BI…)
- Knowledge of streaming technologies is a plus (Kafka)
- Knowledge of TypeScript is a plus
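As an illustration of the slowly-changing-dimension practices listed above, here is a minimal sketch of a Type 2 update using the Delta Lake MERGE API from PySpark. It assumes a Delta-enabled Spark session; the table paths, the customer_id key, and the tracked address attribute are hypothetical.

```python
# Sketch of a Type 2 slowly changing dimension (SCD2) update with Delta Lake MERGE.
# Assumes a Delta-enabled Spark session; all paths, keys, and columns are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

DIM_PATH = "/lake/gold/dim_customer"          # hypothetical dimension table
updates = spark.read.format("delta").load("/lake/silver/customers")

# Keep only rows that are new or whose tracked attribute changed.
current = (spark.read.format("delta").load(DIM_PATH)
           .filter("is_current = true")
           .select("customer_id", F.col("address").alias("old_address")))
changed = (updates.join(current, "customer_id", "left")
           .filter(F.col("old_address").isNull() |
                   (F.col("old_address") != F.col("address")))
           .drop("old_address"))

# Step 1: close the current version of rows that changed.
(DeltaTable.forPath(spark, DIM_PATH).alias("t")
    .merge(changed.alias("s"),
           "t.customer_id = s.customer_id AND t.is_current = true")
    .whenMatchedUpdate(set={"is_current": "false",
                            "end_date": "current_date()"})
    .execute())

# Step 2: append the new versions as open, current rows.
(changed
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
    .withColumn("is_current", F.lit(True))
    .write.format("delta").mode("append").save(DIM_PATH))
```

Note that the run is idempotent: once the appended rows become the current versions, a re-run detects no further changes.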
Behavioral & Management Skills:
- Strong analytical thinking, solution-oriented, and proactive in making proposals
- Ability to navigate a matrixed, international environment
- Autonomy, rigorous mindset, commitment
- Curiosity and an appetite for challenges
- Team player
Required Education
- Bachelor's degree in computer science, software or computer engineering, applied math, physics, statistics, or a related field, or equivalent experience
As a leading global reinsurer, SCOR offers its clients a diversified and innovative range of reinsurance and insurance solutions and services to control and manage risk. Applying “The Art & Science of Risk,” SCOR uses its industry-recognized expertise and cutting-edge financial solutions to serve its clients and contribute to the welfare and resilience of society in around 160 countries worldwide.
Working at SCOR means engaging with some of the best minds in the industry – actuaries, data scientists, underwriters, risk modelers, engineers, and many others – as we work together to find solutions to pressing challenges facing societies.
As an international company, our common culture is defined by “The SCOR Way.” Serving both to build momentum that drives the Group forward and as a compass to guide our actions and choices, The SCOR Way is anchored by five core values, reflecting the input of employees at all levels of the Group. We care about clients, people, and societies. We perform with integrity. We act with courage. We encourage open minds. And we thrive through collaboration.
Perks/benefits: Team events