Data Engineering Analyst
Toronto, Canada
Sanofi
Sanofi pushes scientific boundaries to develop breakthrough medicines and vaccines. We chase the miracles of science to improve people’s lives.
Reference No. R2782648
Position Title: Data Engineering Analyst
Department: Data Engineering - RWD
Location: Toronto Ontario, Hybrid 60% at-home, 40% in-office / week
About the Job
Our Hubs are a crucial part of how we innovate, improving performance across every Sanofi department and providing a springboard for the amazing work we do. Build a career with us and be part of transforming our business while helping to change millions of lives.
Sanofi’s Digital Data Organization’s mission is to transform Sanofi into a data- and AI-first organization by empowering everyone with good data. Through custom-developed products built on world-class data foundations and platforms, the team builds value and a unique competitive advantage that scales across our markets, R&D and manufacturing sites. The team is based in major hubs in Paris, Lyon, Budapest, Barcelona, Cambridge, Bridgewater, Toronto, and Hyderabad. Join a dynamic, fast-paced and talented team, with world-class mentorship, using AI to chase the miracles of science. The Digital Real World Data Team ensures that a person’s “real” health experience, drawn from diverse data sources such as insurance claims, hospital records, and fitness trackers, is a key part of every data-driven decision Sanofi makes.
We are on a journey to surface valuable medical insights from the sea of information, enabling data-driven prioritization of resources to improve patient impact. We plan to accomplish this by developing a platform that leverages Artificial Intelligence to collate volumes of data from a wide range of sources and synthesize the information into actionable insights.
Reporting to the Data Engineering Manager, you will be a core member of an Agile development team. In this hybrid role, you will build and maintain the server-side web application, including database management, data integration, data pipelines, and APIs. You enjoy partnering with members of a development team to design and implement features that surface insights from the sea of data. You tend to roll up your sleeves and look for opportunities to learn and grow your skills, all while staying focused on providing value to users.
We are an innovative global healthcare company with one purpose: to chase the miracles of science to improve people’s lives. We’re also a company where you can flourish and grow your career, with countless opportunities to explore, make connections with people, and stretch the limits of what you thought was possible. Ready to get started?
Main Responsibilities:
Establish technical designs that meet Sanofi requirements and align with architectural and data standards
Own the entire back end of the application, including the design, implementation, testing, and troubleshooting of the core application logic, databases, data ingestion and transformation, data processing and pipeline orchestration, APIs, CI/CD integration, and other processes
Fine-tune and optimize queries using Snowflake platform features and database techniques
Optimize ETL/data pipelines to balance performance, functionality, and other operational requirements
Assess and resolve data pipeline issues to ensure performance and timeliness of execution
Assist with technical solution discovery to ensure technical feasibility
Assist in setting up and managing CI/CD pipelines and development of automated tests
Develop and manage microservices using Python
Conduct peer reviews for quality, consistency, and rigor of production-level solutions
Design application architecture for efficient concurrent user handling, ensuring optimal performance during high usage periods
Promote best practices and standards for code management, automated testing, and deployments
Own all areas of the product lifecycle: design, development, test, deployment, operation, and support
Create detailed documentation on Confluence to support and maintain the codebase and its functionality
About You
TECHNICAL SKILLS:
4+ years of relevant experience developing backend, integration, data pipelining, and infrastructure with relevant technologies and tools (Snowflake, AWS, Spark, Informatica/IICS or equivalent).
Bachelor’s degree in computer science, engineering, or similar quantitative field of study
Expertise in database optimization and performance improvement
Expertise in Python, PySpark, and Snowpark
Experience with data warehouses and object-relational databases (Snowflake and PostgreSQL) and with writing efficient SQL queries
Experience in cloud-based data platforms (Snowflake, AWS)
Proficiency in developing robust, reliable APIs using Python and FastAPI Framework
Experience with job scheduling and orchestration (Airflow is a plus)
Expertise in ELT and ETL, and experience working with large data sets and with performance and query optimization
Understanding of data structures and algorithms
Experience with modern testing frameworks (SonarQube; K6 is a plus)
Strong collaboration skills and a willingness to work with others to ensure seamless integration of the server side and client side
Knowledge of DevOps best practices is a plus, especially in the setup, configuration, maintenance, and troubleshooting of associated tools:
Containers and containerization technologies (Kubernetes, Argo, Red Hat OpenShift)
Infrastructure as code (Terraform)
Monitoring and Logging (CloudWatch, Grafana)
CI/CD Pipelines (JFrog Artifactory)
Scripting and automation (Python, GitHub, GitHub Actions)
Why Choose Us?
Bring the miracles of science to life alongside a supportive, future-focused team.
Discover endless opportunities to grow your talent and drive your career, whether it’s through a promotion or lateral move, at home or internationally.
Enjoy a thoughtful, well-crafted rewards package that recognizes your contribution and amplifies your impact.
Take good care of yourself and your family, with a wide range of health and wellbeing benefits including high-quality healthcare, prevention and wellness programs
Sanofi is an equal opportunity employer committed to diversity and inclusion. Our goal is to attract, develop and retain highly talented employees from diverse backgrounds, allowing us to benefit from a wide variety of experiences and perspectives. We welcome and encourage applications from all qualified applicants. Accommodations for persons with disabilities required during the recruitment process are available upon request.
#GD-SP
#LI-SP
#LI-Hybrid
#DBBCA
Pursue progress, discover extraordinary
Better is out there. Better medications, better outcomes, better science. But progress doesn’t happen without people – people from different backgrounds, in different locations, doing different roles, all united by one thing: a desire to make miracles happen. So, let’s be those people.
At Sanofi, we provide equal opportunities to all regardless of race, colour, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, ability or gender identity.
Watch our ALL IN video and check out our Diversity Equity and Inclusion actions at sanofi.com!