Senior Data Engineer

Bengaluru, Karnataka, India

Advarra

Advarra accelerates clinical research by uniting patients, sites, sponsors and CROs in a single, connected ecosystem.

Company Information

At Advarra, we are passionate about making a difference in the world of clinical research and advancing human health. With a rich history rooted in ethical review services combined with innovative technology solutions and deep industry expertise, we are at the forefront of industry change. A market leader and pioneer, Advarra breaks the silos that impede clinical research, aligning patients, sites, sponsors, and CROs in a connected ecosystem to accelerate trials.

 

Company Culture

Our employees are the heart of Advarra. They are the key to our success and the driving force behind our mission and vision. Our values (Patient-Centric, Ethical, Quality Focused, Collaborative) guide our actions and decisions. Knowing the impact of our work on trial participants and patients, we act with urgency and purpose to advance clinical research so that people can live happier, healthier lives.

 

At Advarra, we seek to foster an inclusive and collaborative environment where everyone is treated with respect and diverse perspectives are embraced. Treating one another, our clients, and clinical trial participants with empathy and care is a key tenet of our culture at Advarra; we are committed to creating a workplace where each employee is not only valued but empowered to thrive and make a meaningful impact.

Job Overview Summary

This role supports the Data Science team as a Data Engineer III, developing an enterprise Data Platform. In this senior role, you will be responsible for designing and implementing complex data pipelines, optimizing data transformations, and ensuring the scalability and performance of our data infrastructure. The ideal candidate will have a deep technical background and a proven track record of managing data engineering projects and collaborating with cross-functional teams to drive data-driven decisions. This opportunity will provide you with experience working with cloud data platforms such as Snowflake and data transformation tools like dbt. You will gain exposure to AWS infrastructure for building innovative and efficient data solutions and work with various data sources, including Oracle, MS SQL Server, and Postgres.

 

Job Duties & Responsibilities

 

  • Design, develop, and optimize complex data pipelines and transformation processes using Snowflake, dbt, and AWS services.
  • Implement and manage data integration workflows using Fivetran to ensure timely and accurate data ingestion from various sources.
  • Develop and maintain scalable data models and schemas in Snowflake, ensuring they meet performance and business requirements.
  • Monitor and fine-tune the performance of data pipelines, queries, and data models to ensure optimal efficiency and cost-effectiveness.
  • Utilize Snowflake’s features, such as Time Travel, Zero-Copy Cloning, and Data Sharing, to enhance data management and performance (see the sketch after this list).
  • Leverage AWS services, such as AWS Lambda, S3, and Glue, to build and manage serverless data processing workflows and data storage solutions.
  • Implement data security measures and ensure compliance with data privacy regulations and organizational policies.
  • Troubleshoot and resolve complex data issues, including data sync errors, performance bottlenecks, and integration challenges.
  • Provide support for data-related incidents and ensure effective resolution of production issues.
  • Collaborate with data analysts and other stakeholders to understand data needs and deliver effective solutions.
  • Document data processes, models, and workflows, ensuring clear communication and knowledge sharing across teams.
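
By way of illustration only (this sketch is not part of the posting), the Snowflake-features bullet above might look something like the following in practice. It is a minimal sketch assuming the snowflake-connector-python package; the account, credentials, warehouse, and table names are placeholders invented for the example.

    # Hedged example: all identifiers below are placeholders, not Advarra's
    # actual environment.
    import snowflake.connector

    conn = snowflake.connector.connect(
        account="YOUR_ACCOUNT",
        user="YOUR_USER",
        password="YOUR_PASSWORD",
        warehouse="ANALYTICS_WH",   # hypothetical virtual warehouse
        database="ANALYTICS",       # hypothetical database and schema
        schema="STAGING",
    )

    cur = conn.cursor()
    try:
        # Time Travel: query the table as it existed one hour ago, e.g. to
        # compare row counts before and after a pipeline run.
        cur.execute("SELECT COUNT(*) FROM patient_visits AT(OFFSET => -3600)")
        print("Row count one hour ago:", cur.fetchone()[0])

        # Zero-Copy Cloning: create an instant, metadata-only copy of the table
        # so a transformation can be tested without touching production data.
        cur.execute("CREATE OR REPLACE TABLE patient_visits_dev CLONE patient_visits")
    finally:
        cur.close()
        conn.close()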

 

Location

This role is open to candidates working in Bengaluru, India (hybrid).

 

Basic Qualifications

 

  • Bachelor’s degree in Computer Science, Data Science, Information Technology, Engineering, or a related field. Snowflake certification is a plus.
  • 5-7 years of experience in data engineering, with a strong focus on data transformation, integration, and cloud-based data solutions.
  • Experience writing complex SQL queries.
  • 5 years of experience with the Snowflake platform.
  • Experience writing data transformations with the dbt platform.
  • Experience building connectors with the Fivetran platform.
  • Experience in writing test automation scripts.
  • Understanding of Snowflake warehouses, reader accounts, SSO setup, and data masking policies.
  • Good understanding of Change Data Capture and Change Data Tracking.
  • Understanding of the challenges of ingesting large volumes of data.
  • Ability to write complex data transformation logic using advanced SQL skills such as complex joins, sub-queries, grouping, aggregation, and stored procedures (see the sketch after this list).
  • Proficiency in programming languages, especially Python and Java.
  • Knowledge of data modeling and transformation tools such as dbt and Tableau Prep.
  • Expertise in data warehousing concepts, methodologies, and best practices.
  • Must be comfortable independently evaluating a situation, exercising good judgment and discretion, and independently deciding matters of significance.
  • Excellent oral and written communication skills including the ability to speak in front of large groups.
  • Comfort working in a geographically distributed team-based environment.
  • Ability to handle stress and interact with others in a professional manner.
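
Purely as a hedged illustration of the advanced SQL bullet above (not taken from the posting), the sketch below runs a query combining a join, a sub-query, grouping, and aggregation from Python. Every table and column name is invented for the example, and it assumes an already-open snowflake-connector-python connection.

    # Hedged example: sites, enrollments, and visits are hypothetical tables
    # invented for illustration; pass in an open snowflake.connector connection.
    ENROLLMENT_REPORT = """
        SELECT
            s.site_id,
            s.site_name,
            COUNT(DISTINCT e.subject_id) AS enrolled_subjects
        FROM sites AS s
        JOIN enrollments AS e
            ON e.site_id = s.site_id
        WHERE e.subject_id IN (
            -- sub-query: keep only subjects with at least one completed visit
            SELECT subject_id FROM visits WHERE visit_status = 'COMPLETED'
        )
        GROUP BY s.site_id, s.site_name
        HAVING COUNT(DISTINCT e.subject_id) >= 10
        ORDER BY enrolled_subjects DESC
    """

    def run_enrollment_report(conn):
        """Execute the aggregation and return all result rows."""
        cur = conn.cursor()
        try:
            cur.execute(ENROLLMENT_REPORT)
            return cur.fetchall()
        finally:
            cur.close()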

 

Preferred Qualifications

 

  • Data architecture and data modeling experience in the clinical trials / life sciences domain.
  • Knowledge of data management tools and processes; data governance tools are a plus.
  • Working experience with version control platforms (e.g., GitHub), agile methodologies, and supporting tools such as Jira.

 

 

Physical and Mental Requirements

 

  • Sit or stand for extended periods of time at a stationary workstation
  • Regularly carry, raise, and lower objects of up to 10 lbs.
  • Learn and comprehend basic instructions
  • Focus and attention to tasks and responsibilities
  • Verbal communication; listening and understanding, responding, and speaking

 

Advarra is an equal opportunity employer that is committed to diversity, equity and inclusion and providing a workplace that is free from discrimination and harassment of any kind based on race, color, religion, creed, sex (including pregnancy, childbirth, and related medical conditions), sexual orientation, gender identity, national origin, age, disability, genetic information, or any other status or characteristic protected by central, state, or local law. Advarra provides equal employment opportunity to all individuals regardless of these protected characteristics. Further, Advarra takes affirmative action to ensure that applicants and employees are treated without regard to any of these protected characteristics in all terms and conditions of employment, including, but not limited to, hiring, training, promotion, discipline, compensation, benefits, and separation from employment.

 
