Data Engineer - Anaplan

India - Hyderabad

Amgen

Amgen is committed to unlocking the potential of biology for patients suffering from serious illnesses by discovering, developing, manufacturing and delivering innovative human therapeutics.

Career Category

Information Systems

Job Description

ABOUT AMGEN

Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

ABOUT THE ROLE

Role Description:

The role is responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:

  • Design, develop, and maintain data solutions for data generation, collection, and processing
  • Be a key team member that assists in design and development of the data pipeline
  • Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a minimal sketch follows this list)
  • Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
  • Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks
  • Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs
  • Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
  • Implement data security and privacy measures to protect sensitive data
  • Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
  • Collaborate and communicate effectively with product teams
  • Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
  • Adhere to best practices for coding, testing, and designing reusable code/components
  • Explore new tools and technologies that will help to improve ETL platform performance
  • Participate in sprint planning meetings and provide estimations on technical implementation
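
To make the day-to-day pipeline work above concrete, here is a minimal PySpark sketch of an ETL job with a basic data-quality check; the bucket paths, column names, and validity rule are hypothetical placeholders rather than a description of Amgen's actual systems.

```python
# Minimal PySpark ETL sketch: extract from a raw zone, apply a simple
# data-quality rule, and load to a curated zone. All paths, column names,
# and thresholds are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV files landed by an upstream process (hypothetical path)
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: type the columns, drop duplicates, and split out rows failing a basic rule
orders = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("quantity", F.col("quantity").cast("int"))
       .dropDuplicates(["order_id"])
)
valid = orders.filter(F.col("quantity") > 0)
rejected = orders.subtract(valid)

# Load: write curated data as partitioned Parquet; keep rejects for review
valid.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3://example-bucket/curated/orders/"
)
rejected.write.mode("overwrite").parquet("s3://example-bucket/quarantine/orders/")
```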

Basic Qualifications and Experience:

  • Master’s degree and 1 to 3 years of Computer Science, IT or related field experience OR
  • Bachelor’s degree and 3 to 5 years of Computer Science, IT or related field experience OR
  • Diploma and 7 to 9 years of Computer Science, IT or related field experience

Functional Skills:

Must-Have Skills

  • Proficiency in Python, PySpark, and Scala for data processing and ETL (Extract, Transform, Load) workflows, with hands-on experience using Databricks for building ETL pipelines and handling big data processing
  • Experience with data warehousing platforms such as Amazon Redshift or Snowflake
  • Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL)
  • Familiarity with big data frameworks like Apache Hadoop, Spark, and Kafka for handling large datasets
  • Experience with software engineering best practices, including but not limited to version control (GitLab, Subversion, etc.), CI/CD (Jenkins, GitLab, etc.), automated unit testing, and DevOps (an example test sketch follows this list)
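
As an illustration of the automated unit testing mentioned above, the sketch below exercises a small, hypothetical PySpark transformation with pytest against a local SparkSession; the function name and schema are illustrative only, not a prescribed design.

```python
# Illustrative pytest-style unit test for a small PySpark transformation.
# The transformation and schema are hypothetical examples.
import pytest
from pyspark.sql import SparkSession, functions as F


def add_total_price(df):
    """Hypothetical transformation under test: total_price = quantity * unit_price."""
    return df.withColumn("total_price", F.col("quantity") * F.col("unit_price"))


@pytest.fixture(scope="module")
def spark():
    # Local single-threaded session keeps the test fast and self-contained
    session = SparkSession.builder.master("local[1]").appName("unit-tests").getOrCreate()
    yield session
    session.stop()


def test_add_total_price(spark):
    df = spark.createDataFrame(
        [(1, 2, 5.0), (2, 3, 1.5)],
        ["order_id", "quantity", "unit_price"],
    )
    result = add_total_price(df).select("order_id", "total_price").collect()
    assert {(r.order_id, r.total_price) for r in result} == {(1, 10.0), (2, 4.5)}
```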

Good-to-Have Skills:

  • Experience with cloud platforms such as AWS, particularly its data services (e.g., EKS, EC2, S3, EMR, RDS, Redshift/Spectrum, Lambda, Glue, Athena); see the sketch after this list
  • Experience with Anaplan platform, including building, managing, and optimizing models and workflows including scalable data integrations
  • Understanding of machine learning pipelines and frameworks for ML/AI models
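
For the AWS data services listed above, the following boto3 sketch submits an Athena query and polls until it reaches a terminal state; the region, database, query, and S3 output location are placeholders for illustration only.

```python
# Minimal boto3 sketch: run an Athena query and wait for it to finish.
# Region, database, query, and output location are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

response = athena.start_query_execution(
    QueryString="SELECT order_date, COUNT(*) AS orders FROM orders GROUP BY order_date",
    QueryExecutionContext={"Database": "example_db"},
    ResultConfiguration={"OutputLocation": "s3://example-bucket/athena-results/"},
)
query_id = response["QueryExecutionId"]

# Poll until the query succeeds, fails, or is cancelled
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(f"Fetched {len(rows)} rows (including header)")
```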

Professional Certifications:

  • AWS Certified Data Engineer (preferred)
  • Databricks Certified (preferred)

Soft Skills:

  • Excellent critical-thinking and problem-solving skills
  • Strong communication and collaboration skills
  • Demonstrated ability to work effectively in a team setting
  • Demonstrated presentation skills

EQUAL OPPORTUNITY STATEMENT

Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status.

We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
