GCP Data Engineer

United States

Emids

Committed to bettering healthcare and empowering wellness, we focus on what matters, using technology and insights to create transformative outcomes for patients, providers, and partners.



Emids is a leading provider of digital transformation solutions to the healthcare industry, serving payers, providers, life sciences, and technology firms. Headquartered in Nashville, Emids helps bridge critical gaps in accessible, affordable, high-quality healthcare through digital transformation services, custom application development, data engineering, business intelligence solutions, and specialized consulting across the healthcare ecosystem. With nearly 2,500 professionals globally, Emids leverages deep domain expertise in healthcare-specific platforms, regulations, and standards to deliver tailored, cutting-edge solutions and services to its clients.

Job Title: GCP Data Engineer

Location: Dallas, TX (on-site; local candidates preferred)

Job Description:

Overview:

We are seeking a GCP Data Engineer to design, build, and maintain large-scale data infrastructure and processing systems. The role involves creating scalable solutions to support data-driven applications, analytics, and business intelligence. A key responsibility is migrating historical data from Teradata to the new GCP system and developing data pipelines to streamline data loading into BigQuery.
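To make the Teradata-to-BigQuery migration concrete, below is a minimal, illustrative sketch of one small piece of such an effort: translating Teradata column types into BigQuery equivalents when recreating table schemas. The mapping table, function name, and sample columns are assumptions for illustration, not Emids' actual tooling.

```python
# Illustrative sketch: map Teradata column types to BigQuery equivalents,
# one small step when recreating legacy schemas during a migration.
# The mapping below follows common conventions; names are hypothetical.

TERADATA_TO_BIGQUERY = {
    "BYTEINT": "INT64",
    "SMALLINT": "INT64",
    "INTEGER": "INT64",
    "BIGINT": "INT64",
    "DECIMAL": "NUMERIC",
    "FLOAT": "FLOAT64",
    "CHAR": "STRING",
    "VARCHAR": "STRING",
    "DATE": "DATE",
    "TIMESTAMP": "TIMESTAMP",
}

def map_column(name: str, teradata_type: str) -> dict:
    """Translate one Teradata column definition to a BigQuery schema field."""
    base = teradata_type.split("(")[0].strip().upper()  # drop e.g. (12,2)
    bq_type = TERADATA_TO_BIGQUERY.get(base, "STRING")  # fall back to STRING
    return {"name": name, "type": bq_type}

# Example: a small Teradata DDL fragment translated field by field.
schema = [map_column(n, t) for n, t in [
    ("member_id", "BIGINT"),
    ("claim_amt", "DECIMAL(12,2)"),
    ("service_dt", "DATE"),
]]
```

In practice a real migration would also handle precision/scale, NOT NULL constraints, and partitioning, but the per-type translation above is the core of schema conversion.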


Key Responsibilities:

  • Integrate data from diverse sources, including databases, APIs, and streaming platforms.
  • Optimize data processing and query performance through fine-tuning data pipelines, database configurations, and partitioning strategies.
  • Implement and monitor data quality checks and validations to ensure reliable data for analytics and applications.
  • Apply security measures to safeguard sensitive data, coordinating with security teams to ensure encryption, access controls, and regulatory compliance.
  • Collaborate with cross-functional teams, including data scientists, analysts, software engineers, and business stakeholders.
  • Design and develop data infrastructure components such as data warehouses, data lakes, and data pipelines.
  • Establish and maintain auditing, monitoring, and alerting mechanisms to ensure data governance and system performance.
  • Explore and implement new frameworks, platforms, or cloud services to enhance data processing capabilities.
  • Utilize DevSecOps practices to incorporate security throughout the development lifecycle.
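The data quality responsibility above can be sketched as a simple row-level validator. This is an illustrative example only; the rules and field names (`member_id`, `claim_amt`, `service_dt`, `load_dt`) are hypothetical placeholders, not an actual Emids schema.

```python
# Illustrative sketch of a row-level data quality check of the kind
# described above; rules and field names are hypothetical.

def validate_row(row: dict) -> list:
    """Return a list of data quality failures for one record (empty = clean)."""
    failures = []
    if not row.get("member_id"):
        failures.append("member_id missing")           # completeness check
    amt = row.get("claim_amt")
    if amt is not None and amt < 0:
        failures.append("claim_amt negative")          # validity check
    if row.get("service_dt", "") > row.get("load_dt", "9999-12-31"):
        failures.append("service_dt after load date")  # consistency check
    return failures

rows = [
    {"member_id": "M1", "claim_amt": 120.50,
     "service_dt": "2024-01-03", "load_dt": "2024-01-05"},
    {"member_id": "", "claim_amt": -5.00,
     "service_dt": "2024-01-04", "load_dt": "2024-01-05"},
]
# Collect failing rows by index for monitoring/alerting.
bad = {i: f for i, r in enumerate(rows) if (f := validate_row(r))}
```

In a production pipeline these checks would typically run inside the pipeline itself (e.g. as a Dataflow or Composer task) with failures routed to monitoring and alerting rather than a plain dict.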

Position Summary:

  • Acquire a thorough understanding of enterprise data systems and relevant processes for project delivery.
  • Contribute to project estimation and provide insights to technical leads.
  • Participate in Agile scrum activities, project status meetings, and user story grooming/design discussions.
  • Analyze complex data structures from various sources and design large-scale data engineering pipelines.
  • Develop robust ETL pipelines, design database systems, and build data processing tools.
  • Perform data engineering tasks including ETL development, testing, and deployment.
  • Collaborate with developers on ETL job/pipeline development and integrate components for automation.
  • Document data engineering processes, workflows, and systems for reference and knowledge sharing.
  • Ensure data accuracy, completeness, and consistency through quality checks and validation processes.
  • Work effectively with team members to deliver business solutions.

Preferred Qualifications:

  • Experience with GCP services such as BigQuery, Cloud SQL, Cloud Composer/Airflow, Cloud Storage, and Dataflow/Data Fusion, together with Python.
  • Practical experience with Teradata utilities (BTEQ, TPT, FastLoad) and SQL queries.
  • GCP Data Engineer certification is strongly preferred.
  • Proficiency with multiple tools and programming languages for data analysis and manipulation.
  • Strong problem-solving and critical thinking abilities.
  • Effective communication and collaboration skills within and across teams.
  • Knowledge of Flask, JavaScript, HTML, CSS, and Django.
  • Familiarity with BI tools such as MicroStrategy and Tableau.
  • Understanding of software development methodologies including waterfall and Agile.
  • Experience in the healthcare or PBM domain is preferred.

Required Qualifications:

  • 7+ years of experience in building and executing data engineering pipelines.
  • 6+ years of experience with Python.
  • 7+ years of experience with SQL.
  • 7+ years of hands-on experience with Bash shell scripting and UNIX utilities and commands.
  • 5+ years of experience with GCP, including BigQuery and Cloud SQL.
  • 5+ years of experience with various databases such as Teradata, DB2, Oracle, and SQL Server.
  • Experience in healthcare and PBM systems is preferred.

Here at Emids, we're not scared of differences; it's how we break new ground. As we scale and elevate the experience of our clients in the healthcare and life sciences space, and ultimately have an impact on every patient from every walk of life, the team we build must reflect the diversity we serve. Together we've built, and will continue to grow, a diverse and inclusive culture where everyone has a seat at the table and the space to be their most authentic self. Emids is an Equal Opportunity Employer, and we support, celebrate, and cherish all the things that make our teammates who they are.
