Data Engineer (ETL / Databricks)
Columbus, OH, USA - 910 John Street (CMM Main Campus) (C317)
Full Time | Mid-level / Intermediate | USD 80K - 134K
CoverMyMeds
CoverMyMeds removes medication access and affordability barriers with in-workflow healthcare technology solutions tailored for pharma, providers, health systems, and pharmacies, as well as the payers and PBMs that interact with them.

McKesson is an impact-driven, Fortune 10 company that touches virtually every aspect of healthcare. We are known for delivering insights, products, and services that make quality care more accessible and affordable. Here, we focus on the health, happiness, and well-being of you and those we serve – we care.
What you do at McKesson matters. We foster a culture where you can grow, make an impact, and are empowered to bring new ideas. Together, we thrive as we shape the future of health for patients, our communities, and our people. If you want to be part of tomorrow’s health today, we want to hear from you.
Rx Savings Solutions, part of McKesson's CoverMyMeds organization, offers an innovative, patented software system that educates and empowers consumers to make the best healthcare choices at the lowest cost. Founded and operated by a team of pharmacists and software engineers, we support a collaborative, cost-saving solution for purchasing prescription drugs.
We currently have an opportunity for a Data Engineer (ETL / Databricks) to join our growing Business Operations data engineering team! This is a junior-level position on our team (we are ideally looking for 2-4 years of Data Engineering / ETL experience). The role assists in planning, designing, troubleshooting, and documenting technical requirements for data flows between disparate operational systems and our data warehouse. Our ideal candidate will have Databricks experience, ETL tool experience (e.g., Talend, Informatica, SSIS, or DataStage), experience with SQL, and experience with general-purpose programming languages such as Python or Java for data engineering/ingestion work. This individual will assist in end-to-end development of ETL processes, perform data analytics, review business requirement documents, and develop object and data models.
*Our ideal candidate must reside in the Columbus, OH area to work a hybrid schedule. The work will primarily be remote from home, with approximately 3 days in office per month.
*We are not able to offer sponsorship for employment visas at this time. This includes individuals currently on F1 OPT, STEM OPT, or any other visa status that would require future sponsorship. Candidates must be authorized to work in the United States on a permanent basis without the need for current or future sponsorship.
Responsibilities:
Build and maintain scalable data pipelines to ingest and automate customer-provided files using ETL tools (a minimal sketch of this kind of pipeline follows this list)
Interface with cross-functional technology teams to extract, transform, and load data from diverse sources
Explore and implement the latest AWS technologies to enhance data capabilities and operational efficiency
Investigate and resolve data-related issues, providing support and troubleshooting expertise
Collaborate across teams to understand business needs and propose innovative data solutions
Participate in code reviews and contribute to continuous improvement of data engineering practices
Ensure data quality, integrity, and security across all stages of the pipeline
Document data flows, technical specifications, and operational procedures
Support deployment and integration of data solutions into production environments
Stay current with industry trends and emerging technologies in data engineering and cloud computing
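To make the day-to-day concrete, here is a minimal sketch of the kind of pipeline work described above: landing a customer-provided CSV from S3 into a Delta table on Databricks. This is an illustrative sketch only, not CoverMyMeds code; the bucket, paths, and table names are hypothetical, and on Databricks the `spark` session is already provided.

    # Minimal sketch of a customer-file ingestion job (PySpark on Databricks).
    # All bucket, path, and table names are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()  # already available in a Databricks notebook

    # Read a customer-provided CSV landed in S3 (path is illustrative).
    raw = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("s3://example-customer-drop/acme/eligibility/2024-06-01/")
    )

    # Standardize column names and stamp each row with its load time for auditability.
    cleaned = (
        raw.toDF(*[c.strip().lower().replace(" ", "_") for c in raw.columns])
           .withColumn("_ingested_at", F.current_timestamp())
    )

    # Append into a Delta table that downstream warehouse jobs consume.
    cleaned.write.format("delta").mode("append").saveAsTable("bronze.acme_eligibility")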
Requirements:
Bachelor's degree in Computer Science or a related technical degree, or equivalent experience, and 2+ years of experience relevant to the above responsibilities
2+ years of experience with Databricks and building data pipelines to ingest and automate files provided by customers
2+ years of experience writing SQL queries, including joins, aggregations, and window functions (an illustrative example follows the requirements list)
2+ years of experience with Databricks or similar cloud-based data platforms
Experience with Python
Knowledge of structured and unstructured data
Understanding of BI concepts and familiarity with relational and multi-dimensional modeling concepts
Understanding of RDBMS best practices and performance tuning techniques
Experience with AWS services such as S3, CloudWatch, and EC2, and a passion for working with a cloud data warehouse
Experience with version control systems like Git
Ability to work independently and collaboratively in a fast-paced, Agile environment
Strong problem-solving skills and attention to detail
Excellent communication and documentation skills
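As a concrete illustration of the SQL side of these requirements, one routine warehouse task is keeping only the latest version of each record when a customer re-sends a file. The sketch below uses a window function via `spark.sql`; it is illustrative only, with hypothetical table and column names, and assumes the ingestion table from the earlier sketch.

    # Illustrative dedup step: keep the most recent copy of each record.
    # Table and column names are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # already available in a Databricks notebook

    latest = spark.sql("""
        SELECT * FROM (
            SELECT *,
                   ROW_NUMBER() OVER (
                       PARTITION BY member_id, record_id
                       ORDER BY _ingested_at DESC
                   ) AS rn
            FROM bronze.acme_eligibility
        ) t
        WHERE t.rn = 1
    """).drop("rn")

    # Overwrite the curated table used for analytics and reporting.
    latest.write.format("delta").mode("overwrite").saveAsTable("silver.acme_eligibility")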
It would be nice if you had...
Experience with Agile and Scrum methodologies
Knowledge of Java or JavaScript
We are proud to offer a competitive compensation package at McKesson as part of our Total Rewards. Compensation is determined by several factors, including performance, experience and skills, equity, regular job market evaluations, and geographical markets. The pay range shown below is aligned with McKesson's pay philosophy, and pay will always be compliant with any applicable regulations. In addition to base pay, other compensation, such as an annual bonus or long-term incentive opportunities, may be offered. For more information regarding benefits at McKesson, please see our benefits page.
Our Base Pay Range for this position
$80,600 - $134,400

McKesson is an Equal Opportunity Employer
McKesson provides equal employment opportunities to applicants and employees and is committed to a diverse and inclusive environment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability, age or genetic information. For additional information on McKesson’s full Equal Employment Opportunity policies, visit our Equal Employment Opportunity page.
Join us at McKesson!