IT Tech Lead
WORK AT HOME, United States
OhioHealth
OhioHealth is a family of not-for-profit hospitals and healthcare facilities that has been serving central Ohio since 1891. Discover the difference WE can make. We are more than a health system. We are a belief system. We believe wellness and sickness are both part of a lifelong partnership, and that everyone could use an expert guide. We work hard, care deeply and reach further to help people uncover their own power to be healthy. We inspire hope. We learn, grow, and achieve more – in our careers and in our communities.
Job Description Summary:
Job Summary: We are looking for an experienced Lead Data Engineer with a background in Informatica ETL and cloud technologies. In this senior role, the tech lead will be responsible for overseeing the architecture, development, and maintenance of our data platform, ensuring data quality and efficiency.
Minimum Qualifications:
Bachelor's Degree in Computer Science, Information Science, or a related field.
- Minimum of 12 to 15 years of experience in data engineering, with a focus on ETL and cloud technologies (candidates with fewer than 10 years of experience may be considered)
- Proficiency in Informatica ETL tools, SQL, and cloud platforms (preferably Azure, but others acceptable)
- Experience with Informatica IDMC: architect, design, and develop ETL processes on IDMC and the Informatica suite of tools. Familiarity with MDM architecture and data flow
- Experience with potential migrations from existing data platforms to Databricks or Microsoft Fabric
- Must have working experience with Azure-based data pipelining, scheduling, and monitoring, and with PySpark, including the ability to debug troublesome pipelines. Must have hands-on expertise working with data pipelines
- Strong working experience with big data technologies (Spark, Databricks) for data integration and processing (ingestion, transformation, curation, etc.), preferably on the Azure cloud, and a clear understanding of how these resources work and integrate with cloud and on-premises environments.
- High level of proficiency with database and data warehouse development, including replication, staging, ETL, stored procedures, partitioning, change data capture, triggers, scheduling tools, cubes, and data marts.
- Experience working with backend languages such as Python.
- Strong computer literacy and proficiency in data manipulation using analytics tools/platforms such as Databricks and Microsoft Fabric with the Spark engine
- Expertise in at least one technology stack for designing, developing, testing, and/or delivering complex software (e.g., Java, Python, PySpark)
- Excellent debugging, troubleshooting, and analytical skills
- Strong analytical and problem-solving skills with the ability to own, troubleshoot and resolve complex data issues
- Collaborate with architects and managers to develop metrics and KPIs
- Identify technical risks and form contingency plans as soon as possible
- Good communication and collaboration skills to work effectively with cross-functional teams.
- Experience working in an Agile development environment preferred
- Experience building data pipelines to support ML workflows is a plus
- Experience working with geographically distributed teams (different time zones)
Desired Attributes:
- Is adaptable to new technology
- Forward-thinking, with ability to be strategic when looking at future technologies
- Possesses a continuous-learner mindset
- Ability to estimate the financial impact of technology alternatives
- Ability to quickly comprehend the functions and capabilities of existing, new, and emerging technologies that enable and drive new business designs and models
- Demonstrated ability to work well with others and be respected as a leader
Key Responsibilities:
Subject Matter Expertise: Serve as the primary SME within their respective function, with deep knowledge of the applications and platforms the team is responsible for.
ETL Process Design: Lead the design and implementation of ETL processes using various ETL tools and diverse data sources, covering both batch and real-time data.
Cloud Integration: Develop and manage data solutions on cloud platforms (Azure).
Team Leadership: Provide technical leadership and mentorship to a team of data engineers, ensuring best practices and high-quality deliverables.
Performance Optimization: Optimize data pipelines and systems for performance, scalability, and reliability.
Collaboration: Work closely with data analysts, data scientists, and other business stakeholders to understand data requirements and deliver effective solutions.
Documentation: Maintain comprehensive documentation of data architecture/design, pipelines, and processes.
Continuous Improvement: Identify and implement improvements to data engineering practices and technologies.
Work Shift: Day
Scheduled Weekly Hours: 40
Department: IS PHS Analytics
Join us!
... if your passion is to work in a caring environment
... if you believe that learning is a life-long process
... if you strive for excellence and want to be among the best in the healthcare industry
Equal Employment Opportunity
OhioHealth is an equal opportunity employer and fully supports and maintains compliance with all state, federal, and local regulations. OhioHealth does not discriminate against associates or applicants because of race, color, genetic information, religion, sex, sexual orientation, gender identity or expression, age, ancestry, national origin, veteran status, military status, pregnancy, disability, marital status, familial status, or other characteristics protected by law. Equal employment is extended to all persons in all aspects of the associate-employer relationship, including recruitment, hiring, training, promotion, transfer, compensation, discipline, reduction in staff, termination, assignment of benefits, and any other term or condition of employment.
Remote Work Disclaimer:
Positions marked as remote are only eligible for work from Ohio.
Tags: Agile Architecture Azure Big Data Computer Science Databricks Data pipelines Data quality Data warehouse Engineering ETL Informatica Java KPIs Machine Learning Pipelines PySpark Python Spark SQL Testing
Perks/benefits: Career development Health care Wellness