Sr. Data Engineer

Rosemont, IL

IMO Health

From clinical terminology to streamlined workflows to data standardization, we enable insights that help improve patient care across the healthcare ecosystem.

The Senior Data Engineer will support our software developers, database architects, analysts, and data scientists on data initiatives and will ensure optimal data delivery architecture is consistent throughout ongoing projects. At IMO, our data is not merely a side effect or bonus; it is the core of our products and the mission-critical deliverable for our clients. This person must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products. The right candidate will be excited by the prospect of optimizing or improving our company’s data architecture to support our next generation of products and data initiatives. Join our growing Product Development / Terminology Data Engineering Organization as a Senior Data Engineer to help design, create, and support high-quality solutions that serve 80% of US clinicians and expand the application of Data Engineering within IMO!

WHAT YOU'LL DO:

  • Demonstrate understanding and awareness of the critical role terminology data plays in IMO’s products – use this to consistently inform your work 
  • Update, analyze, fix, enhance, and build IMO products through direct interaction with code and data 
  • Assemble, analyze, and interpret large and complex data sets using both technical skills and a solid understanding of IMO’s terminology data 
  • Construct infrastructure for optimal ETL of data from varied sources using SQL and AWS ‘big data’ technologies 
  • Identify and implement improvements that automate processes, optimize data delivery and performance, introduce orchestration frameworks, and redesign data pipeline infrastructure for scalability and reliability (an illustrative orchestration sketch follows this list) 
  • Design data platform components for bulk, transactional, and streaming access 
  • Create and maintain optimal data pipeline architecture 
  • Support application-specific availability, scalability, and monitoring of resources and costs 
  • Develop and document quality source code 
  • Maintain and improve database schema and data models 
  • Promote data quality awareness and execute data quality management procedures 
  • Work cooperatively within an Agile Scrum team to manage conflict and foster trust, commitment, and accountability 
  • Take ownership, be proactive, and anticipate impacts so you can take appropriate action 
  • Implement creative solutions to technical challenges and apply knowledge and learning from various disciplines 
  • Collaborate cross-functionally in a dynamic and agile environment to translate needs into requirements, assist with data/infrastructure, and partner on the creation of innovative products 
  • Seek out industry best practices and continuously develop new skills 
  • Make data-driven decisions 
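
For illustration only: the ETL and orchestration work referenced above (for example, in the pipeline-infrastructure item) often takes a shape like the minimal Airflow DAG sketched below. This is a generic sketch under assumed tooling (Airflow 2.4+), not IMO's actual pipeline; all task names, schedules, and data are hypothetical.

    # Minimal, illustrative Airflow DAG (hypothetical names and data; assumes Airflow 2.4+).
    # Shows the extract -> transform -> load orchestration pattern described above.
    from datetime import datetime

    from airflow.decorators import dag, task


    @dag(schedule="@daily", start_date=datetime(2024, 1, 1), catchup=False)
    def terminology_etl():
        @task
        def extract() -> list[dict]:
            # In a real pipeline this would pull from S3, an API, or a database.
            return [{"code": "A01", "term": "  Example Term "}]

        @task
        def transform(records: list[dict]) -> list[dict]:
            # Normalize records before loading (trim and lowercase the term text).
            return [{**r, "term": r["term"].strip().lower()} for r in records]

        @task
        def load(records: list[dict]) -> None:
            # In a real pipeline this would write to a warehouse or PostgreSQL/RDS.
            print(f"loaded {len(records)} records")

        load(transform(extract()))


    terminology_etl()

In practice, retries, alerting, and cost/monitoring hooks would hang off the same DAG definition.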

WHAT YOU'LL NEED:

  • Relevant technical BA/BS degree and five years of experience, OR seven years of relevant professional experience 
  • Ability to build end-to-end data platforms and collaborate on architecting sound solutions 
  • Experienced developer in multiple languages, including object-oriented/functional scripting languages (Python); able to train up on additional languages as needed 
  • Hands-on experience with big data tools (e.g., Spark, Kafka); familiarity with building and optimizing complex data pipelines and architectures (an illustrative Spark sketch follows this list) 
  • Proficient in AWS services (EC2, EMR, RDS) 
  • Strong SQL knowledge, with experience in complex query authoring, relational databases (PostgreSQL), and NoSQL databases (DynamoDB, MongoDB, Elasticsearch) 
  • Strong analytical, troubleshooting, and problem-solving skills 
  • Experienced in data modeling and logical/physical database design 
  • Comfortable working with large, disconnected datasets and building processes that support data transformation, data structures, and metadata 
  • Familiar with agile development and CI/CD processes using tools such as Git and Terraform 
  • Experience with markup languages such as XML and HTML 
  • Comfortable performing root cause analyses to identify opportunities for improvement 
  • Familiarity with stream-processing systems (e.g., Storm, Spark-Streaming) and workflow management tools (e.g., Airflow, Luigi, Azkaban) 
  • Strong communication skills 
  • Enjoyment of challenges, eagerness to explore new approaches, and willingness to ask for help 
  • Interest and capacity to independently get up to speed with items in the “Preferred Experience” list below
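
Purely as an illustration of the Spark and SQL-style work listed above (see the big data tools item): a minimal PySpark sketch that joins and aggregates two datasets. Bucket paths, table names, and column names are hypothetical.

    # Illustrative PySpark sketch (hypothetical paths and column names).
    # Reads two datasets, joins them, and aggregates usage counts per term.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("terminology-analysis").getOrCreate()

    terms = spark.read.parquet("s3://example-bucket/terms/")          # hypothetical source
    usage = spark.read.parquet("s3://example-bucket/usage-events/")   # hypothetical source

    # Keep active terms, count how often each appears in usage events, rank by count.
    summary = (
        terms.filter(F.col("status") == "active")
        .join(usage, on="term_id", how="left")
        .groupBy("term_id", "term_text")
        .agg(F.count("event_id").alias("usage_count"))
        .orderBy(F.desc("usage_count"))
    )

    summary.write.mode("overwrite").parquet("s3://example-bucket/term-usage-summary/")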

PREFERRED EXPERIENCE:

  • AWS Associate Certification – Data Engineer 
  • AWS Associate Certification – Solutions Architect 
  • Experience with ETL and BI tools (Talend, Tableau, Looker) 
  • Experience with data cataloging standards and building/maintaining them 
  • AWS Specialty Certification – Machine Learning 
  • AWS Foundational Certification – AI Practitioner 
  • Prior experience working with healthcare data 
  • Exposure to knowledge graph-related technologies and standards (Graph DB, OWL, SPARQL) 
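
For the knowledge-graph item above, a minimal illustration of what SPARQL over a small RDF graph looks like, using rdflib in Python; the namespace, class, and term names are hypothetical.

    # Illustrative rdflib/SPARQL sketch (hypothetical namespace and terms).
    # Builds a tiny RDF graph and queries it for clinical terms and their labels.
    from rdflib import Graph, Literal, Namespace, RDF

    EX = Namespace("http://example.org/terminology/")

    g = Graph()
    g.add((EX.term_A01, RDF.type, EX.ClinicalTerm))
    g.add((EX.term_A01, EX.label, Literal("Example term")))

    results = g.query(
        """
        PREFIX ex: <http://example.org/terminology/>
        SELECT ?term ?label
        WHERE {
            ?term a ex:ClinicalTerm ;
                  ex:label ?label .
        }
        """
    )
    for term, label in results:
        print(term, label)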

Category: Engineering Jobs

Perks/benefits: Career development

Region: North America
Country: United States
