Manager Data Engineering - Hybrid

Hartford, CT - Home Office

The Hartford


Manager Data Engineering - GE07AE

We’re determined to make a difference and are proud to be an insurance company that goes well beyond coverages and policies. Working here means having every opportunity to achieve your goals – and to help others accomplish theirs, too. Join our team as we help shape the future.

The Data and Analytics team within The Hartford’s Enterprise Data Office is seeking a Manager of Data Engineering primarily responsible for third-party asset ingestion and management for the enterprise. In this role, you will be responsible for expanding and optimizing data pipeline and product architecture, as well as optimizing data flow and collection for cross-functional teams. The ideal candidate is an experienced data pipeline builder and data wrangler who enjoys optimizing data systems and building them from the ground up. The Manager of Data Engineering will support our software developers, database architects, data analysts, and data scientists on data initiatives and will ensure that optimal data delivery architecture is consistent across ongoing projects. They must be self-directed and comfortable supporting the data needs of multiple teams, systems, and products.

This role will have a Hybrid work arrangement, with the expectation of working in an office location (Hartford, CT) 3 days a week (Tuesday through Thursday).

Job responsibilities:

  • Accountable for building small- to medium-scale pipelines and data products. Rapidly architect, design, prototype (POC), implement, and optimize cloud/hybrid architectures to operationalize data.
  • Accountable for team development and influencing pipeline tool decisions.
  • Build and implement capabilities for continuous integration and continuous delivery aligned with Enterprise DevOps practices.
  • Accountable for data engineering practices (e.g., source code management, branching, issue tracking, access controls) followed for the product.
  • Independently review, prepare, design, and integrate complex data (varying in type, quality, and volume), correcting problems and recommending data cleansing/quality solutions.
  • Provide expert documentation and operating guidance for users of all levels. Document technical requirements and present complex technical concepts to audiences of varying sizes and levels.
  • Stay up to date on emerging data and analytics technologies, tools, techniques, and frameworks. Evaluate, recommend, and influence all technology-based decisions on tools and frameworks for effective delivery.
  • Partner in the development of project and portfolio strategy, roadmaps and implementations.

Knowledge, Skills, and Abilities 

  • Good working technical knowledge of cloud data pipelines and data consumption products.
  • Proven ability to work with cross-functional teams and translate requirements between business, project management, and technical stakeholders across projects or programs.
  • Team player with a transformation mindset. Ability to operate successfully in a lean, fast-paced organization, leveraging Scaled Agile principles and ways of working.
  • Develop and promote best practices for continuous improvement, and troubleshoot and resolve problems across the technology stack.
  • Working knowledge and understanding of the DevOps technology stack and standard tools/practices.
  • Provide mentorship and feedback to junior data engineers.

Qualifications:

  • Bachelor’s degree and 5 years of Data Engineering experience.
  • Extensive experience in Python programming and PySpark.
  • Familiarity with object-oriented design patterns.
  • 2+ years of experience developing and operating production workloads in cloud infrastructure.
  • Proven experience with software development life cycle (SDLC) and knowledge of agile/iterative methodologies and toolsets
  • Hands-on experience with AWS services such as S3, EMR, Glue, Lambda, and CloudFormation.
  • Hands-on experience with DevOps tools and practices for continuous integration and deployment, such as GitHub, Jenkins, Nexus, and Maven, as well as AWS CI/CD tools like CodeBuild and CodePipeline.
  • Knowledge of and experience with SQL queries and Unix scripting.
  • Ability to design, implement, and oversee ETL processes.
  • Excellent troubleshooting skills and experience with performance tuning of ETL processes.
  • Experience with the Snowflake data warehouse is nice to have.
  • Cloud certifications are desirable.

Compensation

The listed annualized base pay range is primarily based on analysis of similar positions in the external market. Actual base pay could vary and may be above or below the listed range based on factors including but not limited to performance, proficiency and demonstration of competencies required for the role. The base pay is just one component of The Hartford’s total compensation package for employees. Other rewards may include short-term or annual bonuses, long-term incentives, and on-the-spot recognition. The annualized base pay range for this role is:

$122,880 - $184,320

Equal Opportunity Employer/Females/Minorities/Veterans/Disability/Sexual Orientation/Gender Identity or Expression/Religion/Age


