Lead, Data Engineer (Client Deployment) (United States)

United States - Remote

Demyst

Demyst enables data teams to manage all their external data: orchestration at scale, with pipelines that deliver trusted external data.

OUR SOLUTION

At Demyst, we're transforming the way enterprises manage data, eliminating key challenges and driving significant improvements in business outcomes through data workflow automation. Due to growing demand, we're expanding our team and seeking talented individuals to help us scale.

Our platform simplifies workflows, eliminating the need for complicated platforms and expensive consultants. With top-tier security and global reach, we're helping businesses in banking and insurance achieve digital transformation. If you're passionate about data and effecting change, Demyst is the place for you.

THE CHALLENGE

Demyst is seeking a Lead Engineer with a strong data engineering focus to play a pivotal role in delivering our next-generation data platform to leading enterprises across North America. In this role, you will lead a team of data engineers with a primary focus on data integration and solution deployment. You will oversee the development and management of data pipelines, ensuring they are robust, scalable, and reliable. This is an ideal opportunity for a hands-on data engineering leader to apply technical, leadership, and problem-solving skills to deliver high-quality solutions for our clients.

Your role will involve not only technical leadership and mentoring but also actively contributing to coding, architectural decisions, and data engineering strategy. You will guide your team through complex client deployments, from planning to execution, ensuring that data solutions are effectively integrated and aligned with client goals.

Demyst is a remote-first company. The candidate must be based in the United States.

RESPONSIBILITIES

  • Lead the configuration, deployment, and maintenance of data solutions on the Demyst platform to support client use cases.
  • Supervise and mentor the local and distributed data engineering team, ensuring best practices in data architecture, pipeline development, and deployment.
  • Recruit, train, and evaluate technical talent, fostering a high-performing, collaborative team culture.
  • Contribute hands-on to coding, code reviews, and technical decision-making, ensuring scalability and performance.
  • Design, build, and optimize data pipelines, leveraging tools like Apache Airflow, to automate workflows and manage large datasets effectively (an illustrative sketch follows this list).
  • Work closely with clients to advise on data engineering best practices, including data cleansing, transformation, and storage strategies.
  • Implement solutions for data ingestion from various sources, ensuring the consistency, accuracy, and availability of data.
  • Lead critical client projects, managing engineering resources, project timelines, and client engagement.
  • Provide technical guidance and support for complex enterprise data integrations with third-party systems (e.g., AI platforms, data providers, decision engines).
  • Ensure compliance with data governance and security protocols when handling sensitive client data.
  • Develop and maintain documentation for solutions and business processes related to data engineering workflows.
  • Other duties as required.
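
For illustration only, here is a minimal sketch of the kind of Airflow pipeline work described above: a daily extract-transform-load DAG. This is not Demyst's actual code; the DAG id (demyst_client_ingest) and the extract/transform/load stubs are hypothetical, and it assumes Apache Airflow 2.4+ (for the schedule parameter).

```python
# Illustrative only: a minimal daily ETL DAG. All task logic is stubbed.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull raw records from an external data provider (stubbed here).
    return [{"id": 1, "value": "raw"}]


def transform(**context):
    # Cleanse and normalize the records returned by the extract task.
    records = context["ti"].xcom_pull(task_ids="extract")
    return [{**r, "value": r["value"].upper()} for r in records]


def load(**context):
    # Persist the transformed records to the target store (stubbed).
    records = context["ti"].xcom_pull(task_ids="transform")
    print(f"Loaded {len(records)} records")


with DAG(
    dag_id="demyst_client_ingest",  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```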

REQUIREMENTS

  • Bachelor's degree or higher in Computer Science, Data Engineering, or related fields. Equivalent work experience is also highly valued.
  • 5-10 years of experience in data engineering, software engineering, or client deployment roles, with at least 3 years in a leadership capacity.
  • Strong leadership skills, including the ability to mentor and motivate a team, lead through change, and drive outcomes.
  • Expertise in designing, building, and optimizing ETL/ELT data pipelines using Python, JavaScript, Golang, Scala, or similar languages.
  • Experience in managing large-scale data processing environments, including Databricks and Spark.
  • Proven experience with Apache Airflow to orchestrate data pipelines and manage workflow automation.
  • Deep knowledge of cloud services, particularly AWS (EC2/ECS, Lambda, S3), and their role in data engineering.
  • Hands-on experience with both SQL and NoSQL databases, with a deep understanding of data modeling and architecture.
  • Strong ability to collaborate with clients and cross-functional teams, delivering technical solutions that meet business needs.
  • Proven experience in unit testing, integration testing, and engineering best practices to ensure high-quality code.
  • Familiarity with agile project management tools (JIRA, Confluence, etc.) and methodologies.
  • Experience with data visualization and analytics tools such as JupyterLab, Metabase, and Tableau.
  • Strong communicator and problem solver, comfortable working in distributed teams.

BENEFITS

  • Operate at the forefront of data management innovation and work with the largest industry players in an emerging field that is fueling growth and technological advancement globally
  • Have an outsized impact in a rapidly growing team, offering real autonomy and responsibility for client outcomes
  • Stretch yourself to help define and support something entirely new
  • Distributed team and culture, with fully flexible working hours and location
  • Collaborative, inclusive, and dynamic culture
  • Generous benefits and compensation plans
  • ESOP awards available for tenured staff
  • Join an established and scaling data technology business

Demyst is committed to creating a diverse, rewarding career environment and is proud to be an equal opportunity employer. We strongly encourage individuals from all walks of life to apply.
