Senior Data Engineer

Dallas, Texas, United States - Remote

Brado

Transform how your brand engages people on their most important journeys.


About us:

Brado is a digital marketing agency reinventing the way healthcare brands engage with people. Driven by insight, we offer precision engagement solutions that produce superior returns for our healthcare clients and better experiences for their healthcare customers. 

Our Values:

At Brado, we value the individual. We believe work and life can be synergistic and should not be at odds; the joy and renewal you get from each source must fuel the other. We have cultivated, and will continue to cultivate, a team that celebrates our diversity of thoughts, beliefs, backgrounds, and lifestyles. We are driven by our passion to do great work with great clients that are truly changing lives.

The Role:
The Senior Data Engineer co-owns the data strategy and architects the right data platform to serve business needs. They lead the development of the data pipelines and data products that enable analytics teams to accomplish their goals. They contribute to the vision for developing our modern data infrastructure in Databricks. They work closely with fellow engineers, data scientists, and reporting and measurement specialists to establish best practices for creating systems and data products the business will use.

Ideal candidates for this role will live in the St. Louis, MO or Dallas/Ft. Worth, TX areas. While our day-to-day work is done remotely, our teams gather in person for intentional work.

Key Areas of Responsibility 

  • Drive automation efforts across the data analytics team using Infrastructure as Code (IaC) tools such as Terraform and Microsoft Bicep, configuration management, and Continuous Integration/Continuous Delivery (CI/CD) tools such as Jenkins.
  • Work with internal infrastructure teams on monitoring, security, and configuration of the Azure environment and applications as they relate to data infrastructure and Databricks.
  • Identify data needs for our clients, marketing team, and data science team; understand specific requirements for metrics and analysis; and build efficient, scalable data pipelines that deliver data-driven products.
  • Design, develop, and maintain marketing databases, datasets, pipelines, and warehouses to enable advanced segmentation, targeting, automation, and reporting.
  • Facilitate data integration and transformation requirements for moving data between applications, ensuring interoperability of applications with database, data warehouse, and data mart environments.
  • Assist with the design and management of our technology stack used for data storage and processing.  
  • Develop and implement quality controls and departmental standards to meet quality standards, organizational expectations, and regulatory requirements.
  • Contribute to development and education plans covering data engineering capabilities, systems, standards, and processes.
  • Anticipate future demands of initiatives related to people, technology, budget, and business within your department, and design and implement solutions to meet those needs.
  • Communicate results and business impacts of insight initiatives to stakeholders within and outside of the company.  

Requirements

  • 5+ years of experience with modern data engineering projects and practices (designing, building, and deploying scalable data pipelines), including 3+ years deploying cloud-native solutions.
  • 2+ years of experience with Databricks, lakehouse architecture, and Delta Lake.
  • Strong programming skills in Python, Java, or Scala, and their respective standard data processing libraries
  • 3+ years of experience building data pipelines for AI/ML models using PySpark or Python.
  • Experience building data pipelines with modern tools such as Fivetran and dbt.
  • At least 2 years of experience with Azure, SQL, Python, Docker/Kubernetes, CI/CD, Git
  • Experience with distributed processing and streaming frameworks such as Spark and Kafka.
  • Experienced in integrating data from core platforms like Marketing Automation, CRM, and Analytics into a centralized warehouse.
  • Command of software development best practices, with strong rigor in high-quality code development, automated testing, and other engineering disciplines.
  • Familiarity with Azure services such as Azure Functions, Azure Data Lake Storage, Azure Cosmos DB, Azure Databricks, and Azure Data Factory.
  • Master's degree or equivalent experience in Computer Science, Engineering, Statistics, Informatics, Information Systems, or another quantitative field.

Benefits

  • Health Care Plan (Medical, Dental & Vision)
  • Retirement Plan (401k, IRA)
  • Life Insurance (Basic, Voluntary & AD&D)
  • Paid Time Off (Vacation, Sick & Public Holidays)
  • Family Leave (Maternity, Paternity)
  • Short Term & Long Term Disability
  • Training & Development
  • Work From Home