Senior Data Engineer (Austin, San Antonio, or Dallas)

San Antonio, TX, United States


Responsibilities

Since H-E-B Digital Technology’s inception, we’ve been investing heavily in our customers’ digital experience, reinventing how they find inspiration from food, make food decisions, and ultimately get food into their homes. This is an exciting time to join H-E-B Digital—we’re using the best available technologies to deliver modern, engaging, reliable, and scalable experiences to meet the needs of our growing audience. 

 

As a Senior Data Engineer, you’ll use an advanced analytical, data-driven approach to drive a deep understanding of our fast-changing business and answer real world questions. You’ll work with stakeholders to develop a clear understanding of data and data infrastructure needs, resolve complex data-related technical issues, and ensure optimal data design and efficiency. 

 

Once you’re eligible, you’ll become an Owner in the company, so we’re looking for commitment, hard work, and focus on quality and Customer service. “Partner-owned” means our most important resources—People—drive the innovation, growth, and success that make H-E-B The Greatest Omnichannel Retailing Company. 

 

Do you have a: 

HEART FOR PEOPLE… you’re willing to facilitate solutions with multiple engineers, provide upward communication, and mentor others? 

HEAD FOR BUSINESS… you consistently demonstrate and uphold the standards of coding, infrastructure, and process? 

PASSION FOR RESULTS… you’re capable of high-velocity contributions in multiple technical domains? 

 

We are looking for: 

  • 5+ years of experience related to data engineering 

 

What You'll Do   

  • Builds / supports complex data pipelines, application programming interfaces (APIs), data integrations, and data streaming solutions 

  • Builds large-scale batch and real-time data pipelines with big data processing frameworks 

  • Designs data patterns that support the creation of datasets 

  • Designs / implements monitoring capabilities based on business SLAs and data quality 

  • Designs / develops / maintains large data pipelines; diagnoses / solves production support issues  

  • Implements features to continuously improve data integration performance 

  • Implements infrastructure as code, security, and CI/CD for data pipelines 

  • Engages / collaborates with external technical teams to ensure timely, high-quality solutions 

  • Builds strong relationships with cross-functional teams, such as Data Engineering, Application, and Product Management, to accomplish impactful results 

Project You Will Impact   

  • Build composite APIs that deliver data to applications 

  • Migrate API services from on-premises to the cloud 

  • Improve data quality    

Who You Are   

  • 5+ years of hands-on experience related to data engineering in developing data pipelines and APIs   

  • 5+ years of experience with SQL and one or more of the following languages: Python or Java 

  • Proven experience with SQL, Spark, Databricks, AWS Lambda, S3, and data lakes 

  • Prior experience ingesting data from a data lake into Elasticsearch/OpenSearch 

  • Good knowledge of messaging systems such as Kafka, GCP Pub/Sub, or TIBCO EMS 

  • Experience with infrastructure as code using Terraform 

  • Experience with DevOps tools such as GitLab CI/CD and Jenkins 

  • Experience with orchestration tools such as Argo or Databricks Workflows 

  • A solid understanding of Big Data and Hybrid Cloud infrastructure.    

  • Up to date on the latest technological developments; able to evaluate and propose new data pipeline patterns 

  • You have an advanced understanding of SDLC processes.   

  • You have a comprehensive knowledge of CS fundamentals: data structures, algorithms, and design patterns.   

  • You have advanced knowledge of system architecture and design patterns.   

  • You can understand architecture, design, and integration landscape of multiple H-E-B systems.   

  • You have experience with common software engineering tools such as Git, JIRA, Confluence, etc.   

  • You have a high level of comfort in Lean Startup or Agile development methodologies.   

  • You have a related degree or comparable formal training, certification, or work experience.   

  • Excellent written and oral communication and presentation skills 

  • A solid understanding of data engineering fundamentals 

Bonus   

  • DevOps Certifications   

  • Cloud certifications   

DATA3232

