Data Engineer

Houston, TX

Cotton Holdings Inc.

Cotton Holdings, Inc., is a leading infrastructure support services company for public and private entities throughout the United States and internationally.

Data Engineer

Department: Information Technology

Employment Type: Full Time

Location: Houston, TX


Description

Cotton Holdings, Inc., is a diversified holding company with subsidiaries that provide property restoration and recovery, construction, roofing, logistical support, temporary workforce housing, and culinary services to public and private entities worldwide. Cotton combines this diverse suite of services with top talent, innovative technology, and a large inventory of company-owned assets to offer clients a total solutions package in support of disaster events and large development projects, including complex work environments.

Data Engineer – Azure | Databricks | Python

Why this role matters

Cotton Holdings runs on data, from ERP workflows to custom field apps, and we're scaling fast. You'll be the hands-on owner of batch and event-driven pipelines that land raw data in our Azure-based lakehouse and keep analytics humming for every business unit.

Key Responsibilities

What you’ll do
  • Ingest & integrate data from ERP, SaaS, and in-house systems using Prefect-orchestrated Python jobs and the occasional event trigger.
  • Model in Databricks: manage Delta tables, optimize storage/layout, and prep bronze → silver layers for downstream dbt models.
  • Harden pipelines: add data-quality tests (dbt + Great Expectations), GitHub Actions CI, and automated alerts that open Linear issues on failure.
  • Operate & improve: join our rotating on-call (roughly 1 week in 4) to triage incidents first, then drive “engineer-to-zero” root-cause fixes.
  • Collaborate with Analytics Engineers, BI developers, and business stakeholders to surface new data sources and performance wins.
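To make the bronze → silver responsibility above concrete, here is an illustrative sketch only: it shows the deduplicate-and-validate pattern in plain standard-library Python rather than Databricks/Delta or Great Expectations, so it is self-contained. The table fields and record values are hypothetical, not from any Cotton Holdings system.

```python
from datetime import date

# "Bronze": raw ingested records, possibly duplicated or malformed.
bronze = [
    {"job_id": "J-1", "amount": "1200.50", "closed": "2024-03-01"},
    {"job_id": "J-1", "amount": "1200.50", "closed": "2024-03-01"},  # duplicate
    {"job_id": "J-2", "amount": "not-a-number", "closed": "2024-03-02"},
]

def to_silver(rows):
    """Deduplicate on job_id and type-cast; drop rows that fail parsing."""
    seen, silver = set(), []
    for row in rows:
        if row["job_id"] in seen:
            continue
        try:
            silver.append({
                "job_id": row["job_id"],
                "amount": float(row["amount"]),
                "closed": date.fromisoformat(row["closed"]),
            })
            seen.add(row["job_id"])
        except ValueError:
            pass  # in production: quarantine and alert, not a silent drop
    return silver

def check_silver(rows):
    """Minimal data-quality gate, in the spirit of a Great Expectations suite."""
    assert all(r["amount"] > 0 for r in rows), "amount must be positive"
    assert len({r["job_id"] for r in rows}) == len(rows), "job_id must be unique"

silver = to_silver(bronze)
check_silver(silver)
print(len(silver))  # the duplicate and the unparsable row are both dropped
```

In the real role these steps would run as Prefect-orchestrated tasks writing Delta tables, with the quality gate implemented in dbt tests or Great Expectations rather than bare assertions.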

Skills, Knowledge and Expertise

What you bring
  • 5+ years building production data pipelines in Python and SQL on a public cloud (Azure preferred).
  • Strong fluency with Databricks / Spark-SQL / Delta Lake concepts.
  • Experience orchestrating ETL in Prefect, Airflow, or similar; Git-based workflows and CI/CD.
  • Comfort owning SLAs, debugging jobs, and writing clear post-mortems.
  • Nice-to-have certs: Databricks DE, Microsoft Azure Data Engineer, or Python PCEP/PCAP.

Disclaimer

This job description indicates the general nature and level of work expected of the incumbent(s). It is not designed to cover or contain a comprehensive listing of activities, duties, or responsibilities required of the incumbent. Incumbent(s) may be asked to perform other duties as requested.

Category: Engineering Jobs

Tags: Airflow Azure CI/CD Databricks Data pipelines dbt ETL Git GitHub Pipelines Python Spark SQL

Perks/benefits: Team events

Region: North America
Country: United States
