Lead Data/AI Engineering
Hyderabad, India (Atria Building, Plot No 17)
AT&T
Job Description:
Lead Data Engineer Job Description
About AT&T Chief Data Office
The Chief Data Office (CDO) at AT&T is responsible for leveraging data as a strategic asset to drive business value. The team focuses on data governance, data engineering, artificial intelligence, and advanced analytics to enhance customer experience, optimize operations, and enable innovation.
Candidates will:
- Work on cutting-edge cloud technologies, AI/ML, and data-driven solutions as part of a dynamic and innovative team driving digital transformation.
- Lead high-impact Agile initiatives with top talent in the industry.
- Get the opportunity to grow and implement Agile at an enterprise level.
- Receive competitive compensation, a flexible work culture, and learning opportunities.
Shift timing (if any): 12:30 PM to 9:30 PM IST (Bangalore) / 1:00 PM to 10:00 PM IST (Hyderabad)
Work mode: Hybrid (3 days mandatory in office)
Location / Additional Location (if any): Bangalore, Hyderabad
Job Title / Advertise Job Title: Lead Data Engineer
Roles and Responsibilities
- Create product roadmap and project plan.
- Design, develop, and maintain scalable ETL pipelines using Azure Services to process, transform, and load large datasets into Cloud platforms.
- Collaborate with cross-functional teams, including data architects, analysts, and business stakeholders, to gather data requirements and deliver efficient data solutions.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Work with data scientists, architects, and analysts to understand data needs and create effective data workflows.
- Exposure to Snowflake Warehouse.
- Big Data engineering with a solid background in the larger Hadoop ecosystem and real-time analytics tools, including PySpark, Scala-Spark, Hive, Hadoop CLI, MapReduce, Storm, Kafka, and the Lambda Architecture.
- Implement data validation and cleansing techniques.
- Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
- Experience in designing and hands-on development of cloud-based analytics solutions.
- Expert-level understanding of Azure Data Factory, Azure Data Lake, Snowflake, and PySpark is required.
- A full-stack development background with Java and JavaScript/CSS/HTML is good to have.
- Knowledge of ReactJS/Angular is a plus.
- Design and build data pipelines using API ingestion and streaming ingestion methods.
- Unix/Linux expertise; comfortable with Linux operating system and Shell Scripting.
- Knowledge of Dev-Ops processes (including CI/CD) and Infrastructure as code is desirable.
- PL/SQL, RDBMS background with Oracle/MySQL
- Comfortable with microservices, CI/CD, Docker, and Kubernetes.
- Strong experience in common Data Vault data warehouse modelling principles.
- Creating/modifying Docker images and deploying them via Kubernetes.
Additional Skills Required:
The ideal candidate should have 14+ years of experience in IT, in addition to the following:
- 10+ years of extensive development experience using Snowflake or a similar data warehouse technology
- Working experience with dbt and other modern data stack technologies, such as Snowflake, Azure, Databricks, and Python
- Experience in Agile processes, such as Scrum
- Extensive experience in writing advanced SQL statements and performance tuning.
- Experience in data ingestion techniques using custom or SaaS tools
- Experience in data modelling and the ability to optimize existing and new data models
- Experience in data mining, data warehouse solutions, and ETL, and using databases in a business environment with large-scale, complex datasets
Technical Qualifications:
Preferred:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Experience in high-tech, software, or telecom industries is a plus.
- Strong analytical skills to translate insights into impactful product initiatives.
#DataEngineering
Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Andhra Pradesh, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.