Lead Data/AI Engineering
Bengaluru, Karnataka, India — Innovator Building, International Tech Park (ITPB), Whitefield Rd
AT&T
Job Description:
This position requires frequent collaboration with developers, architects, data product owners, and source system teams. The ideal candidate is a versatile professional with deep expertise spanning data engineering, software architecture, data analysis, visualization, BI tools, relational databases, and data warehouse architecture across traditional and cloud environments. Experience with emerging AI technologies, including Generative AI, is highly valued.
Key Roles and Responsibilities
- Lead the end-to-end design, architecture, development, testing, and deployment of scalable Data & AI solutions across traditional data warehouses, data lakes, and cloud platforms such as Snowflake, Azure, AWS, Databricks, and Delta Lake.
- Architect and build secure, scalable software systems, microservices, and APIs leveraging best practices in software engineering, automation, version control, and CI/CD pipelines.
- Develop, optimize, and maintain complex SQL queries, Python scripts, Unix/Linux shell scripts, and AI/ML pipelines to transform, analyze, and operationalize data and AI models.
- Incorporate GenAI technologies by evaluating, deploying, fine-tuning, and integrating models to enhance data products and business insights.
- Translate business requirements into robust data products, including interactive dashboards and reports using Power BI, Tableau, or equivalent BI tools.
- Implement rigorous testing strategies to ensure reliability, performance, and security throughout the software development lifecycle.
- Lead and mentor engineering teams, fostering collaboration, knowledge sharing, and upskilling in evolving technologies including GenAI.
- Evaluate and select optimal technologies for platform scalability, performance monitoring, and cost optimization in both cloud and on-premise environments.
- Partner cross-functionally with development, operations, AI research, and business teams to ensure seamless delivery, support, and alignment to organizational goals.
Key Competencies
- Extensive leadership and strategic experience in full software development lifecycle and enterprise-scale data engineering projects.
- Deep expertise in relational databases, data marts, data warehouses, and advanced SQL programming.
- Strong hands-on experience with ETL processes, Python, Unix/Linux shell scripting, data modeling, and AI/ML pipeline integration.
- Proficiency with Unix/Linux operating systems and scripting environments.
- Advanced knowledge of cloud data platforms (Azure, AWS, Snowflake, Databricks, Delta Lake).
- Solid understanding and practical experience with traditional AI and Generative AI technologies, including model development, deployment, and integration.
- Familiarity with big data frameworks and streaming technologies such as Hadoop, Spark, and Kafka.
- Experience with containerization and orchestration tools including Docker and Kubernetes.
- Strong grasp of data governance, metadata management, and data security best practices.
- Excellent analytical, problem-solving, and communication skills to articulate complex technical concepts and business impact.
- Ability to independently lead initiatives while fostering a collaborative, innovative team culture.
- Desired knowledge of software engineering best practices and architectural design patterns.
Required/Desired Skills
- RDBMS and Data Warehousing — 12+ years (Required)
- SQL Programming and ETL — 12+ years (Required)
- Unix/Linux Shell Scripting — 8+ years (Required)
- Python or other programming languages — 6+ years (Required)
- Cloud Platforms (Azure, AWS, Snowflake, Databricks, Delta Lake) — 5+ years (Required)
- Power BI / Tableau — 5+ years (Desired)
- Generative AI (model development, deployment, integration) — 3+ years (Desired)
- Big Data Technologies (Hadoop, Spark, Kafka) — 3+ years (Desired)
- Containerization and Orchestration (Docker, Kubernetes) — 2+ years (Desired)
- Data Governance and Security — 3+ years (Desired)
- Software Engineering and Architecture — 4+ years (Desired)
Education & Experience
- Bachelor’s degree (BS/BA) in Computer Science, Scientific Computing, or a related field is desired.
- Relevant certifications in data engineering, cloud platforms, or AI technologies may be required or preferred.
- A minimum of 13+ years of related experience is required; the ideal candidate will have the extensive experience outlined above.
#DataEngineering
Weekly Hours: 40
Time Type: Regular
Location: Bangalore, Karnataka, India
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.