Databricks Architect

Noida / Hyderabad

ShyftLabs




Position Overview:

ShyftLabs is seeking an experienced Databricks Architect to lead the design, development, and optimization of big data solutions using the Databricks Unified Analytics Platform. This role requires deep expertise in Apache Spark, SQL, Python, and cloud platforms (AWS/Azure/GCP). The ideal candidate will collaborate with cross-functional teams to architect scalable, high-performance data platforms and drive data-driven innovation.

ShyftLabs is a growing data product company that was founded in early 2020 and works primarily with Fortune 500 companies. We deliver digital solutions built to accelerate business growth across various industries by focusing on creating value through innovation.

Job Responsibilities

  • Architect, design, and optimize big data and AI/ML solutions on the Databricks platform.
  • Develop and implement highly scalable ETL pipelines for processing large datasets.
  • Lead the adoption of Apache Spark for distributed data processing and real-time analytics.
  • Define and enforce data governance, security policies, and compliance standards.
  • Optimize data lakehouse architectures for performance, scalability, and cost-efficiency.
  • Collaborate with data scientists, analysts, and engineers to enable AI/ML-driven insights.
  • Oversee and troubleshoot Databricks clusters, jobs, and performance bottlenecks.
  • Automate data workflows using CI/CD pipelines and infrastructure-as-code practices.
  • Ensure data integrity, quality, and reliability across all data processes.

Basic Qualifications:

  • Bachelor’s or Master’s degree in Computer Science, Data Engineering, or a related field.
  • 10+ years of hands-on experience in data engineering, including at least 5 years architecting solutions with Databricks and Apache Spark.
  • Proficiency in SQL, Python, or Scala for data processing and analytics.
  • Extensive experience with cloud platforms (AWS, Azure, or GCP) for data engineering.
  • Strong knowledge of ETL frameworks, data lakes, and Delta Lake architecture.
  • Hands-on experience with CI/CD tools and DevOps best practices.
  • Familiarity with data security, compliance, and governance best practices.
  • Strong problem-solving and analytical skills in a fast-paced environment.

Preferred Qualifications:

  • Databricks certifications (e.g., Databricks Certified Data Engineer, Spark Developer).
  • Hands-on experience with MLflow, Feature Store, or Databricks SQL.
  • Exposure to Kubernetes, Docker, and Terraform.
  • Experience with streaming data architectures (Kafka, Kinesis, etc.).
  • Strong understanding of business intelligence and reporting tools (Power BI, Tableau, Looker).
  • Prior experience working with retail, e-commerce, or ad-tech data platforms.
We are proud to offer a competitive salary alongside a strong insurance package. We pride ourselves on the growth of our employees, offering extensive learning and development resources.



Category: Architecture Jobs


Perks/benefits: Career development Competitive pay Startup environment

Region: Asia/Pacific
Country: India
