Senior Software Engineer

Bangalore

Calix

Calix is a leading provider of cloud and software platforms, systems, and services for internet service providers.



This position is based in Bangalore, India.

Calix is leading a service provider transformation to deliver a differentiated subscriber experience around the Smart Home and Business, while helping providers monetize their networks using role-based cloud services, telemetry, analytics, automation, and the deployment of software-driven adaptive networks.

As part of a high-performing global team, the right candidate will play a significant role as a Calix Cloud Data Engineer, contributing to architecture design, implementation, and technical leadership in the data ingestion, extraction, and transformation domain.

Responsibilities and Duties:

  • Work closely with Cloud product owners to understand and analyze product requirements and provide feedback.
  • Design and review the architecture of the Cloud data pipeline, including data ingestion, extraction, and transformation services.
  • Implement and enhance support tools for monitoring and acting on data pipeline issues, and interpret trends and patterns.
  • Provide technical leadership of software design to meet requirements for service stability, reliability, scalability, and security.
  • Guide technical discussions within the engineering group and make technical recommendations.
  • Conduct design reviews and code reviews with peer engineers.
  • Guide the testing architecture for large-scale data ingestion and transformation.
  • Serve in a customer-facing engineering role, debugging and resolving field issues.

Qualifications:

  • 7-10 years of software engineering experience.
  • 4+ years of development experience implementing ETL and/or data pipelines.
  • Organized and goal-focused, with the ability to deliver in a fast-paced environment.
  • Strong understanding of distributed systems and RESTful APIs.
  • Experience with cloud-based big data projects (preferably deployed on GCP or AWS).
  • Hands-on experience implementing data pipeline infrastructure for data ingestion and transformation, providing near-real-time availability of data for applications, BI analytics, and ML pipelines.
  • Working knowledge of data lake technologies, data storage formats (Parquet, ORC, Avro), and query engines (BigQuery, Athena), along with the associated concepts for building optimized solutions at scale.
  • Experience designing data streaming and event-based data solutions (Kafka, Pub/Sub, Kinesis, or similar).
  • Experience building data pipelines (Dataproc, dbt, Flink, Spark, or similar).
  • Working experience with cloud-based data warehouses (BigQuery, Redshift, Azure SQL Data Warehouse, etc.).
  • Experience designing cost-optimized solutions for large datasets using open-source frameworks.
  • Knowledge of and experience in designing solutions with cloud-native GCP services, as well as deploying alternative solutions for appropriate use cases.
  • Expert level in Python, Java, or a similar programming language.
  • BS degree in computer science, engineering, or mathematics, or equivalent experience.

Location:

  • Bangalore, India




