Software Development Engineer - II (Remote)
Uttar Pradesh, Noida, India
Bungee Tech
Software Development Engineer II (SDE-II)
Company Description:
At Bungee Tech, we help retailers and brands meet customers wherever they are, on every occasion. We believe that accurate, high-quality data matched with compelling market insights empowers retailers and brands to keep their customers at the centre of all the innovation and value they deliver.
We provide retailers and brands with a clear and complete omnichannel picture of their competitive landscape. We collect billions of data points from publicly available sources, multiple times every day. Using high-quality extraction, we uncover detailed information on products and services, which we automatically match and then proactively track for price, promotion, and availability. Anything we do not match helps identify new assortment opportunities.
Empowered with this unrivalled intelligence, we unlock compelling analytics and insights that, once blended with verified partner data from trusted sources such as Nielsen, paint a complete, consolidated picture of the competitive landscape.
Job Description:
- Building on the foundation of the SDE-I role, the SDE-II position at Bungee Tech takes on a greater level of responsibility and leadership. You'll play a crucial role in driving the evolution and efficiency of our data collection and analytics platform, capable of handling terabyte-scale data and billions of data points.
- Lead the design, development, and optimization of large-scale data pipelines and infrastructure using technologies like Apache Airflow, Spark, Kafka, and more (see the orchestration sketch after this list).
- Architect and implement distributed data processing solutions to handle terabyte-scale datasets and billions of records efficiently across multi-region cloud infrastructure (AWS, GCP, DigitalOcean).
- Develop and maintain real-time data processing solutions for high-volume data collection operations using technologies like Spark Streaming and Kafka (see the streaming sketch after this list).
- Optimize data storage strategies using technologies such as Amazon S3, HDFS, and Parquet/Avro file formats for efficient querying and cost management.
- Build and maintain high-quality ETL pipelines, ensuring robust data collection and transformation processes with a focus on scalability and fault tolerance.
- Collaborate with data analysts, researchers, and cross-functional teams to define and maintain data quality metrics, implement robust data validation, and enforce security best practices.
- Mentor junior engineers (SDE-I) and foster a collaborative, growth-oriented environment.
- Participate in technical discussions, contribute to architectural decisions, and proactively identify improvements for scalability, performance, and cost-efficiency.
- Ensure application performance monitoring (APM) is in place, utilizing tools like Datadog, New Relic, or similar to proactively monitor and optimize system performance, detect bottlenecks, and ensure system health.
- Implement effective data partitioning strategies and indexing for performance optimization in distributed databases such as DynamoDB, Cassandra, or HBase.
- Stay current with advancements in data engineering, orchestration tools, and emerging cloud technologies, continually enhancing the platform’s capabilities.
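To give a concrete flavour of the orchestration work described above, here is a minimal sketch of an Airflow DAG (Airflow 2.x style) that chains an extract, transform, and load step. The DAG name, task callables, schedule, and retry settings are illustrative placeholders, not Bungee Tech's actual pipeline.

```python
# Minimal Airflow DAG sketch: extract -> transform/match -> load.
# All names and settings are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_prices(**context):
    """Pull raw product/price records from a collector endpoint (placeholder)."""
    ...


def transform_and_match(**context):
    """Normalize records and match products across retailers (placeholder)."""
    ...


def load_to_s3(**context):
    """Write matched records to partitioned storage on S3 (placeholder)."""
    ...


with DAG(
    dag_id="competitive_price_pipeline",        # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@hourly",                         # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 3, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract = PythonOperator(task_id="extract_prices", python_callable=extract_prices)
    transform = PythonOperator(task_id="transform_and_match", python_callable=transform_and_match)
    load = PythonOperator(task_id="load_to_s3", python_callable=load_to_s3)

    extract >> transform >> load
```

In a production pipeline each task would typically fan out across retailers and regions rather than run as a single callable.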
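Similarly, the real-time path mentioned in the Spark Streaming/Kafka bullet might look like the following Spark Structured Streaming sketch. The broker address, topic, schema, and S3 paths are hypothetical, and the job assumes the spark-sql-kafka connector is on the classpath.

```python
# Sketch: read collection events from Kafka and land them on S3 as Parquet.
# Brokers, topic, schema, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("price-events-stream").getOrCreate()

event_schema = StructType([
    StructField("product_id", StringType()),
    StructField("retailer", StringType()),
    StructField("price", StringType()),
    StructField("observed_at", TimestampType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder brokers
    .option("subscribe", "price-events")                # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://example-bucket/price-events/")                    # placeholder sink
    .option("checkpointLocation", "s3a://example-bucket/checkpoints/stream/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```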
Qualifications:
- 4+ years of hands-on experience with Apache Airflow and other orchestration tools for managing large-scale workflows and data pipelines.
- Expertise in AWS technologies (Athena, AWS Glue, DynamoDB) as well as Apache Spark, PySpark, SQL, and NoSQL databases.
- Experience in designing and managing distributed data processing systems that scale to terabyte and billion-scale datasets using cloud platforms like AWS, GCP, or Digital Ocean.
- Proficiency in web crawling frameworks and tooling, including Node.js, HTTP protocols, Puppeteer, Playwright, and Chromium, for large-scale data extraction (see the crawling sketch after this list).
- Experience with observability tools such as Grafana, Prometheus, and Elasticsearch, and familiarity with monitoring and optimizing resource utilization in distributed systems.
- Strong understanding of infrastructure as code using Terraform, automated CI/CD pipelines with Jenkins, and event-driven architecture with Kafka.
- Experience with data lake architectures and optimizing storage using formats such as Parquet, Avro, or ORC (see the storage layout sketch after this list).
- Strong background in optimizing query performance and data processing frameworks (Spark, Flink, or Hadoop) for efficient data processing at scale.
- Knowledge of containerization (Docker, Kubernetes) and orchestration for distributed system deployments.
- Deep experience in designing resilient data systems with a focus on fault tolerance, data replication, and disaster recovery strategies in distributed environments.
- Strong data engineering skills, including ETL pipeline development, stream processing, and distributed systems.
- Excellent problem-solving abilities, with a collaborative mindset and strong communication skills.
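As a small illustration of the crawling proficiency referenced above, the sketch below uses Playwright's Python API to render a page in headless Chromium and pull a single field. The URL and selector are placeholders; the posting also lists Node.js tooling such as Puppeteer, which follows the same pattern.

```python
# Sketch: fetch one field from a rendered product page with Playwright.
# URL and selector are hypothetical placeholders.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()
    page.goto("https://example.com/product/123")   # placeholder product URL
    title = page.text_content("h1")                # placeholder selector
    print(title)
    browser.close()
```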
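And for the storage-format bullet, a typical compaction step rewrites raw data as Parquet partitioned by query-friendly columns so engines like Athena or Spark can prune partitions. Paths and column names below are hypothetical.

```python
# Sketch: rewrite raw events as partitioned Parquet for cheaper, faster queries.
# Input/output paths and columns (retailer, observed_at) are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import to_date

spark = SparkSession.builder.appName("price-events-compaction").getOrCreate()

raw = spark.read.parquet("s3a://example-bucket/price-events/")        # placeholder input

(
    raw.withColumn("observed_date", to_date("observed_at"))           # derive a partition column
       .write.mode("overwrite")
       .partitionBy("retailer", "observed_date")                      # enables partition pruning
       .parquet("s3a://example-bucket/price-events-curated/")         # placeholder output
)
```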
At Bungee Tech, you’ll be at the forefront of innovation in the data engineering space, working with cutting-edge technologies and a talented team. If you're passionate about building scalable systems, handling large-scale distributed data, and solving complex data challenges, we’d love to have you on board.
Perks/benefits: Startup environment