Senior Data Engineer
Delhi
HighLevel
HighLevel is the all-in-one sales & marketing platform that agencies can white-label and resell to their clients.
Our People
With over 1,500 team members across 15+ countries, we operate in a global, remote-first environment. We are building more than software; we are building a global community rooted in creativity, collaboration, and impact. We take pride in cultivating a culture where innovation thrives, ideas are celebrated, and people come first, no matter where they call home.
Our Impact
As of mid-2025, our platform powers over 1.5 billion messages, helps generate over 200 million leads, and facilitates over 20 million conversations for the more than 2 million businesses we serve each month. Behind those numbers are real people growing their companies, connecting with customers, and making their mark - and we get to help make that happen.
About the Role:
We are seeking a talented and motivated data engineer to design, develop, and maintain our data infrastructure, as well as backend systems that support real-time data processing, large-scale event-driven architectures, and integrations with various data systems. The role involves collaborating with cross-functional teams to ensure data reliability, scalability, and performance. You will work closely with data scientists, analysts, and software engineers to ensure efficient data flow and storage, enabling data-driven decision-making across the organisation.
Requirements:
- 4+ years of experience in software development
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field
- Strong Problem-Solving Skills: Ability to debug and optimize data processing workflows
- Programming Fundamentals: Solid understanding of data structures, algorithms, and software design patterns
- Software Engineering Experience: Demonstrated experience (SDE II/III level) in designing, developing, and delivering software solutions using modern languages and frameworks (Node.js, JavaScript, Python, TypeScript, SQL, Scala or Java)
- ETL Tools & Frameworks: Experience with Airflow, dbt, Apache Spark, Kafka, Flink or similar technologies
- Cloud Platforms: Hands-on experience with GCP (Pub/Sub, Dataflow, Cloud Storage) or AWS (S3, Glue, Redshift)
- Databases & Warehousing: Strong experience with PostgreSQL, MySQL, Snowflake, and NoSQL databases (MongoDB, Firestore, ES)
- Version Control & CI/CD: Familiarity with Git, Jenkins, Docker, Kubernetes, and CI/CD pipelines for deployment
- Communication: Excellent verbal and written communication skills, with the ability to work effectively in a collaborative environment
- Nice to have: experience with data visualization tools (e.g. Superset, Tableau), Terraform and other IaC tooling, ML/AI data pipelines, and DevOps practices
Responsibilities:
- Software Engineering Excellence: Write clean, efficient, and maintainable code using JavaScript or Python while adhering to best practices and design patterns
- Design, Build, and Maintain Systems: Develop robust software solutions and implement RESTful APIs that handle high volumes of data in real-time, leveraging message queues (Google Cloud Pub/Sub, Kafka, RabbitMQ) and event-driven architectures
- Data Pipeline Development: Design, develop and maintain data pipelines (ETL/ELT) to process structured and unstructured data from various sources
- Data Storage & Warehousing: Build and optimize databases, data lakes and data warehouses (e.g. Snowflake) for high-performance querying
- Data Integration: Work with APIs, batch and streaming data sources to ingest and transform data
- Performance Optimization: Optimize queries, indexing and partitioning for efficient data retrieval
- Collaboration: Work with data analysts, data scientists, software developers and product teams to understand requirements and deliver scalable solutions
- Monitoring & Debugging: Set up logging, monitoring, and alerting to ensure data pipelines run reliably
- Ownership & Problem-Solving: Proactively identify issues or bottlenecks and propose innovative solutions to address them
#LI-Remote #NJ1
Perks/benefits: Career development, team events