Data Engineer

Bangalore Office



Key Responsibilities:

1. Design & Develop Scalable Data Pipelines: Architect and build efficient data ingestion pipelines that empower the business with timely and actionable insights to drive strategy and operations.

2. Collaborate with Cross-Functional Teams: Work closely with technical and business teams to gather requirements and translate them into technical specifications and solutions.

3. Optimize Data Architecture: Continuously refine and improve the performance, reliability, and scalability of data pipelines to ensure seamless integration of data across systems.

4. Ensure Data Integrity: Perform rigorous data quality checks and uphold best practices to maintain the accuracy, consistency, and integrity of data.

5. Build ETL Solutions: Develop ETL frameworks to collect, transform, and integrate data from diverse sources such as Kafka, Scylla, PostgreSQL, MongoDB, APIs, and various file formats.

6. Adopt Best Practices: Implement best-in-class engineering practices around reporting and analysis, ensuring data integrity, testing, validation, and comprehensive documentation.

Basic Qualifications:

1. Bachelor’s degree in Computer Science, Information Systems, or a related technical field, or equivalent work experience.

2. 3+ years of hands-on experience with cloud-based data technologies, including message queues, event grids, relational databases, NoSQL databases, data warehouses, and big data technologies (e.g., Spark).

3. Proficiency in Spark (Java, Python, SQL): Expertise in developing and optimizing Spark-based applications for large-scale data processing.

4. Advanced SQL Skills: Ability to create complex queries, manage database structures, and ensure optimal performance.

5. Experience with Docker and Kubernetes: Familiarity with deploying applications using modern containerisation and orchestration tools.

6. DevOps and CI/CD: Solid understanding of modern DevOps practices, including Git, continuous integration, and continuous deployment pipelines.

7. Agile Development Experience: Comfortable working in an Agile environment, particularly using Scrum methodology.

 

Preferred Qualifications:

1. API Frameworks & OOP: Experience with API frameworks and object-oriented programming to integrate services and improve data workflows.

2. Business Acumen: Ability to collaborate closely with leadership to deliver innovative solutions tailored to evolving business needs.

3. Strong Communication Skills: Excellent verbal and written communication abilities, with a knack for explaining complex technical concepts to both technical and non-technical stakeholders.

4. LLM Application Development: Ability to leverage tools such as LangChain and LlamaIndex to build LLM-based applications.



Category: Engineering Jobs


Region: Asia/Pacific
Country: India
