Subcontractor
INDIA - MUMBAI - BIRLASOFT OFFICE, IN
Birlasoft
At Birlasoft we combine the power of domain, enterprise, and digital technologies to reimagine business potential. Surpassing expectations, breaking convention!
Job Title: AWS DevOps Engineer
Job Summary:
We are seeking a highly skilled and motivated AWS DevOps Engineer to join our dynamic team. The ideal candidate will have a strong background in data engineering, cloud computing, and a passion for building scalable data solutions. In this role, you will be responsible for designing, developing, and maintaining data pipelines and infrastructure on the AWS platform.
Key Responsibilities:
Spreading the DevOps culture across business units by implementing on-commit deployment and automated testing solutions.
Developing systems using the latest technologies to streamline the release management process into AWS.
Obtaining an understanding of product offerings and helping improve the customer experience.
Ensuring application monitoring and metrics are captured for all deployed assets.
Enforcing quality and security requirements in release pipelines.
Identifying areas for improvement in the environment and recommending enhancements.
Design, develop, and maintain scalable data pipelines and ETL processes using AWS services such as Glue, Lambda, and Redshift (a pipeline sketch follows this list).
Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver high-quality data solutions.
Implement data integration and transformation processes to ensure data quality and consistency.
Optimize and tune data pipelines for performance and cost-efficiency.
Monitor and troubleshoot data pipeline issues to ensure data availability and reliability.
Develop and maintain documentation for data pipelines, processes, and infrastructure.
Stay up-to-date with the latest AWS services and best practices in data engineering.
Leverage Apache Flink for real-time data processing and analytics, ensuring low-latency data handling.
Employ Apache Kafka for stream processing and for integrating data from various sources into the data pipelines (a consumer sketch follows this list).
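As a rough illustration of the Glue/Lambda pipeline work described above, the sketch below shows a Lambda handler that starts a Glue ETL job whenever a new object lands in S3. The job name, argument key, and bucket layout are hypothetical placeholders, not details from this posting.

    import boto3

    glue = boto3.client("glue")

    def lambda_handler(event, context):
        """Triggered by an S3 put event; starts a Glue ETL job for each new object."""
        runs = []
        for record in event.get("Records", []):
            bucket = record["s3"]["bucket"]["name"]
            key = record["s3"]["object"]["key"]
            # "raw-to-redshift-etl" and "--source_path" are names invented for
            # this sketch; a real Glue job would define its own parameters.
            response = glue.start_job_run(
                JobName="raw-to-redshift-etl",
                Arguments={"--source_path": f"s3://{bucket}/{key}"},
            )
            runs.append(response["JobRunId"])
        return {"started_job_runs": runs}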
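And as a minimal sketch of the Kafka-based ingestion mentioned in the last bullet, the snippet below consumes JSON events from a topic using the kafka-python client. The topic name, broker address, and consumer group are assumptions chosen for illustration only.

    import json
    from kafka import KafkaConsumer

    # Topic, broker, and group id are placeholder values for this sketch.
    consumer = KafkaConsumer(
        "order-events",
        bootstrap_servers=["localhost:9092"],
        group_id="pipeline-ingest",
        auto_offset_reset="earliest",
        value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    )

    for message in consumer:
        event = message.value
        # Downstream, records like this would feed a Flink job for real-time
        # processing or be staged to S3 for batch ETL with Glue.
        print(f"partition={message.partition} offset={message.offset} event={event}")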
Qualifications:
Bachelor's degree in Computer Science, Information Technology, or a related field.
Proven experience as a Data Engineer with a focus on AWS services.
Strong proficiency in SQL and experience with data modeling and database design.
Experience with AWS services such as S3, Glue, Lambda, Redshift, and RDS.
Proficiency in programming languages such as Python or Java.
Knowledge of data warehousing concepts and ETL processes.
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills.
Preferred Qualifications:
AWS Certified Data Analytics – Specialty or AWS Certified Solutions Architect.
Experience with big data technologies such as Hadoop, Apache Flink, Spark, or Kafka.
Familiarity with data visualization tools such as Tableau or Power BI.
Experience with DevOps practices and tools such as Docker, Kubernetes, and CI/CD pipelines.