AWS Data Engineer - Associate Consultant
Bangalore, Karnataka, India
KPMG India
KPMG is a global network of professional firms providing Audit, Tax and Advisory services.
Mandatory Skills
- Bachelor’s or higher degree in Computer Science or a related discipline; or equivalent (minimum 2+ years’ work experience).
- Experience in development of data pipelines and processing of data at scale using technologies like EMR, Kinesis, Lambda, Glue, Athena, Redshift, Step Functions.
- Experience in design and development of applications using Python (must have) or Java.
- Experience in Big Data technologies and tools such as PySpark/Spark, Hadoop, Hive, Kafka etc.
- Experience with building data pipelines in streaming and batch mode.
- Experience in optimizing big data pipelines on AWS.
- Good understanding of data warehousing concepts and expert-level skills in writing and optimizing SQL.
- Experience in development using message queues, stream processing, and highly available, fault-tolerant applications.
- Minimum 1 year of experience using AWS services such as EC2, IAM, VPC, S3, etc., and a good understanding of architectural best practices.
- Proficiency in using SDKs to interact with native AWS services.
- Good understanding of big data design patterns and performance tuning.
- Experience with Git and CI/CD pipelines for deploying cloud applications.
- Experience gathering requirements, writing user stories, creating application designs, and applying design patterns.
- Excellent communication skills, with the ability to influence client business and IT teams.
- Experience with Agile software development.
- Ability to work independently and across multiple teams.
Primary Roles and Responsibilities
An AWS Data Engineer is responsible for designing, building, and maintaining an organization's data infrastructure using AWS cloud services. This includes creating data pipelines, integrating data from various sources, and implementing data security and privacy measures. The AWS Data Engineer will also be responsible for monitoring and troubleshooting data flows and optimizing data storage and processing for performance and cost efficiency.
Preferred Skills
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, Mathematics, Information Systems, or a related technical discipline.
- 1+ years of work experience with ETL and data modelling.
- Experience or familiarity with newer analytics tools such as AWS Lake Formation, SageMaker, DynamoDB, Lambda, and Elasticsearch.
- Experience with data streaming services, e.g., Kinesis or Kafka.
- Ability to develop experimental and analytic plans for data modelling processes, use strong baselines, and accurately determine cause-and-effect relations.