002AVM - Data Engineer (Remote)
Coimbatore, Tamil Nadu, India
Augusta Hitech
- Name of the position: Data Engineer
- Location: Remote
- No. of resources needed for this position: 01
- Mode: Contract
- Years of experience: 10+ Years
We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. The ideal candidate has a strong background across all phases of ETL data applications, including data ingestion, preprocessing, and transformation. This role is perfect for someone who thrives in a fast-paced environment and is passionate about leveraging data to drive business success.
Key Responsibilities:
- Design and implement efficient data ingestion pipelines to collect and process large volumes of data from various sources.
- Develop and maintain scalable data processing systems, ensuring high performance and reliability.
- Utilize advanced data transformation techniques to prepare and enrich data for analytical purposes.
- Collaborate with cross-functional teams to understand data needs and deliver solutions that meet business requirements.
- Manage and optimize cloud-based infrastructure, particularly within the AWS ecosystem, including services such as S3, Step Functions, EC2, and IAM.
- Experience with cloud platforms and understanding of cloud architecture.
- Knowledge of SQL and NoSQL databases, data modeling, and data warehousing principles.
- Familiarity with programming languages such as Python or Java.
- Implement security and compliance measures to safeguard data integrity and privacy.
- Monitor and tune the performance of data processing systems to ensure optimal efficiency.
- Stay updated with emerging trends and technologies in data engineering and propose adaptations to existing systems as needed.
- Proficient in AWS Glue for ETL (Extract, Transform, Load) processes and data cataloging.
- Hands-on experience with AWS Lambda for serverless computing in data workflows.
- Knowledge of AWS Glue Crawlers, Kinesis, and RDS for batch and real-time data streaming.
- Familiarity with AWS Redshift for large-scale data warehousing and analytics.
- Skilled in implementing data lakes using AWS Lake Formation for efficient storage and retrieval of diverse datasets.
- Experience with AWS Data Pipeline for orchestrating and automating data workflows.
- In-depth understanding of AWS CloudFormation for infrastructure as code (IaC) deployment.
- Proficient in AWS CloudWatch for monitoring and logging data processing workflows.
- Familiarity with AWS Glue DataBrew for visual data preparation and cleaning.
- Expertise in optimizing data storage costs through AWS Glacier and other cost-effective storage solutions.
- Hands-on experience with AWS DMS (Database Migration Service) for seamless data migration between different databases.
- Knowledge of AWS Athena for interactive query processing on data stored in Amazon S3.
- Experience with AWS AppSync for building scalable and secure GraphQL APIs.
Qualifications:
- A minimum of 10 years of experience in data engineering or a related field.
- Strong background in big data application phases, including data ingestion, preprocessing, and transformation.
Education:
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field. A Master’s degree is preferred.