AWS Data Engineer (Data Ingestion, ETL and Cloud Services)

Bengaluru - BCIT, India

Synechron

Synechron is an innovative global consulting firm delivering industry-leading digital solutions to transform and empower businesses.



Position Overview: We are seeking an experienced AWS Data Engineer with a strong foundation in AWS services and data ingestion using APIs. This individual will be instrumental in architecting and building scalable data solutions, leveraging AWS technologies such as S3, Redshift, Glue, and EC2 to support our data infrastructure needs.

Key Responsibilities:

Data Pipeline Development:

  • Develop and maintain scalable and reliable data pipelines to ingest data from various APIs into the AWS ecosystem.
  • Utilize AWS Glue for ETL processes, ensuring data is clean, well-structured, and ready for analysis.
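
A minimal sketch of the API-to-S3 ingestion step described above, using `boto3` (the endpoint, bucket name, and key layout are illustrative assumptions, not part of this role's actual stack):

```python
import gzip
import json
import urllib.request
from datetime import datetime, timezone

def s3_key_for(source: str, ts: datetime) -> str:
    """Date-partitioned key layout keeps raw data organised for downstream ETL."""
    return f"raw/{source}/{ts:%Y/%m/%d}/{source}_{ts:%H%M%S}.json.gz"

def ingest_api_to_s3(api_url: str, bucket: str, source: str) -> str:
    """Pull one response from a REST API and land it, gzipped, in S3."""
    import boto3  # AWS SDK; only needed when the pipeline actually runs

    with urllib.request.urlopen(api_url, timeout=30) as resp:
        records = json.load(resp)

    key = s3_key_for(source, datetime.now(timezone.utc))
    body = gzip.compress(json.dumps(records).encode("utf-8"))
    boto3.client("s3").put_object(Bucket=bucket, Key=key, Body=body)
    return key

# Usage (hypothetical endpoint and bucket):
#   ingest_api_to_s3("https://api.example.com/v1/orders", "example-raw-bucket", "orders")
```

Partitioning raw objects by date makes the landing zone easy to crawl and query incrementally.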

Data Storage Management:

  • Manage data storage solutions using S3 buckets, ensuring best practices in data organization and security.
  • Implement data lake and warehousing strategies to support analytics and business intelligence initiatives.

Data Warehousing and Compute:

  • Utilize AWS Redshift for data warehousing tasks, optimizing data retrieval and query performance.
  • Utilize EC2 instances for custom applications and services that require compute capacity.
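
The standard bulk-load path into Redshift is a `COPY` from S3. A minimal sketch, assuming gzipped JSON landed by the ingestion pipeline and connection details supplied via environment variables (all names here are placeholders):

```python
import os

def redshift_copy_sql(table: str, s3_path: str, iam_role: str) -> str:
    """Build a COPY statement that bulk-loads gzipped JSON from S3 into Redshift."""
    return (
        f"COPY {table} "
        f"FROM '{s3_path}' "
        f"IAM_ROLE '{iam_role}' "
        "FORMAT AS JSON 'auto' GZIP;"
    )

def load_into_redshift(table: str, s3_path: str, iam_role: str) -> None:
    """Execute the COPY against the cluster; endpoint and credentials are placeholders."""
    import psycopg2  # PostgreSQL driver; Redshift speaks the same wire protocol

    conn = psycopg2.connect(
        host=os.environ["REDSHIFT_HOST"],
        port=5439,
        dbname=os.environ["REDSHIFT_DB"],
        user=os.environ["REDSHIFT_USER"],
        password=os.environ["REDSHIFT_PASSWORD"],
    )
    try:
        with conn.cursor() as cur:
            cur.execute(redshift_copy_sql(table, s3_path, iam_role))
        conn.commit()
    finally:
        conn.close()
```

`COPY` parallelises the load across the cluster's slices, which is why it is preferred over row-by-row `INSERT`s for warehouse ingestion.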

Collaboration and Compliance:

  • Collaborate with cross-functional teams to understand data needs and deliver solutions that align with business goals.
  • Ensure compliance with data governance and security policies.

Software Requirements:

  • Solid experience in AWS services, especially S3, Redshift, Glue, and EC2.
  • Proficiency in data ingestion and integration, particularly with APIs.
  • Strong understanding of data warehousing, ETL processes, and cloud data storage.
  • Experience with scripting languages such as Python for automation and data manipulation.
  • Familiarity with infrastructure as code tools for managing AWS resources.

Technical Skills:

AWS Services:

  • Proficiency in AWS services such as S3, Redshift, Glue, and EC2.
  • Knowledge of additional AWS services such as EMR, and of streaming platforms such as Kafka (e.g. via Amazon MSK), is a plus.

Data Engineering:

  • Strong experience in data ingestion and integration using APIs.
  • Proficiency in data warehousing and ETL processes.

Scripting and Automation:

  • Experience with scripting languages such as Python for automation and data manipulation.
  • Familiarity with infrastructure as code tools.

Experience:

  • 4-7 years of experience in data engineering and AWS services.
  • Proven experience in developing and maintaining data pipelines and data storage solutions.
  • Experience in data modeling and data analysis is preferred.
  • Experience in testing and ensuring data quality, as the team has no dedicated testing engineers.

Day-to-Day Activities:

  • Develop and maintain data pipelines to ingest data from various APIs.
  • Manage and optimize data storage solutions using AWS S3 and Redshift.
  • Configure and use AWS Glue for ETL processes.
  • Utilize EC2 instances for custom applications and services.
  • Collaborate with cross-functional teams to understand data needs.
  • Ensure compliance with data governance and security policies.
  • Perform data modeling and analysis as needed.
  • Conduct testing to ensure data quality and reliability.
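
The Glue ETL step above typically takes the shape of a PySpark job script. A minimal sketch (bucket paths and the `clean_record` transform are illustrative assumptions; the `awsglue` imports resolve only inside a Glue job runtime):

```python
import sys

def clean_record(rec: dict) -> dict:
    """Pure transform applied per record: normalise keys, drop null fields."""
    return {k.lower().strip(): v for k, v in rec.items() if v is not None}

def run_glue_job() -> None:
    # These modules are provided by the Glue runtime, not a local install.
    from awsglue.context import GlueContext
    from awsglue.job import Job
    from awsglue.transforms import Map
    from awsglue.utils import getResolvedOptions
    from pyspark.context import SparkContext

    args = getResolvedOptions(sys.argv, ["JOB_NAME"])
    glue_ctx = GlueContext(SparkContext())
    job = Job(glue_ctx)
    job.init(args["JOB_NAME"], args)

    # Read raw JSON from the (hypothetical) landing prefix in S3
    raw = glue_ctx.create_dynamic_frame.from_options(
        connection_type="s3",
        connection_options={"paths": ["s3://example-raw-bucket/raw/"]},
        format="json",
    )
    cleaned = Map.apply(frame=raw, f=clean_record)

    # Write curated Parquet for Redshift/analytics consumption
    glue_ctx.write_dynamic_frame.from_options(
        frame=cleaned,
        connection_type="s3",
        connection_options={"path": "s3://example-curated-bucket/curated/"},
        format="parquet",
    )
    job.commit()
```

Keeping the record-level transform as a plain function makes it unit-testable without a Glue runtime, which matters on a team with no dedicated testing engineers.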

Qualifications:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.
  • AWS certification is mandatory.
  • Solid experience in AWS services, especially S3, Redshift, Glue, and EC2.
  • Proficiency in data ingestion and integration, particularly with APIs.
  • Strong understanding of data warehousing, ETL processes, and cloud data storage.
  • Experience with scripting languages such as Python for automation and data manipulation.
  • Familiarity with infrastructure as code tools.

Soft Skills:

  • Excellent problem-solving skills and ability to work in a dynamic environment.
  • Strong communication skills for effective collaboration and documentation.
  • Ability to work independently and as part of a team.
  • Strong attention to detail and commitment to delivering high-quality solutions.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT

Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative 'Same Difference' is committed to fostering an inclusive culture – promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps us build stronger, more successful businesses as a global company. We encourage applicants of all backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, and abilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.


All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
