Junior Data Engineer

Bangalore, KA, IN


About the Role:

As a Junior Data Engineer, you will be responsible for implementing data pipelines and analytics solutions that support key decision-making processes in our Reinsurance business. You will join a project that leverages cutting-edge Big Data and Machine Learning technology to solve new and emerging problems for Swiss Re.

You will be expected to gain a full understanding of the reinsurance data and business logic required to deliver analytics solutions.

 

Key responsibilities include:

  • Work closely with Product Owners and Engineering Leads to understand requirements, formulate solutions and evaluate the implementation effort.
  • Develop and maintain scalable data transformation pipelines.
  • Implement analytics models and visualizations that provide actionable data insights.
  • Evaluate new capabilities of the analytics platform, develop prototypes and help assess their applicability to our solution landscape.
  • Collaborate within a global development team to design and deliver solutions.

 

About the Team:

Data & Analytics Reinsurance is a key tech partner for our Reinsurance divisions, supporting the transformation of the data landscape and the creation of innovative analytical products and capabilities. As a large, globally distributed team working in an agile development environment, we deliver solutions that make better use of our reinsurance data and enhance our ability to make data-driven decisions across the business value chain.

 

About You:

Are you eager to disrupt the industry with us and make an impact? Do you wish to have your talent recognized and rewarded? Then join our growing team and become part of the next wave of data innovation.

Key qualifications include:

 

  • Bachelor's degree level or equivalent in Computer Science, Data Science or similar discipline
  • 1-3 years of experience working with large-scale software systems
  • Proficient in Python/PySpark
  • Proficient in SQL (Spark SQL preferred)
  • Experience working with large datasets on enterprise data platforms and distributed computing frameworks (Spark/Hive/Hadoop preferred)
  • Exposure to Palantir Foundry is a plus
  • Experience with JavaScript/HTML/CSS is a plus
  • Experience working in a cloud environment such as AWS or Azure is a plus
  • Strong analytical and problem-solving skills
  • Enthusiasm for working in a global and multicultural environment with internal and external professionals
  • Strong interpersonal and communication skills, with clear and articulate written and verbal communication in complex environments

 

 

About Swiss Re

 

Swiss Re is one of the world’s leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health. Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world.

Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability.

 

 

Reference Code: 130731 

 

 




