Internship - Data Engineer

Saint-Sauveur, France

Syngenta Group

A leading agriculture company helping to improve global food security by enabling millions of farmers to make better use of available resources.



Company Description

Syngenta Seeds is one of the world’s largest developers and producers of seed for farmers, commercial growers, retailers, and small seed companies. Syngenta seeds improve the quality and yields of crops. High-quality seeds ensure better and more productive crops, which is why farmers invest in them. Advanced seeds help mitigate risks such as disease and drought and allow farmers to grow food using less land, less water, and fewer inputs.


Syngenta Seeds brings farmers more vigorous, more resistant plants, including innovative hybrid varieties and biotech crops that can thrive even in challenging growing conditions.


Syngenta Seeds is headquartered in the United States.

Job Description

The advancement and placement team plays an important role throughout Syngenta. When new varieties are developed, large volumes of data are collected and must be managed properly. Collaborating with the team's data scientists as well as the data owners, you will play a key role in facilitating data processing and ensuring that the infrastructure and solutions meet all of the users' needs.

Responsibilities

  • Build data models and data pipelines to enable automated analysis
  • Conduct quality control of the data process
  • Implement unit tests
  • Enhance existing infrastructure and propose new solutions where needed
  • Collaborate with stakeholders from diverse backgrounds

Qualifications

  • Currently enrolled in a Bachelor's or Master's degree program in Computer Science, Data Engineering, Big Data, or Data Science
  • Data engineering background: able to build the backbone of data infrastructure and contribute to acquiring, transforming, and cleaning data at scale
  • Ability to build, test, and maintain robust data pipeline architectures
  • Ability to integrate various data sources (databases, APIs, streams)
  • Ability to develop and optimize data processing jobs using AWS services or similar solutions
  • Ability to collaborate with data owners, understand their needs, and translate them into simple views
  • Strong knowledge of Python, especially pandas, NumPy, and pytest
  • Some experience with application development is preferred
  • Familiarity with version control systems (Git) and code repositories (GitHub, GitLab)
  • Understanding of data modeling, ETL processes, and data quality principles
  • Excellent problem-solving and debugging skills
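To illustrate the kind of day-to-day work the qualifications above describe (a pandas transformation step covered by a pytest-style unit test), here is a minimal sketch. The function name, column names, and data are hypothetical, not taken from Syngenta's actual systems:

```python
import pandas as pd

def clean_trial_data(df: pd.DataFrame) -> pd.DataFrame:
    """Standardize column names, drop rows missing a yield value,
    and cast yields to float (hypothetical field-trial schema)."""
    out = df.rename(columns=str.lower)          # lowercase all column names
    out = out.dropna(subset=["yield"])          # drop incomplete observations
    out["yield"] = out["yield"].astype(float)   # enforce numeric type
    return out

def test_clean_trial_data():
    raw = pd.DataFrame({"Variety": ["A", "B", "C"],
                        "Yield": ["10.5", None, "8.0"]})
    cleaned = clean_trial_data(raw)
    assert list(cleaned.columns) == ["variety", "yield"]
    assert len(cleaned) == 2
    assert cleaned["yield"].dtype == float
```

Small, pure functions like this are easy to unit-test in isolation before wiring them into a larger automated pipeline.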
Category: Engineering Jobs

Tags: APIs Architecture AWS Big Data Computer Science Data quality Engineering ETL Git GitHub GitLab NumPy Pandas Python

Region: Europe
Country: France
